{"text": "\n*This notebook contains course material from [CBE40455](https://jckantor.github.io/CBE40455) by\nJeffrey Kantor (jeff at nd.edu); the content is available [on Github](https://github.com/jckantor/CBE40455.git).\nThe text is released under the [CC-BY-NC-ND-4.0 license](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode),\nand code is released under the [MIT license](https://opensource.org/licenses/MIT).*\n\n\n< [Risk and Diversification](http://nbviewer.jupyter.org/github/jckantor/CBE40455/blob/master/notebooks/07.00-Risk-and-Diversification.ipynb) | [Contents](toc.ipynb) | [Geometric Brownian Motion](http://nbviewer.jupyter.org/github/jckantor/CBE40455/blob/master/notebooks/07.02-Geometric-Brownian-Motion.ipynb) >

\n\n# Measuring Return\n\nHow much does one earn relative to the amount invested? \n\nThis is the basic concept of return, and one of the fundamental measurements of financial performance. This notebook examines the different ways in which return can be measured.\n\n## Pandas-datareader\n\nAs will be shown below, [pandas-datareader](https://github.com/pydata/pandas-datareader) provides a convenient means to access and manipulate financial data using the Pandas library. The pandas-datareader package is normally imported separately from pandas. Typical installation is\n\n pip install pandas-datareader\n\nfrom a terminal window, or executing\n\n !pip install pandas-datareader\n\nin a Jupyter notebook cell. The Google Colab environment now includes pandas-datareader, so no separate installation is required.\n\n## Imports\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\n\nimport datetime\n\nimport pandas as pd\nimport pandas_datareader as pdr\n```\n\n## Where to get Price Data\n\nThis notebook uses the price of stocks and various commodity goods for the purpose of demonstrating returns. Price data is available from a number of sources. Here we demonstrate the process of obtaining price data on financial goods from [Yahoo Finance](http://finance.yahoo.com/) and downloading price data sets from [Quandl](http://www.quandl.com/). (UPDATE: [Look here for an alternative description of how to get live market data from Yahoo Finance](https://towardsdatascience.com/python-how-to-get-live-market-data-less-than-0-1-second-lag-c85ee280ed93).)\n\nThe most comprehensive repositories of financial data are commercial enterprises. Some provide a free tier of service for limited use, typically 50 inquiries a day or several hundred a month. Some require registration to access the free tier. These details are constantly changing. 
A listing of free services is available from [awesome-quant](https://github.com/wilsonfreitas/awesome-quant#data-sources), but please note that details change quickly. [Here is another useful collection of stock price data sources for Python](https://towardsdatascience.com/how-to-get-stock-data-using-python-c0de1df17e75).\n\n### Stock Symbols\n\nStock price data is usually indexed and accessed by stock symbols. Stock symbols are unique identifiers for a stock, commodity, or other financial good on a specific exchange. For example, [this is a list of symbols for the New York Stock Exchange (NYSE)](http://www.eoddata.com/symbols.aspx?AspxAutoDetectCookieSupport=1). The following function looks up the details of a stock symbol on Yahoo Finance.\n\n\n```python\n# Python library for accessing internet resources\nimport requests\n\ndef lookup_yahoo(symbol):\n \"\"\"Return a list of all matches for a symbol on Yahoo Finance.\"\"\"\n url = f\"http://d.yimg.com/autoc.finance.yahoo.com/autoc?query={symbol}&region=1&lang=en\"\n return requests.get(url).json()[\"ResultSet\"][\"Result\"]\n\nlookup_yahoo(\"XOM\")\n```\n\n\n\n\n [{'exch': 'NYQ',\n 'exchDisp': 'NYSE',\n 'name': 'Exxon Mobil Corporation',\n 'symbol': 'XOM',\n 'type': 'S',\n 'typeDisp': 'Equity'},\n {'exch': 'NMS',\n 'exchDisp': 'NASDAQ',\n 'name': 'XOMA Corporation',\n 'symbol': 'XOMA',\n 'type': 'S',\n 'typeDisp': 'Equity'},\n {'exch': 'YHD',\n 'exchDisp': 'Industry',\n 'name': 'Exxon Mobil Corporation',\n 'symbol': 'XOM.BA',\n 'type': 'S',\n 'typeDisp': 'Equity'},\n {'exch': 'YHD',\n 'exchDisp': 'Industry',\n 'name': 'Exxon Mobil Corporation',\n 'symbol': 'XOM.MX',\n 'type': 'S',\n 'typeDisp': 'Equity'},\n {'exch': 'DUS',\n 'exchDisp': 'Dusseldorf Stock Exchange',\n 'name': 'XOMA CORP. DL -,0005',\n 'symbol': 'X0M1.DU',\n 'type': 'S',\n 'typeDisp': 'Equity'},\n {'exch': 'STU',\n 'exchDisp': 'Stuttgart',\n 'name': 'XOMA Corp. 
Registered Shares DL',\n 'symbol': 'X0M1.SG',\n 'type': 'S',\n 'typeDisp': 'Equity'},\n {'exch': 'TLO',\n 'exchDisp': 'TLX Exchange',\n 'name': 'Exxon Mobil Corporation',\n 'symbol': 'XOM-U.TI',\n 'type': 'S',\n 'typeDisp': 'Equity'},\n {'exch': 'VIE',\n 'exchDisp': 'Vienna',\n 'name': 'Exxon Mobil Corporation',\n 'symbol': 'XOM.VI',\n 'type': 'S',\n 'typeDisp': 'Equity'},\n {'exch': 'BUE',\n 'exchDisp': 'Buenos Aires',\n 'name': 'EXXON MOBIL CORP',\n 'symbol': 'XOMD.BA',\n 'type': 'S',\n 'typeDisp': 'Equity'}]\n\n\n\n\n```python\ndef get_symbol(symbol):\n \"\"\"Return exact match for a symbol.\"\"\"\n result = [r for r in lookup_yahoo(symbol) if symbol == r['symbol']]\n return result[0] if len(result) > 0 else None\n\nget_symbol('TSLA')\n```\n\n\n\n\n {'exch': 'NMS',\n 'exchDisp': 'NASDAQ',\n 'name': 'Tesla, Inc.',\n 'symbol': 'TSLA',\n 'type': 'S',\n 'typeDisp': 'Equity'}\n\n\n\n### Yahoo Finance\n\n[Yahoo Finance](http://finance.yahoo.com/) provides historical Open, High, Low, Close, and Volume data for quotes on traded securities. In addition, Yahoo Finance provides historical [Adjusted Close](http://marubozu.blogspot.com/2006/09/how-yahoo-calculates-adjusted-closing.html) price data that corrects for splits and dividend distributions. 
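As a toy illustration of the split part of the adjustment (hypothetical prices, not real market data — the actual Yahoo Finance adjustment also folds in dividend distributions), prices before a split are scaled by the split ratio so that returns are comparable across the split date:

```python
import pandas as pd

# Hypothetical closing prices around a 2-for-1 stock split on the third day.
close = pd.Series([100.0, 102.0, 51.0, 52.0])

# Scale pre-split prices down by the split ratio to form an adjusted series.
factor = pd.Series([0.5, 0.5, 1.0, 1.0])
adj_close = close * factor

# Raw closes show a spurious -50% "return" at the split date;
# adjusted closes show the true (zero) price change.
print(close.pct_change()[2])      # -0.5, an artifact of the split
print(adj_close.pct_change()[2])  # 0.0, the true price change
```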
Adjusted Close is a useful tool for computing the return on long-term investments.\n\nThe following cell demonstrates how to download historical Adjusted Close price for a selected security into a pandas DataFrame.\n\n\n```python\nsymbol = 'TSLA'\n\n# get symbol data\nsymbol_data = get_symbol(symbol)\nassert symbol_data, f\"Symbol {symbol} wasn't found.\"\n\n# start and end of a three year interval that ends today\nend = datetime.datetime.today().date()\nstart = end - datetime.timedelta(3*365)\n\n# get stock price data\nS = pdr.data.DataReader(symbol, \"yahoo\", start, end)['Adj Close']\n\n# plot data\nplt.figure(figsize=(10,4))\ntitle = f\"{symbol_data['name']} ({symbol_data['exchDisp']} {symbol_data['typeDisp']} {symbol_data['symbol']})\"\nS.plot(title=title)\nplt.ylabel('Adjusted Close')\nplt.grid()\n```\n\nNote that `S` is an example of a Pandas time series.\n\n\n```python\nS\n```\n\n\n\n\n Date\n 2017-11-06 60.556000\n 2017-11-07 61.209999\n 2017-11-08 60.877998\n 2017-11-09 60.598000\n 2017-11-10 60.598000\n ... \n 2020-10-28 406.019989\n 2020-10-29 410.829987\n 2020-10-30 388.040009\n 2020-11-02 400.510010\n 2020-11-03 424.140015\n Name: Adj Close, Length: 754, dtype: float64\n\n\n\nPandas time series are indexed by datetime entries. There is a large collection of functions in Pandas for manipulating time series data.\n\n\n```python\nS[\"2018\"].plot()\n```\n\n### Quandl\n\n[Quandl](http://www.quandl.com/) is a searchable source of time-series data on a wide range of commodities, financials, and many other economic and social indicators. Data from Quandl can be downloaded as files in various formats, or accessed directly using the [Quandl API](http://www.quandl.com/help/api) or software-specific package. Here we demonstrate the use of the [Quandl Python package](http://www.quandl.com/help/packages#Python). \n\nThe first step is to execute a system command to check that the Quandl package has been installed.\n\nHere are examples of energy datasets. 
These were found by searching Quandl, then identifying the Quandl code used for accessing the dataset, a description, and the name of the field containing the desired price information.\n\n\n```python\n%%capture\ncapture = !pip install quandl\n```\n\n\n```python\ncode = 'CHRIS/MCX_CL1'\ndescription = 'NYMEX Crude Oil Futures, Continuous Contract #1 (CL1) (Front Month)'\nfield = 'Close'\n```\n\n\n```python\nimport quandl\n\nend = datetime.datetime.today().date()\nstart = end - datetime.timedelta(5*365)\n\ntry:\n S = quandl.get(code, collapse='daily', trim_start=start.isoformat(), trim_end=end.isoformat())[field]\n\n plt.figure(figsize=(10,4))\n S.plot()\n plt.title(description)\n plt.ylabel('Price $/bbl')\n plt.grid()\nexcept Exception:\n pass\n```\n\n## Returns\n\nThe statistical properties of financial series are usually studied in terms of the change in prices. There are several reasons for this; key among them is that the changes can often be closely approximated as stationary random variables, whereas prices are generally non-stationary sequences. \n\nA common model is \n\n$$S_{t} = R_{t} S_{t-1}$$\n\nso, recursively,\n\n$$S_{t} = R_{t} R_{t-1} \\cdots R_{0} S_{0}$$\n\nThe gross return $R_t$ is simply the ratio of the current price to the previous, i.e.,\n\n$$R_t = \\frac{S_t}{S_{t-1}}$$\n\n$R_t$ will typically be a number close to one in value. The return is greater than one for an appreciating asset, or less than one for a declining asset.\n\nThe Pandas timeseries `shift()` function is used to compute the ratio $\\frac{S_t}{S_{t-1}}$. Shifting a timeseries 1 day forward, i.e., `shift(1)`, shifts $S_{t-1}$ to time $t$. That's why \n\n R = S/S.shift(1)\n\nprovides the correct calculation for the quantities $R_t$.\n\n\n```python\nprint([S, S.shift(1)])\n```\n\n [Date\n 2017-11-06 60.556000\n 2017-11-07 61.209999\n 2017-11-08 60.877998\n 2017-11-09 60.598000\n 2017-11-10 60.598000\n ... 
\n 2020-10-28 406.019989\n 2020-10-29 410.829987\n 2020-10-30 388.040009\n 2020-11-02 400.510010\n 2020-11-03 424.160004\n Name: Adj Close, Length: 754, dtype: float64, Date\n 2017-11-06 NaN\n 2017-11-07 60.556000\n 2017-11-08 61.209999\n 2017-11-09 60.877998\n 2017-11-10 60.598000\n ... \n 2020-10-28 424.679993\n 2020-10-29 406.019989\n 2020-10-30 410.829987\n 2020-11-02 388.040009\n 2020-11-03 400.510010\n Name: Adj Close, Length: 754, dtype: float64]\n\n\n\n```python\nsymbol = 'TSLA'\n\nend = datetime.datetime.today().date()\nstart = end - datetime.timedelta(3*365)\n\n# get stock price data\nS = pdr.data.DataReader(symbol, \"yahoo\", start, end)['Adj Close']\nR = S/S.shift(1)\n\n# plot data\nplt.figure(figsize=(10, 5))\nplt.subplot(2, 1, 1)\nS.plot(title=symbol)\nplt.ylabel('Adjusted Close')\nplt.grid()\n\nplt.subplot(2, 1, 2)\nR.plot()\nplt.ylabel('Returns')\nplt.grid()\nplt.tight_layout()\n```\n\n### Linear fractional or Arithmetic Returns\n\nPerhaps the most common way of reporting returns is simply the fractional increase in value of an asset over a period, i.e.,\n\n$$r^{lin}_t = \\frac{S_t - S_{t-1}}{S_{t-1}} = \\frac{S_t}{S_{t-1}} - 1 $$\n\nObviously\n\n$$r^{lin}_t = R_t - 1$$\n\n\n```python\nsymbol = 'TSLA'\n\nend = datetime.datetime.today().date()\nstart = end - datetime.timedelta(3*365)\n\n# get stock price data\nS = pdr.data.DataReader(symbol, \"yahoo\", start, end)['Adj Close']\nrlin = S/S.shift(1) - 1\n\n# plot data\nplt.figure(figsize=(10,5))\nplt.subplot(2,1,1)\nS.plot(title=symbol)\nplt.ylabel('Adjusted Close')\nplt.grid()\n\nplt.subplot(2,1,2)\nrlin.plot()\nplt.title('Linear Returns (daily)')\nplt.grid()\nplt.tight_layout()\n```\n\n### Linear returns don't tell the whole story.\n\nSuppose you put money in an asset that returns 10% interest in even numbered years, but loses 10% in odd numbered years. 
Is this a good investment for the long-haul?\n\nIf we look at mean linear return\n\n\\begin{align}\n\\bar{r}^{lin} & = \\frac{1}{T}\\sum_{t=1}^{T} r^{lin}_t \\\\\n& = \\frac{1}{T} (0.1 - 0.1 + 0.1 - 0.1 + \\cdots) \\\\\n& = 0\n\\end{align}\n\nwe would conclude this asset, on average, offers zero return. What does a simulation show?\n\n\n```python\nS = 100\nlog = [[0,S]]\nr = 0.10\n\nfor k in range(1,101):\n S = S + r*S\n r = -r\n log.append([k,S])\n \ndf = pd.DataFrame(log,columns = ['k','S'])\nplt.plot(df['k'],df['S'])\nplt.xlabel('Year')\nplt.ylabel('Value')\n```\n\nDespite an average linear return of zero, what we observe over time is an asset declining in price. The reason is pretty obvious --- on average, the years in which the asset loses money have higher balances than years where the asset gains value. Consequently, the losses are somewhat greater than the gains which, over time, leads to a loss of value.\n\nHere's a real-world example of this phenomenon. For a three year period ending October 24, 2017, United States Steel (stock symbol 'X') offers an annualized linear return of 15.9%. Seems like a terrific investment opportunity, doesn't it? 
Would you be surprised to learn that the actual value of the stock fell 18.3% over that three-year period?\n\nWhat we can conclude from these examples is that average linear return, by itself, does not provide us with the information needed for long-term investing.\n\n\n```python\nsymbol = 'X'\n\nend = datetime.datetime(2017, 10, 24)\nstart = end-datetime.timedelta(3*365)\n\n# get stock price data\nS = pdr.data.DataReader(symbol, \"yahoo\", start, end)['Adj Close']\nrlin = S/S.shift(1) - 1\nrlog = np.log(S/S.shift(1))\n\nprint('Three year return :', 100*(S[-1]-S[0])/S[0], '%')\n\n# plot data\nplt.figure(figsize=(10,5))\nplt.subplot(2,1,1)\nS.plot(title=symbol)\nplt.ylabel('Adjusted Close')\nplt.grid()\n\nplt.subplot(2,1,2)\nrlog.plot()\nplt.title('Mean Log Returns (annualized) = {0:.2f}%'.format(100*252*rlog.mean()))\nplt.grid()\nplt.tight_layout()\n```\n\n### Compounded Log Returns\n\nCompounded, or log returns, are defined as\n\n$$r^{log}_{t} = \\log R_t = \\log \\frac{S_{t}}{S_{t-1}}$$\n\nThe log returns have a very useful compounding property for aggregating price changes across time\n\n$$ \\log \\frac{S_{t+k}}{S_{t}} = r^{log}_{t+1} + r^{log}_{t+2} + \\cdots + r^{log}_{t+k}$$\n\nIf the compounded returns are statistically independent and identically distributed, then this property provides a means to aggregate returns and develop statistical price projections.\n\n\n```python\nsymbol = 'TSLA'\n\nend = datetime.datetime.today().date()\nstart = end - datetime.timedelta(3*365)\n\n# get stock price data\nS = pdr.data.DataReader(symbol, \"yahoo\", start, end)['Adj Close']\nrlog = np.log(S/S.shift(1))\n\n# plot data\nplt.figure(figsize=(10,5))\nplt.subplot(2,1,1)\nS.plot(title=symbol)\nplt.ylabel('Adjusted Close')\nplt.grid()\n\nplt.subplot(2,1,2)\nrlog.plot()\nplt.title('Log Returns (daily)')\nplt.grid()\nplt.tight_layout()\n```\n\n### Volatility Drag and the Relationship between Linear and Log Returns\n\nFor long-term financial decision making, it's 
important to understand the relationship between $r_t^{log}$ and $r_t^{lin}$. Algebraically, the relationships are simple.\n\n$$r^{log}_t = \\log \\left(1+r^{lin}_t\\right)$$\n\n$$r^{lin}_t = e^{r^{log}_t} - 1$$\n\nThe linear return $r_t^{lin}$ is the fraction of value that is earned from an asset in a single period. It is a direct measure of earnings. The average value $\\bar{r}^{lin}$ over many periods gives the average fractional earnings per period. If you care about consuming the earnings from an asset and not about growth in value, then $\\bar{r}^{lin}$ is the quantity of interest to you.\n\nLog return $r_t^{log}$ is the rate of growth in value of an asset over a single period. When averaged over many periods, $\\bar{r}^{log}$ measures the compounded rate of growth of value. If you care about the growth in value of an asset, then $\\bar{r}^{log}$ is the quantity of interest to you.\n\nThe compounded rate of growth $\\bar{r}^{log}$ is generally smaller than the average linear return $\\bar{r}^{lin}$ due to the effects of volatility. To see this, consider an asset that has a linear return of -50% in period 1, and +100% in period 2. 
The average linear return would be +25%, but the compounded growth in value would be 0%.\n\nA general formula for the relationship between $\\bar{r}^{log}$ and $\\bar{r}^{lin}$ is derived as follows:\n\n$$\\begin{align*}\n\\bar{r}^{log} & = \\frac{1}{T}\\sum_{t=1}^{T} r_t^{log} \\\\\n& = \\frac{1}{T}\\sum_{t=1}^{T} \\log\\left(1+r_t^{lin}\\right) \\\\\n& = \\frac{1}{T}\\sum_{t=1}^{T} \\left(\\log(1) + r_t^{lin} - \\frac{1}{2} (r_t^{lin})^2 + \\cdots\n\\right) \\\\\n& = \\frac{1}{T}\\sum_{t=1}^{T} r_t^{lin} - \\frac{1}{2}\\frac{1}{T}\\sum_{t=1}^{T} (r_t^{lin})^2 + \\cdots \\\\\n& = \\bar{r}^{lin} - \\frac{1}{2}\\left(\\frac{1}{T}\\sum_{t=1}^{T} (r_t^{lin})^2\\right) + \\cdots \\\\\n& = \\bar{r}^{lin} - \\frac{1}{2}\\left((\\bar{r}^{lin})^2 + \\frac{1}{T}\\sum_{t=1}^{T} (r_t^{lin}-\\bar{r}^{lin})^2\\right) + \\cdots\n\\end{align*}$$\n\nFor typical values of $\\bar{r}^{lin}$ and long horizons $T$, this results in the approximation\n\n$$\\begin{align*}\n\\bar{r}^{log} & \\approx \\bar{r}^{lin} - \\frac{1}{2} \\left(\\sigma^{lin}\\right)^2\n\\end{align*}$$\n\nwhere $\\sigma^{lin}$ is the standard deviation of linear returns, more commonly called the volatility.\n\nThe difference $- \\frac{1}{2} \\left(\\sigma^{lin}\\right)^2$ is the _volatility drag_ imposed on the compounded growth in value of an asset due to volatility in linear returns. This can be significant and a source of confusion for many investors. \n\nIt's indeed possible to have a positive average linear return, but negative compounded growth. To see this, consider a \\$100 investment which earns 20% on even-numbered years, and loses 18% on odd-numbered years. 
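A quick numerical check of this example (a small sketch, not part of the original notebook):

```python
import numpy as np

# Alternating yearly linear returns: +20% on even years, -18% on odd years.
r_lin = np.array([0.20, -0.18])

print(np.mean(r_lin))              # average linear return: 0.01 (+1%)
print(np.mean(np.log(1 + r_lin)))  # average log return: about -0.0081 (-0.81%)
```

The positive average linear return pairs with a negative compounded growth rate, which is the volatility drag at work.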
The average linear return is 1%, and the average log return is -0.81%.\n\n\n\n\n```python\nsymbol = 'TSLA'\n\nend = datetime.datetime.today().date()\nstart = end - datetime.timedelta(3*365)\n\n# get stock price data\nS = pdr.data.DataReader(symbol, \"yahoo\", start, end)['Adj Close']\nrlin = (S - S.shift(1))/S.shift(1)\nrlog = np.log(S/S.shift(1))\n\n# plot data\nplt.figure(figsize=(10,6))\nplt.subplot(3,1,1)\nS.plot(title=symbol)\nplt.ylabel('Adjusted Close')\nplt.grid()\n\nplt.subplot(3,1,2)\nrlin.plot()\nplt.title('Linear Returns (daily)')\nplt.grid()\nplt.tight_layout()\n\nplt.subplot(3,1,3)\nrlog.plot()\nplt.title('Log Returns (daily)')\nplt.grid()\nplt.tight_layout()\n```\n\n\n```python\nprint(\"Mean Linear Return (rlin) = {0:.7f}\".format(rlin.mean()))\nprint(\"Linear Volatility (sigma) = {0:.7f}\".format(rlin.std()))\nprint(\"Volatility Drag -0.5*sigma**2 = {0:.7f}\".format(-0.5*rlin.std()**2))\nprint(\"rlin - 0.5*vol = {0:.7f}\\n\".format(rlin.mean() - 0.5*rlin.std()**2))\n\nprint(\"Mean Log Return = {0:.7f}\".format(rlog.mean()))\n```\n\n Mean Linear Return (rlin) = 0.0034779\n Linear Volatility (sigma) = 0.0422834\n Volatility Drag -0.5*sigma**2 = -0.0008939\n rlin - 0.5*vol = 0.0025840\n \n Mean Log Return = 0.0025842\n\n\n\n```python\nsymbols = ['AAPL','MSFT','F','XOM','GE','X','TSLA','NIO']\n\nend = datetime.datetime.today().date()\nstart = end - datetime.timedelta(3*365)\n\nrlin = []\nrlog = []\nsigma = []\n\nfor symbol in symbols:\n\n # get stock price data\n S = pdr.data.DataReader(symbol, \"yahoo\", start, end)['Adj Close']\n r = (S - S.shift(1))/S.shift(1)\n rlin.append(r.mean()) \n rlog.append((np.log(S/S.shift(1))).mean())\n sigma.append(r.std())\n \n```\n\n\n```python\nimport seaborn as sns\nN = len(symbols)\nidx = np.arange(N)\nwidth = 0.2\n\nplt.figure(figsize=(12, 6))\n\np0 = plt.bar(2*idx - 1.25*width, rlin, width)\np1 = plt.bar(2*idx, -0.5*np.array(sigma)**2, width, bottom=rlin)\np2 = plt.bar(2*idx + 1.25*width, rlog, width)\n\nfor k in 
range(0,N):\n plt.plot([2*k - 1.75*width, 2*k + 0.5*width], [rlin[k], rlin[k]], 'k', lw=1)\n plt.plot([2*k - 0.5*width, 2*k + 1.75*width], [rlog[k], rlog[k]], 'k', lw=1)\n \nplt.xticks(2*idx, symbols)\nplt.legend((p0[0], p1[0], p2[0]), ('rlin', '0.5*sigma**2', 'rlog'))\nplt.title('Components of Linear Return')\nplt.ylim(1.1*np.array(plt.ylim()))\nplt.grid()\n```\n\n\n```python\n\n```\n\n\n< [Risk and Diversification](http://nbviewer.jupyter.org/github/jckantor/CBE40455/blob/master/notebooks/07.00-Risk-and-Diversification.ipynb) | [Contents](toc.ipynb) | [Geometric Brownian Motion](http://nbviewer.jupyter.org/github/jckantor/CBE40455/blob/master/notebooks/07.02-Geometric-Brownian-Motion.ipynb) >

\n", "meta": {"hexsha": "88561eb6131635b83c6f1415d8e2e17ea1851f99", "size": 511915, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/07.01-Measuring-Return.ipynb", "max_stars_repo_name": "jckantor/CBE40455-2020", "max_stars_repo_head_hexsha": "318bd71b7259c6f2f810cc55f268e2477c6b8cfd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2020-08-18T13:22:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-31T23:30:56.000Z", "max_issues_repo_path": "notebooks/07.01-Measuring-Return.ipynb", "max_issues_repo_name": "jckantor/CBE40455-2020", "max_issues_repo_head_hexsha": "318bd71b7259c6f2f810cc55f268e2477c6b8cfd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/07.01-Measuring-Return.ipynb", "max_forks_repo_name": "jckantor/CBE40455-2020", "max_forks_repo_head_hexsha": "318bd71b7259c6f2f810cc55f268e2477c6b8cfd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-11-07T21:36:12.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-11T20:38:20.000Z", "avg_line_length": 511915.0, "max_line_length": 511915, "alphanum_fraction": 0.9481886641, "converted": true, "num_tokens": 6158, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.45713671682749485, "lm_q2_score": 0.32766829425520916, "lm_q1q2_score": 0.14978920824429182}} {"text": "```python\n%%HTML\n\n\n```\n\n\n\n\n\n\n\n```python\n%autosave 0\n%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.display import display\nimport ipywidgets as widgets\nfrom matplotlib import animation\nfrom functools import partial\nslider_layout = widgets.Layout(width='600px', height='20px')\nslider_style = {'description_width': 'initial'}\nIntSlider_nice = partial(widgets.IntSlider, style=slider_style, layout=slider_layout, continuous_update=False)\nFloatSlider_nice = partial(widgets.FloatSlider, style=slider_style, layout=slider_layout, continuous_update=False)\nSelSlider_nice = partial(widgets.SelectionSlider, style=slider_style, layout=slider_layout, continuous_update=False)\n```\n\n\n\n Autosave disabled\n\n\n# Supervised Learning\n\nA scheme in which we seek to learn a mapping or function \n\n$$\nf_{\\theta}: \\mathcal{X} \\rightarrow \\mathcal{Y},\n$$\n\nwhere $\\mathcal{X}$ is the domain of our data (input) and $\\mathcal{Y}$ is a target (output)\n\n\nWe train our model from a set of $N$ examples:\n\n$$\n\\{(x_1, y_1), (x_2, y_2), \\ldots, (x_N, y_N)\\},\n$$\n\nwhere each example is a tuple formed by data $x_i \\in \\mathcal{X}$ and a target $y_i \\in \\mathcal{Y}$\n\nIf the target variable is\n- continuous: we speak of a regression or function-approximation problem\n- categorical: we speak of a classification problem\n\nThe nature of the data depends on the problem\n\nMost commonly, the data $x_i$ are structured as arrays of $M$ components\n\nWe call the components attributes or *features*\n\n\n### Learning\n\nThe vector $\\theta$ corresponds to the **parameters** of the model\n\n> Learning or fitting the model corresponds to finding the \"optimal\" value of $\\theta$ \n\nWe use a **loss/cost function** $L(\\theta)$ to measure the error of our model\n\n> We **minimize** the cost function to find the best $\\theta$\n\n\nWhen we speak of an \"optimal\" $\\theta$, we mean optimal in the sense of a particular cost function\n\n\n### Optimization\n\nWe want to solve the following problem\n$$\n\\min_\\theta L(\\theta)\n$$\n\nOne option is to evaluate $L()$ over the entire space of possible $\\theta$: brute force\n\nBut in general this is not computationally feasible\n\nWe can use **optimization techniques** to find the best $\\theta$\n\nIf $L(\\theta)$ is continuous and differentiable we can write\n\n$$\n\\nabla_\\theta L(\\theta) = \\vec 0\n$$\n\nand try to solve for $\\theta~$\n\n## Linear regression\n\nA model for learning a mapping from one or more continuous variables (attributes) to a continuous variable (target)\n\nIn a scheme with $M$ attributes and $N$ examples we have\n\n$$\ny_i = f_\\theta(\\vec x_i) = \\vec w^T \\vec x_i + b = \\sum_{j=1}^M w_j x_{ij} + b, \n$$\n\nwhere $f_\\theta$ is a parametric model (a hyperplane) with $M+1$ parameters \n\n$$\n\\theta= \\begin{pmatrix} b \\\\ w_1 \\\\ w_2 \\\\ \\vdots \\\\ w_M \\end{pmatrix}\n$$\n\nWe can also write the system in matrix form as\n\n$$\nY = X \\theta\n$$\n\nwhere $Y= \\begin{pmatrix} y_1 \\\\ y_2 \\\\ \\vdots \\\\y_N\\end{pmatrix} \\in \\mathbb{R}^N$, $X = \\begin{pmatrix} 1 & x_{11} & x_{12}& \\ldots& x_{1M} \\\\ \\vdots & \\vdots & \\vdots& \\ddots& \\vdots \\\\ 1 & x_{N1} & x_{N2}& \\ldots& x_{NM} \\\\ \\end{pmatrix} \\in \\mathbb{R}^{N\\times (M+1)}$ and $\\theta \\in \\mathbb{R}^{M+1}$\n\nA reasonable cost function is\n\n$$\nL(\\theta) = \\frac{1}{2} (Y - X\\theta)^T (Y - X\\theta)\n$$\n\nwhich corresponds to the sum of squared errors \n\nThen, differentiating and setting equal to zero, we obtain\n\n$$\n\\nabla_\\theta L(\\theta) = -X^T(Y-X\\theta) = 0,\n$$\n\nand solving\n\n$$\n\\hat \\theta = (X^T X)^{-1} X^T Y\n$$\n\nas long as we can invert $X^T X$\n\n> This is known as the least-squares solution\n\n#### Basis functions\n\nWe can generalize the linear regressor by applying transformations to $X$\n\nFor example, a polynomial regression of degree $M$ would be\n\n$$\ny_i = f_\\theta(x_i) = \\sum_{j=1}^M w_j x_{i}^j + b, \n$$\n\nand its solution would be\n\n$$\n\\hat \\theta = (\\Phi^T \\Phi)^{-1} \\Phi^T Y,\n$$\n\nwhere $\\Phi = \\begin{pmatrix} 1 & x_1 & x_1^2& \\ldots& x_1^M \\\\ \\vdots & \\vdots & \\vdots& \\ddots& \\vdots \\\\ 1 & x_N & x_N^2& \\ldots& x_N^M \\\\ \\end{pmatrix}$\n\n\n```python\nplt.close('all'); fig, ax = plt.subplots(figsize=(6, 4), tight_layout=True)\npoly_basis = lambda x,N : np.vstack([x**k for k in range(N)]).T\ntheta = [10, -2, -0.3, 0.1]\nx = np.linspace(-5, 6, num=21); \nX = poly_basis(x, len(theta))\ny = np.dot(X, theta)\n\nrseed, sigma = 0, 1.\nnp.random.seed(rseed);\nY = y + sigma*np.random.randn(len(x))\nP = np.random.permutation(len(x))\ntrain_idx, valid_idx = P[:len(x)//2], P[len(x)//2:]\n\ndef update_plot(ax, N):\n ax.cla();\n Phi = poly_basis(x, N)\n theta_hat = np.linalg.lstsq(Phi[train_idx, :], Y[train_idx], rcond=None)[0]\n ax.plot(x, y, 'g-', linewidth=2, label='True model', alpha=0.6, zorder=-100)\n ax.scatter(x[train_idx], Y[train_idx], s=50, label='Training')\n ax.scatter(x[valid_idx], Y[valid_idx], s=50, label='Validation')\n ax.vlines(x[train_idx], np.dot(Phi[train_idx, :], theta_hat), Y[train_idx]) \n ax.vlines(x[valid_idx], np.dot(Phi[valid_idx, :], theta_hat), Y[valid_idx]) \n x_plot = np.linspace(-5, 6, num=100);\n ax.plot(x_plot, np.dot(poly_basis(x_plot, N), theta_hat), 'k-', linewidth=2, label='Model')\n ax.set_ylim([-5, 15]); plt.legend()\n \nwidgets.interact(update_plot, ax=widgets.fixed(ax), N=IntSlider_nice(min=1, max=11));\n```\n\n\n \n\n\n\n\n\n\n\n interactive(children=(IntSlider(value=1, continuous_update=False, description='N', layout=Layout(height='20px'\u2026\n\n\n# Complexity and Overfitting\n\nIn the previous example we saw that more flexible models can be obtained by increasing the degree of the polynomial \n\n> Increasing the number of parameters (degrees of freedom) makes the model more flexible and more complex\n\nIf the flexibility is excessive we fit the data with zero error\n\nThis is not good, since we are learning the data \"by heart\" and fitting the noise\n\n> Overfitting: learning the training data perfectly\n\nAn overfitted model predicts very poorly on data \"it has not seen\"\n\n> Overfitting is inversely proportional to the capacity to generalize\n\nFor this reason we use validation sets\n\n\n\nFigure: https://www.d2l.ai/chapter_multilayer-perceptrons/underfit-overfit.html\n\n# Representativeness and Validation\n\nThe first step in training our model is to obtain data (duh)\n\nIt is critical that the data we use adequately **represent** the problem we want to solve\n\nGiven a data space (black circle) and samples (blue dots), what can you say about the following cases?\n\n\n```python\nfrom matplotlib.patches import Circle\nfig, ax = plt.subplots(1, 2, figsize=(7, 3))\nnp.random.seed(19)\nfor ax_ in ax:\n ax_.axis('off') \n ax_.set_xlim([-3, 3])\n ax_.set_ylim([-3, 3])\nr = 2.5*np.random.rand(100); t = np.random.rand(100);\nax[0].scatter(r*np.cos(2.0*np.pi*t), r*np.sin(2.0*np.pi*t))\np = Circle((0, 0), 3, fill=False, ec='k')\nax[0].add_artist(p)\nr = 2.5*np.random.rand(100); t = np.random.rand(100);\nax[1].scatter(r*np.cos(t), r*np.sin(t))\np = Circle((0, 0), 3, fill=False, ec='k')\nax[1].add_artist(p); \n```\n\n\n \n\n\n\n\n\n\nWhenever we can control the sampling process we must take care to avoid **biases**\n\nAssuming our dataset is representative, the next step is to **train**\n\nTo combat overfitting we can use **validation strategies**\n\nThey consist of splitting the dataset into two or more subsets\n- Holdout: training/validation/test\n- K-fold cross-validation and leave-one-out (N-fold) cross-validation\n- Stratified/balanced versions of the above\n\nSo that our training and validation sets remain representative of the whole, we **select them at random**\n\nWe measure $L(\\theta)$ on the training and validation sets\n\n> We optimize our model by minimizing the training error\n\n> We select the parameters and hyperparameters that give the minimum validation error\n\n> We compare different families of models using the test error\n\nIn the previous example:\n\n\n```python\nfig, ax = plt.subplots(figsize=(6, 4), tight_layout=True)\nN_values = np.arange(1, 11)\nmse = np.zeros(shape=(len(N_values), 2))\nfor i, N in enumerate(N_values):\n Phi = poly_basis(x, N)\n theta_hat = np.linalg.lstsq(Phi[train_idx, :], Y[train_idx], rcond=None)[0]\n mse[i, 0] = np.mean(np.power(Y[train_idx] - np.dot(Phi[train_idx, :], theta_hat), 2))\n mse[i, 1] = np.mean(np.power(Y[valid_idx] - np.dot(Phi[valid_idx, :], theta_hat), 2))\nax.plot(N_values, mse[:, 0], label='Training')\nax.plot(N_values, mse[:, 1], label='Validation')\nidx_best = np.argmin(mse[:, 1])\nax.scatter(N_values[idx_best], mse[idx_best, 1], c='k', s=100)\nplt.legend()\nax.set_ylim([1e-4, 1e+4])\nax.set_yscale('log')\nax.set_xlabel('Polynomial degree')\nax.set_ylabel('Loss');\n```\n\n\n \n\n\n\n\n\n\nIn summary\n\n- Low training error and low validation error: **Ideal**\n\n- Low training error and high validation error: **Overfitted model**\n\n- High training error and high validation error: consider another model and/or check your code\n\n\n## Artificial neuron or logistic regressor\n\nA model for learning a mapping from one or more continuous variables (attributes) to a binary variable (target)\n\n$$\ny_i \\leftarrow f_\\theta(\\vec x_i) = \\mathcal{S} \\left(\\theta_0 + \\sum_{j=1}^M \\theta_j x_{ij}\\right) \n$$\n\nwhere $\\mathcal{S}(z) = \\frac{1}{1+\\exp(-z)} \\in [0, 1]$ is known as the logistic or sigmoid function\n\n> A binary (two-class) classification model\n\nWe can interpret the output of the classifier as a probability\n\n\n\n\n```python\n\n```\n\n\n```python\ndata = np.concatenate((np.random.randn(50, 2), 1.5 + np.random.randn(50, 2)), axis=0)\nlabel = np.array([0]*50 + [1]*50)\nfig, ax = plt.subplots(1, figsize=(9, 4))\nfrom matplotlib import cm\n#fig.colorbar(cm.ScalarMappable(cmap=plt.cm.RdBu_r), ax=ax)\nx_min, x_max = data[:, 0].min() - 0.5, data[:, 0].max() + 0.5\ny_min, y_max = data[:, 1].min() - 0.5, data[:, 1].max() + 0.5\nxx, yy = np.meshgrid(np.arange(x_min, x_max, 0.05), np.arange(y_min, y_max, 0.05))\n\ndef sigmoid(X, w, b):\n Z = np.dot(X, w) + b\n return 1./(1 + np.exp(-Z))\n\ndef update_plot(w1, w2, b): \n ax.cla()\n ax.scatter(data[:50, 0], data[:50, 1], c='k', s=20)\n ax.scatter(data[50:, 0], data[50:, 1], c='k', s=20, marker='x')\n ax.contourf(xx, yy, sigmoid(np.c_[xx.ravel(), yy.ravel()], np.array([w1, w2]), b).reshape(xx.shape), \n cmap=plt.cm.RdBu_r, alpha=0.75)\n\n\nwidgets.interact(update_plot, \n w1=FloatSlider_nice(min=-10, max=10),\n w2=FloatSlider_nice(min=-10, max=10),\n b=FloatSlider_nice(min=-10, max=10));\n```\n\n\n \n\n\n\n\n\n\n\n interactive(children=(FloatSlider(value=0.0, continuous_update=False, description='w1', layout=Layout(height='\u2026\n\n\nWhat cost function is appropriate in this case?\n\nTypically the **binary cross-entropy** is used\n\n$$\nL(\\theta) = \\sum_{i=1}^N -y_i \\log( f_\\theta(\\vec x_i) ) - (1-y_i) \\log(1 - f_\\theta(\\vec x_i))\n$$\n\nWhy?\n\nLet's compute its gradient \n$$\n\\begin{align}\n\\frac{d}{d \\theta_j} L(\\theta) &= \\sum_{i=1}^N \\left(-\\frac{y_i}{f_\\theta(\\vec x_i)} + \\frac{1-y_i}{1 - f_\\theta(\\vec x_i)}\\right) \\frac{d f_\\theta(\\vec x_i)}{d\\theta_j} \\nonumber \\\\\n&= \\sum_{i=1}^N \\left(f_\\theta(\\vec x_i) - y_i\\right) x_{ij}\n\\end{align}\n$$\n\nBecause of the nonlinearity $f_\\theta(\\vec x_i)$ we can no longer solve for $\\theta$ analytically \n\n> We can use iterative optimization methods\n\n## Optimization: Newton's method\n\nLet the current value of the parameter vector be $\\theta_t$\n\nWe want to find the best \"next value\" according to our objective function\n$$\n\\theta_{t+1} = \\theta_t + \\Delta \\theta\n$$\nConsider the second-order Taylor approximation of $f$\n$$\nf(\\theta_{t} + \\Delta \\theta) \\approx f(\\theta_t) + \\nabla f (\\theta_t) \\Delta \\theta + \\frac{1}{2} \\Delta \\theta^T H_f (\\theta_t) \\Delta \\theta \n$$\nDifferentiating with respect to $\\Delta \\theta$ and setting equal to zero we have\n$$\n\\begin{align}\n\\nabla f (\\theta_t) + H_f (\\theta_t) \\Delta \\theta &= 0 \\nonumber \\\\\n\\Delta \\theta &= - [H_f (\\theta_t)]^{-1}\\nabla f (\\theta_t) \\nonumber \\\\\n\\theta_{t+1} &= \\theta_{t} - [H_f (\\theta_t)]^{-1}\\nabla f (\\theta_t) \\nonumber \\\\\n\\end{align}\n$$\n\n- We obtain an iterative rule in terms of the **gradient** and the **Hessian**\n- The solution depends on $\\theta_0$\n- We \"assume\" that the second-order approximation is \"good\"\n- If our model has $N$ parameters the Hessian is $N\\times N$; what happens if $N$ is large?\n\n## Optimization: Gradient descent\n\nIf the Hessian is prohibitive we can use a first-order approximation\n\nThe most classic method is **gradient descent**\n$$\n\\theta_{t+1} = \\theta_{t} - \\eta \\nabla f (\\theta_t)\n$$\n\nwhere we have replaced the Hessian by a constant $\\eta$ called the \"step size\" or \"learning rate\"\n\n- How does the optimization change for different values of $\\eta$?\n- What happens when the error surface has
m\u00ednimos locales?\n\n\n```python\nplt.close('all'); fig, ax = plt.subplots(2, figsize=(7, 4), tight_layout=True, sharex=True)\nx = np.linspace(-4, 6, num=100)\nf = lambda theta : 5+ (theta-1.)**2 #+ 10*np.sin(theta)\ndf = lambda theta : 2*(theta -1.) #+ 10*np.cos(theta)\ndf2 = lambda theta : 2 #- 10*np.cos(theta)\n\nt = 10*np.random.rand(10) - 4.\nax[0].plot(x, f(x))\nsc = ax[0].scatter(t, f(t), s=100)\n\nax[1].set_xlabel(r'$\\theta$')\nax[0].set_ylabel(r'$f(\\theta)$')\nax[1].plot(x, -df(x))\nax[1].set_ylabel(r'$-\\nabla f(\\theta)$')\neta = 0.01\n\ndef update(n):\n t = sc.get_offsets()[:, 0]\n t -= eta*df(t)\n #t -= df(t)/(df2(t)+10)\n sc.set_offsets(np.c_[t, f(t)])\n \nanim = animation.FuncAnimation(fig, update, frames=100, interval=200, repeat=False, blit=True)\n```\n\n\n \n\n\n\n\n\n\n# Neurona artificial en [PyTorch](https://pytorch.org/)\n\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset, Subset \n\ntorch_set = TensorDataset(torch.from_numpy(data.astype('float32')), \n torch.from_numpy(label.astype('float32')))\n\nimport sklearn.model_selection\ntrain_idx, test_idx = next(sklearn.model_selection.ShuffleSplit(train_size=0.6).split(data, label))\ntorch_train_loader = DataLoader(Subset(torch_set, train_idx), shuffle=True, batch_size=16)\ntorch_valid_loader = DataLoader(Subset(torch_set, test_idx), shuffle=False, batch_size=256)\n```\n\n /home/hackerter/.local/lib/python3.6/site-packages/sklearn/model_selection/_split.py:1788: FutureWarning: From version 0.21, test_size will always complement train_size unless both are specified.\n FutureWarning)\n\n\n\n```python\nclass Neurona(torch.nn.Module):\n\n def __init__(self): \n \n super(Neurona, self).__init__()\n \n self.fc = torch.nn.Linear(in_features=2, out_features=1, bias=True)\n self.activation = torch.nn.Sigmoid()\n \n def forward(self, x):\n return self.activation(self.fc(x))\n```\n\n\n```python\nfig, ax = plt.subplots(1, 2, figsize=(8, 3.5), tight_layout=True)\n\nnet = 
Neurona()\nn_epochs = 1000\noptimizer = torch.optim.SGD(net.parameters(), lr=1e-2)\ncriterion = torch.nn.BCELoss(reduction='sum') \nrunning_loss = np.zeros(shape=(n_epochs, 2))\n\ndef train_one_epoch(net):\n train_loss, valid_loss = 0.0, 0.0\n for sample_data, sample_label in torch_train_loader:\n output = net(sample_data)\n optimizer.zero_grad() \n loss = criterion(output, sample_label) \n train_loss += loss.item()\n loss.backward()\n optimizer.step()\n for sample_data, sample_label in torch_valid_loader:\n output = net(sample_data)\n loss = criterion(output, sample_label) \n valid_loss += loss.item()\n return train_loss/torch_train_loader.dataset.__len__(), valid_loss/torch_valid_loader.dataset.__len__()\n \ndef update_plot(k):\n global net, running_loss\n [ax_.cla() for ax_ in ax]\n running_loss[k, 0], running_loss[k, 1] = train_one_epoch(net)\n Z = net.forward(torch.from_numpy(np.c_[xx.ravel(), yy.ravel()].astype('float32')))\n Z = Z.detach().numpy().reshape(xx.shape)\n ax[0].contourf(xx, yy, Z, cmap=plt.cm.RdBu_r, alpha=1., vmin=0, vmax=1)\n for i, (marker, name) in enumerate(zip(['o', 'x'], ['Train', 'Test'])):\n ax[0].scatter(data[label==i, 0], data[label==i, 1], color='k', s=10, marker=marker, alpha=0.5)\n ax[1].plot(np.arange(0, k+1, step=1), running_loss[:k+1, i], '-', label=name+\" cost\")\n plt.legend(); ax[1].grid()\n\n#update_plot(0)\nanim = animation.FuncAnimation(fig, update_plot, frames=n_epochs, \n interval=10, repeat=False, blit=False)\n```\n\n\n \n\n\n\n\n\n\n# M\u00e9tricas: Evaluando un clasificador binario\n\nLa salida de este clasificador es un valor en el rango $[0, 1]$\n\nPara tomar un decisi\u00f3n binaria se debe seleccionar un umbral $\\mathcal{T}$ tal que\n\n$$\nd_i = \n\\begin{cases} \n0, & \\text{si } f_\\theta(\\vec x_i) < \\mathcal{T} \\\\ \n1, & \\text{si } f_\\theta(\\vec x_i) \\geq \\mathcal{T}\n\\end{cases}\n$$\n\nUna vez seleccionado el umbral se puede contar la cantidad de \n- **True positives** (TP): Era clase (1) y lo 
clasifico como (1)\n- **True negative** (TN): Era clase (0) y lo clasifico como (0)\n- **False positives** (FP): Era clase (0) y lo clasifico como (1): Error tipo I\n- **False negative** (FN): Era clase (1) y lo clasifico como (0): Error tipo II\n\nA partir de estas m\u00e9tricas se construye la **tabla de confusi\u00f3n** del clasificador\n\n|Clasificado como/En realidad era|Positivo|Negativo|\n|---|---|---|\n|Positivo:|TP | FP |\n|Negativo:| FN | TN |\n\nEn base a estas m\u00e9tricas se construyen otras \n$$\n\\text{Recall} = \\frac{TP}{TP + FN}\n$$\ntambi\u00e9n conocida como la **Tasa de verdaderos positivos** (TPR) o sensitividad\n\n> TPR: La proporci\u00f3n de positivos correctamente clasificados respecto al total de positivos\n\n$$\n\\text{FPR} = \\frac{FP}{TN + FP} = 1 - \\frac{TN}{TN + FP}\n$$\n\nla **tasa de falsos positivos** (FPR) tambi\u00e9n representada como \"1 - especificidad\"\n\n\n> FPR: La proporci\u00f3n de negativos incorrectamente clasificados respecto al total de negativos\n\n$$\n\\text{Precision} = \\frac{TP}{TP + FP}\n$$\n\ntambi\u00e9n conocido como pureza\n\n> Precision: La proporci\u00f3n de positivos correctamente clasificados respecto a todos los ejemplos clasificados como positivo\n\n$$\n\\text{Accuracy} = \\frac{TP+TN}{TP + FP + FN+ TN}\n$$\n\n> Accuracy: La proporci\u00f3n de ejemplos correctamente clasificados\n\n$$\n\\text{f1-score} = \\frac{2*\\text{Recall}*\\text{Precision}}{\\text{Recall} + \\text{Precision}}\n$$\n\n> f1-score: Media arm\u00f3nica entre Recall y Precision asumiendo igual ponderaci\u00f3n\n\nSi las clases son desbalanceadas entonces f1-score es m\u00e1s aconsejable que accuracy\n\n\n```python\nnet = Neurona()\nprobability = net(torch_set.tensors[0]).detach().numpy()\nimport sklearn.metrics\n\nprint(\"Matriz de confusi\u00f3n:\")\nprint(sklearn.metrics.confusion_matrix(y_true=torch_set.tensors[1].numpy().astype(int), \n y_pred=probability[:, 0] > 
0.5))\n\nprint(sklearn.metrics.classification_report(y_true=torch_set.tensors[1].numpy().astype(int), \n y_pred=probability[:, 0] > 0.5))\n```\n\n Matriz de confusi\u00f3n:\n [[40 10]\n [ 9 41]]\n precision recall f1-score support\n \n 0 0.82 0.80 0.81 50\n 1 0.80 0.82 0.81 50\n \n micro avg 0.81 0.81 0.81 100\n macro avg 0.81 0.81 0.81 100\n weighted avg 0.81 0.81 0.81 100\n \n\n\nNotar que a distintos umbrales $\\mathcal{T}$ se obtienen distintas tablas de confusi\u00f3n\n\nSe midemos estas m\u00e9tricas usando distintos umbrales podemos construir una curva de desempe\u00f1o\n\nTipicamente se usan\n- Curva ROC: TPR vs FPR\n- Curva Precision vs Recall\n\n\n```python\nfpr, tpr, th = sklearn.metrics.roc_curve(y_true=torch_set.tensors[1].numpy().astype(int), \n y_score=probability[:, 0])\n\nfig, ax = plt.subplots(figsize=(7, 4))\nax.plot(fpr, tpr);\nax.set_xlabel('Tasa de Falsos positivos')\nax.set_ylabel('Tasa de Verdaderos Positivos (Recall)')\nax.set_title('AUC: %f' %sklearn.metrics.auc(fpr, tpr))\n```\n\n\n \n\n\n\n\n\n\n\n\n\n Text(0.5, 1.0, 'AUC: 0.866000')\n\n\n\n\n```python\nprec, rec, th = sklearn.metrics.precision_recall_curve(y_true=torch_set.tensors[1].numpy().astype(int), \n probas_pred=probability[:, 0])\n\nfig, ax = plt.subplots(figsize=(7, 4))\nax.plot(rec, prec, '-');\nax.set_xlabel('Recall')\nax.set_ylabel('Precision')\n```\n\n\n \n\n\n\n\n\n\n\n\n\n Text(0, 0.5, 'Precision')\n\n\n\n# Ojo con:\n\n\n#### La distribuci\u00f3n de los datos donde se aplicar\u00e1 el clasificador es distinta a la que usaste para entrenar/validar\n\nActualiza tus conjuntos de datos para que sean representativos!\n\n#### Usa los subconjuntos adecuadamente\n\nAjusta los par\u00e1metros con el set de validaci\u00f3n\n\nCompara distintas familias de modelos con el set de prueba\n\n\n#### La m\u00e9trica que usas no es la adecuada para el problema\n\nSi el problema tiene clases desbalanceadas el *accuracy* puede ser muy alto, contrasta usando m\u00e9tricas sencibles al 
desbalance (e.g. *f1-score*)\n\n#### Mi modelo se sobreajusta de inmediato\n\nPrueba disminuyendo la complejidad/arquitectura del modelo o a\u00f1adiendo **regularizaci\u00f3n**\n\nEsto tambi\u00e9n puede ser se\u00f1al de que necesitas m\u00e1s ejemplos para entrenar\n\nSe pueden usar **T\u00e9cnicas de aumentaci\u00f3n** de datos\n\n#### Mi modelo no est\u00e1 aprendiendo\n\nSi estas seguro que no hay bugs prueba aumentando la complejidad del modelo\n\n#### Estudia los errores de tu modelo para mejorarlo\n\nAnaliza los datos mal clasificados y busca patrones\n\nRevisa que las etiquetas est\u00e9n correctas\n\nRevisa que los atributos est\u00e9n adecuadamente calculados\n\nPropon nuevos atributos que ayuden a clasificador los ejemplos dif\u00edciles\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "8b79462057a0638b7c66775ab4ae11c5c166f64c", "size": 701882, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "unidad1/1_fundamentos.ipynb", "max_stars_repo_name": "hackerter/INFO267", "max_stars_repo_head_hexsha": "0382701d4be05fe3533707ebf114214c90e4b3c1", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "unidad1/1_fundamentos.ipynb", "max_issues_repo_name": "hackerter/INFO267", "max_issues_repo_head_hexsha": "0382701d4be05fe3533707ebf114214c90e4b3c1", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "unidad1/1_fundamentos.ipynb", "max_forks_repo_name": "hackerter/INFO267", "max_forks_repo_head_hexsha": "0382701d4be05fe3533707ebf114214c90e4b3c1", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 96.5184268427, "max_line_length": 114172, 
"alphanum_fraction": 0.7568195224, "converted": true, "num_tokens": 6842, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.47657965106367595, "lm_q2_score": 0.3140505514119072, "lm_q1q2_score": 0.14967010220824176}} {"text": "```python\n\"\"\"Tutorial: Introduction to the Spin-Orbital Formulation of Post-HF Methods\"\"\"\n\n__author__ = \"Adam S. Abbott\"\n__credit__ = [\"Adam S. Abbott\", \"Justin M. Turney\"]\n\n__copyright__ = \"(c) 2014-2017, The Psi4NumPy Developers\"\n__license__ = \"BSD-3-Clause\"\n__date__ = \"2017-05-23\"\n```\n\n# Introduction to the Spin Orbital Formulation of Post-HF Methods\n## Notation\n\nPost-HF methods such as MPn, coupled cluster theory, and configuration interaction improve the accuracy of our Hartree-Fock wavefunction by including terms corresponding to excitations of electrons from occupied (i, j, k..) to virtual (a, b, c...) orbitals. This recovers some of the dynamic electron correlation previously neglected by Hartree-Fock.\n\nIt is convenient to introduce new notation to succinctly express the complex mathematical expressions encountered in these methods. This tutorial will cover this notation and apply it to a spin orbital formulation of conventional MP2. This code will also serve as a starting template for other tutorials which use a spin-orbital formulation, such as CEPA0, CCD, CIS, and OMP2. \n\n\n\n### I. Physicist's Notation for Two-Electron Integrals\nRecall from previous tutorials the form for the two-electron integrals over spin orbitals ($\\chi$) and spatial orbitals ($\\phi$):\n\\begin{equation}\n [pq|rs] = [\\chi_p\\chi_q|\\chi_r\\chi_s] = \\int dx_{1}dx_2 \\space \\chi^*_p(x_1)\\chi_q(x_1)\\frac{1}{r_{12}}\\chi^*_r(x_2)\\chi_s(x_2) \\\\\n(pq|rs) = (\\phi_p\\phi_q|\\phi_r\\phi_s) = \\int dx_{1}dx_2 \\space \\phi^*_p(x_1)\\phi_q(x_1)\\frac{1}{r_{12}}\\phi^*_r(x_2)\\phi_s(x_2)\n\\end{equation}\n\nAnother form of the spin orbital two electron integrals is known as physicist's notation. 
By grouping the complex conjugates on the left side, we may express them in Dirac ("bra-ket") notation:
\begin{equation}
\langle pq \mid rs \rangle = \langle \chi_p \chi_q \mid \chi_r \chi_s \rangle = \int dx_{1}dx_2 \space \chi^*_p(x_1)\chi^*_q(x_2)\frac{1} {r_{12}}\chi_r(x_1)\chi_s(x_2)
\end{equation}

The antisymmetric form of the two-electron integrals in physicist's notation is given by

\begin{equation}
\langle pq \mid\mid rs \rangle = \langle pq \mid rs \rangle - \langle pq \mid sr \rangle
\end{equation}


### II. Kutzelnigg-Mukherjee Tensor Notation and the Einstein Summation Convention

Kutzelnigg-Mukherjee (KM) notation provides an easy way to express and manipulate the tensors (two-electron integrals, $t$-amplitudes, CI coefficients, etc.) encountered in post-HF methods. Indices which appear in the bra are expressed as subscripts, and indices which appear in the ket are expressed as superscripts:
\begin{equation}
g_{pq}^{rs} = \langle pq \mid rs \rangle \quad \quad \quad \overline{g}_{pq}^{rs} = \langle pq \mid\mid rs \rangle
\end{equation}

The upper and lower indices allow the use of the Einstein summation convention. Under this convention, whenever an index appears in both the upper and lower position in a product, that index is implicitly summed over.
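NumPy's `einsum` mirrors this convention directly: any index repeated across the operands in the subscript string is summed over. As a quick illustrative sketch (the random 2×2×2×2 tensor here is a stand-in for $\overline{g}$, not real integrals), the implicit sum is identical to the explicit quadruple loop:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal((2, 2, 2, 2))  # stand-in for g_pq^rs

# Einstein summation: a, b, i, j each appear twice, so all are summed.
contracted = np.einsum('abij, ijab ->', g, g)

# The same quantity written as explicit loops over every index:
explicit = sum(g[a, b, i, j] * g[i, j, a, b]
               for a in range(2) for b in range(2)
               for i in range(2) for j in range(2))

assert np.isclose(contracted, explicit)
```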
As an example, consider the MP2 energy expression:

\begin{equation}
E_{MP2} = \frac{1}{4} \sum_{i a j b} \frac{ [ia \mid\mid jb] [ia \mid\mid jb]} {\epsilon_i - \epsilon_a + \epsilon_j - \epsilon_b}
\end{equation}
Converting to physicist's notation:

\begin{equation}
E_{MP2} = \frac{1}{4} \sum_{i j a b} \frac{ \langle ij \mid\mid ab \rangle \langle ij \mid \mid ab \rangle} {\epsilon_i - \epsilon_a + \epsilon_j - \epsilon_b}
\end{equation}
KM notation, taking advantage of the permutational symmetry of $g$:
\begin{equation}
E_{MP2} = \frac{1}{4} \overline{g}_{ab}^{ij} \overline{g}_{ij}^{ab} (\mathcal{E}_{ab}^{ij})^{-1}
\end{equation}

where $\mathcal{E}_{ab}^{ij}$ is the sum of orbital energies $\epsilon_i - \epsilon_a + \epsilon_j - \epsilon_b$. Upon collecting every possible orbital energy sum into a 4-dimensional tensor, this equation can be solved with a simple tensor contraction, as done in our MP2 tutorial.

The notation simplification here is minor, but the value of this notation becomes obvious with more complicated expressions encountered in later tutorials such as CCD. It is also worth noting that KM notation is deeply intertwined with the second quantization and diagrammatic expressions of methods in advanced electronic structure theory. For our purposes, we will shy away from the details and simply use the notation to write out readily-programmable expressions.


### III. Coding Spin Orbital Methods Example: MP2

In the MP2 tutorial, we used spatial orbitals in our two-electron integral tensor, and this appreciably decreased the computational cost. However, this code will only work when using an RHF reference wavefunction. We may generalize our MP2 code (and other post-HF methods) to work with any reference by expressing our integrals, MO coefficients, and orbital energies obtained from Hartree-Fock in a spin orbital formulation.
As an example, we will code spin orbital MP2, and this will serve as a foundation for later tutorials.\n\n\n\n### Implementation of Spin Orbital MP2\nAs usual, we import Psi4 and NumPy, and set the appropriate options. However, in this code, we will be free to choose open-shell molecules which require UHF or ROHF references. We will stick to RHF and water for now.\n\n\n```python\n# ==> Import statements & Global Options <==\nimport psi4\nimport numpy as np\n\npsi4.set_memory(int(2e9))\nnumpy_memory = 2\npsi4.core.set_output_file('output.dat', False)\n```\n\n\n```python\n# ==> Molecule & Psi4 Options Definitions <==\nmol = psi4.geometry(\"\"\"\n0 1\nO\nH 1 1.1\nH 1 1.1 2 104\nsymmetry c1\n\"\"\")\n\n\npsi4.set_options({'basis': '6-31g',\n 'scf_type': 'pk',\n 'reference': 'rhf',\n 'mp2_type': 'conv',\n 'e_convergence': 1e-8,\n 'd_convergence': 1e-8})\n```\n\nFor convenience, we let Psi4 take care of the Hartree-Fock procedure, and return the wavefunction object.\n\n\n```python\n# Get the SCF wavefunction & energies\nscf_e, scf_wfn = psi4.energy('scf', return_wfn=True)\n```\n\nWe also need information about the basis set and orbitals, such as the number of basis functions, number of spin orbitals, number of alpha and beta electrons, the number of occupied spin orbitals, and the number of virtual spin orbitals. These can be obtained with MintsHelper and from the wavefunction.\n\n\n```python\nmints = psi4.core.MintsHelper(scf_wfn.basisset())\nnbf = mints.nbf()\nnso = 2 * nbf\nnalpha = scf_wfn.nalpha()\nnbeta = scf_wfn.nbeta()\nnocc = nalpha + nbeta\nnvirt = 2 * nbf - nocc\n```\n\nFor MP2, we need the MO coefficients, the two-electron integral tensor, and the orbital energies. But, since we are using spin orbitals, we have to manipulate this data accordingly. Let's get our MO coefficients in the proper form first. 
Recall in restricted Hartree-Fock, we obtain one MO coefficient matrix **C**, whose columns are the molecular orbital coefficients, and each row corresponds to a different atomic orbital basis function. But, in unrestricted Hartree-Fock, we obtain separate matrices for the alpha and beta spins, **Ca** and **Cb**. We need a general way to build one **C** matrix regardless of our Hartree-Fock reference. The solution is to put alpha and beta MO coefficients into a block diagonal form:\n\n\n```python\nCa = np.asarray(scf_wfn.Ca())\nCb = np.asarray(scf_wfn.Cb())\nC = np.block([\n [ Ca , np.zeros_like(Cb) ],\n [np.zeros_like(Ca) , Cb ]\n ])\n\n# Result: | Ca 0 |\n# | 0 Cb|\n\n```\n\nIt's worth noting that for RHF and ROHF, the Ca and Cb given by Psi4 are the same.\n\nNow, for this version of MP2, we also need the MO-transformed two-electron integral tensor in physicist's notation. However, Psi4's default two-electron integral tensor is in the AO-basis, is not \"spin-blocked\" (like **C**, above!), and is in chemist's notation, so we have a bit of work to do. \n\nFirst, we will spin-block the two electron integral tensor in the same way that we spin-blocked our MO coefficients above. Unfortunately, this transformation is impossible to visualize for a 4-dimensional array.\n\nNevertheless, the math generalizes and can easily be achieved with NumPy's kronecker product function `np.kron`. Here, we take the 2x2 identity, and place the two electron integral array into the space of the 1's along the diagonal. Then, we transpose the result and do the same. 
The result doubles the size of each dimension, and we obtain a "spin-blocked" two electron integral array.


```python
# Get the two electron integrals using MintsHelper
I = np.asarray(mints.ao_eri())

def spin_block_tei(I):
    """
    Function that spin blocks two-electron integrals
    Using np.kron, we project I into the space of the 2x2 identity, transpose the result
    and project into the space of the 2x2 identity again. This doubles the size of each axis.
    The result is our two electron integral tensor in the spin orbital form.
    """
    identity = np.eye(2)
    I = np.kron(identity, I)
    return np.kron(identity, I.T)

# Spin-block the two electron integral array
I_spinblock = spin_block_tei(I)
```

From here, converting to antisymmetrized physicist's notation is simply:


```python
# Converts chemist's notation to physicist's notation, and antisymmetrize
# (pq | rs) ---> <pq | rs>
# Physicist's notation
tmp = I_spinblock.transpose(0, 2, 1, 3)
# Antisymmetrize:
# <pq || rs> = <pq | rs> - <pq | sr>
gao = tmp - tmp.transpose(0, 1, 3, 2)
```

We also need the orbital energies, and just as with the MO coefficients, we combine alpha and beta together. We also want to ensure that the columns of **C** are sorted in the same order as the corresponding orbital energies.


```python
# Get orbital energies
eps_a = np.asarray(scf_wfn.epsilon_a())
eps_b = np.asarray(scf_wfn.epsilon_b())
eps = np.append(eps_a, eps_b)

# Before sorting the orbital energies, we can use their current arrangement to sort the columns
# of C. Currently, each element i of eps corresponds to the column i of C, but we want both
# eps and columns of C to be in increasing order of orbital energies

# Sort the columns of C according to the order of increasing orbital energies
C = C[:, eps.argsort()]

# Sort orbital energies in increasing order
eps = np.sort(eps)
```

Finally, we transform our two-electron integrals to the MO basis.
For the sake of generalizing for other methods, instead of just transforming the MP2 relevant subsection as before:\n~~~python\ntmp = np.einsum('pi,pqrs->iqrs', Cocc, I)\ntmp = np.einsum('qa,iqrs->iars', Cvirt, tmp)\ntmp = np.einsum('iars,rj->iajs', tmp, Cocc)\nI_mo = np.einsum('iajs,sb->iajb', tmp, Cvirt)\n~~~\n\nwe instead transform the full array so it can be used for terms from methods other than MP2. The nested `einsum`'s work the same way as the method above. Here, we denote the integrals as `gmo` to differentiate from the chemist's notation integrals `I_mo`.\n\n\n```python\n# Transform gao, which is the spin-blocked 4d array of physicist's notation, \n# antisymmetric two-electron integrals, into the MO basis using MO coefficients \ngmo = np.einsum('pQRS, pP -> PQRS',\n np.einsum('pqRS, qQ -> pQRS',\n np.einsum('pqrS, rR -> pqRS',\n np.einsum('pqrs, sS -> pqrS', gao, C), C), C), C)\n\n```\n\nAnd just as before, construct the 4-dimensional array of orbital energy denominators. An alternative to the old method:\n~~~python\ne_ij = eps[:nocc]\ne_ab = eps[nocc:]\ne_denom = 1 / (e_ij.reshape(-1, 1, 1, 1) - e_ab.reshape(-1, 1, 1) + e_ij.reshape(-1, 1) - e_ab)\n~~~\nis the following:\n\n\n```python\n# Define slices, create 4 dimensional orbital energy denominator tensor\nn = np.newaxis\no = slice(None, nocc)\nv = slice(nocc, None)\ne_abij = 1 / (-eps[v, n, n, n] - eps[n, v, n, n] + eps[n, n, o, n] + eps[n, n, n, o])\n```\n\nThese slices will also be used to define the occupied and virtual space of our two electron integrals. \n\nFor example, $\\bar{g}_{ab}^{ij}$ can be accessed with `gmo[v, v, o, o]` \n\nWe now have all the pieces we need to compute the MP2 correlation energy. Our energy expression in KM notation is\n\n\\begin{equation}\nE_{MP2} = \\frac{1}{4} \\bar{g}_{ab}^{ij} \\bar{g}_{ij}^{ab} (\\mathcal{E}_{ab}^{ij})^{-1}\n\\end{equation}\n\nwhich may be easily read-off as an einsum in NumPy. 
Here, for clarity, we choose to read the tensors from left to right (bra to ket). We also are sure to take the appropriate slice of the two-electron integral array:\n\n\n```python\n# Compute MP2 Correlation Energy\nE_MP2_corr = (1 / 4) * np.einsum('abij, ijab, abij ->', gmo[v, v, o, o], gmo[o, o, v, v], e_abij)\n\nE_MP2 = E_MP2_corr + scf_e\n\nprint('MP2 correlation energy: ', E_MP2_corr)\nprint('MP2 total energy: ', E_MP2)\n```\n\n MP2 correlation energy: -0.142119840297\n MP2 total energy: -76.094648885\n\n\nFinally, compare our answer with Psi4:\n\n\n```python\n# ==> Compare to Psi4 <==\npsi4.driver.p4util.compare_values(psi4.energy('mp2'), E_MP2, 6, 'MP2 Energy')\n```\n\n \tMP2 Energy........................................................PASSED\n\n\n\n\n\n True\n\n\n\n## References\n\n1. Notation and Symmetry of Integrals:\n > C. David Sherill, \"Permutational Symmetries of One- and Two-Electron Integrals\" Accessed with http://vergil.chemistry.gatech.edu/notes/permsymm/permsymm.pdf\n2. Useful Notes on Kutzelnigg-Mukherjee Notation: \n > A. V. Copan, \"Kutzelnigg-Mukherjee Tensor Notation\" Accessed with https://github.com/CCQC/chem-8950/tree/master/2017\n\n3. Original paper on MP2: \"Note on an Approximation Treatment for Many-Electron Systems\"\n\t> [[Moller:1934:618](https://journals.aps.org/pr/abstract/10.1103/PhysRev.46.618)] C. M\u00f8ller and M. S. Plesset, *Phys. 
Rev.* **46**, 618 (1934)

---

Probabilistic Programming
=====
and Bayesian Methods for Hackers
========

##### Version 0.1

`Original content created by Cam Davidson-Pilon`

`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`
___


Welcome to *Bayesian Methods for Hackers*.
The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. 
The Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability.

For this to be clearer, we consider an alternative interpretation of probability: *Frequentists*, adherents of the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying across all these realities, the frequency of occurrences defines the probability.

Bayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information.
Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. 
\n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. 
This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. 
As we gather an *infinite* amount of evidence, say as $N \rightarrow \infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference on a small-$N$ dataset.

One may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally simpler frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:

> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is "large enough," you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were "enough" you'd already be on to the next problem for which you need more data.

### Are frequentist methods incorrect then? 

**No.**

Frequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast.
Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\")\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to })\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. 
Bayesian inference merely uses it to connect prior probabilities $P(A)$ with an updated posterior probability $P(A | X )$.

##### Example: Mandatory coin-flip example

Every statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be.

We begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?

Below we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).


```python
%matplotlib inline
from IPython.core.pylabtools import figsize
import numpy as np
from matplotlib import pyplot as plt
figsize(11, 9)

import scipy.stats as stats

dist = stats.beta
n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
data = stats.bernoulli.rvs(0.5, size=n_trials[-1])
x = np.linspace(0, 1, 100)

# For the already prepared, I'm using Binomial's conj. prior.
for k, N in enumerate(n_trials):
    sx = plt.subplot(len(n_trials) // 2, 2, k + 1)
    plt.xlabel("$p$, probability of heads") \
        if k in [0, len(n_trials) - 1] else None
    plt.setp(sx.get_yticklabels(), visible=False)
    heads = data[:N].sum()
    y = dist.pdf(x, 1 + heads, 1 + N - heads)
    plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads))
    plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4)
    plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)

    leg = plt.legend()
    leg.get_frame().set_alpha(0.4)
    plt.autoscale(tight=True)


plt.suptitle("Bayesian updating of posterior probabilities",
             y=1.02,
             fontsize=14)

plt.tight_layout()
```

The posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line).

Notice that the plots are not always *peaked* at 0.5. There is no reason they should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 7 tails and 1 head?).
As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
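Before plotting, the derived formula can be checked at a specific prior. The sketch below wraps it in a small helper (`posterior_no_bugs` is a name of my choosing; it restates $P(A|X) = p / (p + 0.5(1-p))$, which simplifies to $2p/(1+p)$):

```python
def posterior_no_bugs(p, p_pass_given_bug=0.5):
    # P(A|X) = P(X|A) P(A) / P(X), with P(X|A) = 1
    # and P(X) = 1*p + P(X|~A)*(1-p)
    return p / (p + p_pass_given_bug * (1 - p))

print(posterior_no_bugs(0.2))  # 1/3: a 20% prior belief in no bugs becomes ~33%
```

Note that a prior of 1 (certainty of no bugs) stays at 1 no matter the evidence, while a prior of 0 can never be revised upward: the posterior is a re-weighting of the prior, not a replacement.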
\n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...

- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.

- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous outcomes, i.e. they combine the above two categories. 

### Discrete Case
If $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability that $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$; that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful one. We say $Z$ is *Poisson*-distributed if:

$$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots $$

$\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the *intensity* of the Poisson distribution. 

Unlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0, 1, 2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members.
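As a quick numerical sanity check (a standard-library sketch; `poisson_pmf` is a name of my choosing), the mass function above can be evaluated directly, and its probabilities over all non-negative integers sum to one:

```python
import math

def poisson_pmf(k, lam):
    # P(Z = k) = lam**k * exp(-lam) / k!
    return lam**k * math.exp(-lam) / math.factorial(k)

# summing over enough k values recovers (essentially) all the probability
total = sum(poisson_pmf(k, 4.25) for k in range(100))
print(total)                 # ~1.0
print(poisson_pmf(0, 1.5))   # e**-1.5, about 0.223
```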
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\");\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. 
But unlike a Poisson variable, the exponential can take on *any* non-negative value, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\lambda$ values. 

When a random variable $Z$ has an exponential distribution with parameter $\lambda$, we say *$Z$ is exponential* and write

$$Z \sim \text{Exp}(\lambda)$$

Given a specific $\lambda$, the expected value of an exponential random variable is equal to the inverse of $\lambda$, that is:

$$E[\; Z \;|\; \lambda \;] = \frac{1}{\lambda}$$


```python
a = np.linspace(0, 4, 100)
expo = stats.expon
lambda_ = [0.5, 1]

for l, c in zip(lambda_, colours):
    plt.plot(a, expo.pdf(a, scale=1./l), lw=3,
             color=c, label="$\lambda = %.1f$" % l)
    plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)

plt.legend()
plt.ylabel("PDF at $z$")
plt.xlabel("$z$")
plt.ylim(0, 1.2)
plt.title("Probability density function of an Exponential random variable;\
 differing $\lambda$");
```


### But what is $\lambda \;$?


**This question is what motivates statistics**. In the real world, $\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\lambda$. Many different methods have been created to solve the problem of estimating $\lambda$, but since $\lambda$ is never actually observed, no one can say for certain which method is best! 

Bayesian inference is concerned with *beliefs* about what $\lambda$ might be.
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. 
Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. 
Our initial guess at $\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:

$$\frac{1}{N}\sum_{i=1}^{N} \;C_i \approx E[\; \lambda \; |\; \alpha ] = \frac{1}{\alpha}$$ 

An alternative, and something I encourage the reader to try, would be to have two priors: one for each $\lambda_i$. Creating two exponential distributions with different $\alpha$ values reflects our prior belief that the rate changed at some point during the observations.

What about $\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying

\begin{align}
& \tau \sim \text{DiscreteUniform(1,70) }\\\\
& \Rightarrow P( \tau = k ) = \frac{1}{70}
\end{align}

So after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.

We next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. 


Introducing our first hammer: PyMC3
-----

PyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker.
One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.\n\nWe will model the problem above using PyMC3. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC3 code is easy to read. The only novel thing should be the syntax. 
Simply remember that we are representing the model's components ($\tau, \lambda_1, \lambda_2$ ) as variables.


```python
import pymc3 as pm
import theano.tensor as tt

with pm.Model() as model:
    alpha = 1.0 / count_data.mean()  # Recall count_data is the
                                     # variable that holds our txt counts
    lambda_1 = pm.Exponential("lambda_1", alpha)
    lambda_2 = pm.Exponential("lambda_2", alpha)

    tau = pm.DiscreteUniform("tau", lower=0, upper=n_count_data - 1)
```

In the code above, we create the PyMC3 variables corresponding to $\lambda_1$ and $\lambda_2$. We assign them to PyMC3's *stochastic variables*, so-called because they are treated by the back end as random number generators.


```python
with model:
    idx = np.arange(n_count_data)  # Index
    lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)
```

This code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\lambda$ from above. The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`.

Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.


```python
with model:
    observation = pm.Poisson("obs", lambda_, observed=count_data)
```

The variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword.
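To make the data-generation scheme concrete, here is a small forward simulation of the same story the model encodes, run in plain NumPy with made-up parameter values (the `tau`, `lambda_1`, `lambda_2` values and `n_days` below are arbitrary stand-ins, not inferred quantities; `np.where` plays the role of `pm.math.switch`):

```python
import numpy as np

rng = np.random.default_rng(0)

# made-up values standing in for single draws of tau, lambda_1, lambda_2
tau, lambda_1, lambda_2 = 45, 18.0, 23.0
n_days = 70

idx = np.arange(n_days)                          # day index
lambda_ = np.where(tau > idx, lambda_1, lambda_2)  # switchpoint: rate jumps at tau
simulated_counts = rng.poisson(lambda_)            # one fake dataset of daily counts
```

Inference reverses this direction: given the observed `count_data`, we recover plausible values for the three parameters.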
\n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```python\n### Mysterious code to be explained in Chapter 3.\nwith model:\n step = pm.Metropolis()\n trace = pm.sample(10000, tune=5000, step=step)\n```\n\n Multiprocess sampling (4 chains in 4 jobs)\n CompoundStep\n >Metropolis: [tau]\n >Metropolis: [lambda_2_log__]\n >Metropolis: [lambda_1_log__]\n 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 15000/15000 [00:08<00:00, 1753.79it/s]\n The estimated number 
of effective samples is smaller than 200 for some parameters.\n\n\n\n```python\nlambda_1_samples = trace['lambda_1']\nlambda_2_samples = trace['lambda_2']\ntau_samples = trace['tau']\n```\n\n\n```python\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=r\"posterior of $\\lambda_1$\", color=\"#A60628\", density=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(r\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=r\"posterior of $\\lambda_2$\", color=\"#7A68A6\", density=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(r\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? 
If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. 
Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. \n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples for which 'day' falls\n # before the switchpoint (so the lambda_1 regime applies)\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly 
peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\nprint(lambda_1_samples.mean())\nprint(lambda_2_samples.mean())\n```\n\n 17.81079852458376\n 22.505799206019486\n\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```python\nnp.mean(lambda_1_samples/lambda_2_samples)\n```\n\n\n\n\n 0.7994063505697679\n\n\n\n\n```python\nlambda_1_samples.mean()/lambda_2_samples.mean()\n```\n\n\n\n\n 0.7913870714628973\n\n\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45. That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\nlambda_1_samples[tau_samples < 45].mean()\n```\n\n\n\n\n 17.75357924652044\n\n\n\n### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg).\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Salvatier, J, Wiecki TV, and Fonnesbeck C. (2016) Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55 \n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. 
Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" Online posting, 24 Mar 2013.\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n
# PHY321: Motion examples, Forces, Newton's Laws and Motion Example\n\n**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/)**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA and Department of Physics, University of Oslo, Norway\n\nDate: **Jan 25, 2021**\n\nCopyright 1999-2021, [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/). Released under CC Attribution-NonCommercial 4.0 license\n\n\n\n\n## Aims and Overarching Motivation\n\n### Monday\n\nWe try to finalize the discussion we started last Friday on falling objects and numerical aspects thereof.\nIf we get time, we start with a discussion of forces as well.\n\nRecommended reading: Taylor 1.3\n\n### Wednesday\n\nWe revisit Newton's laws and discuss how to analyze a problem.\n\nRecommended reading: Taylor 1.4 and 1.5\n\n### Friday\n\nWe discuss several examples and try to wrap up the discussions on Newton's laws.\n\nRecommended reading: Taylor 1.4-1.6 and 2.1-2.2 as examples of motion problems.\n\n\n\n## Basic Steps of Scientific Investigations\n\nLast week we discussed several basic elements of the scientific method. We repeat them here.\n\nAn overarching aim in this course is to give you a deeper\nunderstanding of the scientific method. The problems we study will all\ninvolve cases where we can apply classical mechanics. In our previous\nmaterial we already assumed that we had a model for the motion of an\nobject. Alternatively we could have data from experiment (like Usain\nBolt's 100m world record run in 2008). Or we could have performed an\nexperiment ourselves and we want to understand which forces are at\nplay and whether these forces can be understood in terms of\nfundamental forces.\n\nOur first step consists in identifying the problem. 
What we sketch\nhere may include a mix of experiment and theoretical simulations, or\njust experiment or only theory.\n\n## Identifying our System\n\nHere we can ask questions like\n1. What kind of object is moving?\n\n2. What kind of data do we have?\n\n3. How do we measure position, velocity, acceleration etc?\n\n4. Which initial conditions influence our system?\n\n5. Other aspects which allow us to identify the system\n\n## Defining a Model\n\nWith our eventual data and observations we would now like to develop a\nmodel for the system. In the end we obviously want to be able to\nunderstand which forces are at play and how they influence our\nspecific system. That is, can we extract some deeper insights about the\nsystem?\n\nWe need then to\n1. Find the forces that act on our system\n\n2. Introduce models for the forces\n\n3. Identify the equations which govern the system (Newton's second law, for example)\n\n4. Add other elements we deem important for defining our model\n\n## Solving the Equations\n\nWith the model at hand, we can then solve the equations. In classical mechanics we normally end up solving sets of coupled ordinary differential equations or partial differential equations.\n1. Using Newton's second law we have equations of the type $\\boldsymbol{F}=m\\boldsymbol{a}=md\\boldsymbol{v}/dt$\n\n2. We need to define the initial conditions (typically the initial position and velocity) and/or boundary conditions\n\n3. The solution of the equations then gives us the position, the velocity and other time-dependent quantities which specify the motion of a given object.\n\nWe are not yet done. With our lovely solvers, we need to start thinking.\n\n## Analyze\n\nNow it is time to ask the big questions. What do our results mean? Can we give a simple interpretation in terms of fundamental laws? Are they correct?\nThus, typical questions we may ask are\n1. 
Are our results for say $\\boldsymbol{r}(t)$ valid? Do we trust what we did? Can you validate and verify the correctness of your results?\n\n2. Evaluate the answers and their implications\n\n3. Compare with experimental data if possible. Does our model make sense?\n\n4. and obviously many other questions.\n\nThe analysis stage feeds back to the first stage. It may happen that\nthe data we had were not good enough, there could be large statistical\nuncertainties. We may need to collect more data or perhaps we did a\nsloppy job in identifying the degrees of freedom.\n\nAll these steps are essential elements in a scientific\nenquiry. Hopefully, through a mix of numerical simulations, analytical\ncalculations and experiments we may gain a deeper insight about the\nphysics of a specific system.\n\n\n## Falling baseball in one dimension\n\nWe anticipate the mathematical model to come and assume that we have a\nmodel for the motion of a falling baseball without air resistance.\nOur system (the baseball) is at an initial height $y_0$ (which we will\nspecify in the program below) at the initial time $t_0=0$. In our program example here we will plot the position in steps of $\\Delta t$ up to a final time $t_f$. \nThe mathematical formula for the position $y(t)$ as function of time $t$ is\n\n$$\ny(t) = y_0-\\frac{1}{2}gt^2,\n$$\n\nwhere $g=9.80665=0.980655\\times 10^1$m/s${}^2$ is a constant representing the standard acceleration due to gravity.\nWe have here adopted the conventional standard value. This does not take into account other effects, such as buoyancy or drag.\nFurthermore, we stop when the ball hits the ground, which takes place at\n\n$$\ny(t) = 0= y_0-\\frac{1}{2}gt^2,\n$$\n\nwhich gives us a final time $t_f=\\sqrt{2y_0/g}$. \n\nAs of now we simply assume that we know the formula for the falling object. 
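As a quick numerical sanity check of the final-time formula $t_f=\sqrt{2y_0/g}$, using the values $y_0=10$ m and $g=9.80665$ m/s$^2$ quoted above, a few lines of Python confirm that the position formula vanishes at $t_f$:

```python
import numpy as np

g = 9.80665   # standard acceleration of gravity, m/s^2
y0 = 10.0     # initial height, m

# final time from y(t_f) = 0 = y0 - g*t_f**2/2
tf = np.sqrt(2.0*y0/g)
print(tf)                 # about 1.43 s
print(y0 - 0.5*g*tf**2)   # zero, up to floating-point roundoff
```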
Afterwards, we will derive it.\n\n## Our Python Encounter\n\nWe start with preparing folders for storing our calculations, figures and if needed, specific data files we use as input or output files.\n\n\n```python\n%matplotlib inline\n\n# Common imports\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport os\n\n# Where to save the figures and data files\nPROJECT_ROOT_DIR = \"Results\"\nFIGURE_ID = \"Results/FigureFiles\"\nDATA_ID = \"DataFiles/\"\n\nif not os.path.exists(PROJECT_ROOT_DIR):\n os.mkdir(PROJECT_ROOT_DIR)\n\nif not os.path.exists(FIGURE_ID):\n os.makedirs(FIGURE_ID)\n\nif not os.path.exists(DATA_ID):\n os.makedirs(DATA_ID)\n\ndef image_path(fig_id):\n return os.path.join(FIGURE_ID, fig_id)\n\ndef data_path(dat_id):\n return os.path.join(DATA_ID, dat_id)\n\ndef save_fig(fig_id):\n plt.savefig(image_path(fig_id) + \".png\", format='png')\n\n#in case we have an input file we wish to read in\n#infile = open(data_path(\"MassEval2016.dat\"),'r')\n```\n\nYou could also define a function for making our plots. You\ncan obviously avoid this and simply set up various **matplotlib**\ncommands every time you need them. 
You may however find it convenient\nto collect all such commands in one function and simply call this\nfunction.\n\n\n```python\nfrom pylab import plt, mpl\nplt.style.use('seaborn')\nmpl.rcParams['font.family'] = 'serif'\n\ndef MakePlot(x,y, styles, labels, axlabels):\n plt.figure(figsize=(10,6))\n for i in range(len(x)):\n plt.plot(x[i], y[i], styles[i], label = labels[i])\n plt.xlabel(axlabels[0])\n plt.ylabel(axlabels[1])\n plt.legend(loc=0)\n```\n\nThereafter we start setting up the code for the falling object.\n\n\n```python\n%matplotlib inline\nimport matplotlib.patches as mpatches\n\ng = 9.80655 #m/s^2\ny_0 = 10.0 # initial position in meters\nDeltaT = 0.1 # time step\n# final time when y = 0, t = sqrt(2*10/g)\ntfinal = np.sqrt(2.0*y_0/g)\n#set up arrays \nt = np.arange(0,tfinal,DeltaT)\ny =y_0 -g*.5*t**2\n# Then make a nice printout in table form using Pandas\nimport pandas as pd\nfrom IPython.display import display\ndata = {'t[s]': t,\n 'y[m]': y\n }\nRawData = pd.DataFrame(data)\ndisplay(RawData)\nplt.style.use('ggplot')\nplt.figure(figsize=(8,8))\nplt.scatter(t, y, color = 'b')\nblue_patch = mpatches.Patch(color = 'b', label = 'Height y as function of time t')\nplt.legend(handles=[blue_patch])\nplt.xlabel(\"t[s]\")\nplt.ylabel(\"y[m]\")\nsave_fig(\"FallingBaseball\")\nplt.show()\n```\n\nHere we used **pandas** (see below) to systemize the output of the position as function of time.\n\n\n## Average quantities\nWe define now the average velocity as\n\n$$\n\\overline{v}(t) = \\frac{y(t+\\Delta t)-y(t)}{\\Delta t}.\n$$\n\nIn the code we have set the time step $\\Delta t$ to a given value. We could define it in terms of the number of points $n$ as\n\n$$\n\\Delta t = \\frac{t_{\\mathrm{final}-}t_{\\mathrm{initial}}}{n+1}.\n$$\n\nSince we have discretized the variables, we introduce the counter $i$ and let $y(t)\\rightarrow y(t_i)=y_i$ and $t\\rightarrow t_i$\nwith $i=0,1,\\dots, n$. 
This gives us the following shorthand notations that we will use for the rest of this course. We define\n\n$$\ny_i = y(t_i),\\hspace{0.2cm} i=0,1,2,\\dots,n.\n$$\n\nThis applies to other variables which depend on say time. Examples are the velocities, accelerations, momenta etc.\nFurthermore we use the shorthand\n\n$$\ny_{i\\pm 1} = y(t_i\\pm \\Delta t),\\hspace{0.12cm} i=0,1,2,\\dots,n.\n$$\n\n## Compact equations\nWe can then rewrite in a more compact form the average velocity as\n\n$$\n\\overline{v}_i = \\frac{y_{i+1}-y_{i}}{\\Delta t}.\n$$\n\nThe velocity is defined as the change in position per unit time.\nIn the limit $\\Delta t \\rightarrow 0$ this defines the instantaneous velocity, which is nothing but the slope of the position at a time $t$.\nWe have thus\n\n$$\nv(t) = \\frac{dy}{dt}=\\lim_{\\Delta t \\rightarrow 0}\\frac{y(t+\\Delta t)-y(t)}{\\Delta t}.\n$$\n\nSimilarly, we can define the average acceleration as the change in velocity per unit time as\n\n$$\n\\overline{a}_i = \\frac{v_{i+1}-v_{i}}{\\Delta t},\n$$\n\nresulting in the instantaneous acceleration\n\n$$\na(t) = \\frac{dv}{dt}=\\lim_{\\Delta t\\rightarrow 0}\\frac{v(t+\\Delta t)-v(t)}{\\Delta t}.\n$$\n\n**A note on notations**: When writing for example the velocity as $v(t)$ we are then referring to the continuous and instantaneous value. A subscript like\n$v_i$ refers always to the discretized values.\n\n## A differential equation\nWe can rewrite the instantaneous acceleration as\n\n$$\na(t) = \\frac{dv}{dt}=\\frac{d}{dt}\\frac{dy}{dt}=\\frac{d^2y}{dt^2}.\n$$\n\nThis forms the starting point for our definition of forces later. It is a famous second-order differential equation. If the acceleration is constant we can now recover the formula for the falling ball we started with.\nThe acceleration can depend on the position and the velocity. 
To be more formal we should then write the above differential equation as\n\n$$\n\\frac{d^2y}{dt^2}=a(t,y(t),\\frac{dy}{dt}).\n$$\n\nWith given initial conditions for $y(t_0)$ and $v(t_0)$ we can then\nintegrate the above equation and find the velocities and positions at\na given time $t$.\n\nIf we multiply with mass, we have one of the famous expressions for Newton's second law,\n\n$$\nF(y,v,t)=m\\frac{d^2y}{dt^2}=ma(t,y(t),\\frac{dy}{dt}),\n$$\n\nwhere $F$ is the force acting on an object with mass $m$. We see that it also has the right dimension, mass times length divided by time squared.\nWe will come back to this soon.\n\n## Integrating our equations\n\nFormally we can then, starting with the acceleration (suppose we have measured it; how could we do that?),\ncompute, say, the height of a building. To see this we perform the following integrations from an initial time $t_0$ to a given time $t$\n\n$$\n\\int_{t_0}^t dt a(t) = \\int_{t_0}^t dt \\frac{dv}{dt} = v(t)-v(t_0),\n$$\n\nor as\n\n$$\nv(t)=v(t_0)+\\int_{t_0}^t dt a(t).\n$$\n\nWhen we know the velocity as function of time, we can find the position as function of time starting from the definition of velocity as the derivative with respect to time, that is we have\n\n$$\n\\int_{t_0}^t dt v(t) = \\int_{t_0}^t dt \\frac{dy}{dt} = y(t)-y(t_0),\n$$\n\nor as\n\n$$\ny(t)=y(t_0)+\\int_{t_0}^t dt v(t).\n$$\n\nThese equations define what is called the integration method for\nfinding the position and the velocity as functions of time. There is\nno loss of generality if we extend these equations to more than one\nspatial dimension.\n\n## Constant acceleration case, the velocity\nLet us compute the velocity using the constant value for the acceleration given by $-g$. 
We have\n\n$$\nv(t)=v(t_0)+\\int_{t_0}^t dt a(t)=v(t_0)+\\int_{t_0}^t dt (-g).\n$$\n\nUsing our initial time as $t_0=0$s and setting the initial velocity $v(t_0)=v_0=0$m/s we get when integrating\n\n$$\nv(t)=-gt.\n$$\n\nThe more general case is\n\n$$\nv(t)=v_0-g(t-t_0).\n$$\n\nWe can then integrate the velocity and obtain the final formula for the position as function of time through\n\n$$\ny(t)=y(t_0)+\\int_{t_0}^t dt v(t)=y_0+\\int_{t_0}^t dt v(t)=y_0+\\int_{t_0}^t dt (-gt).\n$$\n\nWith $y_0=10$m and $t_0=0$s, we obtain the equation we started with\n\n$$\ny(t)=10-\\frac{1}{2}gt^2.\n$$\n\n## Computing the averages\nAfter this mathematical background we are now ready to compute the mean velocity using our data.\n\n\n```python\n# Now we can compute the mean velocity using our data\n# We define first an array Vaverage\nn = np.size(t)\nVaverage = np.zeros(n)\nfor i in range(1,n-1):\n Vaverage[i] = (y[i+1]-y[i])/DeltaT\n# Now we can compute the mean acceleration using our data\n# We define first an array Aaverage\nn = np.size(t)\nAaverage = np.zeros(n)\nAaverage[0] = -g\nfor i in range(1,n-1):\n Aaverage[i] = (Vaverage[i+1]-Vaverage[i])/DeltaT\ndata = {'t[s]': t,\n 'y[m]': y,\n 'v[m/s]': Vaverage,\n 'a[m/s^2]': Aaverage\n }\nNewData = pd.DataFrame(data)\ndisplay(NewData[0:n-2])\n```\n\nNote that we don't print the last values! \n\n\n\n## Including Air Resistance in our model\n\nIn our discussions till now of the falling baseball, we have ignored\nair resistance and simply assumed that our system is only influenced\nby the gravitational force. 
We will postpone the derivation of air\nresistance till later, after our discussion of Newton's laws and\nforces.\n\nFor our discussions here it suffices to state that the acceleration is now modified to\n\n$$\n\\boldsymbol{a}(t) = -g -D\\boldsymbol{v}(t)\\vert v(t)\\vert,\n$$\n\nwhere $\\vert v(t)\\vert$ is the absolute value of the velocity and $D$ is a constant which pertains to the specific object we are studying.\nSince we are dealing with motion in one dimension and the object is falling ($v\\le 0$), we can simplify the above to\n\n$$\na(t) = -g +Dv^2(t).\n$$\n\nWe can rewrite this as a differential equation\n\n$$\na(t) = \\frac{dv}{dt}=\\frac{d^2y}{dt^2}= -g +Dv^2(t).\n$$\n\nUsing the integral equations discussed above we can integrate twice\nand obtain first the velocity as function of time and thereafter the\nposition as function of time.\n\nFor this particular case, we can actually obtain an analytical\nsolution for the velocity and for the position. Here we will first\ncompute the solutions analytically, thereafter we will derive Euler's\nmethod for solving these differential equations numerically.\n\n## Analytical solutions\n\nFor simplicity let us just write $v(t)$ as $v$. We have\n\n$$\n\\frac{dv}{dt}= -g +Dv^2(t).\n$$\n\nWe can solve this using the technique of separation of variables. We\nisolate on the left all terms that involve $v$ and on the right all\nterms that involve time. We get then\n\n$$\n\\frac{dv}{g -Dv^2(t) }= -dt.\n$$\n\nWe now scale the equation on the left by introducing a constant\n$v_T=\\sqrt{g/D}$. This constant has dimension length/time. 
Can you\nshow this?\n\nNext we integrate the left-hand side (lhs) from $v_0=0$ m/s to $v$ and\nthe right-hand side (rhs) from $t_0=0$ to $t$ and obtain\n\n$$\n\\int_{0}^v\\frac{dv}{g -Dv^2 }= \\frac{v_T}{g}\\mathrm{arctanh}\\left(\\frac{v}{v_T}\\right) =-\\int_0^tdt = -t.\n$$\n\nWe can reorganize these equations as\n\n$$\nv_T\\mathrm{arctanh}\\left(\\frac{v}{v_T}\\right) =-gt,\n$$\n\nwhich gives us $v$ as function of time\n\n$$\nv(t)=-v_T\\tanh\\left(\\frac{gt}{v_T}\\right).\n$$\n\n## Finding the final height\nWith the velocity we can then find the height $y(t)$ by integrating yet another time, that is\n\n$$\ny(t)=y(t_0)+\\int_{t_0}^t dt v(t)=y_0-\\int_{0}^t dt\\, v_T\\tanh\\left(\\frac{gt}{v_T}\\right).\n$$\n\nThis integral is a little bit trickier but we can look it up in a table of\nknown integrals and we get\n\n$$\ny(t)=y(t_0)-\\frac{v_T^2}{g}\\log{[\\cosh{(\\frac{gt}{v_T})}]}.\n$$\n\nAlternatively we could have used the symbolic Python package **Sympy**.\n\nIn most cases however, we need to revert to numerical solutions. \n\n\n## Our first attempt at solving differential equations\n\nHere we will try the simplest possible approach to solving the second-order differential \nequation\n\n$$\na(t) =\\frac{d^2y}{dt^2}= -g +Dv^2(t).\n$$\n\nWe rewrite it as two coupled first-order equations (this is a standard approach)\n\n$$\n\\frac{dy}{dt} = v(t),\n$$\n\nwith initial condition $y(t_0)=y_0$ and\n\n$$\na(t) =\\frac{dv}{dt}= -g +Dv^2(t),\n$$\n\nwith initial condition $v(t_0)=v_0$.\n\nMany of the algorithms for solving differential equations start with simple Taylor expansions.\nIf we Taylor expand $y$ and $v$ at $t+\\Delta t$ around the time $t$ we have\n\n$$\ny(t+\\Delta t) = y(t)+\\Delta t \\frac{dy}{dt}+\\frac{\\Delta t^2}{2!} \\frac{d^2y}{dt^2}+O(\\Delta t^3),\n$$\n\nand\n\n$$\nv(t+\\Delta t) = v(t)+\\Delta t \\frac{dv}{dt}+\\frac{\\Delta t^2}{2!} \\frac{d^2v}{dt^2}+O(\\Delta t^3).\n$$\n\nUsing the fact that $dy/dt = v$ and $dv/dt=a$ and keeping only terms up to $\\Delta t$ we have\n\n$$\ny(t+\\Delta t) = 
y(t)+\\Delta t v(t)+O(\\Delta t^2),\n$$\n\nand\n\n$$\nv(t+\\Delta t) = v(t)+\\Delta t a(t)+O(\\Delta t^2).\n$$\n\n## Discretizing our equations\n\nUsing our discretized versions of the equations with for example\n$y_{i}=y(t_i)$ and $y_{i\\pm 1}=y(t_i\\pm \\Delta t)$, we can rewrite the\nabove equations as (and truncating at $\\Delta t$)\n\n$$\ny_{i+1} = y_i+\\Delta t v_i,\n$$\n\nand\n\n$$\nv_{i+1} = v_i+\\Delta t a_i.\n$$\n\nThese are the famous Euler equations (forward Euler).\n\nTo solve these equations numerically we start at a time $t_0$ and simply integrate up these equations to a final time $t_f$.\nThe step size $\\Delta t$ is an input parameter in our code.\nYou can define it directly in the code below as\n\n\n```python\nDeltaT = 0.1\n```\n\nWith a given final time **tfinal** we can then find the number of integration points via the **ceil** function included in the **math** package of Python\nas\n\n\n```python\n#define final time, assuming that initial time is zero\nfrom math import ceil\ntfinal = 0.5\nn = ceil(tfinal/DeltaT)\nprint(n)\n```\n\nThe **ceil** function returns the smallest integer not less than the input, as in\n\n\n```python\nx = 21.15\nprint(ceil(x))\n```\n\nwhich in the case here is 22.\n\n\n```python\nx = 21.75\nprint(ceil(x))\n```\n\nwhich also yields 22. The **floor** function in the **math** package\nis used to return the closest integer value which is less than or equal to the specified expression or value.\nCompare the previous result to the usage of **floor**\n\n\n```python\nfrom math import floor\nx = 21.75\nprint(floor(x))\n```\n\nAlternatively, we can define ourselves the number of integration (mesh) points. 
In this case we could have\n\n\n```python\nn = 10\ntinitial = 0.0\ntfinal = 0.5\nDeltaT = (tfinal-tinitial)/(n)\nprint(DeltaT)\n```\n\nSince we will set up one-dimensional arrays that contain the values of\nvarious variables like time, position, velocity, acceleration etc, we\nneed to know the value of $n$, the number of data points (or\nintegration or mesh points). With $n$ we can initialize a given array\nby setting all elements to zero, as done here\n\n\n```python\n# define array a\na = np.zeros(n)\nprint(a)\n```\n\n## Code for implementing Euler's method\nIn the code here we implement this simple Euler scheme, choosing the value $D=0.00245$ m$^{-1}$.\n\n\n```python\n# Common imports\nimport numpy as np\nimport pandas as pd\nfrom math import *\nimport matplotlib.pyplot as plt\nimport os\n\n# Where to save the figures and data files\nPROJECT_ROOT_DIR = \"Results\"\nFIGURE_ID = \"Results/FigureFiles\"\nDATA_ID = \"DataFiles/\"\n\nif not os.path.exists(PROJECT_ROOT_DIR):\n os.mkdir(PROJECT_ROOT_DIR)\n\nif not os.path.exists(FIGURE_ID):\n os.makedirs(FIGURE_ID)\n\nif not os.path.exists(DATA_ID):\n os.makedirs(DATA_ID)\n\ndef image_path(fig_id):\n return os.path.join(FIGURE_ID, fig_id)\n\ndef data_path(dat_id):\n return os.path.join(DATA_ID, dat_id)\n\ndef save_fig(fig_id):\n plt.savefig(image_path(fig_id) + \".png\", format='png')\n\n\ng = 9.80655 #m/s^2\nD = 0.00245 #1/m\nDeltaT = 0.1\n#set up arrays \ntfinal = 0.5\nn = ceil(tfinal/DeltaT)\n# define scaling constant vT\nvT = sqrt(g/D)\n# set up arrays for t, a, v, and y and we can compare our results with analytical ones\nt = np.zeros(n)\na = np.zeros(n)\nv = np.zeros(n)\ny = np.zeros(n)\nyanalytic = np.zeros(n)\n# Initial conditions\nv[0] = 0.0 #m/s\ny[0] = 10.0 #m\nyanalytic[0] = y[0]\n# Start integrating using Euler's method\nfor i in range(n-1):\n # expression for acceleration\n a[i] = -g + D*v[i]*v[i]\n # update velocity and position\n y[i+1] = y[i] + DeltaT*v[i]\n v[i+1] = v[i] + DeltaT*a[i]\n # update time to 
next time step and compute analytical answer\n t[i+1] = t[i] + DeltaT\n yanalytic[i+1] = y[0]-(vT*vT/g)*log(cosh(g*t[i+1]/vT))\n if ( y[i+1] < 0.0):\n break\na[n-1] = -g + D*v[n-1]*v[n-1]\n# tabulate the deviation of the numerical solution from the analytical one\ndata = {'t[s]': t,\n 'y-yanalytic[m]': y-yanalytic,\n 'v[m/s]': v,\n 'a[m/s^2]': a\n }\nNewData = pd.DataFrame(data)\ndisplay(NewData)\n#finally we plot the data\nfig, axs = plt.subplots(3, 1)\naxs[0].plot(t, y, t, yanalytic)\naxs[0].set_xlim(0, tfinal)\naxs[0].set_ylabel('y and exact')\naxs[1].plot(t, v)\naxs[1].set_ylabel('v[m/s]')\naxs[2].plot(t, a)\naxs[2].set_xlabel('time[s]')\naxs[2].set_ylabel('a[m/s^2]')\nfig.tight_layout()\nsave_fig(\"EulerIntegration\")\nplt.show()\n```\n\nTry different values for $\\Delta t$ and study the difference between the exact solution and the numerical solution.\n\n## Simple extension, the Euler-Cromer method\n\nThe Euler-Cromer method is a simple variant of the standard Euler\nmethod. We use the newly updated velocity $v_{i+1}$ as an input to the\nnew position, that is, instead of\n\n$$\ny_{i+1} = y_i+\\Delta t v_i,\n$$\n\nand\n\n$$\nv_{i+1} = v_i+\\Delta t a_i,\n$$\n\nwe now use the newly calculated $v_{i+1}$ as input to $y_{i+1}$, that is, \nwe compute first\n\n$$\nv_{i+1} = v_i+\\Delta t a_i,\n$$\n\nand then\n\n$$\ny_{i+1} = y_i+\\Delta t v_{i+1}.\n$$\n\nImplementing the Euler-Cromer method yields a simple change to the previous code. 
We only need to interchange the order of the two update lines in the loop over time\nsteps\n\n\n```python\nfor i in range(n-1):\n # more code in between here\n v[i+1] = v[i] + DeltaT*a[i]\n y[i+1] = y[i] + DeltaT*v[i+1]\n # more code\n```\n\n## Newton's Laws\n\nLet us now remind ourselves of Newton's laws, since these are the laws of motion we will study in this course.\n\n\nWhen analyzing a physical system we normally start with distinguishing between the object we are studying (we will label this in more general terms as our **system**) and how this system interacts with the environment (which often means everything else!)\n\nIn our investigations we will thus analyze a specific physics problem in terms of the system and the environment.\nIn doing so we need to identify the forces that act on the system and assume that the\nforces acting on the system must have a source, an identifiable cause in\nthe environment.\n\nA force acting on, for example, a falling object must be related to an interaction with something in the environment.\nThis also means that we do not consider internal forces. The latter are forces between\none part of the object and another part. In this course we will mainly focus on external forces.\n\nForces are either contact forces or long-range forces.\n\nContact forces, as evident from the name, are forces that occur at the contact between\nthe system and the environment. Well-known long-range forces are the gravitational force and the electromagnetic force.\n\n\n## Setting up a model for forces acting on an object\n\nIn order to set up the forces which act on an object, the following steps may be useful:\n1. Divide the problem into system and environment.\n\n2. Draw a figure of the object and everything in contact with the object.\n\n3. Draw a closed curve around the system.\n\n4. Find contact points\u2014these are the points where contact forces may act.\n\n5. Give names and symbols to all the contact forces.\n\n6. Identify the long-range forces.\n\n7. 
Make a drawing of the object. Draw the forces as arrows, vectors, starting from where the force is acting. The direction of the vector(s) indicates the (positive) direction of the force. Try to make the length of the arrow indicate the relative magnitude of the forces.\n\n8. Draw in the axes of the coordinate system. It is often convenient to make one axis parallel to the direction of motion. When you choose the direction of the axis you also choose the positive direction for the axis.\n\n## Newton's Laws, the Second one first\n\n\nNewton\u2019s second law of motion: The force $\\boldsymbol{F}$ on an object of inertial mass $m$\nis related to the acceleration $\\boldsymbol{a}$ of the object through\n\n$$\n\\boldsymbol{F} = m\\boldsymbol{a},\n$$\n\nwhere $\\boldsymbol{a}$ is the acceleration.\n\nNewton\u2019s laws of motion are laws of nature that have been found by experimental\ninvestigation and have been shown to hold up under continued experimental scrutiny.\nNewton\u2019s laws are valid over a wide range of length- and time-scales. We\nuse Newton\u2019s laws of motion to describe everything from the motion of atoms to the\nmotion of galaxies.\n\nThe second law is a vector equation with the acceleration having the same\ndirection as the force. The acceleration is proportional to the force via the mass $m$ of the system under study.\n\n\nNewton\u2019s second law introduces a new property of an object, the so-called \ninertial mass $m$. We determine the inertial mass of an object by measuring the\nacceleration for a given applied force.\n\n\n## Then the First Law\n\n\nWhat happens if the net external force on a body is zero? Applying Newton\u2019s second\nlaw, we find:\n\n$$\n\\boldsymbol{F} = 0 = m\\boldsymbol{a},\n$$\n\nwhich gives, using the definition of the acceleration,\n\n$$\n\\boldsymbol{a} = \\frac{d\\boldsymbol{v}}{dt}=0.\n$$\n\nThe acceleration is zero, which means that the velocity of the object is constant. This\nis often referred to as Newton\u2019s first law. 
An object in a state of uniform motion tends to remain in\nthat state unless an external force changes its state of motion.\nWhy do we need a separate law for this? Is it not simply a special case of Newton\u2019s\nsecond law? Yes, Newton\u2019s first law can be deduced from the second law as we have\nillustrated. However, the first law is often used for a different purpose: Newton\u2019s\nFirst Law tells us about the limit of applicability of Newton\u2019s Second law. Newton\u2019s\nSecond law can only be used in reference systems where the First law is obeyed. But\nis not the First law always valid? No! The First law is only valid in reference systems\nthat are not accelerated. If you observe the motion of a ball from an accelerating\ncar, the ball will appear to accelerate even if there are no forces acting on it. We call\nsystems that are not accelerating inertial systems, and Newton\u2019s first law is often\ncalled the law of inertia. Newton\u2019s first and second laws of motion are only valid in\ninertial systems. \n\nA system is an inertial system if it is not accelerated. It means that the reference system\nmust not be accelerating linearly or rotating. Unfortunately, this means that most\nsystems we know are not really inertial systems. For example, the surface of the\nEarth is clearly not an inertial system, because the Earth is rotating. The Earth is also\nnot an inertial system, because it is moving in a curved path around the Sun. However,\neven if the surface of the Earth is not strictly an inertial system, it may be considered\nto be approximately an inertial system for many laboratory-size experiments.\n\n## And finally the Third Law\n\n\nIf there is a force from object A on object B, there is also a force from object B on object A.\nThis fundamental principle of interactions is called Newton\u2019s third law. We do not\nknow of any force that does not obey this law: All forces appear in pairs. 
Newton\u2019s\nthird law is usually formulated as: For every action there is an equal and opposite\nreaction.\n\n\n## Motion of a Single Object\n\nHere we consider the motion of a single particle moving under\nthe influence of some set of forces. We will consider some problems where\nthe force does not depend on the position. In that case Newton's law\n$m\\dot{\\boldsymbol{v}}=\\boldsymbol{F}(\\boldsymbol{v})$ is a first-order differential\nequation and one solves for $\\boldsymbol{v}(t)$, then moves on to integrate\n$\\boldsymbol{v}$ to get the position. In essentially all of these cases we can find an analytical solution.\n\n\n## Air Resistance in One Dimension\n\nAir resistance tends to scale as the square of the velocity. This is\nin contrast to many problems chosen for textbooks, where it is linear\nin the velocity. The choice of a linear dependence is motivated by\nmathematical simplicity (it keeps the differential equation linear)\nrather than by physics. One can see that the force should be quadratic\nin velocity by considering the momentum imparted on the air\nmolecules. If an object sweeps through a volume $dV$ of air in time\n$dt$, the momentum imparted on the air is\n\n\n

\n\n$$\n\\begin{equation}\ndP=\\rho_m dV v,\n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\nwhere $v$ is the velocity of the object and $\\rho_m$ is the mass\ndensity of the air. If the molecules bounce back rather than stop,\nthe momentum transfer would be twice as large. An equal and opposite\nmomentum is imparted to the object itself. Geometrically, the\ndifferential volume is\n\n\n
\n\n$$\n\\begin{equation}\ndV=Avdt,\n\\label{_auto2} \\tag{2}\n\\end{equation}\n$$\n\nwhere $A$ is the cross-sectional area and $vdt$ is the distance the\nobject moved in time $dt$.\n\n## Resulting Acceleration\nPlugging this into the expression above,\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{dP}{dt}=-\\rho_m A v^2.\n\\label{_auto3} \\tag{3}\n\\end{equation}\n$$\n\nThis is the force felt by the particle (hence the minus sign: the momentum transferred to the air is taken from the object), and it is opposite to the particle's\ndirection of motion. Now, because air doesn't stop when it hits an\nobject, but flows around it as best it can, the actual force is reduced\nby a dimensionless factor $c_W$, called the drag coefficient.\n\n\n
\n\n$$\n\\begin{equation}\nF_{\\rm drag}=-c_W\\rho_m Av^2,\n\\label{_auto4} \\tag{4}\n\\end{equation}\n$$\n\nand the acceleration is\n\n$$\n\\begin{eqnarray}\n\\frac{dv}{dt}=-\\frac{c_W\\rho_mA}{m}v^2.\n\\end{eqnarray}\n$$\n\nFor a particle with initial velocity $v_0$, one can separate the $dt$\nto one side of the equation, and move everything with $v$s to the\nother side. We did this in our discussion of simple motion and will not repeat it here.\n\nIn more general terms,\nfor many systems, e.g. an automobile, there are multiple sources of\nresistance. In addition to wind resistance, where the force is\nproportional to $v^2$, there are dissipative effects of the tires on\nthe pavement, and in the axle and drive train. These other forces can\nhave components that scale proportional to $v$, and components that\nare independent of $v$. Those independent of $v$, e.g. the usual\n$f=\\mu_K N$ frictional force you consider in your first Physics courses, only set in\nonce the object is actually moving. As speeds become higher, the $v^2$\ncomponents begin to dominate relative to the others. For automobiles\nat freeway speeds, the $v^2$ terms are largely responsible for the\nloss of efficiency. To travel a distance $L$ at fixed speed $v$, the\nenergy/work required to overcome the dissipative forces is $fL$,\nwhich for a force of the form $f=\\alpha v^n$ becomes\n\n$$\n\\begin{eqnarray}\nW=\\int dx~f=\\alpha v^n L.\n\\end{eqnarray}\n$$\n\nFor $n=0$ the work is\nindependent of speed, but for the wind resistance, where $n=2$,\nslowing down is essential if one wishes to reduce fuel consumption. It\nis also important to consider that engines are designed to be most\nefficient at a chosen range of power output. 
Thus, some cars will get\nbetter mileage at higher speeds (they perform better at 50 mph than at\n5 mph) despite the considerations mentioned above.\n\n## Going Ballistic, Projectile Motion or a Softer Approach, Falling Raindrops\n\n\nAs an example of Newton's Laws we consider projectile motion (or a\nfalling raindrop or a ball we throw up in the air) with a drag force. Even though air resistance is\nlargely proportional to the square of the velocity, we will consider\nthe drag force to be linear in the velocity, $\\boldsymbol{F}=-m\\gamma\\boldsymbol{v}$,\nfor the purposes of this exercise. The acceleration for a projectile moving upwards,\n$\\boldsymbol{a}=\\boldsymbol{F}/m$, becomes\n\n$$\n\\begin{eqnarray}\n\\frac{dv_x}{dt}=-\\gamma v_x,\\\\\n\\nonumber\n\\frac{dv_y}{dt}=-\\gamma v_y-g,\n\\end{eqnarray}\n$$\n\nand $\\gamma$ has dimensions of inverse time. \n\nIf, on the other hand, you have a falling raindrop, how do these equations change? See for example Figure 2.1 in Taylor.\nLet us stay with a ball which is thrown up in the air at $t=0$. \n\n## Ways of solving these equations\n\nWe will go over two different ways to solve this equation. The first\nis by direct integration, and the second is as a differential equation. To\ndo this by direct integration, one simply multiplies both sides of the\nequations above by $dt$, then divides by the appropriate factors so\nthat the $v$s are all on one side of the equation and the $dt$ is on\nthe other. For the $x$ motion one finds an easily integrable equation,\n\n$$\n\\begin{eqnarray}\n\\frac{dv_x}{v_x}&=&-\\gamma dt,\\\\\n\\nonumber\n\\int_{v_{0x}}^{v_{x}}\\frac{dv_x}{v_x}&=&-\\gamma\\int_0^{t}dt,\\\\\n\\nonumber\n\\ln\\left(\\frac{v_{x}}{v_{0x}}\\right)&=&-\\gamma t,\\\\\n\\nonumber\nv_{x}(t)&=&v_{0x}e^{-\\gamma t}.\n\\end{eqnarray}\n$$\n\nThis is very much the result you would have written down\nby inspection. 
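The exponential decay of $v_x$ also gives us a convenient test case for the forward Euler scheme used earlier in this notebook: we can integrate $dv_x/dt=-\gamma v_x$ numerically and compare with $v_{0x}e^{-\gamma t}$. The parameter values below ($\gamma=0.5$ s$^{-1}$, $v_{0x}=10$ m/s) are chosen purely for illustration.

```python
import numpy as np

# Illustration values only (not specified in the text)
gamma = 0.5      # 1/s, drag parameter
v0x = 10.0       # m/s, initial horizontal velocity
DeltaT = 0.01    # s, time step
tfinal = 5.0     # s

n = int(tfinal/DeltaT)
t = np.linspace(0.0, tfinal, n+1)
v = np.zeros(n+1)
v[0] = v0x
# forward Euler for dv_x/dt = -gamma*v_x
for i in range(n):
    v[i+1] = v[i] - DeltaT*gamma*v[i]

vexact = v0x*np.exp(-gamma*t)
print("max |Euler - exact| =", np.max(np.abs(v - vexact)))
```

Halving `DeltaT` should roughly halve the printed error, reflecting the first-order accuracy of Euler's method.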
For the $y$-component of the velocity,\n\n$$\n\\begin{eqnarray}\n\\frac{dv_y}{v_y+g/\\gamma}&=&-\\gamma dt\\\\\n\\nonumber\n\\ln\\left(\\frac{v_{y}+g/\\gamma}{v_{0y}+g/\\gamma}\\right)&=&-\\gamma t,\\\\\n\\nonumber\nv_{y}(t)&=&-\\frac{g}{\\gamma}+\\left(v_{0y}+\\frac{g}{\\gamma}\\right)e^{-\\gamma t}.\n\\end{eqnarray}\n$$\n\nWhereas $v_x$ starts at some value and decays\nexponentially to zero, $v_y$ decays exponentially to the terminal\nvelocity, $v_t=-g/\\gamma$.\n\n## Solving as differential equations\n\nAlthough this direct integration is simpler than the method we invoke\nbelow, the method below will come in useful for some slightly more\ndifficult differential equations in the future. The differential\nequation for $v_x$ is straightforward to solve. Because it is first\norder there is one arbitrary constant, $A$, and by inspection the\nsolution is\n\n\n
\n\n$$\n\\begin{equation}\nv_x=Ae^{-\\gamma t}.\n\\label{_auto5} \\tag{5}\n\\end{equation}\n$$\n\nThe arbitrary constants for equations of motion are usually determined\nby the initial conditions, or more generally boundary conditions. By\ninspection $A=v_{0x}$, the initial $x$ component of the velocity.\n\n\n## Differential Equations, contd.\nThe differential equation for $v_y$ is a bit more complicated due to\nthe presence of $g$. Differential equations where all the terms are\nlinearly proportional to a function, in this case $v_y$, or to\nderivatives of the function, e.g., $v_y$, $dv_y/dt$,\n$d^2v_y/dt^2\\cdots$, are called linear differential equations. If\nthere are terms proportional to $v^2$, as would happen if the drag\nforce were proportional to the square of the velocity, the\ndifferential equation is no longer linear. Because this expression\nhas only one derivative in $v$ it is a first-order linear differential\nequation. If a term were added proportional to $d^2v/dt^2$ it would be\na second-order differential equation. In this case we have a term\ncompletely independent of $v$, the gravitational acceleration $g$, and\nthe usual strategy is to first rewrite the equation with all the\nlinear terms on one side of the equal sign,\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{dv_y}{dt}+\\gamma v_y=-g.\n\\label{_auto6} \\tag{6}\n\\end{equation}\n$$\n\n## Splitting into two parts\n\nNow, the solution to the equation can be broken into two\nparts. Because this is a first-order differential equation we know\nthat there will be one arbitrary constant. Physically, the arbitrary\nconstant will be determined by setting the initial velocity, though it\ncould be determined by setting the velocity at any given time. Like\nmost differential equations, solutions are not \"solved\". Instead,\none guesses at a form, then shows the guess is correct. For these\ntypes of equations, one first tries to find a single solution,\ni.e. one with no arbitrary constants. This is called the *particular*\nsolution, $v_{y,p}(t)$, though it should really be called\n\"a\" particular solution because there are an infinite number of such\nsolutions. One then finds a solution to the *homogeneous* equation,\nwhich is the equation with zero on the right-hand side,\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{dv_{y,h}}{dt}+\\gamma v_{y,h}=0.\n\\label{_auto7} \\tag{7}\n\\end{equation}\n$$\n\nHomogeneous solutions will have arbitrary constants. \n\nThe particular solution will solve the same equation as the original\ngeneral equation\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{dv_{y,p}}{dt}+\\gamma v_{y,p}=-g.\n\\label{_auto8} \\tag{8}\n\\end{equation}\n$$\n\nHowever, we don't need to find one with arbitrary constants. Hence, it is\ncalled a **particular** solution.\n\nThe sum of the two,\n\n\n
\n\n$$\n\\begin{equation}\nv_y=v_{y,p}+v_{y,h},\n\\label{_auto9} \\tag{9}\n\\end{equation}\n$$\n\nis a solution of the total equation because of the linear nature of\nthe differential equation. One has now found a *general* solution\nencompassing all solutions, because it both satisfies the general\nequation (like the particular solution), and has an arbitrary constant\nthat can be adjusted to fit any initial condition (like the homogeneous\nsolution). If the equation were not linear, e.g. if there were a term\nsuch as $v_y^2$ or $v_y\\dot{v}_y$, this technique would not work.\n\n## More details\n\nReturning to the example above, the homogeneous solution is the same as\nthat for $v_x$, because there was no gravitational acceleration in\nthat case,\n\n\n
\n\n$$\n\\begin{equation}\nv_{y,h}=Be^{-\\gamma t}.\n\\label{_auto10} \\tag{10}\n\\end{equation}\n$$\n\nIn this case a particular solution is one with constant velocity,\n\n\n
\n\n$$\n\\begin{equation}\nv_{y,p}=-g/\\gamma.\n\\label{_auto11} \\tag{11}\n\\end{equation}\n$$\n\nNote that this is the terminal velocity of a particle falling from a\ngreat height. The general solution is thus,\n\n\n
\n\n$$\n\\begin{equation}\nv_y=Be^{-\\gamma t}-g/\\gamma,\n\\label{_auto12} \\tag{12}\n\\end{equation}\n$$\n\nand one can find $B$ from the initial velocity,\n\n\n
\n\n$$\n\\begin{equation}\nv_{0y}=B-g/\\gamma,~~~B=v_{0y}+g/\\gamma.\n\\label{_auto13} \\tag{13}\n\\end{equation}\n$$\n\nPlugging in the expression for $B$ gives the $y$ motion given the initial velocity,\n\n\n
\n\n$$\n\\begin{equation}\nv_y=(v_{0y}+g/\\gamma)e^{-\\gamma t}-g/\\gamma.\n\\label{_auto14} \\tag{14}\n\\end{equation}\n$$\n\nIt is easy to see that this solution has $v_y=v_{0y}$ when $t=0$ and\n$v_y=-g/\\gamma$ when $t\\rightarrow\\infty$.\n\nOne can also integrate the two equations to find the coordinates $x$\nand $y$ as functions of $t$,\n\n$$\n\\begin{eqnarray}\nx&=&\\int_0^t dt'~v_{x}(t')=\\frac{v_{0x}}{\\gamma}\\left(1-e^{-\\gamma t}\\right),\\\\\n\\nonumber\ny&=&\\int_0^t dt'~v_{y}(t')=-\\frac{gt}{\\gamma}+\\frac{v_{0y}+g/\\gamma}{\\gamma}\\left(1-e^{-\\gamma t}\\right).\n\\end{eqnarray}\n$$\n\nIf the question was to find the position at a time $t$, we would be\nfinished. However, the more common goal in a projectile equation\nproblem is to find the range, i.e. the distance $x$ at which $y$\nreturns to zero. For the case without a drag force this was much\nsimpler. The solution for the $y$ coordinate would have been\n$y=v_{0y}t-gt^2/2$. One would solve for $t$ to make $y=0$, which would\nbe $t=2v_{0y}/g$, then plug that value for $t$ into $x=v_{0x}t$ to\nfind $x=2v_{0x}v_{0y}/g=v_0^2\\sin(2\\theta_0)/g$. One follows the same\nsteps here, except that the expression for $y(t)$ is more\ncomplicated. Searching for the time where $y=0$, we get\n\n\n
\n\n$$\n\\begin{equation}\n0=-\\frac{gt}{\\gamma}+\\frac{v_{0y}+g/\\gamma}{\\gamma}\\left(1-e^{-\\gamma t}\\right).\n\\label{_auto15} \\tag{15}\n\\end{equation}\n$$\n\nThis cannot be inverted into a simple expression $t=\\cdots$. Such\nexpressions are known as \"transcendental equations\", and are not the\nrare instance, but are the norm. In the days before computers, one\nmight plot the right-hand side of the above graphically as\na function of time, then find the point where it crosses zero.\n\nNow, the most common way to solve an equation of the above type\nwould be to apply Newton's method numerically. This involves the\nfollowing algorithm for finding solutions of some equation $F(t)=0$.\n\n1. First guess a value for the time, $t_{\\rm guess}$.\n\n2. Calculate $F$ and its derivative, $F(t_{\\rm guess})$ and $F'(t_{\\rm guess})$. \n\n3. Unless you guessed perfectly, $F\\ne 0$; assuming that $\\Delta F\\approx F'\\Delta t$, choose $\\Delta t=-F(t_{\\rm guess})/F'(t_{\\rm guess})$.\n\n4. Now repeat step 2, but with $t_{\\rm guess}\\rightarrow t_{\\rm guess}+\\Delta t$.\n\nIf the $F(t)$ were perfectly linear in $t$, one would find $t$ in one\nstep. Instead, one typically finds a value of $t$ that is closer to\nthe final answer than $t_{\\rm guess}$. One breaks the loop once one\nfinds $F$ within some acceptable tolerance of zero. 
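As a sketch of this algorithm (not the course's official program), the following applies Newton's method to the flight-time equation above. The parameter values $g=9.80665$ m/s$^2$, $\gamma=0.5$ s$^{-1}$ and $v_{0y}=10$ m/s are assumed purely for illustration; the drag-free flight time $2v_{0y}/g$ serves as the initial guess, which also steers the iteration away from the trivial root $t=0$.

```python
from math import exp

# Illustration values only (not specified in the text)
g = 9.80665    # m/s^2
gamma = 0.5    # 1/s
v0y = 10.0     # m/s

def F(t):
    # y(t) for the projectile with linear drag; F(t) = 0 at the flight time
    return -g*t/gamma + (v0y + g/gamma)/gamma*(1.0 - exp(-gamma*t))

def Fprime(t):
    # F'(t) = v_y(t)
    return -g/gamma + (v0y + g/gamma)*exp(-gamma*t)

def newton(f, fprime, tguess, tol=1e-10, maxiter=100):
    t = tguess
    for _ in range(maxiter):
        t += -f(t)/fprime(t)    # the Newton step Delta t = -F/F'
        if abs(f(t)) < tol:     # stop once F is within tolerance of zero
            break
    return t

tflight = newton(F, Fprime, 2*v0y/g)  # start from the drag-free flight time
print("flight time:", tflight)
```

For these assumed values the iteration converges in a handful of steps, and the resulting flight time is smaller than the drag-free value $2v_{0y}/g$.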
## Motion in a Magnetic Field\n\n\nAnother example of a velocity-dependent force is magnetism,\n\n$$\n\\begin{eqnarray}\n\\boldsymbol{F}&=&q\\boldsymbol{v}\\times\\boldsymbol{B},\\\\\n\\nonumber\nF_i&=&q\\sum_{jk}\\epsilon_{ijk}v_jB_k.\n\\end{eqnarray}\n$$\n\nFor a uniform field in the $z$ direction $\\boldsymbol{B}=B\\hat{z}$, the force can only have $x$ and $y$ components,\n\n$$\n\\begin{eqnarray}\nF_x&=&qBv_y\\\\\n\\nonumber\nF_y&=&-qBv_x.\n\\end{eqnarray}\n$$\n\nThe differential equations are\n\n$$\n\\begin{eqnarray}\n\\dot{v}_x&=&\\omega_c v_y,\\qquad\\omega_c\\equiv qB/m,\\\\\n\\nonumber\n\\dot{v}_y&=&-\\omega_c v_x.\n\\end{eqnarray}\n$$\n\nOne can solve the equations by taking time derivatives of either equation, then substituting into the other equation,\n\n$$\n\\begin{eqnarray}\n\\ddot{v}_x&=&\\omega_c\\dot{v}_y=-\\omega_c^2v_x,\\\\\n\\nonumber\n\\ddot{v}_y&=&-\\omega_c\\dot{v}_x=-\\omega_c^2v_y.\n\\end{eqnarray}\n$$\n\nThe solution to these equations can be seen by inspection,\n\n$$\n\\begin{eqnarray}\nv_x&=&A\\sin(\\omega_ct+\\phi),\\\\\n\\nonumber\nv_y&=&A\\cos(\\omega_ct+\\phi).\n\\end{eqnarray}\n$$\n\nOne can integrate the equations to find the positions as functions of time (absorbing the constant of integration into $x_0$ and $y_0$, which then denote the center of the orbit rather than the initial position),\n\n$$\n\\begin{eqnarray}\nx-x_0&=&\\int_0^t dt'~v_x(t')\\\\\n\\nonumber\n&=&\\frac{-A}{\\omega_c}\\cos(\\omega_ct+\\phi),\\\\\n\\nonumber\ny-y_0&=&\\frac{A}{\\omega_c}\\sin(\\omega_ct+\\phi).\n\\end{eqnarray}\n$$\n\nThe trajectory is a circle centered at $x_0,y_0$ with radius $A/\\omega_c$, traversed in the clockwise direction.\n\nThe equations of motion for the $z$ motion are\n\n\n
\n\n$$\n\\begin{equation}\n\\dot{v}_z=0,\n\\label{_auto16} \\tag{16}\n\\end{equation}\n$$\n\nwhich leads to\n\n\n
\n\n$$\n\\begin{equation}\nz-z_0=V_zt.\n\\label{_auto17} \\tag{17}\n\\end{equation}\n$$\n\nAdded onto the circle, the motion is helical.\n\nNote that the kinetic energy,\n\n\n
\n\n$$\n\\begin{equation}\nT=\\frac{1}{2}m(v_x^2+v_y^2+v_z^2)=\\frac{1}{2}m(A^2+V_z^2),\n\\label{_auto18} \\tag{18}\n\\end{equation}\n$$\n\nis constant. This is because the force is perpendicular to the\nvelocity, so that in any differential time element $dt$ the work done\non the particle is $\\boldsymbol{F}\\cdot d\\boldsymbol{r}=dt~\\boldsymbol{F}\\cdot\\boldsymbol{v}=0$.\n\nOne should think about the implications of a velocity-dependent\nforce. Suppose one had a constant magnetic field in deep space. If a\nparticle came through with velocity $v_0$, it would undergo cyclotron\nmotion with radius $R=v_0/\\omega_c$. However, if it were at rest, it\nwould remain at rest. Now, suppose an observer looked at the\nparticle in one reference frame where the particle was moving, then\nchanged their velocity so that the particle's velocity appeared to be\nzero. The motion would change from circular to fixed. Is this\npossible?\n\nThe solution to the puzzle above relies on understanding\nrelativity. Imagine that the first observer believes $\\boldsymbol{B}\\ne 0$ and\nthat the electric field $\\boldsymbol{E}=0$. If the observer then changes\nreference frames by accelerating to a velocity $\\boldsymbol{v}$, in the new\nframe $\\boldsymbol{B}$ and $\\boldsymbol{E}$ both change. If the observer moved to the\nframe where the charge, originally moving with a small velocity $v$,\nis now at rest, the new electric field is indeed $\\boldsymbol{v}\\times\\boldsymbol{B}$,\nwhich then leads to the same acceleration as one had before. If the\nvelocity is not small compared to the speed of light, additional\n$\\gamma$ factors come into play,\n$\\gamma=1/\\sqrt{1-(v/c)^2}$. Relativistic motion will not be\nconsidered in this course.\n\n\n\n## Sliding Block tied to a Wall\n\nAnother classical case is that of simple harmonic oscillations, here represented by a block sliding on a horizontal frictionless surface. The block is tied to a wall with a spring. 
If the spring is not compressed or stretched too far, the force on the block at a given position $x$ is\n\n$$\nF=-kx.\n$$\n\nThe negative sign means that the force acts to restore the object to an equilibrium position. Newton's equation of motion for this idealized system is then\n\n$$\nm\\frac{d^2x}{dt^2}=-kx,\n$$\n\nor we could rephrase it as\n\n\n
\n\n$$\n\\frac{d^2x}{dt^2}=-\\frac{k}{m}x=-\\omega_0^2x,\n\\label{eq:newton1} \\tag{19}\n$$\n\nwhere the angular frequency $\\omega_0$ is defined by $\\omega_0^2=k/m$. \n\nThe above differential equation has the advantage that it can be solved analytically with solutions of the form\n\n$$\nx(t)=A\\cos(\\omega_0t+\\nu),\n$$\n\nwhere $A$ is the amplitude and $\\nu$ the phase constant. This provides in turn an important test for the numerical\nsolution and the development of a program for more complicated cases which cannot be solved analytically. \n\n\n\n## Simple Example, Block tied to a Wall\n\nWith the position $x(t)$ and the velocity $v(t)=dx/dt$ we can reformulate Newton's equation in the following way\n\n$$\n\\frac{dx(t)}{dt}=v(t),\n$$\n\nand\n\n$$\n\\frac{dv(t)}{dt}=-\\omega_0^2x(t).\n$$\n\nWe are now going to solve these equations using first the standard forward Euler method. Later we will try to improve upon this.\n\n\n## Simple Example, Block tied to a Wall\n\nBefore proceeding, however, it is important to note that in addition to the exact solution, we have at least two further tests which can be used to check our solution. \n\nSince functions like $\\cos$ are periodic with a period $2\\pi$, the solution $x(t)$ must also be periodic. This means that\n\n$$\nx(t+T)=x(t),\n$$\n\nwith $T$ the period defined as\n\n$$\nT=\\frac{2\\pi}{\\omega_0}=\\frac{2\\pi}{\\sqrt{k/m}}.\n$$\n\nObserve that $T$ depends only on $k/m$ and not on the amplitude of the solution. \n\n\n## Simple Example, Block tied to a Wall\n\nIn addition to the periodicity test, the total energy has also to be conserved. 
\n\nSuppose we choose the initial conditions\n\n$$\nx(t=0)=1\\hspace{0.1cm} \\mathrm{m}\\hspace{1cm} v(t=0)=0\\hspace{0.1cm}\\mathrm{m/s},\n$$\n\nmeaning that the block is at rest at $t=0$ but with a potential energy\n\n$$\nE_0=\\frac{1}{2}kx(t=0)^2=\\frac{1}{2}k.\n$$\n\nThe total energy at any time $t$ has however to be conserved, meaning that our solution has to fulfil the condition\n\n$$\nE_0=\\frac{1}{2}kx(t)^2+\\frac{1}{2}mv(t)^2.\n$$\n\nWe will derive this equation in our discussion on [energy conservation](https://mhjensen.github.io/Physics321/doc/pub/energyconserv/html/energyconserv.html).\n\n## Simple Example, Block tied to a Wall\n\nAn algorithm which implements these equations is included below.\n* Choose the initial position and speed, with the most common choice $v(t=0)=0$ and some fixed value for the position. \n\n* Choose the method you wish to employ in solving the problem.\n\n* Subdivide the time interval $[t_i,t_f]$ into a grid with step size\n\n$$\nh=\\frac{t_f-t_i}{N},\n$$\n\nwhere $N$ is the number of mesh points. \n\n* Calculate now the total energy given by\n\n$$\nE_0=\\frac{1}{2}kx(t=0)^2=\\frac{1}{2}k.\n$$\n\n* Choose an ODE solver to obtain $x_{i+1}$ and $v_{i+1}$ starting from the previous values $x_i$ and $v_i$.\n\n* When we have computed $x_{i+1}$ and $v_{i+1}$ we update $t_{i+1}=t_i+h$.\n\n* This iterative process continues until we reach the maximum time $t_f$.\n\n* The results are checked against the exact solution. Furthermore, one has to check the stability of the numerical solution against the chosen number of mesh points $N$. 
\n\n## Simple Example, Block tied to a Wall, python code\n\nThe following python program sets up the problem (the solver code itself will be added shortly)\n\n\n```python\n#\n# This program solves Newton's equation for a block sliding on\n# a horizontal frictionless surface.\n# The block is tied to the wall with a spring, so Newton's equation takes the form:\n#\n# m d^2x/dt^2 = - kx\n#\n# In order to make the solution dimensionless, we set k/m = 1.\n# This results in two coupled diff. eq's that may be written as:\n#\n# dx/dt = v\n# dv/dt = -x\n#\n# The user has to specify the initial velocity and position,\n# and the number of steps. The time interval is fixed to\n# t \\in [0, 4\\pi) (two periods)\n#\n```\n
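A minimal forward-Euler implementation of the algorithm above might look as follows. This is only a sketch, not the official course program: it hard-codes the initial conditions $x(0)=1$ and $v(0)=0$ and a step number chosen for illustration, and checks the solution against the exact $\cos(t)$ and against energy conservation.

```python
import numpy as np
from math import pi

# forward Euler for dx/dt = v, dv/dt = -x  (k/m = 1, so omega_0 = 1)
N = 10000                          # number of integration steps (illustration value)
t = np.linspace(0.0, 4*pi, N+1)    # two periods, since T = 2*pi
h = t[1] - t[0]                    # step size
x = np.zeros(N+1)
v = np.zeros(N+1)
x[0] = 1.0                         # initial position
v[0] = 0.0                         # initial velocity
for i in range(N):
    x[i+1] = x[i] + h*v[i]
    v[i+1] = v[i] - h*x[i]

# the two tests discussed above: exact solution and energy conservation
xexact = np.cos(t)
energy = 0.5*(x**2 + v**2)         # with k = m = 1
print("max position error:", np.max(np.abs(x - xexact)))
print("max energy drift:  ", np.max(np.abs(energy - energy[0])))
```

For the forward Euler method the energy grows slowly but systematically with time (each step multiplies $x^2+v^2$ by exactly $1+h^2$); the Euler-Cromer variant introduced earlier keeps the energy bounded and is therefore better suited for oscillatory problems.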
Probabilistic Programming\n=====\nand Bayesian Methods for Hackers \n========\n\n##### Version 0.1\n\n`Original content created by Cam Davidson-Pilon`\n\n`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`\n___\n\n\nWelcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. 
Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives.


### The Bayesian state of mind


Bayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians.

The Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability.

For this to be clearer, we consider an alternative interpretation of probability: *frequentists*, adherents of the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrence. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that across all these realities, the frequency of occurrences defines the probability.

Bayesians, on the other hand, have a more intuitive approach.
Bayesians interpret a probability as a measure of *belief*, or confidence, in an event occurring. Simply put, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you that candidate *A* will win?

Notice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs about events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:

- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's result. Thus we assign different probabilities to the result.
\n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. \n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. 
$P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. 
Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). 
$N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\")\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. 
Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:

\begin{align}
 P( A | X ) = & \frac{ P(X | A) P(A) } {P(X) } \\\\[5pt]
& \propto P(X | A) P(A)\;\; (\propto \text{is proportional to })
\end{align}

The above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. Bayesian inference merely uses it to connect the prior probability $P(A)$ with the updated posterior probability $P(A | X )$.

##### Example: Mandatory coin-flip example

Every statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be.

We begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?

Below we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).


```python
"""
The book uses a custom matplotlibrc file, which provides the unique styles for
matplotlib plots. If executing this book, and you wish to use the book's
styling, provided are two options:
    1. Overwrite your own matplotlibrc file with the rc-file provided in the
       book's styles/ dir. See http://matplotlib.org/users/customizing.html
    2. Also in the styles is the bmh_matplotlibrc.json file. This can be used to
       update the styles in only this notebook.
Try running the following code:

    import json
    s = json.load(open("../styles/bmh_matplotlibrc.json"))
    matplotlib.rcParams.update(s)

"""

# The code below can be passed over, as it is currently not important, plus it
# uses advanced topics we have not covered yet. LOOK AT PICTURE, MICHAEL!
%matplotlib inline
from IPython.core.pylabtools import figsize
import numpy as np
from matplotlib import pyplot as plt
figsize(11, 9)

import scipy.stats as stats

dist = stats.beta
n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
data = stats.bernoulli.rvs(0.5, size=n_trials[-1])
x = np.linspace(0, 1, 100)

# For the already prepared, I'm using the Binomial's conjugate prior.
for k, N in enumerate(n_trials):
    # integer division: subplot indices must be ints in Python 3
    sx = plt.subplot(len(n_trials)//2, 2, k+1)
    plt.xlabel("$p$, probability of heads") \
        if k in [0, len(n_trials)-1] else None
    plt.setp(sx.get_yticklabels(), visible=False)
    heads = data[:N].sum()
    y = dist.pdf(x, 1 + heads, 1 + N - heads)
    plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads))
    plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4)
    plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)

    leg = plt.legend()
    leg.get_frame().set_alpha(0.4)
    plt.autoscale(tight=True)


plt.suptitle("Bayesian updating of posterior probabilities",
             y=1.02,
             fontsize=14)

plt.tight_layout()
```

The posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line).

Notice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is.
In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. 
Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? \n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...

- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.

- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous outcomes, i.e. they are a combination of the above two categories.

### Discrete Case
If $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability that $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$; that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:

$$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots $$

$\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the *intensity* of the Poisson distribution.

Unlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0, 1, 2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members.
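As a quick numerical sanity check (a sketch added here, not part of the original text), the mass function above can be evaluated directly with the standard library: its probabilities sum to 1, and its mean works out to $\lambda$.

```python
from math import exp, factorial

lam = 4.25  # illustrative intensity, matching one of the values plotted below

# P(Z = k) = lam**k * e**(-lam) / k!
pmf = [lam**k * exp(-lam) / factorial(k) for k in range(60)]

total = sum(pmf)                              # ~1.0 (the tail beyond k=59 is negligible)
mean = sum(k * p for k, p in enumerate(pmf))  # ~lam
```

Truncating the sum at $k=59$ is harmless here because the Poisson tail decays faster than exponentially for $k \gg \lambda$.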
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\");\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. 
But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. 
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. 
Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. 
Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC3\n-----\n\nPyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. 
One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.\n\nWe will model the problem above using PyMC3. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC3 code is easy to read. The only novel thing should be the syntax. 
Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables.\n\n\n```python\nimport pymc3 as pm\nimport theano.tensor as tt\n\nwith pm.Model() as model:\n alpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\n lambda_1 = pm.Exponential(\"lambda_1\", alpha)\n lambda_2 = pm.Exponential(\"lambda_2\", alpha)\n \n tau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data - 1)\n```\n\nIn the code above, we create the PyMC3 variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC3's *stochastic variables*, so-called because they are treated by the back end as random number generators.\n\n\n```python\nwith model:\n idx = np.arange(n_count_data) # Index\n lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`.\n\nNote that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n\n```python\nwith model:\n observation = pm.Poisson(\"obs\", lambda_, observed=count_data)\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword. \n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. 
This technique returns thousands of random variables from the posterior distributions of $\lambda_1, \lambda_2$ and $\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.

```python
### Mysterious code to be explained in Chapter 3.
with model:
    step = pm.Metropolis()
    trace = pm.sample(10000, tune=5000, step=step)
```

    Multiprocess sampling (4 chains in 4 jobs)
    CompoundStep
    >Metropolis: [tau]
    >Metropolis: [lambda_2]
    >Metropolis: [lambda_1]
    Sampling 4 chains: 100%|██████████| 60000/60000 [00:08<00:00, 7111.54draws/s]
    The number of effective samples is smaller than 25% for some parameters.

```python
lambda_1_samples = trace['lambda_1']
lambda_2_samples = trace['lambda_2']
tau_samples = trace['tau']
```

```python
figsize(12.5, 10)
# histograms of the samples:

ax = plt.subplot(311)
ax.set_autoscaley_on(False)

plt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,
         label=r"posterior of $\lambda_1$", color="#A60628", density=True)
plt.legend(loc="upper left")
plt.title(r"""Posterior distributions of the variables
    $\lambda_1,\;\lambda_2,\;\tau$""")
plt.xlim([15, 30])
plt.xlabel(r"$\lambda_1$ value")

ax = plt.subplot(312)
ax.set_autoscaley_on(False)
plt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,
         label=r"posterior of $\lambda_2$", color="#7A68A6", density=True)
plt.legend(loc="upper left")
plt.xlim([15, 30])
plt.xlabel(r"$\lambda_2$ value")

plt.subplot(313)
w = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)
plt.hist(tau_samples, bins=n_count_data, alpha=1,
         label=r"posterior of $\tau$",
         color="#467821", weights=w, rwidth=2.)
plt.xticks(np.arange(n_count_data))

plt.legend(loc="upper left")
plt.ylim([0, .75])
plt.xlim([35,
len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. 
By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. 


```python
figsize(12.5, 5)
# tau_samples, lambda_1_samples, lambda_2_samples contain
# N samples from the corresponding posterior distribution
N = tau_samples.shape[0]
expected_texts_per_day = np.zeros(n_count_data)
for day in range(0, n_count_data):
    # ix is a boolean index over the tau samples: True where the
    # switchpoint has not yet occurred by 'day' (i.e., day < tau)
    ix = day < tau_samples
    # Each posterior sample corresponds to a value for tau.
    # For each day, that value of tau indicates whether we're "before"
    # (in the lambda1 "regime") or "after" (in the lambda2 "regime")
    # the switchpoint. By taking the posterior sample of lambda1/2
    # accordingly, we can average over all samples to get an expected
    # value for lambda on that day. As explained, the "message count"
    # random variable is Poisson distributed, and therefore lambda
    # (the Poisson parameter) is the expected value of "message count".
    expected_texts_per_day[day] = (lambda_1_samples[ix].sum()
                                   + lambda_2_samples[~ix].sum()) / N


plt.plot(range(n_count_data), expected_texts_per_day, lw=4, color="#E24A33",
         label="expected number of text-messages received")
plt.xlim(0, n_count_data)
plt.xlabel("Day")
plt.ylabel("Expected # text-messages")
plt.title("Expected number of text-messages received")
plt.ylim(0, 60)
plt.bar(np.arange(len(count_data)), count_data, color="#348ABD", alpha=0.65,
        label="observed texts per day")

plt.legend(loc="upper left");
```

Our analysis shows strong support for believing the user's behavior did change ($\lambda_1$ would have been close in value to $\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. 
(In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)


### Appendix 1.6.2 Extending to Two Switchpoints

Readers might be interested in how the previous model can be extended to more than a single switchpoint, or may question the assumption of only one switchpoint. We'll start with extending the model to consider two switchpoints (which implies three $\lambda_i$ parameters). The model looks very similar to the previous one:

$$
\lambda = 
\begin{cases}
\lambda_1 & \text{if } t \lt \tau_1 \cr
\lambda_2 & \text{if } \tau_1 \le t \lt \tau_2 \cr
\lambda_3 & \text{if } t \ge \tau_2
\end{cases}
$$

where

\begin{align}
&\lambda_1 \sim \text{Exp}( \alpha ) \\
&\lambda_2 \sim \text{Exp}( \alpha ) \\
&\lambda_3 \sim \text{Exp}( \alpha )
\end{align}

and

\begin{align}
& \tau_1 \sim \text{DiscreteUniform}(1,69) \\
& \tau_2 \sim \text{DiscreteUniform}(\tau_1,70)
\end{align}

Let's code this model up; it looks very similar to our previous code:


```python
with pm.Model() as model:

    alpha = 1/count_data.mean()

    lambda_1 = pm.Exponential("lambda_1", alpha)
    lambda_2 = pm.Exponential("lambda_2", alpha)
    lambda_3 = pm.Exponential("lambda_3", alpha)

    tau_1 = pm.DiscreteUniform("tau_1", lower=0, upper=n_count_data-1)
    tau_2 = pm.DiscreteUniform("tau_2", lower=tau_1, upper=n_count_data)

with model:

    idx = np.arange(n_count_data)
    lambda_ = pm.math.switch(idx < tau_2,
                             pm.math.switch(idx < tau_1,
                                            lambda_1,
                                            lambda_2),
                             lambda_3)
    observation = pm.Poisson('obs', lambda_, observed=count_data)

    step = pm.Metropolis()
    trace = pm.sample(10000, tune=5000, step=step)
```

    Multiprocess sampling (4 chains in 4 jobs)
    CompoundStep
    >Metropolis: [tau_2]
    >Metropolis: [tau_1]
    >Metropolis: [lambda_3]
    >Metropolis: [lambda_2]
    >Metropolis: [lambda_1]
    Sampling 4 chains:
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 60000/60000 [00:08<00:00, 7225.63draws/s]\n The number of effective samples is smaller than 10% for some parameters.\n\n\n\n```python\nlambda_1_samples = trace['lambda_1']\nlambda_2_samples = trace['lambda_2']\nlambda_3_samples = trace['lambda_3']\ntau_1_samples = trace['tau_1']\ntau_2_samples = trace['tau_2']\n```\n\n\n```python\nplt.rcParams['figure.figsize'] = (12.5, 10)\n\n# lambda_1\nax = plt.subplot(511)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples,\n histtype='stepfilled',\n bins=100,\n alpha=0.85,\n label=r\"Posterior of $\\lambda_1$\",\n color='#A60628',\n density=True)\nplt.legend(loc='upper left')\nplt.ylabel('Density')\nplt.xlabel(r\"$\\lambda_1$ value\")\nplt.title(r\"Posterior Distribution of the variables $\\lambda_1, \\lambda_2, \\lambda_3, \\tau_1 ~&~ \\tau_2$\")\nplt.xlim([15, 30])\n\n# lambda_2\nax = plt.subplot(512)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_2_samples,\n histtype='stepfilled',\n bins=100,\n alpha=0.85,\n label=r\"Posterior of $\\lambda_2$\",\n color='#7A68A6',\n density=True)\nplt.legend(loc='upper left')\nplt.ylabel('Density')\nplt.xlim([30, 90])\nplt.xlabel(r\"$\\lambda_2$ value\")\n\n# lambda_3\nax = plt.subplot(513)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_3_samples,\n histtype='stepfilled',\n bins=100,\n alpha=0.85,\n label=r\"Posterior of $\\lambda_3$\",\n color='#3A68A6',\n density=True)\nplt.legend(loc='upper left')\nplt.ylabel('Density')\nplt.xlim([15, 30])\nplt.xlabel(r\"$\\lambda_3$ value\")\n\n# tau_1\nplt.subplot(514)\nw = 1.0 / tau_1_samples.shape[0] * np.ones_like(tau_1_samples)\nplt.hist(tau_1_samples,\n bins=n_count_data,\n alpha=1,\n label=r'Posterior of $\\tau_1$',\n color='#467821',\n weights=w,\n rwidth=2)\nplt.xticks(np.arange(n_count_data))\nplt.legend(loc='upper left')\nplt.ylim([0, 0.75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r'$\\tau_1$ (in days)')\nplt.ylabel('Probability')\n\n# 
tau_2
plt.subplot(515)
w = 1.0 / tau_2_samples.shape[0] * np.ones_like(tau_2_samples)
plt.hist(tau_2_samples,
         bins=n_count_data,
         alpha=1,
         label=r'Posterior of $\tau_2$',
         color='#767821',
         weights=w,
         rwidth=2)
plt.xticks(np.arange(n_count_data))
plt.legend(loc='upper left')
plt.ylim([0, 0.75])
plt.xlim([35, len(count_data)-20])
plt.xlabel(r'$\tau_2$ (in days)')
plt.ylabel('Probability')
```

##### Exercises

1\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\lambda_1$ and $\lambda_2$?


```python
print(lambda_1_samples.mean())
print(lambda_2_samples.mean())
```

    17.758062445442253
    58.36165697369309


2\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.


```python
# 2
# posterior samples of the relative increase, 1 - lambda_1/lambda_2
pct_inc_post = (lambda_2_samples - lambda_1_samples)/lambda_2_samples

# posterior mean of the relative increase
post_mean = pct_inc_post.mean()

# posterior of the expected percentage increase
fig, ax = plt.subplots(figsize=(12, 9))

ax.hist(pct_inc_post,
        bins=100,
        color='#348ABD',
        density=True,
        label='Bayesian Posterior')
plt.axvline(post_mean,
            color="#A60628",
            label='Posterior mean')
plt.title("Posterior Distribution of the Expected Percentage Increase")
plt.xlabel('Percentage Increase')
plt.ylabel('Density')
plt.legend(loc='upper left')
```

3\. What is the mean of $\lambda_1$ **given** that we know $\tau$ is less than 45? That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\lambda_1$ now? (You do not need to redo the PyMC3 part. 
 Just consider all instances where `tau_samples < 45`.)


```python
# Condition on tau < 45 by selecting only the lambda_1 samples that were
# drawn jointly with a tau sample below day 45:
print(lambda_1_samples[tau_samples < 45].mean())
```

### References


- [1] Gelman, Andrew. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg). Web. 22 Jan 2013.
- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).
- [3] Salvatier, J., Wiecki, T.V., and Fonnesbeck, C. (2016). Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55.
- [4] Lin, Jimmy, and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.
- [5] Cronin, Beau. "Why Probabilistic Programming Matters." Google+ posting, 24 Mar 2013.
# Infinite matter, from the electron gas to nuclear matter

**[Morten Hjorth-Jensen](http://computationalphysics.no), National Superconducting Cyclotron Laboratory and Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA & Department of Physics, University of Oslo, Oslo, Norway**

Date: **July 2015**

## Introduction to studies of infinite matter

Studies of infinite nuclear matter play an important role in nuclear physics. The aim of this part of the lectures is to provide the necessary ingredients for performing studies of neutron star matter (or matter in $\beta$-equilibrium) and symmetric nuclear matter. We start however with the electron gas in two and three dimensions, for both historical and pedagogical reasons. Since there are several benchmark calculations for the electron gas, this small detour will allow us to establish the necessary formalism. Thereafter we will study infinite nuclear matter
* at the Hartree-Fock level with realistic nuclear forces and

* using many-body methods like coupled-cluster theory or in-medium SRG, as discussed in our previous sections.

## The infinite electron gas

The electron gas is perhaps the only realistic model of a system of many interacting particles that allows for a solution of the Hartree-Fock equations in closed form. Furthermore, to first order in the interaction, one can also compute in closed form the total energy and several other properties of a many-particle system. The model gives a very good approximation to the properties of valence electrons in metals. The assumptions are

 * System of electrons that is not influenced by external forces except by an attraction provided by a uniform background of ions. These ions give rise to a uniform background charge. 
The ions are stationary.

 * The system as a whole is neutral.

 * We assume we have $N_e$ electrons in a cubic box of length $L$ and volume $\Omega=L^3$. This volume contains also a uniform distribution of positive charge with density $N_ee/\Omega$.

The homogeneous electron gas is one of the few examples of a system of many interacting particles that allows for a solution of the mean-field Hartree-Fock equations in closed form. To first order in the electron-electron interaction, this applies to ground state properties like the energy and its pertinent equation of state as well. The homogeneous electron gas is a system of electrons that is not influenced by external forces except by an attraction provided by a uniform background of ions. These ions give rise to a uniform background charge. The ions are stationary and the system as a whole is neutral. Despite this simplicity, this system, in both two and three dimensions, has eluded a proper description of correlations in terms of various first-principle methods, except perhaps for quantum Monte Carlo methods. In particular, the diffusion Monte Carlo calculations of [Ceperley](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.45.566) and [Ceperley and Tanatar](http://journals.aps.org/prb/abstract/10.1103/PhysRevB.39.5005) are presently still considered as the best possible benchmarks for the two- and three-dimensional electron gas.

The electron gas, in two or three dimensions, is thus interesting as a test-bed for electron-electron correlations. The three-dimensional electron gas is particularly important as a cornerstone of the local-density approximation in density-functional theory. In the physical world, systems similar to the three-dimensional electron gas can be found in, for example, alkali metals and doped semiconductors.
Two-dimensional electron fluids are \nobserved on metal and liquid-helium surfaces, as well as \nat metal-oxide-semiconductor interfaces. However, the Coulomb \ninteraction has an infinite range, and therefore \nlong-range correlations play an essential role in the\nelectron gas. \n\n\n\n\nAt low densities, the electrons become \nlocalized and form a lattice. This so-called Wigner \ncrystallization is a direct consequence \nof the long-ranged repulsive interaction. At higher\ndensities, the electron gas is better described as a\nliquid.\nWhen using, for example, Monte Carlo methods the electron gas must be approximated \nby a finite system. The long-range Coulomb interaction \nin the electron gas causes additional finite-size effects that are not\npresent in other infinite systems like nuclear matter or neutron star matter.\nThis poses additional challenges to many-body methods when applied \nto the electron gas.\n\n\n\n\n\n## The infinite electron gas as a homogenous system\n\nThis is a homogeneous system and the one-particle wave functions are given by plane wave functions normalized to a volume $\\Omega$ \nfor a box with length $L$ (the limit $L\\rightarrow \\infty$ is to be taken after we have computed various expectation values)\n\n$$\n\\psi_{\\mathbf{k}\\sigma}(\\mathbf{r})= \\frac{1}{\\sqrt{\\Omega}}\\exp{(i\\mathbf{kr})}\\xi_{\\sigma}\n$$\n\nwhere $\\mathbf{k}$ is the wave number and $\\xi_{\\sigma}$ is a spin function for either spin up or down\n\n$$\n\\xi_{\\sigma=+1/2}=\\left(\\begin{array}{c} 1 \\\\ 0 \\end{array}\\right) \\hspace{0.5cm}\n\\xi_{\\sigma=-1/2}=\\left(\\begin{array}{c} 0 \\\\ 1 \\end{array}\\right).\n$$\n\n## Periodic boundary conditions\n\n\nWe assume that we have periodic boundary conditions which limit the allowed wave numbers to\n\n$$\nk_i=\\frac{2\\pi n_i}{L}\\hspace{0.5cm} i=x,y,z \\hspace{0.5cm} n_i=0,\\pm 1,\\pm 2, \\dots\n$$\n\nWe assume first that the electrons interact via a central, symmetric and translationally 
invariant\ninteraction $V(r_{12})$ with\n$r_{12}=|\\mathbf{r}_1-\\mathbf{r}_2|$. The interaction is spin independent.\n\nThe total Hamiltonian consists then of kinetic and potential energy\n\n$$\n\\hat{H} = \\hat{T}+\\hat{V}.\n$$\n\nThe operator for the kinetic energy can be written as\n\n$$\n\\hat{T}=\\sum_{\\mathbf{k}\\sigma}\\frac{\\hbar^2k^2}{2m}a_{\\mathbf{k}\\sigma}^{\\dagger}a_{\\mathbf{k}\\sigma}.\n$$\n\n## Defining the Hamiltonian operator\n\nThe Hamiltonian operator is given by\n\n$$\n\\hat{H}=\\hat{H}_{el}+\\hat{H}_{b}+\\hat{H}_{el-b},\n$$\n\nwith the electronic part\n\n$$\n\\hat{H}_{el}=\\sum_{i=1}^N\\frac{p_i^2}{2m}+\\frac{e^2}{2}\\sum_{i\\ne j}\\frac{e^{-\\mu |\\mathbf{r}_i-\\mathbf{r}_j|}}{|\\mathbf{r}_i-\\mathbf{r}_j|},\n$$\n\nwhere we have introduced an explicit convergence factor\n(the limit $\\mu\\rightarrow 0$ is performed after having calculated the various integrals).\nCorrespondingly, we have\n\n$$\n\\hat{H}_{b}=\\frac{e^2}{2}\\int\\int d\\mathbf{r}d\\mathbf{r}'\\frac{n(\\mathbf{r})n(\\mathbf{r}')e^{-\\mu |\\mathbf{r}-\\mathbf{r}'|}}{|\\mathbf{r}-\\mathbf{r}'|},\n$$\n\nwhich is the energy contribution from the positive background charge with density\n$n(\\mathbf{r})=N/\\Omega$. 
Finally,

$$
\hat{H}_{el-b}=-\frac{e^2}{2}\sum_{i=1}^N\int d\mathbf{r}\frac{n(\mathbf{r})e^{-\mu |\mathbf{r}-\mathbf{x}_i|}}{|\mathbf{r}-\mathbf{x}_i|},
$$

is the interaction between the electrons and the positive background.



## Single-particle Hartree-Fock energy

In the first exercise below we show that the Hartree-Fock energy can be written as

$$
\varepsilon_{k}^{HF}=\frac{\hbar^{2}k^{2}}{2m_e}-\frac{e^{2}}
{\Omega^{2}}\sum_{k'\leq
k_{F}}\int d\mathbf{r}e^{i(\mathbf{k}'-\mathbf{k})\mathbf{r}}\int
d\mathbf{r'}\frac{e^{i(\mathbf{k}-\mathbf{k}')\mathbf{r}'}}
{\vert\mathbf{r}-\mathbf{r}'\vert}
$$

resulting in

$$
\varepsilon_{k}^{HF}=\frac{\hbar^{2}k^{2}}{2m_e}-\frac{e^{2}
k_{F}}{2\pi}
\left[
2+\frac{k_{F}^{2}-k^{2}}{kk_{F}}\ln\left\vert\frac{k+k_{F}}
{k-k_{F}}\right\vert
\right]
$$

The previous result can be rewritten in terms of the density

$$
n= \frac{k_F^3}{3\pi^2}=\frac{3}{4\pi r_s^3},
$$

where $n=N_e/\Omega$, $N_e$ being the number of electrons, and $r_s$ is the radius of a sphere which represents the volume per conducting electron. 
It can be convenient to use the Bohr radius $a_0=\hbar^2/e^2m_e$.
For most metals we have a relation $r_s/a_0\sim 2-6$. The ratio $r_s/a_0$ is dimensionless.


In the second exercise below we find that the total energy
$E_0/N_e=\langle\Phi_{0}|\hat{H}|\Phi_{0}\rangle/N_e$
for this system to first order in the interaction is given as

$$
E_0/N_e=\frac{e^2}{2a_0}\left[\frac{2.21}{r_s^2}-\frac{0.916}{r_s}\right].
$$



## Exercise 1: Hartree-Fock single-particle solution for the electron gas

The electron gas model allows closed form solutions for quantities like the single-particle Hartree-Fock energy.
The latter quantity is given by the following expression\n\n$$\n\\varepsilon_{k}^{HF}=\\frac{\\hbar^{2}k^{2}}{2m}-\\frac{e^{2}}\n{V^{2}}\\sum_{k'\\leq\nk_{F}}\\int d\\mathbf{r}e^{i(\\mathbf{k'}-\\mathbf{k})\\mathbf{r}}\\int\nd\\mathbf{r}'\\frac{e^{i(\\mathbf{k}-\\mathbf{k'})\\mathbf{r}'}}\n{\\vert\\mathbf{r}-\\mathbf{r'}\\vert}\n$$\n\n**a)**\nShow first that\n\n$$\n\\varepsilon_{k}^{HF}=\\frac{\\hbar^{2}k^{2}}{2m}-\\frac{e^{2}\nk_{F}}{2\\pi}\n\\left[\n2+\\frac{k_{F}^{2}-k^{2}}{kk_{F}}ln\\left\\vert\\frac{k+k_{F}}\n{k-k_{F}}\\right\\vert\n\\right]\n$$\n\n\n\n**Hint.**\nHint: Introduce the convergence factor \n$e^{-\\mu\\vert\\mathbf{r}-\\mathbf{r}'\\vert}$\nin the potential and use $\\sum_{\\mathbf{k}}\\rightarrow\n\\frac{V}{(2\\pi)^{3}}\\int d\\mathbf{k}$\n\n\n\n\n\n**Solution.**\nWe want to show that, given the Hartree-Fock equation for the electron gas\n\n$$\n\\varepsilon_{k}^{HF}=\\frac{\\hbar^{2}k^{2}}{2m}-\\frac{e^{2}}\n{V^{2}}\\sum_{p\\leq\nk_{F}}\\int d\\mathbf{r}\\exp{(i(\\mathbf{p}-\\mathbf{k})\\mathbf{r})}\\int\nd\\mathbf{r}'\\frac{\\exp{(i(\\mathbf{k}-\\mathbf{p})\\mathbf{r}'})}\n{\\vert\\mathbf{r}-\\mathbf{r'}\\vert}\n$$\n\nthe single-particle energy can be written as\n\n$$\n\\varepsilon_{k}^{HF}=\\frac{\\hbar^{2}k^{2}}{2m}-\\frac{e^{2}\nk_{F}}{2\\pi}\n\\left[\n2+\\frac{k_{F}^{2}-k^{2}}{kk_{F}}ln\\left\\vert\\frac{k+k_{F}}\n{k-k_{F}}\\right\\vert\n\\right].\n$$\n\nWe introduce the convergence factor \n$e^{-\\mu\\vert\\mathbf{r}-\\mathbf{r}'\\vert}$\nin the potential and use $\\sum_{\\mathbf{k}}\\rightarrow\n\\frac{V}{(2\\pi)^{3}}\\int d\\mathbf{k}$. We can then rewrite the integral as\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{e^{2}}\n{V^{2}}\\sum_{k'\\leq\nk_{F}}\\int d\\mathbf{r}\\exp{(i(\\mathbf{k'}-\\mathbf{k})\\mathbf{r})}\\int\nd\\mathbf{r}'\\frac{\\exp{(i(\\mathbf{k}-\\mathbf{p})\\mathbf{r}'})}\n{\\vert\\mathbf{r}-\\mathbf{r'}\\vert}= \n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\frac{e^{2}}{V (2\\pi)^3} \\int d\\mathbf{r}\\int\n\\frac{d\\mathbf{r}'}{\\vert\\mathbf{r}-\\mathbf{r'}\\vert}\\exp{(-i\\mathbf{k}(\\mathbf{r}-\\mathbf{r}'))}\\int d\\mathbf{p}\\exp{(i\\mathbf{p}(\\mathbf{r}-\\mathbf{r}'))},\n\\label{_auto2} \\tag{2}\n\\end{equation}\n$$\n\nand introducing the abovementioned convergence factor we have\n\n\n
\n\n$$\n\\begin{equation}\n\\lim_{\\mu \\to 0}\\frac{e^{2}}{V (2\\pi)^3} \\int d\\mathbf{r}\\int d\\mathbf{r}'\\frac{\\exp{(-\\mu\\vert\\mathbf{r}-\\mathbf{r}'\\vert})}{\\vert\\mathbf{r}-\\mathbf{r'}\\vert}\\int d\\mathbf{p}\\exp{(i(\\mathbf{p}-\\mathbf{k})(\\mathbf{r}-\\mathbf{r}'))}.\n\\label{_auto3} \\tag{3}\n\\end{equation}\n$$\n\nWith a change variables to $\\mathbf{x} = \\mathbf{r}-\\mathbf{r}'$ and $\\mathbf{y}=\\mathbf{r}'$ we rewrite the last integral as\n\n$$\n\\lim_{\\mu \\to 0}\\frac{e^{2}}{V (2\\pi)^3} \\int d\\mathbf{p}\\int d\\mathbf{y}\\int d\\mathbf{x}\\exp{(i(\\mathbf{p}-\\mathbf{k})\\mathbf{x})}\\frac{\\exp{(-\\mu\\vert\\mathbf{x}\\vert})}{\\vert\\mathbf{x}\\vert}.\n$$\n\nThe integration over $\\mathbf{x}$ can be performed using spherical coordinates, resulting in (with $x=\\vert \\mathbf{x}\\vert$)\n\n$$\n\\int d\\mathbf{x}\\exp{(i(\\mathbf{p}-\\mathbf{k})\\mathbf{x})}\\frac{\\exp{(-\\mu\\vert\\mathbf{x}\\vert})}{\\vert\\mathbf{x}\\vert}=\\int x^2 dx d\\phi d\\cos{(\\theta)}\\exp{(i(\\mathbf{p}-\\mathbf{k})x\\cos{(\\theta))}}\\frac{\\exp{(-\\mu x)}}{x}.\n$$\n\nWe obtain\n\n\n
\n\n$$\n\\begin{equation}\n4\\pi \\int dx \\frac{ \\sin{(\\vert \\mathbf{p}-\\mathbf{k}\\vert)x} }{\\vert \\mathbf{p}-\\mathbf{k}\\vert}{\\exp{(-\\mu x)}}= \\frac{4\\pi}{\\mu^2+\\vert \\mathbf{p}-\\mathbf{k}\\vert^2}.\n\\label{_auto4} \\tag{4}\n\\end{equation}\n$$\n\nThis results gives us\n\n\n
\n\n$$\n\\begin{equation}\n\\lim_{\\mu \\to 0}\\frac{e^{2}}{V (2\\pi)^3} \\int d\\mathbf{p}\\int d\\mathbf{y}\\frac{4\\pi}{\\mu^2+\\vert \\mathbf{p}-\\mathbf{k}\\vert^2}=\\lim_{\\mu \\to 0}\\frac{e^{2}}{ 2\\pi^2} \\int d\\mathbf{p}\\frac{1}{\\mu^2+\\vert \\mathbf{p}-\\mathbf{k}\\vert^2},\n\\label{_auto5} \\tag{5}\n\\end{equation}\n$$\n\nwhere we have used that the integrand on the left-hand side does not depend on $\\mathbf{y}$ and that $\\int d\\mathbf{y}=V$.\n\nIntroducing spherical coordinates we can rewrite the integral as\n\n\n
\n\n$$\n\\begin{equation}\n\\lim_{\\mu \\to 0}\\frac{e^{2}}{ 2\\pi^2} \\int d\\mathbf{p}\\frac{1}{\\mu^2+\\vert \\mathbf{p}-\\mathbf{k}\\vert^2}=\\frac{e^{2}}{ 2\\pi^2} \\int d\\mathbf{p}\\frac{1}{\\vert \\mathbf{p}-\\mathbf{k}\\vert^2}= \n\\label{_auto6} \\tag{6}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\frac{e^{2}}{\\pi} \\int_0^{k_F} p^2dp\\int_0^{\\pi} \\sin{(\\theta)}d\\theta\\frac{1}{p^2+k^2-2pk\\cos{(\\theta)}},\n\\label{_auto7} \\tag{7}\n\\end{equation}\n$$\n\nand with the change of variables $\\cos{(\\theta)}=u$ we have\n\n$$\n\\frac{e^{2}}{\\pi} \\int_0^{k_F} p^2dp\\int_{0}^{\\pi} \\sin{(\\theta)}d\\theta\\frac{1}{p^2+k^2-2pk\\cos{(\\theta)}}=\\frac{e^{2}}{\\pi} \\int_0^{k_F} p^2dp\\int_{-1}^{1} du\\frac{1}{p^2+k^2-2pku},\n$$\n\nwhich gives\n\n$$\n\\frac{e^{2}}{k\\pi} \\int_0^{k_F} pdp\\left\\{\\ln(\\vert p+k\\vert)-\\ln(\\vert p-k\\vert)\\right\\}.\n$$\n\nIntroducing the new variables $x=p+k$ and $y=p-k$, we obtain after some straightforward reordering of the integral\n\n$$\n\\frac{e^{2}}{k\\pi}\\left[\nkk_F+\\frac{k_{F}^{2}-k^{2}}{2}\\ln\\left\\vert\\frac{k+k_{F}}\n{k-k_{F}}\\right\\vert\n\\right],\n$$\n\nwhich gives the above-mentioned expression for the single-particle energy.\n\n\n\n**b)**\nRewrite the above result as a function of the density\n\n$$\nn= \\frac{k_F^3}{3\\pi^2}=\\frac{3}{4\\pi r_s^3},\n$$\n\nwhere $n=N/V$, $N$ being the number of particles, and $r_s$ is the radius of a sphere which represents the volume per conducting electron.\n\n\n\n**Solution.**\nIntroducing the dimensionless quantity $x=k/k_F$ and the function\n\n$$\nF(x) = \\frac{1}{2}+\\frac{1-x^2}{4x}\\ln{\\left\\vert \\frac{1+x}{1-x}\\right\\vert},\n$$\n\nwe can rewrite the single-particle Hartree-Fock energy as\n\n$$\n\\varepsilon_{k}^{HF}=\\frac{\\hbar^{2}k^{2}}{2m}-\\frac{2e^{2}\nk_{F}}{\\pi}F(k/k_F),\n$$\n\nand dividing by the non-interacting contribution at the Fermi level,\n\n$$\n\\varepsilon_{0}^{F}=\\frac{\\hbar^{2}k_F^{2}}{2m},\n$$\n\nwe have\n\n$$\n\\frac{\\varepsilon_{k}^{HF} }{\\varepsilon_{0}^{F}}=x^2-\\frac{4e^2m}{\\hbar^2 k_F\\pi}F(x)=x^2-\\frac{4}{\\pi k_Fa_0}F(x),\n$$\n\nwhere $a_0=0.0529$ nm is the Bohr radius, setting thereby a natural length scale. 
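Both of the intermediate results above lend themselves to a quick numerical check. The following sketch (not part of the original derivation; it assumes NumPy and SciPy are available) verifies the Fourier transform of the Yukawa factor, $4\pi\int_0^{\infty}\sin{(qx)}e^{-\mu x}dx/q = 4\pi/(\mu^2+q^2)$ with $q=\vert\mathbf{p}-\mathbf{k}\vert$, and the closed form $kk_F+\tfrac{1}{2}(k_F^2-k^2)\ln\vert(k+k_F)/(k-k_F)\vert$ of the final $p$-integral:

```python
import numpy as np
from scipy.integrate import quad

def yukawa_fourier(q, mu):
    # QUADPACK's QAWF routine (weight='sin' with an infinite upper limit)
    # handles the oscillatory integrand on [0, infinity)
    val, _ = quad(lambda x: np.exp(-mu * x), 0, np.inf, weight='sin', wvar=q)
    return 4 * np.pi * val / q

q, mu = 1.3, 0.2
assert abs(yukawa_fourier(q, mu) - 4 * np.pi / (mu**2 + q**2)) < 1e-6

def p_integral(k, kF):
    # the integrand has only an integrable logarithmic singularity at p = k
    f = lambda p: p * (np.log(p + k) - np.log(abs(p - k)))
    val, _ = quad(f, 0, kF, points=[k], limit=200)
    return val

def closed_form(k, kF):
    return k * kF + 0.5 * (kF**2 - k**2) * np.log(abs((k + kF) / (k - kF)))

for k in [0.3, 0.7, 0.95]:
    assert abs(p_integral(k, 1.0) - closed_form(k, 1.0)) < 1e-6
print("both quadrature checks passed")
```

Here `yukawa_fourier`, `p_integral` and `closed_form` are illustrative helper names, and the chosen values of $q$, $\mu$ and $k$ are arbitrary test points below and above nothing in particular except $k<k_F$ for the singular case.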
\n\n\nBy introducing the radius $r_s$ of a sphere whose volume is the volume occupied by each electron, we can rewrite the previous equation in terms of $r_s$ using the electron density $n=N/V$,\n\n$$\nn=\\frac{k_F^3}{3\\pi^2} = \\frac{3}{4\\pi r_s^3}.\n$$\n\nWe have then (with $k_F=1.92/r_s$)\n\n$$\n\\frac{\\varepsilon_{k}^{HF} }{\\varepsilon_{0}^{F}}=x^2-\\frac{4e^2m}{\\hbar^2 k_F\\pi}F(x)=x^2-\\frac{r_s}{a_0}0.663F(x),\n$$\n\nwith $r_s/a_0 \\sim 2-6$ for most metals.\n\n\n\nIt can be convenient to use the Bohr radius $a_0=\\hbar^2/e^2m$ to set the length scale.\n\n**c)**\nMake a plot of the free electron energy and the Hartree-Fock energy and discuss the behavior around the Fermi surface. Extract also the Hartree-Fock band width $\\Delta\\varepsilon^{HF}$ defined as\n\n$$\n\\Delta\\varepsilon^{HF}=\\varepsilon_{k_{F}}^{HF}-\n\\varepsilon_{0}^{HF}.\n$$\n\nCompare these results with the corresponding ones for a free electron and comment on your results. How large is the contribution due to the exchange term in the Hartree-Fock equation?\n\n\n\n**Solution.**\nWe can now define the so-called band width, that is, the spread between the maximal and the minimal single-particle energies of the electrons in the conduction band of a metal (up to the Fermi level). 
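The two limiting values entering this quantity can be evaluated directly from $F(x)$, which has the removable limits $F(0)=1$ and $F(1)=1/2$. A small numerical sketch (not part of the original text; the ratio $r_s/a_0=4$ is an illustrative choice):

```python
import numpy as np

def F(x):
    return 0.5 + (1.0 - x * x) / (4.0 * x) * np.log(abs((1.0 + x) / (1.0 - x)))

def eps_hf(x, rs_over_a0=4.0):
    # Hartree-Fock single-particle energy in units of the free Fermi energy
    return x * x - rs_over_a0 * 0.663 * F(x)

# approach the removable limits F(0)=1 and F(1)=1/2 numerically
e_top = eps_hf(1.0 - 1e-10)    # value at the Fermi momentum, k = k_F; ≈ -0.326
e_bottom = eps_hf(1e-10)       # value at the bottom of the band, k = 0; ≈ -2.652
print(e_top, e_bottom, e_top - e_bottom)   # band width ≈ 2.326
```

The helper names `F` and `eps_hf` are our own; the numbers they produce can be compared with the values quoted below.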
\nFor $x=1$ and $r_s/a_0=4$ we have\n\n$$\n\\frac{\\varepsilon_{k=k_F}^{HF} }{\\varepsilon_{0}^{F}} = -0.326,\n$$\n\nand for $x=0$ we have\n\n$$\n\\frac{\\varepsilon_{k=0}^{HF} }{\\varepsilon_{0}^{F}} = -2.652,\n$$\n\nwhich results in a band width of\n\n$$\n\\Delta \\varepsilon^{HF} = \\frac{\\varepsilon_{k=k_F}^{HF} }{\\varepsilon_{0}^{F}}-\\frac{\\varepsilon_{k=0}^{HF} }{\\varepsilon_{0}^{F}} = 2.326.\n$$\n\nThis quantity measures the difference between the $k=0$ single-particle energy and the energy at the Fermi level.\nThe general result is\n\n$$\n\\Delta \\varepsilon^{HF} = 1+\\frac{r_s}{a_0}0.332,\n$$\n\nsince $F(1)=1/2$ and $F(0)=1$.\nThe following python code produces a plot of the electron energy for a free electron (only kinetic energy) and \nfor the Hartree-Fock solution. We have chosen here a ratio $r_s/a_0=4$ and the equations are plotted as functions\nof $k/k_F$.\n\n\n```python\n%matplotlib inline\n\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\nN = 100\n# avoid x = 0 and the removable singularity of F at x = 1\nx = np.linspace(0.001, 1.999, N)\nF = 0.5+np.log(abs((1.0+x)/(1.0-x)))*(1.0-x*x)*0.25/x\ny = x*x -4.0*0.663*F\n\nplt.plot(x, y, 'b-', label=r'Hartree-Fock, $r_s/a_0=4$')\nplt.plot(x, x*x, 'r-', label='Free electrons')\nplt.title('Hartree-Fock single-particle energy for the electron gas', fontsize=14)\nplt.xlabel(r'$k/k_F$',fontsize=20)\nplt.ylabel(r'$\\varepsilon_k^{HF}/\\varepsilon_0^F$',fontsize=20)\nplt.legend()\n# Tweak spacing to prevent clipping of ylabel\nplt.subplots_adjust(left=0.15)\nplt.savefig('hartreefockspelgas.pdf', format='pdf')\nplt.show()\n```\n\nFrom the plot we notice that the exchange term increases considerably the band width\ncompared with the non-interacting gas of electrons.\n\n\nWe will now 
define a quantity called the effective mass.\nFor $\\vert\\mathbf{k}\\vert$ near $k_{F}$, we can Taylor expand the Hartree-Fock energy as\n\n$$\n\\varepsilon_{k}^{HF}=\\varepsilon_{k_{F}}^{HF}+\n\\left(\\frac{\\partial\\varepsilon_{k}^{HF}}{\\partial k}\\right)_{k_{F}}(k-k_{F})+\\dots\n$$\n\nIf we compare the latter with the corresponding expression for the non-interacting system\n\n$$\n\\varepsilon_{k}^{(0)}=\\frac{\\hbar^{2}k^{2}_{F}}{2m}+\n\\frac{\\hbar^{2}k_{F}}{m}\\left(k-k_{F}\\right)+\\dots ,\n$$\n\nwe can define the so-called effective Hartree-Fock mass as\n\n$$\nm_{HF}^{*}\\equiv\\hbar^{2}k_{F}\\left(\n\\frac{\\partial\\varepsilon_{k}^{HF}}\n{\\partial k}\\right)_{k_{F}}^{-1}.\n$$\n\n**d)**\nCompute $m_{HF}^{*}$ and comment on your results.\n\n**e)**\nShow that the level density (the number of single-electron states per unit energy) can be written as\n\n$$\nn(\\varepsilon)=\\frac{Vk^{2}}{2\\pi^{2}}\\left(\n\\frac{\\partial\\varepsilon}{\\partial k}\\right)^{-1}.\n$$\n\nCalculate $n(\\varepsilon_{F}^{HF})$ and comment on the results.\n\n\n\n\n\n\n\n\n\n\n\n## Exercise 2: Hartree-Fock ground state energy for the electron gas in three dimensions\n\nWe consider a system of electrons in infinite matter, the so-called electron gas. 
This is a homogeneous system and the one-particle states are given by plane wave functions, normalized to a volume $\\Omega$ \nfor a box with length $L$ (the limit $L\\rightarrow \\infty$ is to be taken after we have computed various expectation values),\n\n$$\n\\psi_{\\mathbf{k}\\sigma}(\\mathbf{r})= \\frac{1}{\\sqrt{\\Omega}}\\exp{(i\\mathbf{kr})}\\xi_{\\sigma},\n$$\n\nwhere $\\mathbf{k}$ is the wave number and $\\xi_{\\sigma}$ is a spin function for either spin up or down,\n\n$$\n\\xi_{\\sigma=+1/2}=\\left(\\begin{array}{c} 1 \\\\ 0 \\end{array}\\right) \\hspace{0.5cm}\n\\xi_{\\sigma=-1/2}=\\left(\\begin{array}{c} 0 \\\\ 1 \\end{array}\\right).\n$$\n\nWe assume that we have periodic boundary conditions which limit the allowed wave numbers to\n\n$$\nk_i=\\frac{2\\pi n_i}{L}\\hspace{0.5cm} i=x,y,z \\hspace{0.5cm} n_i=0,\\pm 1,\\pm 2, \\dots\n$$\n\nWe assume first that the particles interact via a central, symmetric and translationally invariant\ninteraction $V(r_{12})$ with\n$r_{12}=|\\mathbf{r}_1-\\mathbf{r}_2|$. The interaction is spin independent.\n\nThe total Hamiltonian consists then of kinetic and potential energy\n\n$$\n\\hat{H} = \\hat{T}+\\hat{V}.\n$$\n\nThe operator for the kinetic energy is given by\n\n$$\n\\hat{T}=\\sum_{\\mathbf{k}\\sigma}\\frac{\\hbar^2k^2}{2m}a_{\\mathbf{k}\\sigma}^{\\dagger}a_{\\mathbf{k}\\sigma}.\n$$\n\n**a)**\nFind the expression for the interaction\n$\\hat{V}$ expressed with creation and annihilation operators. The expression for the interaction\nhas to be written in $k$ space, even though $V$ depends only on the relative distance. 
This means that you need to set up the Fourier transform $\\langle \\mathbf{k}_i\\mathbf{k}_j| V | \\mathbf{k}_m\\mathbf{k}_n\\rangle$.\n\n\n\n**Solution.**\nA general two-body interaction element is given by (not using anti-symmetrized matrix elements)\n\n$$\n\\hat{V} = \\frac{1}{2} \\sum_{pqrs} \\langle pq \\vert \\hat{v} \\vert rs\\rangle a_p^\\dagger a_q^\\dagger a_s a_r ,\n$$\n\nwhere $\\hat{v}$ is assumed to depend only on the relative distance between two interacting particles, that is\n$\\hat{v} = v(\\vec r_1, \\vec r_2) = v(|\\vec r_1 - \\vec r_2|) = v(r)$, with $r = |\\vec r_1 - \\vec r_2|$. \nIn our case we have, writing out explicitly the spin degrees of freedom as well\n\n\n
\n\n$$\n\\begin{equation}\n\\hat{V} = \\frac{1}{2} \\sum_{\\substack{\\sigma_p \\sigma_q \\\\ \\sigma_r \\sigma_s}}\n\\sum_{\\substack{\\mathbf{k}_p \\mathbf{k}_q \\\\ \\mathbf{k}_r \\mathbf{k}_s}}\n\\langle \\mathbf{k}_p \\sigma_p, \\mathbf{k}_q \\sigma_q\\vert v \\vert \\mathbf{k}_r \\sigma_r, \\mathbf{k}_s \\sigma_s\\rangle\na_{\\mathbf{k}_p \\sigma_p}^\\dagger a_{\\mathbf{k}_q \\sigma_q}^\\dagger a_{\\mathbf{k}_s \\sigma_s} a_{\\mathbf{k}_r \\sigma_r} .\n\\label{_auto8} \\tag{8}\n\\end{equation}\n$$\n\nInserting plane waves as eigenstates we can rewrite the matrix element as\n\n$$\n\\langle \\mathbf{k}_p \\sigma_p, \\mathbf{k}_q \\sigma_q\\vert \\hat{v} \\vert \\mathbf{k}_r \\sigma_r, \\mathbf{k}_s \\sigma_s\\rangle =\n\\frac{1}{\\Omega^2} \\delta_{\\sigma_p \\sigma_r} \\delta_{\\sigma_q \\sigma_s}\n\\int\\int \\exp{-i(\\mathbf{k}_p \\cdot \\mathbf{r}_p)} \\exp{-i( \\mathbf{k}_q \\cdot \\mathbf{r}_q)} \\hat{v}(r) \\exp{i(\\mathbf{k}_r \\cdot \\mathbf{r}_p)} \\exp{i( \\mathbf{k}_s \\cdot \\mathbf{r}_q)} d\\mathbf{r}_p d\\mathbf{r}_q ,\n$$\n\nwhere we have used the orthogonality properties of the spin functions. We change now the variables of integration\nby defining $\\mathbf{r} = \\mathbf{r}_p - \\mathbf{r}_q$, which gives $\\mathbf{r}_p = \\mathbf{r} + \\mathbf{r}_q$ and $d^3 \\mathbf{r} = d^3 \\mathbf{r}_p$. \nThe limits are not changed since they are from $-\\infty$ to $\\infty$ for all integrals. 
This results in\n\n$$\n\\begin{align*}\n\\langle \\mathbf{k}_p \\sigma_p, \\mathbf{k}_q \\sigma_q\\vert \\hat{v} \\vert \\mathbf{k}_r \\sigma_r, \\mathbf{k}_s \\sigma_s\\rangle\n&= \\frac{1}{\\Omega^2} \\delta_{\\sigma_p \\sigma_r} \\delta_{\\sigma_q \\sigma_s} \\int\\exp{i (\\mathbf{k}_s - \\mathbf{k}_q) \\cdot \\mathbf{r}_q} \\int v(r) \\exp{i(\\mathbf{k}_r - \\mathbf{k}_p) \\cdot ( \\mathbf{r} + \\mathbf{r}_q)} d\\mathbf{r} d\\mathbf{r}_q \\\\\n&= \\frac{1}{\\Omega^2} \\delta_{\\sigma_p \\sigma_r} \\delta_{\\sigma_q \\sigma_s} \\int v(r) \\exp{i\\left[(\\mathbf{k}_r - \\mathbf{k}_p) \\cdot \\mathbf{r}\\right]}\n\\int \\exp{i\\left[(\\mathbf{k}_s - \\mathbf{k}_q + \\mathbf{k}_r - \\mathbf{k}_p) \\cdot \\mathbf{r}_q\\right]} d\\mathbf{r}_q d\\mathbf{r} .\n\\end{align*}\n$$\n\nWe recognize the integral over $\\mathbf{r}_q$ as a $\\delta$-function, resulting in\n\n$$\n\\langle \\mathbf{k}_p \\sigma_p, \\mathbf{k}_q \\sigma_q\\vert \\hat{v} \\vert \\mathbf{k}_r \\sigma_r, \\mathbf{k}_s \\sigma_s\\rangle =\n\\frac{1}{\\Omega} \\delta_{\\sigma_p \\sigma_r} \\delta_{\\sigma_q \\sigma_s} \\delta_{(\\mathbf{k}_p + \\mathbf{k}_q),(\\mathbf{k}_r + \\mathbf{k}_s)} \\int v(r) \\exp{i\\left[(\\mathbf{k}_r - \\mathbf{k}_p) \\cdot \\mathbf{r}\\right]} d^3r .\n$$\n\nFor this equation to be different from zero, momentum must be conserved, that is, we need to satisfy\n$\\mathbf{k}_p + \\mathbf{k}_q = \\mathbf{k}_r + \\mathbf{k}_s$. We can use the conservation of momenta to remove one of the summation variables resulting in\n\n$$\n\\hat{V} =\n\\frac{1}{2\\Omega} \\sum_{\\sigma \\sigma'} \\sum_{\\mathbf{k}_p \\mathbf{k}_q \\mathbf{k}_r} \\left[ \\int v(r) \\exp{i\\left[(\\mathbf{k}_r - \\mathbf{k}_p) \\cdot \\mathbf{r}\\right]} d^3r \\right]\na_{\\mathbf{k}_p \\sigma}^\\dagger a_{\\mathbf{k}_q \\sigma'}^\\dagger a_{\\mathbf{k}_p + \\mathbf{k}_q - \\mathbf{k}_r, \\sigma'} a_{\\mathbf{k}_r \\sigma},\n$$\n\nwhich can be rewritten as\n\n\n
\n\n$$\n\\begin{equation}\n\\hat{V} =\n\\frac{1}{2\\Omega} \\sum_{\\sigma \\sigma'} \\sum_{\\mathbf{k} \\mathbf{p} \\mathbf{q}} \\left[ \\int v(r) \\exp{-i( \\mathbf{q} \\cdot \\mathbf{r})} d\\mathbf{r} \\right]\na_{\\mathbf{k} + \\mathbf{q}, \\sigma}^\\dagger a_{\\mathbf{p} - \\mathbf{q}, \\sigma'}^\\dagger a_{\\mathbf{p} \\sigma'} a_{\\mathbf{k} \\sigma}.\n\\label{eq:V} \\tag{9}\n\\end{equation}\n$$\n\nThis equation will be useful for our nuclear matter calculations as well. In the last equation we defined\nthe quantities\n$\\mathbf{p} = \\mathbf{k}_p + \\mathbf{k}_q - \\mathbf{k}_r$, $\\mathbf{k} = \\mathbf{k}_r$ and $\\mathbf{q} = \\mathbf{k}_p - \\mathbf{k}_r$.\n\n\n\n**b)**\nCalculate thereafter the reference energy for the infinite electron gas in three dimensions using the above expressions for the kinetic energy and the potential energy.\n\n\n\n**Solution.**\nLet us now compute the expectation value of the reference energy using the expressions for the kinetic energy operator and the interaction.\nWe need to compute $\\langle \\Phi_0\\vert \\hat{H} \\vert \\Phi_0\\rangle = \\langle \\Phi_0\\vert \\hat{T} \\vert \\Phi_0\\rangle + \\langle \\Phi_0\\vert \\hat{V} \\vert \\Phi_0\\rangle$, where $\\vert \\Phi_0\\rangle$ is our reference Slater determinant, constructed from filling all single-particle states up to the Fermi level.\nLet us start with the kinetic energy first\n\n$$\n\\langle \\Phi_0\\vert \\hat{T} \\vert \\Phi_0\\rangle \n= \\langle \\Phi_0\\vert \\left( \\sum_{\\mathbf{p} \\sigma} \\frac{\\hbar^2 p^2}{2m} a_{\\mathbf{p} \\sigma}^\\dagger a_{\\mathbf{p} \\sigma} \\right) \\vert \\Phi_0\\rangle \\\\\n= \\sum_{\\mathbf{p} \\sigma} \\frac{\\hbar^2 p^2}{2m} \\langle \\Phi_0\\vert a_{\\mathbf{p} \\sigma}^\\dagger a_{\\mathbf{p} \\sigma} \\vert \\Phi_0\\rangle .\n$$\n\nFrom the possible contractions using Wick's theorem, it is straightforward to convince oneself that the expression for the kinetic energy becomes\n\n$$\n\\langle \\Phi_0\\vert \\hat{T} 
\\vert \\Phi_0\\rangle = \\sum_{i \\leq F} \\frac{\\hbar^2 k_i^2}{m} = \\frac{\\Omega}{(2\\pi)^3} \\frac{\\hbar^2}{m} \\int_0^{k_F} k^2 d\\mathbf{k}.\n$$\n\nThe sum over the spin degrees of freedom results in a factor of two, since we deal with identical spin-$1/2$ fermions; this factor is already included in $\\hbar^2k_i^2/m$ above. \nChanging to spherical coordinates, the integral over the momenta $k$ results in the final expression\n\n$$\n\\langle \\Phi_0\\vert \\hat{T} \\vert \\Phi_0\\rangle = \\frac{\\Omega}{(2\\pi)^3} \\frac{\\hbar^2}{m} \\left( 4\\pi \\int_0^{k_F} k^4 dk \\right) = \\frac{4\\pi\\Omega}{(2\\pi)^3} \\frac{\\hbar^2}{m} \\frac{k_F^5}{5} = \\frac{\\hbar^2 \\Omega}{10\\pi^2 m} k_F^5 .\n$$\n\nThe density of states in momentum space is given by $2\\Omega/(2\\pi)^3$, where we have included the degeneracy due to the spin degrees of freedom.\nThe volume of the Fermi sphere is $4\\pi k_F^3/3$, and the number of particles becomes\n\n$$\nN = \\frac{2\\Omega}{(2\\pi)^3} \\frac{4}{3} \\pi k_F^3 = \\frac{\\Omega}{3\\pi^2} k_F^3 \\quad \\Rightarrow \\quad\nk_F = \\left( \\frac{3\\pi^2 N}{\\Omega} \\right)^{1/3}.\n$$\n\nThis gives us\n\n\n
\n\n$$\n\\begin{equation}\n\\langle \\Phi_0\\vert \\hat{T} \\vert \\Phi_0\\rangle =\n\\frac{\\hbar^2 \\Omega}{10\\pi^2 m} \\left( \\frac{3\\pi^2 N}{\\Omega} \\right)^{5/3} =\n\\frac{\\hbar^2 (3\\pi^2)^{5/3} N}{10\\pi^2 m} \\rho^{2/3} .\n\\label{eq:T_forventning} \\tag{10}\n\\end{equation}\n$$\n\nWe are now ready to calculate the expectation value of the potential energy\n\n$$\n\\begin{align*}\n\\langle \\Phi_0\\vert \\hat{V} \\vert \\Phi_0\\rangle \n&= \\langle \\Phi_0\\vert \\left( \\frac{1}{2\\Omega} \\sum_{\\sigma \\sigma'} \\sum_{\\mathbf{k} \\mathbf{p} \\mathbf{q} } \\left[ \\int v(r) \\exp{-i (\\mathbf{q} \\cdot \\mathbf{r})} d\\mathbf{r} \\right] a_{\\mathbf{k} + \\mathbf{q}, \\sigma}^\\dagger a_{\\mathbf{p} - \\mathbf{q}, \\sigma'}^\\dagger a_{\\mathbf{p} \\sigma'} a_{\\mathbf{k} \\sigma} \\right) \\vert \\Phi_0\\rangle \\\\\n&= \\frac{1}{2\\Omega} \\sum_{\\sigma \\sigma'} \\sum_{\\mathbf{k} \\mathbf{p} \\mathbf{q}} \\left[ \\int v(r) \\exp{-i (\\mathbf{q} \\cdot \\mathbf{r})} d\\mathbf{r} \\right]\\langle \\Phi_0\\vert a_{\\mathbf{k} + \\mathbf{q}, \\sigma}^\\dagger a_{\\mathbf{p} - \\mathbf{q}, \\sigma'}^\\dagger a_{\\mathbf{p} \\sigma'} a_{\\mathbf{k} \\sigma} \\vert \\Phi_0\\rangle .\n\\end{align*}\n$$\n\nThe only contractions which result in non-zero results are those that involve states below the Fermi level, that is \n$k \\leq k_F$, $p \\leq k_F$, $|\\mathbf{p} - \\mathbf{q}| \\leq k_F$ and $|\\mathbf{k} + \\mathbf{q}| \\leq k_F$. Due to momentum conservation we must also have $\\mathbf{k} + \\mathbf{q} = \\mathbf{p}$, $\\mathbf{p} - \\mathbf{q} = \\mathbf{k}$ and $\\sigma = \\sigma'$ or $\\mathbf{k} + \\mathbf{q} = \\mathbf{k}$ and $\\mathbf{p} - \\mathbf{q} = \\mathbf{p}$. 
\nSummarizing, we must have\n\n$$\n\\mathbf{k} + \\mathbf{q} = \\mathbf{p} \\quad \\text{and} \\quad \\sigma = \\sigma', \\qquad\n\\text{or} \\qquad\n\\mathbf{q} = \\mathbf{0} .\n$$\n\nWe obtain then\n\n$$\n\\langle \\Phi_0\\vert \\hat{V} \\vert \\Phi_0\\rangle =\n\\frac{1}{2\\Omega} \\left( \\sum_{\\sigma \\sigma'} \\sum_{\\mathbf{k} \\mathbf{p} \\leq F} \\left[ \\int v(r) d\\mathbf{r} \\right] - \\sum_{\\sigma}\n\\sum_{\\mathbf{q} \\mathbf{p} \\leq F} \\left[ \\int v(r) \\exp{-i (\\mathbf{q} \\cdot \\mathbf{r})} d\\mathbf{r} \\right] \\right).\n$$\n\nThe first term is the so-called direct term while the second term is the exchange term. \nWe can rewrite this equation as (and this applies to any potential which depends only on the relative distance between particles)\n\n\n
\n\n$$\n\\begin{equation}\n\\langle \\Phi_0\\vert \\hat{V} \\vert \\Phi_0\\rangle =\n\\frac{1}{2\\Omega} \\left( N^2 \\left[ \\int v(r) d\\mathbf{r} \\right] - N \\sum_{\\mathbf{q}} \\left[ \\int v(r) \\exp{-i (\\mathbf{q}\\cdot \\mathbf{r})} d\\mathbf{r} \\right] \\right),\n\\label{eq:V_b} \\tag{11}\n\\end{equation}\n$$\n\nwhere we have used the fact that a sum like $\\sum_{\\sigma}\\sum_{\\mathbf{k}}$ equals the number of particles. Using the fact that the density is given by\n$\\rho = N/\\Omega$, with $\\Omega$ being our volume, we can rewrite the last equation as\n\n$$\n\\langle \\Phi_0\\vert \\hat{V} \\vert \\Phi_0\\rangle =\n\\frac{1}{2} \\left( \\rho N \\left[ \\int v(r) d\\mathbf{r} \\right] - \\rho\\sum_{\\mathbf{q}} \\left[ \\int v(r) \\exp{-i (\\mathbf{q}\\cdot \\mathbf{r})} d\\mathbf{r} \\right] \\right).\n$$\n\nFor the electron gas\nthe interaction part of the Hamiltonian operator is given by\n\n$$\n\\hat{H}_I=\\hat{H}_{el}+\\hat{H}_{b}+\\hat{H}_{el-b},\n$$\n\nwith the electronic part\n\n$$\n\\hat{H}_{el}=\\sum_{i=1}^N\\frac{p_i^2}{2m}+\\frac{e^2}{2}\\sum_{i\\ne j}\\frac{e^{-\\mu |\\mathbf{r}_i-\\mathbf{r}_j|}}{|\\mathbf{r}_i-\\mathbf{r}_j|},\n$$\n\nwhere we have introduced an explicit convergence factor\n(the limit $\\mu\\rightarrow 0$ is performed after having calculated the various integrals).\nCorrespondingly, we have\n\n$$\n\\hat{H}_{b}=\\frac{e^2}{2}\\int\\int d\\mathbf{r}d\\mathbf{r}'\\frac{n(\\mathbf{r})n(\\mathbf{r}')e^{-\\mu |\\mathbf{r}-\\mathbf{r}'|}}{|\\mathbf{r}-\\mathbf{r}'|},\n$$\n\nwhich is the energy contribution from the positive background charge with density\n$n(\\mathbf{r})=N/\\Omega$. 
Finally,\n\n$$\n\\hat{H}_{el-b}=-e^2\\sum_{i=1}^N\\int d\\mathbf{r}\\frac{n(\\mathbf{r})e^{-\\mu |\\mathbf{r}-\\mathbf{x}_i|}}{|\\mathbf{r}-\\mathbf{x}_i|},\n$$\n\nis the interaction between the electrons and the positive background.\nWe can show that\n\n$$\n\\hat{H}_{b}=\\frac{e^2}{2}\\frac{N^2}{\\Omega}\\frac{4\\pi}{\\mu^2},\n$$\n\nand\n\n$$\n\\hat{H}_{el-b}=-e^2\\frac{N^2}{\\Omega}\\frac{4\\pi}{\\mu^2}.\n$$\n\nFor the electron gas and a Coulomb interaction, these two terms are cancelled (in the thermodynamic limit) by the contribution from the direct term arising\nfrom the repulsive electron-electron interaction. What remains then when computing the reference energy is only the kinetic energy contribution and the contribution from the exchange term. For other interactions, like nuclear forces with a short-range part and no infinite range, we need to compute both the direct term and the exchange term.\n\n\n\n**c)**\nShow thereafter that the final Hamiltonian can be written as\n\n$$\nH=H_{0}+H_{I},\n$$\n\nwith\n\n$$\nH_{0}={\\displaystyle\\sum_{\\mathbf{k}\\sigma}}\n\\frac{\\hbar^{2}k^{2}}{2m}a_{\\mathbf{k}\\sigma}^{\\dagger}\na_{\\mathbf{k}\\sigma},\n$$\n\nand\n\n$$\nH_{I}=\\frac{e^{2}}{2\\Omega}{\\displaystyle\\sum_{\\sigma_{1}\\sigma_{2}}}{\\displaystyle\\sum_{\\mathbf{q}\\neq 0,\\mathbf{k},\\mathbf{p}}}\\frac{4\\pi}{q^{2}}\na_{\\mathbf{k}+\\mathbf{q},\\sigma_{1}}^{\\dagger}\na_{\\mathbf{p}-\\mathbf{q},\\sigma_{2}}^{\\dagger}\na_{\\mathbf{p}\\sigma_{2}}a_{\\mathbf{k}\\sigma_{1}}.\n$$\n\n**d)**\nCalculate $E_0/N=\\langle \\Phi_{0}\\vert H\\vert \\Phi_{0}\\rangle/N$ for this system to first order in the interaction. 
Show that, by using\n\n$$\n\\rho= \\frac{k_F^3}{3\\pi^2}=\\frac{3}{4\\pi r_0^3},\n$$\n\nwith $\\rho=N/\\Omega$, $r_0$\nbeing the radius of a sphere representing the volume an electron occupies \nand the Bohr radius $a_0=\\hbar^2/e^2m$, \nthat the energy per electron can be written as\n\n$$\nE_0/N=\\frac{e^2}{2a_0}\\left[\\frac{2.21}{r_s^2}-\\frac{0.916}{r_s}\\right].\n$$\n\nHere we have defined\n$r_s=r_0/a_0$ to be a dimensionless quantity.\n\n**e)**\nPlot your results. Why is this system stable?\nCalculate thermodynamical quantities like the pressure, given by\n\n$$\nP=-\\left(\\frac{\\partial E}{\\partial \\Omega}\\right)_N,\n$$\n\nand the bulk modulus\n\n$$\nB=-\\Omega\\left(\\frac{\\partial P}{\\partial \\Omega}\\right)_N,\n$$\n\nand comment your results.\n\n\n\n\n\n\n\n\n## Preparing the ground for numerical calculations; kinetic energy and Ewald term\n\nThe kinetic energy operator is\n\n\n
\n\n$$\n\\begin{equation}\n \\hat{H}_{\\text{kin}} = -\\frac{\\hbar^{2}}{2m}\\sum_{i=1}^{N}\\nabla_{i}^{2},\n\\label{_auto9} \\tag{12}\n\\end{equation}\n$$\n\nwhere the sum is taken over all particles in the finite\nbox. The Ewald electron-electron interaction operator \ncan be written as\n\n\n
\n\n$$\n\\begin{equation}\n \\hat{H}_{ee} = \\sum_{i < j}^{N} v_{E}\\left( \\mathbf{r}_{i}-\\mathbf{r}_{j}\\right)\n + \\frac{1}{2}Nv_{0},\n\\label{_auto10} \\tag{13}\n\\end{equation}\n$$\n\nwhere $v_{E}(\\mathbf{r})$ is the effective two-body \ninteraction and $v_{0}$ is the self-interaction, defined \nas $v_{0} = \\lim_{\\mathbf{r} \\rightarrow 0} \\left\\{ v_{E}(\\mathbf{r}) - 1/r\\right\\} $. \n\nThe negative \nelectron charges are neutralized by a positive, homogeneous \nbackground charge. Fraser *et al.* explain how the\nelectron-background and background-background terms, \n$\\hat{H}_{eb}$ and $\\hat{H}_{bb}$, vanish\nwhen using Ewald's interaction for the three-dimensional\nelectron gas. Using the same arguments, one can show that\nthese terms are also zero in the corresponding \ntwo-dimensional system. \n\n\n\n\n## Ewald correction term\n\nIn the three-dimensional electron gas, the Ewald \ninteraction is\n\n$$\nv_{E}(\\mathbf{r}) = \\sum_{\\mathbf{k} \\neq \\mathbf{0}}\n \\frac{4\\pi }{L^{3}k^{2}}e^{i\\mathbf{k}\\cdot \\mathbf{r}}\n e^{-\\eta^{2}k^{2}/4} \\nonumber\n$$\n\n\n
\n\n$$\n\\begin{equation} \n + \\sum_{\\mathbf{R}}\\frac{1}{\\left| \\mathbf{r}\n -\\mathbf{R}\\right| } \\mathrm{erfc} \\left( \\frac{\\left| \n \\mathbf{r}-\\mathbf{R}\\right|}{\\eta }\\right)\n - \\frac{\\pi \\eta^{2}}{L^{3}},\n\\label{_auto11} \\tag{14}\n\\end{equation}\n$$\n\nwhere $L$ is the box side length, $\\mathrm{erfc}(x)$ is the \ncomplementary error function, and $\\eta $ is a free\nparameter that can take any value in the interval \n$(0, \\infty )$.\n\n\n\n## Interaction in momentum space\n\nThe translational vector\n\n\n
\n\n$$\n\\begin{equation}\n \\mathbf{R} = L\\left(n_{x}\\mathbf{u}_{x} + n_{y}\n \\mathbf{u}_{y} + n_{z}\\mathbf{u}_{z}\\right) ,\n\\label{_auto12} \\tag{15}\n\\end{equation}\n$$\n\nwhere $\\mathbf{u}_{i}$ is the unit vector for dimension $i$,\nis defined for all integers $n_{x}$, $n_{y}$, and \n$n_{z}$. These vectors are used to obtain all image\ncells in the entire real space. \nThe parameter $\\eta $ decides how \nthe Coulomb interaction is divided into a short-ranged\nand long-ranged part, and does not alter the total\nfunction. However, the number of operations needed\nto calculate the Ewald interaction with a desired \naccuracy depends on $\\eta $, and $\\eta $ is therefore \noften chosen to optimize the convergence as a function\nof the simulation-cell size. In\nour calculations, we choose $\\eta $ to be an infinitesimally\nsmall positive number, similarly as was done by [Shepherd *et al.*](https://journals.aps.org/prb/abstract/10.1103/PhysRevB.86.035111) and [Roggero *et al.*](https://journals.aps.org/prb/abstract/10.1103/PhysRevB.88.115138).\n\nThis gives an interaction that is evaluated only in\nFourier space. \n\nWhen studying the two-dimensional electron gas, we\nuse an Ewald interaction that is quasi two-dimensional.\nThe interaction is derived in three dimensions, with \nFourier discretization in only two dimensions. The Ewald effective\ninteraction has the form\n\n$$\nv_{E}(\\mathbf{r}) = \\sum_{\\mathbf{k} \\neq \\mathbf{0}} \n \\frac{\\pi }{L^{2}k}\\left\\{ e^{-kz} \\mathrm{erfc} \\left(\n \\frac{\\eta k}{2} - \\frac{z}{\\eta }\\right)+ \\right. \\nonumber\n$$\n\n$$\n\\left. e^{kz}\\mathrm{erfc} \\left( \\frac{\\eta k}{2} + \\frac{z}{\\eta }\n \\right) \\right\\} e^{i\\mathbf{k}\\cdot \\mathbf{r}_{xy}} \n \\nonumber\n$$\n\n$$\n+ \\sum_{\\mathbf{R}}\\frac{1}{\\left| \\mathbf{r}-\\mathbf{R}\n \\right| } \\mathrm{erfc} \\left( \\frac{\\left| \\mathbf{r}-\\mathbf{R}\n \\right|}{\\eta }\\right) \\nonumber\n$$\n\n\n
\n\n$$\n\\begin{equation} \n - \\frac{2\\pi}{L^{2}}\\left\\{ z\\mathrm{erf} \\left( \\frac{z}{\\eta }\n \\right) + \\frac{\\eta }{\\sqrt{\\pi }}e^{-z^{2}/\\eta^{2}}\\right\\},\n\\label{_auto13} \\tag{16}\n\\end{equation}\n$$\n\nwhere the Fourier vectors $\\mathbf{k}$ and the position vector\n$\\mathbf{r}_{xy}$ are defined in the $(x,y)$ plane. When\napplying the interaction $v_{E}(\\mathbf{r})$ to two-dimensional\nsystems, we set $z$ to zero. \n\n\nSimilarly as in the \nthree-dimensional case, also here we \nchoose $\\eta $ to approach zero from above. The resulting \nFourier-transformed interaction is\n\n\n
\n\n$$\n\\begin{equation}\n v_{E}^{\\eta = 0, z = 0}(\\mathbf{r}) = \\sum_{\\mathbf{k} \\neq \\mathbf{0}} \n \\frac{2\\pi }{L^{2}k}e^{i\\mathbf{k}\\cdot \\mathbf{r}_{xy}}. \n\\label{_auto14} \\tag{17}\n\\end{equation}\n$$\n\nThe self-interaction $v_{0}$ is a constant that can be \nincluded in the reference energy.\n\n\n\n\n## Antisymmetrized matrix elements in three dimensions\n\nIn the three-dimensional electron gas, the antisymmetrized\nmatrix elements are\n\n\n
\n\n$$\n\\label{eq:vmat_3dheg} \\tag{18}\n \\langle \\mathbf{k}_{p}m_{s_{p}}\\mathbf{k}_{q}m_{s_{q}}\n |\\tilde{v}|\\mathbf{k}_{r}m_{s_{r}}\\mathbf{k}_{s}m_{s_{s}}\\rangle_{AS} \n \\nonumber\n$$\n\n$$\n= \\frac{4\\pi }{L^{3}}\\delta_{\\mathbf{k}_{p}+\\mathbf{k}_{q},\n \\mathbf{k}_{r}+\\mathbf{k}_{s}}\\left\\{ \n \\delta_{m_{s_{p}}m_{s_{r}}}\\delta_{m_{s_{q}}m_{s_{s}}}\n \\left( 1 - \\delta_{\\mathbf{k}_{p}\\mathbf{k}_{r}}\\right) \n \\frac{1}{|\\mathbf{k}_{r}-\\mathbf{k}_{p}|^{2}}\n \\right. \\nonumber\n$$\n\n\n
\n\n$$\n\\begin{equation} \n \\left. - \\delta_{m_{s_{p}}m_{s_{s}}}\\delta_{m_{s_{q}}m_{s_{r}}}\n \\left( 1 - \\delta_{\\mathbf{k}_{p}\\mathbf{k}_{s}} \\right)\n \\frac{1}{|\\mathbf{k}_{s}-\\mathbf{k}_{p}|^{2}} \n \\right\\} ,\n\\label{_auto15} \\tag{19}\n\\end{equation}\n$$\n\nwhere the Kronecker delta functions \n$\\delta_{\\mathbf{k}_{p}\\mathbf{k}_{r}}$ and\n$\\delta_{\\mathbf{k}_{p}\\mathbf{k}_{s}}$ ensure that the \ncontribution with zero momentum transfer vanishes.\n\n\nSimilarly, the matrix elements for the two-dimensional\nelectron gas are\n\n\n
\n\n$$\n\\label{eq:vmat_2dheg} \\tag{20}\n \\langle \\mathbf{k}_{p}m_{s_{p}}\\mathbf{k}_{q}m_{s_{q}}\n |v|\\mathbf{k}_{r}m_{s_{r}}\\mathbf{k}_{s}m_{s_{s}}\\rangle_{AS} \n \\nonumber\n$$\n\n$$\n= \\frac{2\\pi }{L^{2}}\n \\delta_{\\mathbf{k}_{p}+\\mathbf{k}_{q},\\mathbf{k}_{r}+\\mathbf{k}_{s}}\n \\left\\{ \\delta_{m_{s_{p}}m_{s_{r}}}\\delta_{m_{s_{q}}m_{s_{s}}} \n \\left( 1 - \\delta_{\\mathbf{k}_{p}\\mathbf{k}_{r}}\\right)\n \\frac{1}{\n |\\mathbf{k}_{r}-\\mathbf{k}_{p}|} \\right.\n \\nonumber\n$$\n\n\n
\n\n$$\n\\begin{equation} \n - \\left. \\delta_{m_{s_{p}}m_{s_{s}}}\\delta_{m_{s_{q}}m_{s_{r}}}\n \\left( 1 - \\delta_{\\mathbf{k}_{p}\\mathbf{k}_{s}}\\right)\n \\frac{1}{ \n |\\mathbf{k}_{s}-\\mathbf{k}_{p}|}\n \\right\\} ,\n\\label{_auto16} \\tag{21}\n\\end{equation}\n$$\n\nwhere the single-particle momentum vectors $\\mathbf{k}_{p,q,r,s}$\nare now defined in two dimensions.\n\nIn actual calculations, the \nsingle-particle energies, defined by the operator $\\hat{f}$, are given by\n\n\n
\n\n$$\n\\begin{equation}\n \\langle \\mathbf{k}_{p}|f|\\mathbf{k}_{q} \\rangle\n = \\frac{\\hbar^{2}k_{p}^{2}}{2m}\\delta_{\\mathbf{k}_{p},\n \\mathbf{k}_{q}} + \\sum_{\\mathbf{k}_{i}}\\langle \n \\mathbf{k}_{p}\\mathbf{k}_{i}|v|\\mathbf{k}_{q}\n \\mathbf{k}_{i}\\rangle_{AS}.\n\\label{eq:fock_heg} \\tag{22}\n\\end{equation}\n$$\n\n## Periodic boundary conditions and single-particle states\n\nWhen using periodic boundary conditions, the \ndiscrete-momentum single-particle basis functions\n\n$$\n\\phi_{\\mathbf{k}}(\\mathbf{r}) =\ne^{i\\mathbf{k}\\cdot \\mathbf{r}}/L^{d/2}\n$$\n\nare associated with \nthe single-particle energy\n\n\n
\n\n$$\n\\begin{equation}\n \\varepsilon_{n_{x}, n_{y}} = \\frac{\\hbar^{2}}{2m} \\left( \\frac{2\\pi }{L}\\right)^{2}\\left( n_{x}^{2} + n_{y}^{2}\\right)\n\\label{_auto17} \\tag{23}\n\\end{equation}\n$$\n\nfor two-dimensional systems and\n\n\n
\n\n$$\n\\begin{equation}\n \\varepsilon_{n_{x}, n_{y}, n_{z}} = \\frac{\\hbar^{2}}{2m}\n \\left( \\frac{2\\pi }{L}\\right)^{2}\n \\left( n_{x}^{2} + n_{y}^{2} + n_{z}^{2}\\right)\n\\label{_auto18} \\tag{24}\n\\end{equation}\n$$\n\nfor three-dimensional systems.\n\n\nWe choose the single-particle basis such that both the occupied and \nunoccupied single-particle spaces have a closed-shell \nstructure. This means that all single-particle states \ncorresponding to energies below a chosen cutoff are\nincluded in the basis. We study only the unpolarized spin\nphase, in which all orbitals are occupied with one spin-up \nand one spin-down electron. \n\n\nThe table illustrates how single-particle energies\n fill energy shells in a two-dimensional electron box.\n Here $n_{x}$ and $n_{y}$ are the momentum quantum numbers,\n $n_{x}^{2} + n_{y}^{2}$ determines the single-particle \n energy level, $N_{\\uparrow \\downarrow }$ represents the \n cumulated number of spin-orbitals in an unpolarized spin\n phase, and $N_{\\uparrow \\uparrow }$ stands for the\n cumulated number of spin-orbitals in a spin-polarized\n system.\n\n\n\n\n## Magic numbers for the two-dimensional electron gas\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
| $n_{x}^{2}+n_{y}^{2}$ | $n_{x}$ | $n_{y}$ | $N_{\uparrow \downarrow }$ | $N_{\uparrow \uparrow }$ |
|-----------------------|---------|---------|----------------------------|--------------------------|
| 0 | 0 | 0 | 2 | 1 |
| 1 | -1 | 0 |  |  |
| 1 | 1 | 0 |  |  |
| 1 | 0 | -1 |  |  |
| 1 | 0 | 1 | 10 | 5 |
| 2 | -1 | -1 |  |  |
| 2 | -1 | 1 |  |  |
| 2 | 1 | -1 |  |  |
| 2 | 1 | 1 | 18 | 9 |
| 4 | -2 | 0 |  |  |
| 4 | 2 | 0 |  |  |
| 4 | 0 | -2 |  |  |
| 4 | 0 | 2 | 26 | 13 |
| 5 | -2 | -1 |  |  |
| 5 | 2 | -1 |  |  |
| 5 | -2 | 1 |  |  |
| 5 | 2 | 1 |  |  |
| 5 | -1 | -2 |  |  |
| 5 | -1 | 2 |  |  |
| 5 | 1 | -2 |  |  |
| 5 | 1 | 2 | 42 | 21 |
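The shell structure in the table above can be generated programmatically. A minimal sketch using only the Python standard library (the cutoff `nmax` is an arbitrary choice that keeps the listed shells complete):

```python
from collections import Counter

nmax = 3   # shells with nx^2 + ny^2 <= nmax^2 are guaranteed complete
states = [(nx, ny) for nx in range(-nmax, nmax + 1)
                   for ny in range(-nmax, nmax + 1)]
shells = Counter(nx * nx + ny * ny for nx, ny in states)

cumulative, magic = 0, []
for e in sorted(shells):
    cumulative += 2 * shells[e]   # factor 2: spin degeneracy, unpolarized phase
    magic.append(cumulative)
print(magic[:5])   # [2, 10, 18, 26, 42]
```

For the spin-polarized column $N_{\uparrow\uparrow}$ one simply drops the factor of two in the cumulative sum.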
\n## Hartree-Fock energies\n\nFinally, a useful benchmark for our calculations is the expression for\nthe reference energy $E_0$ per particle.\nDefining the $T=0$ density $\\rho$, we can in turn determine in three\ndimensions the radius $r_0$ of a sphere representing the volume an\nelectron occupies (the Wigner-Seitz radius) as\n\n$$\nr_0= \\left(\\frac{3}{4\\pi \\rho}\\right)^{1/3}.\n$$\n\nIn two dimensions the corresponding quantity is\n\n$$\nr_0= \\left(\\frac{1}{\\pi \\rho}\\right)^{1/2}.\n$$\n\nOne can then express the reference energy per electron in terms of the\ndimensionless quantity $r_s=r_0/a_0$, where we have introduced the\nBohr radius $a_0=\\hbar^2/e^2m$. The energy per electron computed with\nthe reference Slater determinant can then be written as\n(using hereafter only atomic units, meaning that $\\hbar = m = e = 1$)\n\n$$\nE_0/N=\\frac{1}{2}\\left[\\frac{2.21}{r_s^2}-\\frac{0.916}{r_s}\\right],\n$$\n\nfor the three-dimensional electron gas. For the two-dimensional gas\nthe corresponding expression is (show this)\n\n$$\nE_0/N=\\frac{1}{r_s^2}-\\frac{8\\sqrt{2}}{3\\pi r_s}.\n$$\n\nFor an infinite homogeneous system, there are some particular\nsimplifications due to the conservation of the total momentum of the\nparticles. By symmetry considerations, the total momentum of the\nsystem has to be zero. Both the kinetic energy operator and the\ntotal Hamiltonian $\\hat{H}$ are assumed to be diagonal in the total\nmomentum $\\mathbf{K}$. Hence, both the reference state $\\Phi_{0}$ and\nthe correlated ground state $\\Psi$ must be eigenfunctions of the\noperator $\\mathbf{\\hat{K}}$ with the corresponding eigenvalue\n$\\mathbf{K} = \\mathbf{0}$. This leads to important\nsimplifications in our different many-body methods. In coupled cluster\ntheory for example, all\nterms that involve single particle-hole excitations vanish. 
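As a quick numerical look at these reference energies (a sketch, not part of the original text; atomic units as above, so the three-dimensional bracket carries the factor $1/2$), both curves have a minimum at a finite $r_s$, which is why the system is stable at a finite density:

```python
import numpy as np

rs = np.linspace(1.0, 12.0, 200001)
E3d = 0.5 * (2.21 / rs**2 - 0.916 / rs)                # 3D HF energy per electron
E2d = 1.0 / rs**2 - 8 * np.sqrt(2) / (3 * np.pi * rs)  # 2D counterpart

i3, i2 = np.argmin(E3d), np.argmin(E2d)
print(rs[i3], E3d[i3])   # minimum near rs ≈ 4.83 with E0/N ≈ -0.047
print(rs[i2], E2d[i2])   # minimum near rs ≈ 1.67 with E0/N ≈ -0.360
```

The location of the three-dimensional minimum also follows analytically from $dE_0/dr_s=0$, which gives $r_s=2\cdot 2.21/0.916\approx 4.83$.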
## Exercise 3: Magic numbers for the three-dimensional electron gas and perturbation theory to second order

**a)**
Set up the possible magic numbers for the electron gas in three dimensions using periodic boundary conditions.

**Hint.**
Follow the example for the two-dimensional electron gas and add the third dimension via the quantum number $n_z$.

**Solution.**
Using the same approach as for the two-dimensional electron gas, with the single-particle kinetic energy defined as

$$
\frac{\hbar^2}{2m}\left(k_{n_x}^2+k_{n_y}^2+k_{n_z}^2\right),
$$

and

$$
k_{n_i}=\frac{2\pi n_i}{L}, \hspace{0.1cm} n_i = 0, \pm 1, \pm 2, \dots,
$$

we can set up a similar table and obtain (assuming one type of identical particles and including spin-up and spin-down solutions) for energies $n_{x}^{2}+n_{y}^{2}+n_{z}^{2}\le 3$:
| $n_{x}^{2}+n_{y}^{2}+n_{z}^{2}$ | $n_{x}$ | $n_{y}$ | $n_{z}$ | $N_{\uparrow \downarrow}$ |
|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 2 |
| 1 | -1 | 0 | 0 |  |
| 1 | 1 | 0 | 0 |  |
| 1 | 0 | -1 | 0 |  |
| 1 | 0 | 1 | 0 |  |
| 1 | 0 | 0 | -1 |  |
| 1 | 0 | 0 | 1 | 14 |
| 2 | -1 | -1 | 0 |  |
| 2 | -1 | 1 | 0 |  |
| 2 | 1 | -1 | 0 |  |
| 2 | 1 | 1 | 0 |  |
| 2 | -1 | 0 | -1 |  |
| 2 | -1 | 0 | 1 |  |
| 2 | 1 | 0 | -1 |  |
| 2 | 1 | 0 | 1 |  |
| 2 | 0 | -1 | -1 |  |
| 2 | 0 | -1 | 1 |  |
| 2 | 0 | 1 | -1 |  |
| 2 | 0 | 1 | 1 | 38 |
| 3 | -1 | -1 | -1 |  |
| 3 | -1 | -1 | 1 |  |
| 3 | -1 | 1 | -1 |  |
| 3 | -1 | 1 | 1 |  |
| 3 | 1 | -1 | -1 |  |
| 3 | 1 | -1 | 1 |  |
| 3 | 1 | 1 | -1 |  |
| 3 | 1 | 1 | 1 | 54 |
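The same enumeration as in the two-dimensional case, extended with $n_z$, reproduces the cumulative counts of the table. This is a sketch with an arbitrary cutoff `nmax` that is large enough for the lowest six energies.

```python
# Count single-particle states (n_x, n_y, n_z) of the 3D electron gas with
# periodic boundary conditions; each triple carries two spin states.
nmax = 3
counts = {}
for nx in range(-nmax, nmax + 1):
    for ny in range(-nmax, nmax + 1):
        for nz in range(-nmax, nmax + 1):
            e = nx * nx + ny * ny + nz * nz
            counts[e] = counts.get(e, 0) + 2  # spin up and spin down

magic = []
total = 0
for e in sorted(counts)[:6]:  # the six lowest energies
    total += counts[e]
    magic.append(total)
print(magic)  # [2, 14, 38, 54, 66, 114]
```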
Continuing in this way we get for $n_{x}^{2}+n_{y}^{2}+n_{z}^{2}=4$ a total of 12 additional states, resulting in $66$ as a new magic number. For the lowest six energy values the degeneracy in energy gives us $2$, $14$, $38$, $54$, $66$ and $114$ as magic numbers. These numbers will then define our Fermi level when we compute the energy in a Cartesian basis. When performing calculations based on many-body perturbation theory, coupled cluster theory or other many-body methods, we need to add states above the Fermi level in order to sum over single-particle states which are not occupied.

If we wish to study infinite nuclear matter with both protons and neutrons, the above magic numbers become $4, 28, 76, 108, 132, 228, \dots$.

**b)**
Every number of particles for filled shells defines also the number of particles to be used in a given calculation. Use the number of particles to define the density of the system

$$
\rho = g \frac{k_F^3}{6\pi^2},
$$

where you need to define $k_F$ and the degeneracy $g$, which is two for one type of spin-$1/2$ particles and four for symmetric nuclear matter.

**c)**
Use the density to find the length $L$ of the box used with periodic boundary conditions, that is, use the relation

$$
V= L^3= \frac{A}{\rho}.
$$

You can use $L$ to define the spacing between the various $k$-values, that is,

$$
\Delta k = \frac{2\pi}{L}.
$$

Here, $A$ can be the number of nucleons. If we deal with the electron gas only, this needs to be replaced by the number of electrons $N$.

**d)**
Calculate thereafter the Hartree-Fock total energy for the electron gas or infinite nuclear matter using the Minnesota interaction discussed during the lectures.
Compare the results with the exact Hartree-Fock results for the electron gas as a function of the number of particles.

**e)**
Compute now the contribution to the correlation energy for the electron gas at the level of second-order perturbation theory using a given number of electrons $N$ and a given (defined by you) number of single-particle states above the Fermi level.
The following Python code shows an implementation for the electron gas in three dimensions at second order in perturbation theory using the Coulomb interaction. Here we have hard-coded a case which computes the energy for $N=14$ and a total of $5$ major shells.

**Solution.**


```python
from numpy import *

class electronbasis():
    def __init__(self, N, rs, Nparticles):
        ############################################################
        ##
        ## Initialize basis:
        ##   N = number of shells
        ##   rs = parameter for volume
        ##   Nparticles = number of holes (conflicting naming, sorry)
        ##
        ############################################################
        self.rs = rs
        self.states = []
        self.nstates = 0
        self.nparticles = Nparticles
        self.nshells = N - 1
        self.Nm = N + 1

        self.k_step = 2*(self.Nm + 1)
        Nm = N
        n = 0  # current shell
        ene_integer = 0
        while n <= self.nshells:
            is_shell = False
            for x in range(-Nm, Nm+1):
                for y in range(-Nm, Nm+1):
                    for z in range(-Nm, Nm+1):
                        e = x*x + y*y + z*z
                        if e == ene_integer:
                            is_shell = True
                            self.nstates += 2
                            self.states.append([e, x, y, z, 1])
                            self.states.append([e, x, y, z, -1])
            if is_shell:
                n += 1
            ene_integer += 1
        self.L3 = (4*pi*self.nparticles*self.rs**3)/3.0
        self.L2 = self.L3**(2/3.0)
        self.L = pow(self.L3, 1/3.0)

        # Multiply in the missing factors in the single-particle energies
        for i in range(self.nstates):
            self.states[i][0] *= 2*(pi**2)/self.L**2
        # Convert to array to utilize vectorized calculations
        self.states = array(self.states)

    def hfenergy(self, nParticles):
        # Calculate the HF energy (reference energy) for nParticles particles
        e0 = 0.0
        if nParticles <= self.nstates:
            for i in range(nParticles):
                e0 += self.h(i, i)
                for j in range(nParticles):
                    if j != i:
                        e0 += .5*self.v(i, j, i, j)
        else:
            # Safety for cases where nParticles exceeds the size of the basis
            print("Not enough basis states.")
        return e0

    def h(self, p, q):
        # Single-particle energy
        return self.states[p, 0]*(p == q)

    def v(self, p, q, r, s):
        # Two-body interaction for the electron gas
        val = 0
        term1 = 0.0
        term2 = 0.0
        kdpl = self.kdplus(p, q, r, s)
        if kdpl != 0:
            val = 1.0/self.L3
            if self.kdspin(p, r)*self.kdspin(q, s) == 1:
                if self.kdwave(p, r) != 1.0:
                    term1 = self.L2/(pi*self.absdiff2(r, p))
            if self.kdspin(p, s)*self.kdspin(q, r) == 1:
                if self.kdwave(p, s) != 1.0:
                    term2 = self.L2/(pi*self.absdiff2(s, p))
        return val*(term1 - term2)

    # The following is a series of Kronecker deltas used in the
    # two-body interactions.
    def kdi(self, a, b):
        # Kronecker delta for integers
        return 1.0*(a == b)

    def kda(self, a, b):
        # Kronecker delta for arrays
        d = 1.0
        for i in range(len(a)):
            d *= (a[i] == b[i])
        return d

    def kdfullplus(self, p, q, r, s):
        # Kronecker delta for wavenumber and spin, p+q == r+s
        return self.kda(self.states[p][1:5]+self.states[q][1:5],
                        self.states[r][1:5]+self.states[s][1:5])

    def kdplus(self, p, q, r, s):
        # Kronecker delta for wavenumber, p+q == r+s
        return self.kda(self.states[p][1:4]+self.states[q][1:4],
                        self.states[r][1:4]+self.states[s][1:4])

    def kdspin(self, p, q):
        # Kronecker delta for spin
        return self.kdi(self.states[p][4], self.states[q][4])

    def kdwave(self, p, q):
        # Kronecker delta for wavenumber
        return self.kda(self.states[p][1:4], self.states[q][1:4])

    def absdiff2(self, p, q):
        # Squared norm of the wavenumber difference
        val = 0.0
        for i in range(1, 4):
            val += (self.states[p][i]-self.states[q][i])**2
        return val


def MBPT2(bs):
    # Second-order MBPT energy
    Nh = bs.nparticles
    # Note the conflicting notation: bs.nparticles is the number of hole states
    Np = bs.nstates - bs.nparticles
    vhhpp = zeros((Nh**2, Np**2))
    vpphh = zeros((Np**2, Nh**2))
    # Manual MBPT(2) energy (should be -0.525588309385 for 66 states,
    # shells = 5, in this code)
    for i in range(Nh):
        for j in range(Nh):
            for a in range(Np):
                for b in range(Np):
                    vhhpp[i + j*Nh, a + b*Np] = bs.v(i, j, a+Nh, b+Nh)
                    vpphh[a + b*Np, i + j*Nh] = bs.v(a+Nh, b+Nh, i, j)/(
                        bs.states[i, 0] + bs.states[j, 0]
                        - bs.states[a+Nh, 0] - bs.states[b+Nh, 0])
    psum = .25*sum(dot(vhhpp, vpphh).diagonal())
    return psum


def MBPT2_fast(bs):
    # Second-order MBPT energy, exploiting the antisymmetry of the interaction
    Nh = bs.nparticles
    Np = bs.nstates - bs.nparticles
    vhhpp = zeros((Nh**2, Np**2))
    vpphh = zeros((Np**2, Nh**2))
    for i in range(Nh):
        for j in range(i):
            for a in range(Np):
                for b in range(a):
                    val = bs.v(i, j, a+Nh, b+Nh)
                    eps = val/(bs.states[i, 0] + bs.states[j, 0]
                               - bs.states[a+Nh, 0] - bs.states[b+Nh, 0])
                    vhhpp[i + j*Nh, a + b*Np] = val
                    vhhpp[j + i*Nh, a + b*Np] = -val
                    vhhpp[i + j*Nh, b + a*Np] = -val
                    vhhpp[j + i*Nh, b + a*Np] = val

                    vpphh[a + b*Np, i + j*Nh] = eps
                    vpphh[a + b*Np, j + i*Nh] = -eps
                    vpphh[b + a*Np, i + j*Nh] = -eps
                    vpphh[b + a*Np, j + i*Nh] = eps
    psum = .25*sum(dot(vhhpp, vpphh).diagonal())
    return psum


# User input here
number_of_shells = 5
number_of_holes = 14  # (particles)

# Initialize basis
bs = electronbasis(number_of_shells, 1.0, number_of_holes)  # shells, r_s = 1.0, holes

# Print some info to screen
print("Number of shells:", number_of_shells)
print("Number of states:", bs.nstates)
print("Number of holes :", bs.nparticles)
print("Reference Energy:", bs.hfenergy(number_of_holes), "hartrees")
print("                :", 2*bs.hfenergy(number_of_holes), "rydbergs")
print("Ref.E. per hole :", bs.hfenergy(number_of_holes)/number_of_holes, "hartrees")
print("                :", 2*bs.hfenergy(number_of_holes)/number_of_holes, "rydbergs")

# Calculate the MBPT2 energy
print("MBPT2 energy    :", MBPT2_fast(bs), "hartrees")
```

As we will see later, for the infinite electron gas, second-order perturbation theory diverges in the thermodynamic limit, a feature which can easily be noted if one lets the number of single-particle states above the Fermi level increase. The resulting expression in a Cartesian basis will not converge.
# PHY321: More on Motion and Forces with examples, begin Work and Energy discussion
**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/)**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA and Department of Physics, University of Oslo, Norway

Date: **Jan 27, 2022**

Copyright 1999-2022, [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/). Released under CC Attribution-NonCommercial 4.0 license

## Aims and Overarching Motivation

### Monday January 24, Lecture and examples

We discuss various forces and their pertinent equations of motion.

Recommended reading: Taylor 2.1-2.4. Malthe-Sørenssen chapters 6 and 7 contain many interesting examples with codes and solutions.
We will cover in particular a falling object in two dimensions with linear air resistance, relevant for homework 3.

### Wednesday, Lecture and Examples

We discuss other force models, with examples such as the gravitational
force and a spring force. See Malthe-Sørenssen chapter 7.3-7.5.

We also discuss exercises 5 and 6 from homework 2, and will continue with this on Friday.

### Friday, summary and work on weekly assignments

We start our discussion of energy and work, see Taylor 4.1 and 4.2, during the first 20 minutes. Thereafter we solve exercises.

## Air Resistance in One Dimension

Last week we considered the motion of a falling object with air
resistance. Here we look at a resistance both quadratic and linear in the
velocity. But first we give a qualitative argument
about the mathematical expression for the air resistance we used last
Friday.
The discussion here is also contained in the [video on how to make a model for a drag force](https://youtu.be/DRTmLZYdTFY).

Air resistance tends to scale as the square of the velocity. This is
in contrast to many problems chosen for textbooks, where it is linear
in the velocity. The choice of a linear dependence is motivated by
mathematical simplicity (it keeps the differential equation linear)
rather than by physics. One can see that the force should be quadratic
in velocity by considering the momentum imparted on the air
molecules. If an object sweeps through a volume $dV$ of air in time
$dt$, the momentum imparted on the air is
$$
\begin{equation}
dP=\rho_m dV v,
\label{_auto1} \tag{1}
\end{equation}
$$

where $v$ is the velocity of the object and $\rho_m$ is the mass
density of the air. If the molecules bounce back, as opposed to stopping,
the size of the term would double. The opposite value of the
momentum is imparted onto the object itself. Geometrically, the
differential volume is

$$
\begin{equation}
dV=Avdt,
\label{_auto2} \tag{2}
\end{equation}
$$

where $A$ is the cross-sectional area and $vdt$ is the distance the
object moved in time $dt$.

## Resulting Acceleration
Plugging this into the expression above,

$$
\begin{equation}
\frac{dP}{dt}=-\rho_m A v^2.
\label{_auto3} \tag{3}
\end{equation}
$$

This is the force felt by the particle, and is opposite to its
direction of motion. Now, because air doesn't stop when it hits an
object, but flows around the best it can, the actual force is reduced
by a dimensionless factor $c_W$, called the drag coefficient,

$$
\begin{equation}
F_{\rm drag}=-c_W\rho_m Av^2,
\label{_auto4} \tag{4}
\end{equation}
$$

and the acceleration is

$$
\begin{eqnarray}
\frac{dv}{dt}=-\frac{c_W\rho_mA}{m}v^2.
\end{eqnarray}
$$

For a particle with initial velocity $v_0$, one can separate the $dt$
to one side of the equation, and move everything with $v$s to the
other side. We did this in our discussion of simple motion and will not repeat it here.

In more general terms,
for many systems, e.g. an automobile, there are multiple sources of
resistance. In addition to wind resistance, where the force is
proportional to $v^2$, there are dissipative effects of the tires on
the pavement, and in the axle and drive train. These other forces can
have components that scale proportionally to $v$, and components that
are independent of $v$. Those independent of $v$, e.g. the usual
$f=\mu_K N$ frictional force you consider in your first physics courses, only set in
once the object is actually moving. As speeds become higher, the $v^2$
components begin to dominate relative to the others. For automobiles
at freeway speeds, the $v^2$ terms are largely responsible for the
loss of efficiency. To travel a distance $L$ at fixed speed $v$, the
energy/work required to overcome the dissipative forces is $fL$,
which for a force of the form $f=\alpha v^n$ becomes

$$
\begin{eqnarray}
W=\int dx~f=\alpha v^n L.
\end{eqnarray}
$$

For $n=0$ the work is
independent of speed, but for the wind resistance, where $n=2$,
slowing down is essential if one wishes to reduce fuel consumption. It
is also important to consider that engines are designed to be most
efficient at a chosen range of power output.
Thus, some cars will get
better mileage at higher speeds (they perform better at 50 mph than at
5 mph) despite the considerations mentioned above.

## Going Ballistic, Projectile Motion or a Softer Approach, Falling Raindrops

As an example of Newton's Laws we consider projectile motion (or a
falling raindrop or a ball we throw up in the air) with a drag force. Even though air resistance is
largely proportional to the square of the velocity, we will consider
the drag force to be linear in the velocity, $\boldsymbol{F}=-m\gamma\boldsymbol{v}$,
for the purposes of this exercise.

Such a dependence can be extracted from experimental data for objects moving at low velocities, see for example Malthe-Sørenssen chapter 5.6.

We will here focus on a two-dimensional problem.

## Two-dimensional falling object

The acceleration for a projectile moving upwards,
$\boldsymbol{a}=\boldsymbol{F}/m$, becomes

$$
\begin{eqnarray}
\frac{dv_x}{dt}=-\gamma v_x,\\
\nonumber
\frac{dv_y}{dt}=-\gamma v_y-g,
\end{eqnarray}
$$

and $\gamma$ has dimensions of inverse time.

If you on the other hand have a falling raindrop, how do these equations change? See for example Figure 2.1 in Taylor.
Let us stay with a ball which is thrown up in the air at $t=0$.

## Ways of solving these equations

We will go over two different ways to solve this equation. The first
is by direct integration, and the second is as a differential equation. To
do this by direct integration, one simply multiplies both sides of the
equations above by $dt$, then divides by the appropriate factors so
that the $v$s are all on one side of the equation and the $dt$ is on
the other.
For the $x$ motion one finds an easily integrable equation,

$$
\begin{eqnarray}
\frac{dv_x}{v_x}&=&-\gamma dt,\\
\nonumber
\int_{v_{0x}}^{v_{x}}\frac{dv_x}{v_x}&=&-\gamma\int_0^{t}dt,\\
\nonumber
\ln\left(\frac{v_{x}}{v_{0x}}\right)&=&-\gamma t,\\
\nonumber
v_{x}(t)&=&v_{0x}e^{-\gamma t}.
\end{eqnarray}
$$

This is very much the result you would have written down
by inspection. For the $y$-component of the velocity,

$$
\begin{eqnarray}
\frac{dv_y}{v_y+g/\gamma}&=&-\gamma dt,\\
\nonumber
\ln\left(\frac{v_{y}+g/\gamma}{v_{0y}+g/\gamma}\right)&=&-\gamma t,\\
\nonumber
v_{y}(t)&=&-\frac{g}{\gamma}+\left(v_{0y}+\frac{g}{\gamma}\right)e^{-\gamma t}.
\end{eqnarray}
$$

Whereas $v_x$ starts at some value and decays
exponentially to zero, $v_y$ decays exponentially to the terminal
velocity, $v_t=-g/\gamma$.

## Solving as differential equations

Although this direct integration is simpler than the method we invoke
below, the method below will come in useful for some slightly more
difficult differential equations in the future. The differential
equation for $v_x$ is straightforward to solve. Because it is a
first-order differential equation there is one arbitrary constant, $A$, and by inspection the
solution is
$$
\begin{equation}
v_x=Ae^{-\gamma t}.
\label{_auto5} \tag{5}
\end{equation}
$$

The arbitrary constants for equations of motion are usually determined
by the initial conditions, or more generally boundary conditions. By
inspection $A=v_{0x}$, the initial $x$ component of the velocity.

## Differential Equations, contn
The differential equation for $v_y$ is a bit more complicated due to
the presence of $g$. Differential equations where all the terms are
linearly proportional to a function, in this case $v_y$, or to
derivatives of the function, e.g., $v_y$, $dv_y/dt$,
$d^2v_y/dt^2\cdots$, are called linear differential equations. If
there are terms proportional to $v^2$, as would happen if the drag
forces were proportional to the square of the velocity, the
differential equation is no longer linear. Because this expression
has only one derivative in $v$ it is a first-order linear differential
equation. If a term were added proportional to $d^2v/dt^2$ it would be
a second-order differential equation. In this case we have a term
completely independent of $v$, the gravitational acceleration $g$, and
the usual strategy is to first rewrite the equation with all the
linear terms on one side of the equal sign,
$$
\begin{equation}
\frac{dv_y}{dt}+\gamma v_y=-g.
\label{_auto6} \tag{6}
\end{equation}
$$

## Splitting into two parts

Now, the solution to the equation can be broken into two
parts. Because this is a first-order differential equation we know
that there will be one arbitrary constant. Physically, the arbitrary
constant will be determined by setting the initial velocity, though it
could be determined by setting the velocity at any given time. Like
most differential equations, solutions are not "solved". Instead,
one guesses at a form, then shows the guess is correct. For these
types of equations, one first tries to find a single solution,
i.e. one with no arbitrary constants. This is called the
**particular** solution, $y_p(t)$, though it should really be called
"a" particular solution because there are an infinite number of such
solutions. One then finds a solution to the **homogeneous** equation,
which is the equation with zero on the right-hand side,

$$
\begin{equation}
\frac{dv_{y,h}}{dt}+\gamma v_{y,h}=0.
\label{_auto7} \tag{7}
\end{equation}
$$

Homogeneous solutions will have arbitrary constants.

The particular solution will solve the same equation as the original
general equation

$$
\begin{equation}
\frac{dv_{y,p}}{dt}+\gamma v_{y,p}=-g.
\label{_auto8} \tag{8}
\end{equation}
$$

However, we don't need to find one with arbitrary constants. Hence, it is
called a **particular** solution.

The sum of the two,

$$
\begin{equation}
v_y=v_{y,p}+v_{y,h},
\label{_auto9} \tag{9}
\end{equation}
$$

is a solution of the total equation because of the linear nature of
the differential equation. One has now found a *general* solution
encompassing all solutions, because it both satisfies the general
equation (like the particular solution), and has an arbitrary constant
that can be adjusted to fit any initial condition (like the homogeneous
solution). If the equations were not linear, that is if there were terms
such as $v_y^2$ or $v_y\dot{v}_y$, this technique would not work.

## More details

Returning to the example above, the homogeneous solution is the same as
that for $v_x$, because there was no gravitational acceleration in
that case,
$$
\begin{equation}
v_{y,h}=Be^{-\gamma t}.
\label{_auto10} \tag{10}
\end{equation}
$$

In this case a particular solution is one with constant velocity,

$$
\begin{equation}
v_{y,p}=-g/\gamma.
\label{_auto11} \tag{11}
\end{equation}
$$

Note that this is the terminal velocity of a particle falling from a
great height. The general solution is thus,

$$
\begin{equation}
v_y=Be^{-\gamma t}-g/\gamma,
\label{_auto12} \tag{12}
\end{equation}
$$

and one can find $B$ from the initial velocity,

$$
\begin{equation}
v_{0y}=B-g/\gamma,~~~B=v_{0y}+g/\gamma.
\label{_auto13} \tag{13}
\end{equation}
$$

Plugging in the expression for $B$ gives the $y$ motion given the initial velocity,
$$
\begin{equation}
v_y=(v_{0y}+g/\gamma)e^{-\gamma t}-g/\gamma.
\label{_auto14} \tag{14}
\end{equation}
$$

It is easy to see that this solution has $v_y=v_{0y}$ when $t=0$ and
$v_y=-g/\gamma$ when $t\rightarrow\infty$.

One can also integrate the two equations to find the coordinates $x$
and $y$ as functions of $t$,

$$
\begin{eqnarray}
x&=&\int_0^t dt'~v_{x}(t')=\frac{v_{0x}}{\gamma}\left(1-e^{-\gamma t}\right),\\
\nonumber
y&=&\int_0^t dt'~v_{y}(t')=-\frac{gt}{\gamma}+\frac{v_{0y}+g/\gamma}{\gamma}\left(1-e^{-\gamma t}\right).
\end{eqnarray}
$$

If the question was to find the position at a time $t$, we would be
finished. However, the more common goal in a projectile
problem is to find the range, i.e. the distance $x$ at which $y$
returns to zero. For the case without a drag force this was much
simpler. The solution for the $y$ coordinate would have been
$y=v_{0y}t-gt^2/2$. One would solve for $t$ to make $y=0$, which would
be $t=2v_{0y}/g$, then plug that value for $t$ into $x=v_{0x}t$ to
find $x=2v_{0x}v_{0y}/g=v_0^2\sin(2\theta_0)/g$. One follows the same
steps here, except that the expression for $y(t)$ is more
complicated. Searching for the time where $y=0$, we get
$$
\begin{equation}
0=-\frac{gt}{\gamma}+\frac{v_{0y}+g/\gamma}{\gamma}\left(1-e^{-\gamma t}\right).
\label{_auto15} \tag{15}
\end{equation}
$$

This cannot be inverted into a simple expression $t=\cdots$. Such
expressions are known as "transcendental equations", and are not the
rare instance, but are the norm. In the days before computers, one
might plot the right-hand side of the above graphically as
a function of time, then find the point where it crosses zero.

Now, the most common way to solve an equation of the above type
is to apply Newton's method numerically. This involves the
following algorithm for finding solutions of some equation $F(t)=0$:

1. First guess a value for the time, $t_{\rm guess}$.

2. Calculate $F$ and its derivative, $F(t_{\rm guess})$ and $F'(t_{\rm guess})$.

3. Unless you guessed perfectly, $F\ne 0$, and assuming that $\Delta F\approx F'\Delta t$, one chooses $\Delta t=-F(t_{\rm guess})/F'(t_{\rm guess})$.

4. Now repeat step 2, but with $t_{\rm guess}\rightarrow t_{\rm guess}+\Delta t$.

If $F(t)$ were perfectly linear in $t$, one would find $t$ in one
step. Instead, one typically finds a value of $t$ that is closer to
the final answer than $t_{\rm guess}$. One breaks the loop once one
finds $F$ within some acceptable tolerance of zero.
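The steps above can be sketched for the range equation as follows. The parameter values for $g$, $\gamma$ and $v_{0y}$ are illustrative only, and the drag-free flight time $2v_{0y}/g$ serves as the initial guess.

```python
from math import exp

# Newton's method for F(t) = -g*t/gamma + (v0y + g/gamma)/gamma*(1 - exp(-gamma*t)) = 0,
# the time at which y returns to zero for the linear-drag projectile.
g, gamma, v0y = 9.81, 0.5, 10.0   # illustrative values

def F(t):
    return -g * t / gamma + (v0y + g / gamma) / gamma * (1.0 - exp(-gamma * t))

def Fprime(t):
    return -g / gamma + (v0y + g / gamma) * exp(-gamma * t)

t = 2 * v0y / g  # drag-free flight time as initial guess
for _ in range(100):
    dt = -F(t) / Fprime(t)
    t += dt
    if abs(dt) < 1e-12:
        break
print(t, F(t))
```

Note that $t=0$ is also a (trivial) root of $F$; starting from the drag-free estimate steers the iteration toward the physical, nonzero root.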
A program to do
this will be added shortly.

## Motion in a Magnetic Field

This case is just another example of motion under a specific force.

Another example of a velocity-dependent force is magnetism,

$$
\begin{eqnarray}
\boldsymbol{F}&=&q\boldsymbol{v}\times\boldsymbol{B},\\
\nonumber
F_i&=&q\sum_{jk}\epsilon_{ijk}v_jB_k.
\end{eqnarray}
$$

For a uniform field in the $z$ direction, $\boldsymbol{B}=B\hat{z}$, the force can only have $x$ and $y$ components,

$$
\begin{eqnarray}
F_x&=&qBv_y,\\
\nonumber
F_y&=&-qBv_x.
\end{eqnarray}
$$

The differential equations are

$$
\begin{eqnarray}
\dot{v}_x&=&\omega_c v_y,\\
\nonumber
\dot{v}_y&=&-\omega_c v_x,
\end{eqnarray}
$$

with the cyclotron frequency $\omega_c=qB/m$.
One can solve the equations by taking time derivatives of either equation, then substituting into the other equation,

$$
\begin{eqnarray}
\ddot{v}_x&=&\omega_c\dot{v}_y=-\omega_c^2v_x,\\
\nonumber
\ddot{v}_y&=&-\omega_c\dot{v}_x=-\omega_c^2v_y.
\end{eqnarray}
$$

The solution to these equations can be seen by inspection,

$$
\begin{eqnarray}
v_x&=&A\sin(\omega_ct+\phi),\\
\nonumber
v_y&=&A\cos(\omega_ct+\phi).
\end{eqnarray}
$$

One can integrate the equations to find the positions as functions of time,

$$
\begin{eqnarray}
x-x_0&=&\int_0^t dt'~v_x(t')=-\frac{A}{\omega_c}\cos(\omega_ct+\phi),\\
\nonumber
y-y_0&=&\int_0^t dt'~v_y(t')=\frac{A}{\omega_c}\sin(\omega_ct+\phi),
\end{eqnarray}
$$

where the constant terms from the lower integration limits have been absorbed into $x_0$ and $y_0$.
The trajectory is a circle centered at $(x_0,y_0)$ with radius $A/\omega_c$, traversed in the clockwise direction.

The equations of motion for the $z$ motion are
$$
\begin{equation}
\dot{v}_z=0,
\label{_auto16} \tag{16}
\end{equation}
$$

which leads to

$$
\begin{equation}
z-z_0=V_zt.
\label{_auto17} \tag{17}
\end{equation}
$$

Added onto the circle, the motion is helical.

Note that the kinetic energy,

$$
\begin{equation}
T=\frac{1}{2}m(v_x^2+v_y^2+v_z^2)=\frac{1}{2}m(A^2+V_z^2),
\label{_auto18} \tag{18}
\end{equation}
$$

is constant. This is because the force is perpendicular to the
velocity, so that in any differential time element $dt$ the work done
on the particle is $\boldsymbol{F}\cdot d\boldsymbol{r}=dt\,\boldsymbol{F}\cdot\boldsymbol{v}=0$.

One should think about the implications of a velocity-dependent
force. Suppose one had a constant magnetic field in deep space. If a
particle came through with velocity $v_0$, it would undergo cyclotron
motion with radius $R=v_0/\omega_c$. However, if it were at rest, its
motion would remain fixed. Now, suppose an observer looked at the
particle in one reference frame where the particle was moving, then
changed their velocity so that the particle's velocity appeared to be
zero. The motion would change from circular to fixed. Is this
possible?

The solution to the puzzle above relies on understanding
relativity. Imagine that the first observer believes $\boldsymbol{B}\ne 0$ and
that the electric field $\boldsymbol{E}=0$. If the observer then changes
reference frames by accelerating to a velocity $\boldsymbol{v}$, in the new
frame $\boldsymbol{B}$ and $\boldsymbol{E}$ both change. If the observer moved to the
frame where the charge, originally moving with a small velocity $v$,
is now at rest, the new electric field is indeed $\boldsymbol{v}\times\boldsymbol{B}$,
which then leads to the same acceleration as one had before. If the
velocity is not small compared to the speed of light, additional
$\gamma$ factors come into play,
$\gamma=1/\sqrt{1-(v/c)^2}$. Relativistic motion will not be
considered in this course.

## Sliding Block tied to a Wall

Another classical case is that of simple harmonic oscillations, here
represented by a block sliding on a horizontal frictionless
surface. The block is tied to a wall with a spring.
If the spring is
not compressed or stretched too far, the force on the block at a given
position $x$ is

$$
F=-kx.
$$

## Back and Forth, Sliding Block with no friction

The negative sign means that the force acts to restore the object to an equilibrium position. Newton's equation of motion for this idealized system is then

$$
m\frac{d^2x}{dt^2}=-kx,
$$

or we could rephrase it as
$$
\frac{d^2x}{dt^2}=-\frac{k}{m}x=-\omega_0^2x,
\label{eq:newton1} \tag{19}
$$

with the angular frequency $\omega_0^2=k/m$.

We will derive the above force when we start studying **harmonic oscillations**.

## Final rewrite

With the position $x(t)$ and the velocity $v(t)=dx/dt$ we can reformulate Newton's equation in the following way

$$
\frac{dx(t)}{dt}=v(t),
$$

and

$$
\frac{dv(t)}{dt}=-\omega_0^2x(t).
$$

With initial conditions $x(t_0)=x_0$ and $v(t_0)=v_0$ we can in turn solve the differential equations.

## Analytical Solution

The above differential equation has the advantage that it can be
solved analytically with general solutions of the form

$$
x(t)=A\cos{\omega_0t}+B\sin{\omega_0t},
$$

and

$$
v(t)=-\omega_0 A\sin{\omega_0t}+\omega_0 B\cos{\omega_0t},
$$

where $A$ and $B$ are constants to be determined from the initial conditions.

This provides in turn an important test for the numerical solution and
the development of a program for more complicated cases which cannot
be solved analytically.

We will discuss the above equations in more detail when we discuss harmonic oscillations.

## Summarizing the various motion problems 1

The examples we have discussed above were included in order to
illustrate various methods (which depend on the specific problem) to
find the solutions of the equations of motion.
We have solved the equations of motion in the following ways:

**Solve the differential equations analytically.**

We did this for example with the falling object in one or two dimensions, or with the sliding block.
\nHere we had for example an equation set like\n\n$$\n\\frac{dv_x}{dt}=-\\gamma v_x,\n$$\n\nand\n\n$$\n\\frac{dv_y}{dt}=-\\gamma v_y-g,\n$$\n\nand $\\gamma$ has dimension of inverse time.\n\n## Summarizing the various motion problems 2\n\n**Integrate the equations.**\n\nWe could also integrate directly in case we can separate the degrees of freedom in an easy way. Take for example one of the equations in the previous slide\n\n$$\n\\frac{dv_x}{dt}=-\\gamma v_x,\n$$\n\nwhich we can rewrite in terms of a left-hand side which depends only on the velocity and a right-hand side which depends only on time\n\n$$\n\\frac{dv_x}{v_x}=-\\gamma dt.\n$$\n\nIntegrating we have (since we can separate $v_x$ and $t$)\n\n$$\n\\int_{v_0}^{v_f}\\frac{dv_x}{v_x}=-\\int_{t_0}^{t_f}\\gamma dt,\n$$\n\nwhere $v_f$ is the velocity at the final time $t_f$.\nIn this case we found, after having integrated both sides and setting $t_0=0$, that\n\n$$\nv_f=v_0\\exp{(-\\gamma t_f)}.\n$$\n\n## Summarizing the various motion problems 3\n\n**Solve the differential equations numerically.**\n\nFinally, using for example Euler's method, we can solve the\ndifferential equations numerically. If we can compare our numerical\nsolutions with analytical solutions, we have an extra check of our\nnumerical approaches.\n\nThe example code on the next slide is relevant for homework 3. Here we deal with a falling object in two dimensions. 
While the derivations above assumed an\nair resistance which is linear in the velocity, homework 3 uses a quadratic velocity dependence.\n\n## Code example using Euler's method\n\n**Note**: this code needs some additional expressions and will not run\n\n\n```\n%matplotlib inline\n\n# Common imports\nimport numpy as np\nimport pandas as pd\nfrom math import *\nimport matplotlib.pyplot as plt\nimport os\nfrom pylab import plt, mpl\nplt.style.use('seaborn')\nmpl.rcParams['font.family'] = 'serif'\n\n# define the gravitational acceleration\ng = 9.80665 #m/s^2\n# The mass and the drag constant D\nD = 0.00245 #mass/length kg/m\nm = 0.2 #kg, mass of falling object\nDeltaT = 0.001\n# set up final time, here just a number we have chosen\ntfinal = 1.0\n# set up number of points for all variables\nn = ceil(tfinal/DeltaT)\n# set up arrays for t, a, v, and y and arrays for analytical results\n# Note the brute force setting up of arrays for x and y, vx, vy, ax and ay\n# For hw3 you should think of using the 2-dim vectors you used in homework 2\nt = np.zeros(n)\nvy = np.zeros(n)\ny = np.zeros(n)\nvx = np.zeros(n)\nx = np.zeros(n)\n# Initial conditions\nvx[0] = 10.0 #m/s\nvy[0] = 0.0 #m/s\ny[0] = 10.0 #m\nx[0] = 0.0 #m\n# Start integrating using Euler's method\nfor i in range(n-1):\n    # expression for acceleration, note the absolute value and division by mass\n    # ax = You need to set up the expression for force and thereby the acceleration in the x-direction\n    # ay = You need to set up the expression for force and thereby the acceleration in the y-direction\n    # update velocity and position\n    vx[i+1] = vx[i] + DeltaT*ax\n    x[i+1] = x[i] + DeltaT*vx[i]\n    vy[i+1] = vy[i] + DeltaT*ay\n    y[i+1] = y[i] + DeltaT*vy[i]\n    # update time to next time step and compute analytical answer\n    t[i+1] = t[i] + DeltaT\n    # Here you need to set up the analytical solution for y(t) and x(t)\n    # stop the integration when the object hits the ground\n    if y[i+1] < 0.0:\n        break\ndata = {'t[s]': t,\n        'Relative error in y': abs((y-yanalytic)/yanalytic),\n        'vy[m/s]':
 vy,\n        'Relative error in x': abs((x-xanalytic)/xanalytic),\n        'vx[m/s]': vx\n}\nNewData = pd.DataFrame(data)\ndisplay(NewData)\n# save to file; note that the variable outfile (the output file name) must be defined first\nNewData.to_csv(outfile, index=False)\n# then plot\nfig, axs = plt.subplots(4, 1)\naxs[0].plot(t, y)\naxs[0].set_xlim(0, tfinal)\naxs[0].set_ylabel('y')\naxs[1].plot(t, vy)\naxs[1].set_ylabel('vy[m/s]')\naxs[1].set_xlabel('time[s]')\naxs[2].plot(t, x)\naxs[2].set_xlim(0, tfinal)\naxs[2].set_ylabel('x')\naxs[3].plot(t, vx)\naxs[3].set_ylabel('vx[m/s]')\naxs[3].set_xlabel('time[s]')\nfig.tight_layout()\nplt.show()\n```\n\n## Work, Energy, Momentum and Conservation laws\n\nThe previous three cases have shown us how to use Newton\u2019s laws of\nmotion to determine the motion of an object based on the forces acting\non it. For two of the cases there is an underlying assumption that we can find an analytical solution to a continuous problem.\nBy a continuous problem we mean a problem where the various variables can take any value within a finite or infinite interval. \n\nUnfortunately, in many cases we\ncannot find an exact solution to the equations of motion we get from\nNewton\u2019s second law. The numerical approach, where we discretize the continuous problem, allows us however to study a much richer set of problems.\nFor problems involving Newton's laws and the various equations of motion we encounter, solving the equations numerically is the standard approach.\n\nIt allows us to focus on the underlying forces. Often we end up using the same numerical algorithm for different problems.\n\nHere we introduce a commonly used technique that allows us to find the\nvelocity as a function of position without finding the position as a\nfunction of time\u2014an alternate form of Newton\u2019s second law. The method\nis based on a simple principle: Instead of solving the equations of\nmotion directly, we integrate the equations of motion. Such a method\nis called an integration method. 
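To make this idea concrete, here is a small illustrative sketch (my own example, not course code) that recovers the speed as a function of position for the spring force $F=-kx$ by numerically accumulating $\int F\,dx$ with panel-by-panel trapezoids, anticipating the work-energy theorem derived below:

```python
import numpy as np

# Illustrative parameters (arbitrary): spring force F(x) = -k x
k, m = 1.0, 1.0
x0, v0 = 0.0, 1.0                 # start at the origin with speed v0

x = np.linspace(x0, 0.9, 901)     # positions at which we want v(x)
F = -k * x                        # force evaluated on the grid
# accumulated work W(x) = int_{x0}^{x} F dx', panel-by-panel trapezoids
W = np.concatenate(([0.0], np.cumsum(0.5 * (F[1:] + F[:-1]) * np.diff(x))))
# work-energy theorem: (1/2) m v^2(x) = (1/2) m v0^2 + W(x)
v = np.sqrt(v0**2 + 2.0 * W / m)

# closed form for this force: v(x) = sqrt(v0^2 - (k/m) x^2)
v_exact = np.sqrt(v0**2 - (k / m) * x**2)
print("max deviation from the closed form:", np.max(np.abs(v - v_exact)))
```

Because the force is linear in $x$, the trapezoids integrate it essentially exactly, and the numerically recovered $v(x)$ agrees with the closed form to floating-point precision.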
\n\nThis allows us also to introduce the **work-energy** theorem. This\ntheorem lets us find the velocity as a function of position for\nan object even in cases when we cannot solve the equations of\nmotion. This introduces us to the concept of work and kinetic energy,\nwhich is the energy related to the motion of an object.\n\nAnd finally, later, we will link the work-energy theorem with the principle of conservation of energy.\n\n## The Work-Energy Theorem\n\nLet us define the kinetic energy $K$ of an object with velocity $\\boldsymbol{v}$ as\n\n$$\nK=\\frac{1}{2}mv^2,\n$$\n\nwhere $m$ is the mass of the object we are considering.\nWe assume also that there is a force $\\boldsymbol{F}$ acting on the given object\n\n$$\n\\boldsymbol{F}=\\boldsymbol{F}(\\boldsymbol{r},\\boldsymbol{v},t),\n$$\n\nwith $\\boldsymbol{r}$ the position and $t$ the time.\nIn general we assume the force is a function of all these variables.\nMany of the central forces in Nature, however, depend only on the\nposition. Examples are the gravitational force and the force derived\nfrom the Coulomb potential in electromagnetism.\n\n## Rewriting the Kinetic Energy\n\nLet us study the derivative of the kinetic energy with respect to time $t$. 
Its continuous form is\n\n$$\n\\frac{dK}{dt}=\\frac{1}{2}m\\frac{d(\\boldsymbol{v}\\cdot\\boldsymbol{v})}{dt}.\n$$\n\nUsing our results from exercise 3 of homework 1, we can write the derivative of a vector dot product as\n\n$$\n\\frac{dK}{dt}=\\frac{1}{2}m\\frac{d(\\boldsymbol{v}\\cdot\\boldsymbol{v})}{dt}= \\frac{1}{2}m\\left(\\frac{d\\boldsymbol{v}}{dt}\\cdot\\boldsymbol{v}+\\boldsymbol{v}\\cdot\\frac{d\\boldsymbol{v}}{dt}\\right)=m\\frac{d\\boldsymbol{v}}{dt}\\cdot\\boldsymbol{v}.\n$$\n\nWe know also that the acceleration is defined as\n\n$$\n\\boldsymbol{a}=\\frac{\\boldsymbol{F}}{m}=\\frac{d\\boldsymbol{v}}{dt}.\n$$\n\nWe can then rewrite the equation for the derivative of the kinetic energy as\n\n$$\n\\frac{dK}{dt}=m\\frac{d\\boldsymbol{v}}{dt}\\cdot\\boldsymbol{v}=\\boldsymbol{F}\\cdot\\frac{d\\boldsymbol{r}}{dt},\n$$\n\nwhere we defined the velocity as the derivative of the position with respect to time.\n\n## Discretizing\n\nLet us now discretize the above equation by letting the instantaneous terms be replaced by a discrete quantity, that is\nwe let $dK\\rightarrow \\Delta K$, $dt\\rightarrow \\Delta t$, $d\\boldsymbol{r}\\rightarrow \\Delta \\boldsymbol{r}$ and $d\\boldsymbol{v}\\rightarrow \\Delta \\boldsymbol{v}$.\n\nWe have then\n\n$$\n\\frac{\\Delta K}{\\Delta t}=m\\frac{\\Delta \\boldsymbol{v}}{\\Delta t}\\cdot\\boldsymbol{v}=\\boldsymbol{F}\\cdot\\frac{\\Delta \\boldsymbol{r}}{\\Delta t},\n$$\n\nor by multiplying out $\\Delta t$ we have\n\n$$\n\\Delta K=\\boldsymbol{F}\\cdot\\Delta \\boldsymbol{r}.\n$$\n\nWe define this quantity as the **work** done by the force $\\boldsymbol{F}$\nduring the displacement $\\Delta \\boldsymbol{r}$. 
If we study the dimensionality\nof this problem we have mass times length squared divided by time\nsquared, that is, the dimension of energy.\n\n## Difference in kinetic energy\n\nIf we now define a series of such displacements $\\Delta\\boldsymbol{r}$ we have a difference in kinetic energy at a final position $\\boldsymbol{r}_n$ and an \ninitial position $\\boldsymbol{r}_0$ given by\n\n$$\n\\Delta K=\\frac{1}{2}mv_n^2-\\frac{1}{2}mv_0^2=\\sum_{i=0}^n\\boldsymbol{F}_i\\cdot\\Delta \\boldsymbol{r}_i,\n$$\n\nwhere $\\boldsymbol{F}_i$ are the forces acting at every position $\\boldsymbol{r}_i$.\n\nThe work done by acting with a force on a set of displacements can\nthen be expressed as the difference between the initial and final\nkinetic energies.\n\nThis defines the **work-energy** theorem.\n\n## From the discrete version to the continuous version\n\nIf we take the limit $\\Delta \\boldsymbol{r}\\rightarrow 0$, we can rewrite the sum over the various displacements in terms of an integral, that is\n\n$$\n\\Delta K=\\frac{1}{2}mv_n^2-\\frac{1}{2}mv_0^2=\\sum_{i=0}^n\\boldsymbol{F}_i\\cdot\\Delta \\boldsymbol{r}_i\\rightarrow \\int_{\\boldsymbol{r}_0}^{\\boldsymbol{r}_n}\\boldsymbol{F}(\\boldsymbol{r},\\boldsymbol{v},t)\\cdot d\\boldsymbol{r}.\n$$\n\nThis integral defines a path integral since it will depend on the given path we take between the two end points. We will replace the limits with the symbol $c$ in order to indicate that we take a specific contour in space when the force acts on the system. 
That is, the work $W_{n0}$ between two points $\\boldsymbol{r}_n$ and $\\boldsymbol{r}_0$ is labeled as\n\n$$\nW_{n0}=\\frac{1}{2}mv_n^2-\\frac{1}{2}mv_0^2=\\int_{c}\\boldsymbol{F}(\\boldsymbol{r},\\boldsymbol{v},t)\\cdot d\\boldsymbol{r}.\n$$\n\nNote that if the force is perpendicular to the displacement, then the force does not affect the kinetic energy.\n\nLet us now study some examples of forces and how to find the velocity from the integration over a given path.\n\nThereafter we study how to evaluate an integral numerically.\n\n## Studying the Work-energy Theorem numerically\n\nIn order to study the work-energy theorem, we will normally need to perform\na numerical integration, unless we can integrate analytically. Here we\npresent some of the simpler methods such as the **rectangle** rule, the **trapezoidal** rule and higher-order methods like the Simpson family of methods.\n\n## Example of an Electron moving along a Surface\n\nAs an example, let us consider the following case.\nWe have a classical electron which moves in the $x$-direction along a surface. 
The force from the surface is\n\n$$\n\\boldsymbol{F}(x)=-F_0\\sin{(\\frac{2\\pi x}{b})}\\boldsymbol{e}_1.\n$$\n\nThe constant $b$ represents the distance between atoms at the surface of the material, $F_0$ is a constant and $x$ is the position of the electron.\n\nUsing the work-energy theorem we can find the work $W$ done when moving an electron from a position $x_0$ to a final position $x$ through the\n integral\n\n$$\nW=\\int_{x_0}^x \\boldsymbol{F}(x')dx' = -\\int_{x_0}^x F_0\\sin{(\\frac{2\\pi x'}{b})} dx',\n$$\n\nwhich results in\n\n$$\nW=\\frac{F_0b}{2\\pi}\\left[\\cos{(\\frac{2\\pi x}{b})}-\\cos{(\\frac{2\\pi x_0}{b})}\\right].\n$$\n\n## Finding the Velocity\n\nIf we now use the work-energy theorem we can find the velocity at a final position $x$ by setting up\nthe differences in kinetic energies between the final position and the initial position $x_0$.\n\nWe have that the work done by the force is given by the difference in kinetic energies as\n\n$$\nW=\\frac{1}{2}m\\left(v^2(x)-v^2(x_0)\\right)=\\frac{F_0b}{2\\pi}\\left[\\cos{(\\frac{2\\pi x}{b})}-\\cos{(\\frac{2\\pi x_0}{b})}\\right],\n$$\n\nand labeling $v(x_0)=v_0$ (and assuming we know the initial velocity) we have\n\n$$\nv(x)=\\pm \\sqrt{v_0^2+\\frac{F_0b}{m\\pi}\\left[\\cos{(\\frac{2\\pi x}{b})}-\\cos{(\\frac{2\\pi x_0}{b})}\\right]}.\n$$\n\nChoosing $x_0=0$ m and $v_0=0$ m/s we can simplify the above equation to\n\n$$\nv(x)=\\pm \\sqrt{\\frac{F_0b}{m\\pi}\\left[\\cos{(\\frac{2\\pi x}{b})}-1\\right]}.\n$$\n\n## Harmonic Oscillations\n\nAnother well-known force (which we will derive when we come to harmonic\noscillations) is the case of a sliding block attached to a wall\nthrough a spring. The block is attached to a spring with spring\nconstant $k$. The other end of the spring is attached to the wall at\nthe origin $x=0$. 
We assume the spring has an equilibrium length\n$L_0$.\n\nThe force $F_x$ from the spring on the block is then\n\n$$\nF_x=-k(x-L_0).\n$$\n\nThe position $x$ where the spring force is zero is called the equilibrium position. In our case this is\n$x=L_0$.\n\nWe can now compute the work done by this force when we move our block from an initial position $x_0$ to a final position $x$\n\n$$\nW=\\int_{x_0}^{x}F_xdx'=-k\\int_{x_0}^{x}(x'-L_0)dx'=\\frac{1}{2}k(x_0-L_0)^2-\\frac{1}{2}k(x-L_0)^2.\n$$\n\nIf we now bring back the definition of the work-energy theorem in terms of the kinetic energy we have\n\n$$\nW=\\frac{1}{2}mv^2(x)-\\frac{1}{2}mv_0^2=\\frac{1}{2}k(x_0-L_0)^2-\\frac{1}{2}k(x-L_0)^2,\n$$\n\nwhich we rewrite as\n\n$$\n\\frac{1}{2}mv^2(x)+\\frac{1}{2}k(x-L_0)^2=\\frac{1}{2}mv_0^2+\\frac{1}{2}k(x_0-L_0)^2.\n$$\n\nWhat does this mean? The total energy, which is the sum of potential and kinetic energy, is conserved.\nWow, this sounds interesting. We will analyze this next week in more detail when we study energy, momentum and angular momentum conservation.\n\n## Numerical Integration\n\nThis material is optional. We will not cover this in the lectures and we will not use it in exercises or exams.\nIt is included here for the sake of completeness.\n\nLet us now see how we could have solved the above integral numerically.\n\nThere are several algorithms for finding an integral\nnumerically. The more familiar ones like the rectangular rule or the\ntrapezoidal rule have simple geometric interpretations.\n\nLet us look at the mathematical details of what are called equal-step methods, also known as Newton-Cotes quadrature.\n\n## Newton-Cotes Quadrature or equal-step methods\nThe integral\n\n\n
\n\n$$\n\\begin{equation}\n I=\\int_a^bf(x) dx\n\\label{eq:integraldef} \\tag{20}\n\\end{equation}\n$$\n\nhas a very simple meaning. The integral is the\narea enclosed by the function $f(x)$ from $x=a$ to $x=b$. It is subdivided into several smaller areas whose evaluation is to be approximated by different techniques. The areas under the curve can for example be approximated by rectangular boxes or trapezoids.\n\n## Basic philosophy of equal-step methods\nIn considering equal-step methods, our basic approach is that of approximating\na function $f(x)$ with a polynomial of at most \ndegree $N-1$, given $N$ integration points. If our polynomial is of degree $1$,\nthe function will be approximated with $f(x)\\approx a_0+a_1x$.\n\n## Simple algorithm for equal step methods\nThe algorithm for these integration methods is rather simple, and the number of approximations perhaps unlimited!\n\n* Choose a step size $h=(b-a)/N$ where $N$ is the number of steps and $a$ and $b$ the lower and upper limits of integration.\n\n* With a given step length we rewrite the integral as\n\n$$\n\\int_a^bf(x) dx= \\int_a^{a+h}f(x)dx + \\int_{a+h}^{a+2h}f(x)dx+\\dots+\\int_{b-h}^{b}f(x)dx.\n$$\n\n* The strategy then is to find a reliable polynomial approximation for $f(x)$ in the various intervals. Choosing a given approximation for $f(x)$, we obtain a specific approximation to the integral.\n\n* With this approximation to $f(x)$ we perform the integration by computing the integrals over all subintervals.\n\n## Simple algorithm for equal step methods\n\nOne possible strategy then is to find a reliable polynomial expansion for $f(x)$ in the smaller\nsubintervals. Consider for example evaluating\n\n$$\n\\int_a^{a+2h}f(x)dx,\n$$\n\nwhich we rewrite as\n\n\n
\n\n$$\n\\begin{equation}\n\\int_a^{a+2h}f(x)dx=\\int_{x_0-h}^{x_0+h}f(x)dx.\n\\label{eq:hhint} \\tag{21}\n\\end{equation}\n$$\n\nWe have chosen a midpoint $x_0$ and have defined $x_0=a+h$.\n\n## The rectangle method\n\nA very simple approach is the so-called midpoint or rectangle method.\nIn this case the integration area is split into a given number of rectangles with length $h$ and height given by the mid-point value of the function. This gives the following simple rule for approximating an integral\n\n\n
\n\n$$\n\\begin{equation}\nI=\\int_a^bf(x) dx \\approx h\\sum_{i=1}^N f(x_{i-1/2}), \n\\label{eq:rectangle} \\tag{22}\n\\end{equation}\n$$\n\nwhere $f(x_{i-1/2})$ is the midpoint value of $f$ for a given rectangle. We will discuss its truncation \nerror below. It is easy to implement this algorithm, as shown below.\n\n## Truncation error for the rectangular rule\n\nThe correct mathematical expression for the local error for the rectangular rule $R_i(h)$ for element $i$ is\n\n$$\n\\int_{-h}^hf(x)dx - R_i(h)=-\\frac{h^3}{24}f^{(2)}(\\xi),\n$$\n\nand the global error reads\n\n$$\n\\int_a^bf(x)dx -R_h(f)=-\\frac{b-a}{24}h^2f^{(2)}(\\xi),\n$$\n\nwhere $R_h$ is the result obtained with the rectangular rule and $\\xi \\in [a,b]$.\n\n## Codes for the Rectangular rule\n\nWe go back to our simple example above and set $F_0=b=1$ and choose $x_0=0$ and $x=1/2$, and have\n\n$$\nW=-\\frac{1}{\\pi}.\n$$\n\nThe code below computes the magnitude $|W|=\\int_0^{1/2}\\sin{(2\\pi x')}dx'=1/\\pi$ using the rectangle rule; with $n=100$ integration points we have a relative error of about\n$10^{-5}$.\n\n\n```\nfrom math import sin, pi\nimport numpy as np\nfrom sympy import Symbol, integrate\n# function for the Rectangular rule \ndef Rectangular(a,b,f,n):\n    h = (b-a)/float(n)\n    s = 0\n    for i in range(0,n,1):\n        x = (i+0.5)*h\n        s = s + f(x)\n    return h*s\n# function to integrate\ndef function(x):\n    return sin(2*pi*x)\n# define integration limits and integration points \na = 0.0; b = 0.5;\nn = 100\nExact = 1./pi\nprint(\"Relative error= \", abs( (Rectangular(a,b,function,n)-Exact)/Exact))\n```\n\n## The trapezoidal rule\n\nAnother possibility is to approximate $f(x)$ by a straight line in each subinterval. For the subinterval above the midpoint $x_0$ this gives\n\n$$\n\\int_{x_0}^{x_0+h}f(x)dx=\\frac{h}{2}\\left(f(x_0+h) + f(x_0)\\right)+O(h^3),\n$$\n\nand the other integral gives\n\n$$\n\\int_{x_0-h}^{x_0}f(x)dx=\\frac{h}{2}\\left(f(x_0) + f(x_0-h)\\right)+O(h^3),\n$$\n\nand adding up we obtain\n\n\n
\n\n$$\n\\begin{equation}\n \\int_{x_0-h}^{x_0+h}f(x)dx=\\frac{h}{2}\\left(f(x_0+h) + 2f(x_0) + f(x_0-h)\\right)+O(h^3),\n\\label{eq:trapez} \\tag{23}\n\\end{equation}\n$$\n\nwhich is the well-known trapezoidal rule. Concerning the error in the approximation made,\n$O(h^3)=O((b-a)^3/N^3)$, you should note \nthat this is the local error. Since we are splitting the integral from\n$a$ to $b$ in $N$ pieces, we will have to perform approximately $N$ \nsuch operations.\n\nThis means that the *global error* goes like $\\approx O(h^2)$. \nThe composite trapezoidal rule then reads\n\n\n
\n\n$$\n\\begin{equation}\n I=\\int_a^bf(x) dx=h\\left(f(a)/2 + f(a+h) +f(a+2h)+\n \\dots +f(b-h)+ f(b)/2\\right),\n\\label{eq:trapez1} \\tag{24}\n\\end{equation}\n$$\n\nwith a global error which goes like $O(h^2)$. \n\nHereafter we use the shorthand notations $f_{-h}=f(x_0-h)$, $f_{0}=f(x_0)$\nand $f_{h}=f(x_0+h)$.\n\n## Error in the trapezoidal rule\n\nThe correct mathematical expression for the local error for the trapezoidal rule is\n\n$$\n\\int_a^bf(x)dx -\\frac{b-a}{2}\\left[f(a)+f(b)\\right]=-\\frac{h^3}{12}f^{(2)}(\\xi),\n$$\n\nand the global error reads\n\n$$\n\\int_a^bf(x)dx -T_h(f)=-\\frac{b-a}{12}h^2f^{(2)}(\\xi),\n$$\n\nwhere $T_h$ is the trapezoidal result and $\\xi \\in [a,b]$.\n\n## Algorithm for the trapezoidal rule\nThe trapezoidal rule is easy to implement numerically \nthrough the following simple algorithm\n\n * Choose the number of mesh points and fix the step length.\n\n * Calculate $f(a)$ and $f(b)$ and multiply with $h/2$.\n\n * Perform a loop over $n=1$ to $n-1$ ($f(a)$ and $f(b)$ are known) and sum up the terms $f(a+h) +f(a+2h)+f(a+3h)+\\dots +f(b-h)$. 
Each step in the loop corresponds to a given value $a+nh$.\n\n * Multiply the final result by $h$ and add $hf(a)/2$ and $hf(b)/2$.\n\n## Trapezoidal Rule\n\nWe use the same function and integrate now using the trapezoidal rule.\n\n\n```\nfrom math import sin, pi\nimport numpy as np\nfrom sympy import Symbol, integrate\n# function for the trapezoidal rule\ndef Trapez(a,b,f,n):\n    h = (b-a)/float(n)\n    s = 0\n    x = a\n    for i in range(1,n,1):\n        x = x+h\n        s = s + f(x)\n    s = 0.5*(f(a)+f(b)) + s\n    return h*s\n# function to integrate\ndef function(x):\n    return sin(2*pi*x)\n# define integration limits and integration points \na = 0.0; b = 0.5;\nn = 100\nExact = 1./pi\nprint(\"Relative error= \", abs( (Trapez(a,b,function,n)-Exact)/Exact))\n```\n\n## Simpson's rule\n\nInstead of using the above first-order polynomial \napproximations for $f$, we attempt to use a second-order polynomial.\nIn this case we need three points in order to define a second-order \npolynomial approximation\n\n$$\nf(x) \\approx P_2(x)=a_0+a_1x+a_2x^2.\n$$\n\nUsing again Lagrange's interpolation formula we have\n\n$$\nP_2(x)=\\frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}y_2+\n \\frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}y_1+\n \\frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)}y_0.\n$$\n\nInserting this formula in the integral of Eq. ([21](#eq:hhint)) we obtain\n\n$$\n\\int_{-h}^{+h}f(x)dx=\\frac{h}{3}\\left(f_h + 4f_0 + f_{-h}\\right)+O(h^5),\n$$\n\nwhich is Simpson's rule.\n\n## Simpson's rule\nNote that the improved accuracy in the evaluation of\nthe derivatives gives a better error approximation, $O(h^5)$ vs. $O(h^3)$.\nBut this is again the *local error approximation*. \nUsing Simpson's rule we can easily compute\nthe integral of Eq. ([20](#eq:integraldef)) to be\n\n\n
\n\n$$\n\\begin{equation}\n I=\\int_a^bf(x) dx=\\frac{h}{3}\\left(f(a) + 4f(a+h) +2f(a+2h)+\n \\dots +4f(b-h)+ f(b)\\right),\n\\label{eq:simpson} \\tag{25}\n\\end{equation}\n$$\n\nwith a global error which goes like $O(h^4)$.\n\n## Mathematical expressions for the truncation error\nMore formal expressions for the local and global errors are, for the local error,\n\n$$\n\\int_a^bf(x)dx -\\frac{b-a}{6}\\left[f(a)+4f((a+b)/2)+f(b)\\right]=-\\frac{h^5}{90}f^{(4)}(\\xi),\n$$\n\nand for the global error\n\n$$\n\\int_a^bf(x)dx -S_h(f)=-\\frac{b-a}{180}h^4f^{(4)}(\\xi),\n$$\n\nwith $\\xi\\in[a,b]$ and $S_h$ the result obtained with Simpson's method.\n\n## Algorithm for Simpson's rule\nThe method \ncan easily be implemented numerically through the following simple algorithm\n\n * Choose the number of mesh points and fix the step.\n\n * Calculate $f(a)$ and $f(b)$\n\n * Perform a loop over $n=1$ to $n-1$ ($f(a)$ and $f(b)$ are known) and sum up the terms $4f(a+h) +2f(a+2h)+4f(a+3h)+\\dots +4f(b-h)$. Each step in the loop corresponds to a given value $a+nh$. 
Odd values of $n$ give $4$ as factor while even values yield $2$ as factor.\n\n * Multiply the final result by $\\frac{h}{3}$.\n\n## Code example\n\n\n```\nfrom math import sin, pi\nimport numpy as np\nfrom sympy import Symbol, integrate\n# function for Simpson's rule; note that n must be an even number\ndef Simpson(a,b,f,n):\n    h = (b-a)/float(n)\n    s = f(a)\n    for i in range(1,n):\n        s = s + f(a+i*h)*(3+(-1)**(i+1))\n    s = s + f(b)\n    return s*h/3.0\n# function to integrate \ndef function(x):\n    return sin(2*pi*x)\n# define integration limits and integration points \na = 0.0; b = 0.5;\nn = 100\nExact = 1./pi\nprint(\"Relative error= \", abs( (Simpson(a,b,function,n)-Exact)/Exact))\n```\n\nWe see that Simpson's rule gives a much smaller relative error with the same number of points as we had for the rectangle rule and the trapezoidal rule.\n\n## Symbolic integration\n\nWe could also use symbolic mathematics. Here Python comes to our rescue with [SymPy](https://www.sympy.org/en/index.html), which is a Python library for symbolic mathematics.\n\nHere's an example of how you could use **SymPy**, where we compare the symbolic calculation with an\nintegration of a function $f(x)$ by the trapezoidal rule.\nHere we show an\nexample code that evaluates the integral\n$\\int_0^1 dx\\, x^2 = 1/3$.\nThe following code for the trapezoidal rule allows you to plot the relative error by comparing with the exact result. 
By increasing the number of integration points one eventually arrives at a region where numerical round-off errors start to accumulate.\n\n\n```\nfrom math import log10\nimport numpy as np\nfrom sympy import Symbol, integrate\nimport matplotlib.pyplot as plt\n# function for the trapezoidal rule\ndef Trapez(a,b,f,n):\n    h = (b-a)/float(n)\n    s = 0\n    x = a\n    for i in range(1,n,1):\n        x = x+h\n        s = s + f(x)\n    s = 0.5*(f(a)+f(b)) + s\n    return h*s\n# function to integrate\ndef function(x):\n    return x*x\n# define integration limits\na = 0.0; b = 1.0;\n# find result from sympy\n# define x as a symbol to be used by sympy\nx = Symbol('x')\nexact = integrate(function(x), (x, a, b))\n# set up the arrays for plotting the relative error\nn = np.zeros(7); y = np.zeros(7);\n# find the relative error as function of integration points\nfor i in range(1, 8, 1):\n    npts = 10**i\n    result = Trapez(a,b,function,npts)\n    RelativeError = abs((exact-result)/exact)\n    n[i-1] = log10(npts); y[i-1] = log10(RelativeError);\nplt.plot(n,y, 'ro')\nplt.xlabel('n')\nplt.ylabel('Relative error')\nplt.show()\n```\n
\n#### Digital Signal Processing (Procesamiento Digital de Se\u00f1ales)\n\n# Practical Assignment No. 0\n#### Name and Surname\n\n\n# Introduction\nJupyter Notebook is a tool for producing technical reports, since it combines in a single environment:\n1. a basic text processor (Markdown format) that lets you emphasize text in *italics* or **bold** in a very readable way (double-clicking on this text shows the Markdown source). It comes with predefined styles:\n\n# Heading 1\n## Heading 2\n### Heading 3\n\nand also the ability to include links to other pages, such as [this page](https://medium.com/ibm-data-science-experience/markdown-for-jupyter-notebooks-cheatsheet-386c05aeebed) where you will find more features of the **Markdown** language;\n\n2. the ability to include LaTeX-style mathematical notation, both in display form\n\n\\begin{equation}\nT(z) = \\frac{Y(z)}{X(z)} = \\frac{ b_2 \\, z^{-2} + b_1 \\, z^{-1} + b_0 }\n{a_2 \\, z^{-2} + a_1 \\, z^{-1} + a_0}\n\\end{equation}\n\nand *inline* within a paragraph $y[k] = \\frac{1}{a_0} \\left( \\sum_{m=0}^{M} b_m \\; x[k-m] - \\sum_{n=1}^{N} a_n \\; y[k-n] \\right) $;\n\n3. the possibility of including Python scripts, such as those we will use for the simulations in the practical assignments of the course. Here we will use *testbench0.py* as an example. 
Once we have tried it out and are sure it works as expected in *Spyder*, we can include the simulation results almost transparently. We only have to add a code cell containing the code, and the results are included directly in this document.\n\n\n```python\n# Modules for Jupyter\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport numpy as np\nimport matplotlib as mpl\n#%% Library initialization\n# Setup inline graphics: we do this so that the size of the output\n# is a bit better suited to the size of the document\nmpl.rcParams['figure.figsize'] = (10,10)\n\nimport matplotlib.pyplot as plt\nimport pdsmodulos as pds\n\n#%% This is related to the presentation of the plots,\n# it is NOT IMPORTANT\nfig_sz_x = 14\nfig_sz_y = 13\nfig_dpi = 80 # dpi\n\nfig_font_family = 'Ubuntu'\nfig_font_size = 16\n\nplt.rcParams.update({'font.size':fig_font_size})\nplt.rcParams.update({'font.family':fig_font_family})\n\n############################################\n#%% The IMPORTANT part starts here         #\n############################################\n\ndef my_testbench( sig_type ):\n\n    # General simulation data\n    fs = 1000.0 # sampling frequency (Hz)\n    N = 1000    # number of samples\n\n    ts = 1/fs   # sampling time\n    df = fs/N   # spectral resolution\n\n    # temporal sampling grid\n    tt = np.linspace(0, (N-1)*ts, N).flatten()\n\n    # frequency sampling grid\n    ff = np.linspace(0, (N-1)*df, N).flatten()\n\n    # Matrix concatenation:\n    # we store the created signals by populating the following empty matrix\n    x = np.array([], dtype=float).reshape(N,0)\n    ii = 0\n\n    # flow-control structures\n    if sig_type['tipo'] == 'senoidal':\n\n        # compute each sinusoid according to its parameters\n        for this_freq in sig_type['frecuencia']:\n            # note that the tuples inside the dictionaries can also be indexed via \"ii\"\n            aux = sig_type['amplitud'][ii] * np.sin( 2*np.pi*this_freq*tt + sig_type['fase'][ii] )\n            # to concatenate horizontally the arrays must have the same number of ROWS\n            x = np.hstack([x, aux.reshape(N,1)] )\n            ii += 1\n\n    elif sig_type['tipo'] == 'ruido':\n\n        # compute each uncorrelated (white) Gaussian noise signal according to its\n        # variance parameter\n        for this_var in sig_type['varianza']:\n            aux = np.sqrt(this_var) * np.random.randn(N,1)\n            # to concatenate horizontally the arrays must have the same number of ROWS\n            x = np.hstack([x, aux] )\n\n        # We can add some extra information to the description programmatically\n        # {0:.3f} means 0: first argument of format\n        # .3f floating-point format with 3 decimals\n        # $ ... $ indicates that we include LaTeX syntax: $\\hat{{\\sigma}}^2$\n        sig_props['descripcion'] = [ sig_props['descripcion'][ii] + ' - $\\hat{{\\sigma}}^2$ :{0:.3f}'.format( np.var(x[:,ii])) for ii in range(0,len(sig_props['descripcion'])) ]\n\n    else:\n\n        print(\"Signal type not implemented.\")\n        return\n\n    #%% Graphical presentation of the results\n\n    plt.figure(1)\n    line_hdls = plt.plot(tt, x)\n    plt.title('Signal: ' + sig_type['tipo'] )\n    plt.xlabel('time [seconds]')\n    plt.ylabel('Amplitude [V]')\n    # plt.grid(which='both', axis='both')\n\n    # show a legend for each type of signal\n    axes_hdl = plt.gca()\n\n    # this kind of syntax is *VERY* Pythonic\n    axes_hdl.legend(line_hdls, sig_type['descripcion'], loc='upper right' )\n\n    plt.show()\n\n```\n\nSince our *testbench* has been developed in a functional style, by calling the function *my_testbench()* with different parameters we can obtain different behaviors, as we show next, first with a sinusoid:\n\n\n```python\nsig_props = { 'tipo': 'senoidal', \n              'frecuencia': (3, 10, 20), # Use of tuples for the frequencies \n              'amplitud': (1, 1, 1),\n              'fase': (0, 0, 0)\n            } \n# We can also add a description field programmatically\n# this kind of syntax is *VERY* Pythonic\nsig_props['descripcion'] = [ str(a_freq) + ' Hz' for a_freq in sig_props['frecuencia'] ]\n \n# We invoke our testbench exclusively: \nmy_testbench( sig_props )\n```\n\nAnd now with a random signal, in this case uncorrelated white Gaussian noise of variance $\\sigma^2$:\n\n\n```python\n# Use CTRL+1 to comment or uncomment the block below.\nsig_props = { 'tipo': 'ruido', \n              'varianza': (1, 1, 1) # Use of tuples for the variances \n            } \nsig_props['descripcion'] = [ '$\\sigma^2$ = ' + str(a_var) for a_var in sig_props['varianza'] ]\n \n# We invoke our testbench exclusively: \nmy_testbench( sig_props )\n\n```\n\nAs can be seen in the figure above, when sampling a statistical distribution with zero mean and variance $\\sigma^2=1$, we obtain realizations whose estimated parameter $\\sigma^2$, that is $\\hat\\sigma^2$, deviates from the true value (bias). We will study the bias and the variance of some estimators when we get to **Spectral Estimation**.\n\n# Once you are done ...\nOnce you have finished preparing the document, we can take advantage of a very important feature of this kind of document: the possibility of sharing it *online* via the [nbviewer page](http://nbviewer.jupyter.org/). For this it is necessary that your notebook and all its associated resources are hosted in a [Github](https://github.com/) repository. 
As an example, you can see this very document available [online](http://nbviewer.jupyter.org/github/marianux/pdstestbench/blob/master/notebook0.ipynb).

Probabilistic Programming
=====
and Bayesian Methods for Hackers 
========

##### Version 0.1

`Original content created by Cam Davidson-Pilon`

`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`
___


Welcome to *Bayesian Methods for Hackers*. 
The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. 


The Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. 

To make this clearer, we consider an alternative interpretation of probability: *frequentists*, practitioners of the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that, across all these realities, the frequency of occurrences defines the probability. 

Bayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. 
Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. 
\n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. 
This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. 
As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. 
Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\")\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to })\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. 
Bayesian inference merely uses it to connect prior probabilities $P(A)$ with updated posterior probabilities $P(A | X )$.

##### Example: Mandatory coin-flip example

Every statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. 

We begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?

Below we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).


```python
%matplotlib inline
%config InlineBackend.figure_format='retina'

from IPython.core.pylabtools import figsize
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
sns.set(font_scale=1.5, style='ticks')
```


```python
"""
The book uses a custom matplotlibrc file, which provides the unique styles for
matplotlib plots. If executing this book, and you wish to use the book's
styling, provided are two options:
    1. Overwrite your own matplotlibrc file with the rc-file provided in the
       book's styles/ dir. See http://matplotlib.org/users/customizing.html
    2. Also in the styles is bmh_matplotlibrc.json file. This can be used to
       update the styles in only this notebook. Try running the following code:

        import json
        s = json.load(open("../styles/bmh_matplotlibrc.json"))
        matplotlib.rcParams.update(s)

"""

# The code below can be passed over, as it is currently not important, plus it
# uses advanced topics we have not covered yet. LOOK AT PICTURE, MICHAEL!
figsize(11, 9)

import scipy.stats as stats

dist = stats.beta
n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
data = stats.bernoulli.rvs(0.5, size=n_trials[-1])
x = np.linspace(0, 1, 100)

# For the already prepared, I'm using the Binomial's conjugate prior.
for k, N in enumerate(n_trials):
    # integer division, so subplot() receives an int under Python 3
    sx = plt.subplot(len(n_trials)//2, 2, k+1)
    plt.xlabel("$p$, probability of heads") \
        if k in [0, len(n_trials)-1] else None
    plt.setp(sx.get_yticklabels(), visible=False)
    heads = data[:N].sum()
    y = dist.pdf(x, 1 + heads, 1 + N - heads)
    plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads))
    plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4)
    plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)

    leg = plt.legend()
    leg.get_frame().set_alpha(0.4)
    plt.autoscale(tight=True)


plt.suptitle("Bayesian updating of posterior probabilities",
             y=1.02,
             fontsize=14)

plt.tight_layout()
```

The posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). 

Notice that the plots are not always *peaked* at 0.5. There is no reason they should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.

The next example is a simple demonstration of the mathematics of Bayesian inference. 
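The conjugate updating behind the plot can also be checked directly: with a uniform $\text{Beta}(1, 1)$ prior, observing $h$ heads in $N$ tosses gives a $\text{Beta}(1 + h,\, 1 + N - h)$ posterior, whose mean $(1 + h)/(2 + N)$ tightens around the true $p$. A minimal sketch (the variable names here are ours, not the book's):

```python
import numpy as np
import scipy.stats as stats

rng = np.random.default_rng(0)
p_true = 0.5
flips = rng.binomial(1, p_true, size=500)  # simulated fair-coin tosses

# Beta(1, 1) prior; by conjugacy, the posterior after N tosses with
# h heads is Beta(1 + h, 1 + N - h), with mean (1 + h) / (2 + N).
for N in (5, 50, 500):
    h = flips[:N].sum()
    post = stats.beta(1 + h, 1 + N - h)
    print(N, post.mean())
```

With only 5 tosses the posterior mean can sit far from 0.5; by 500 tosses it is pinned close to the true value, mirroring the narrowing curves in the figure.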
\n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
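Before plotting, the closed form above is easy to sanity-check numerically. In this small sketch the helper name `posterior_no_bugs` and the parameter `p_pass_given_bug` (our assumed $P(X|\sim A) = 0.5$) are ours, for illustration:

```python
def posterior_no_bugs(p, p_pass_given_bug=0.5):
    # P(A|X) = P(X|A) P(A) / P(X), with P(X|A) = 1:
    #        = p / (p + p_pass_given_bug * (1 - p))
    return p / (p + p_pass_given_bug * (1 - p))

# with P(X | ~A) = 0.5 this reduces to 2p / (1 + p)
print(posterior_no_bugs(0.2))  # a 20% prior becomes 1/3 after the tests pass
```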
\n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...

- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.

- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. they combine the above two categories. 

### Discrete Case
If $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's start with a very useful one. We say $Z$ is *Poisson*-distributed if:

$$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots $$

$\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the *intensity* of the Poisson distribution. 

Unlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0, 1, 2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. 
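The mass function above is easy to verify against `scipy.stats`. A small sketch (the choice of $\lambda = 4.25$ and $k = 3$ is arbitrary, for illustration):

```python
import math
import scipy.stats as stats

lam, k = 4.25, 3

# closed form: lambda^k * exp(-lambda) / k!
by_hand = lam**k * math.exp(-lam) / math.factorial(k)
assert abs(stats.poisson.pmf(k, lam) - by_hand) < 1e-12

# the pmf over all non-negative integers sums to 1
# (numerically, summing enough terms)
total = sum(stats.poisson.pmf(i, lam) for i in range(100))
print(total)  # ~1.0

# and the expected value equals the rate parameter, E[Z] = lambda
print(stats.poisson(lam).mean())
```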
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\");\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. 
But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. 
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nimport pandas as pd\n```\n\n\n```python\ncount_data = np.loadtxt(\"data/txtdata.csv\")\ncount_data = pd.Series(count_data)\nn_count_data = len(count_data)\n```\n\n\n```python\ncount_data\n```\n\n\n\n\n 0 13.0\n 1 24.0\n 2 8.0\n 3 24.0\n 4 7.0\n 5 35.0\n 6 14.0\n 7 11.0\n 8 15.0\n 9 11.0\n 10 22.0\n 11 22.0\n 12 11.0\n 13 57.0\n 14 11.0\n 15 19.0\n 16 29.0\n 17 6.0\n 18 19.0\n 19 12.0\n 20 22.0\n 21 12.0\n 22 18.0\n 23 72.0\n 24 32.0\n 25 9.0\n 26 7.0\n 27 13.0\n 28 19.0\n 29 23.0\n ... 
\n 44 19.0\n 45 70.0\n 46 49.0\n 47 7.0\n 48 53.0\n 49 22.0\n 50 21.0\n 51 31.0\n 52 19.0\n 53 11.0\n 54 18.0\n 55 20.0\n 56 12.0\n 57 35.0\n 58 17.0\n 59 23.0\n 60 17.0\n 61 4.0\n 62 2.0\n 63 31.0\n 64 30.0\n 65 13.0\n 66 27.0\n 67 0.0\n 68 39.0\n 69 37.0\n 70 5.0\n 71 14.0\n 72 13.0\n 73 22.0\n Length: 74, dtype: float64\n\n\n\n\n```python\nfigsize(15,5)\nplt.bar(np.arange(n_count_data), \n count_data, \n alpha=0.8,\n color=\"#348ABD\")\n\nax = plt.gca()\n\ncount_data.rolling(window=5,\n win_type='gaussian',\n min_periods=1,\n center=True).mean(std=2).plot(ax=ax)\n \n\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. 
In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. **A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data.** Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. 
Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC3\n-----\n\nPyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.\n\nWe will model the problem above using PyMC3. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework. \n\nB. 
Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC3 code is easy to read. The only novel thing should be the syntax. Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables.\n\n\n```python\nimport pymc3 as pm\nimport theano.tensor as tt\n\nwith pm.Model() as model:\n alpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\n lambda_1 = pm.Exponential(\"lambda_1\", alpha)\n lambda_2 = pm.Exponential(\"lambda_2\", alpha)\n \n tau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data - 1)\n```\n\n WARNING (theano.configdefaults): install mkl with `conda install mkl-service`: No module named 'mkl'\n\n\nIn the code above, we create the PyMC3 variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC3's *stochastic variables*, so-called because they are treated by the back end as random number generators.\n\n\n```python\nwith model:\n idx = np.arange(n_count_data) # Index\n lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. 
The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`.\n\nNote that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n\n```python\nwith model:\n observation = pm.Poisson(\"obs\", lambda_, observed=count_data)\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword. \n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. 
Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```python\n### Mysterious code to be explained in Chapter 3.\nwith model:\n step = pm.Metropolis()\n trace = pm.sample(10000, tune=5000,step=step)\n```\n\n Multiprocess sampling (2 chains in 2 jobs)\n CompoundStep\n >Metropolis: [tau]\n >Metropolis: [lambda_2]\n >Metropolis: [lambda_1]\n Sampling 2 chains: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 30000/30000 [00:08<00:00, 3526.72draws/s]\n The number of effective samples is smaller than 25% for some parameters.\n\n\n\n```python\nlambda_1_samples = trace['lambda_1']\nlambda_2_samples = trace['lambda_2']\ntau_samples = trace['tau']\n```\n\n\n```python\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", density=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", density=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. 
Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. 
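One way to make the "only three or four days" observation quantitative is to tabulate the posterior mass that the $\tau$ samples assign to each day. The sketch below is self-contained: it uses fabricated stand-in samples (the candidate days and their weights are illustrative assumptions, not the actual trace), but the two `numpy` lines that compute the per-day mass apply verbatim to the real `tau_samples` drawn above.

```python
import numpy as np

# Stand-in for tau_samples; in the notebook these come from the MCMC trace.
# The candidate days and their weights below are illustrative assumptions.
rng = np.random.default_rng(0)
tau_samples = rng.choice([43, 44, 45, 46], p=[0.1, 0.5, 0.3, 0.1], size=20000)

# Posterior probability mass assigned to each candidate switchpoint day
days, counts = np.unique(tau_samples, return_counts=True)
masses = counts / tau_samples.size
for day, mass in zip(days, masses):
    print(f"P(tau = {day}) ~= {mass:.3f}")
```

Summing the printed masses gives 1, as it must for a discrete posterior.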
\n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. \n\n\n```python\n(44 < tau_samples)\n```\n\n\n\n\n array([False, True, True, ..., True, False, False])\n\n\n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = 
(lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\lambda_1$ would have been close in value to $\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\lambda_1$ and $\lambda_2$?\n\n\n```python\n#type your code here.\nprint(lambda_1_samples.mean())\nprint(lambda_2_samples.mean())\n```\n\n 17.753978108380828\n 22.723803317605842\n\n\n2\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `(lambda_2_samples - lambda_1_samples) / lambda_1_samples`. Note that this quantity is very different from `(lambda_2_samples.mean() - lambda_1_samples.mean()) / lambda_1_samples.mean()`.\n\n\n```python\n#type your code here.\nprint( ((lambda_2_samples - lambda_1_samples) / lambda_1_samples).mean() )\n```\n\n 0.28147995232483525\n\n\n3\. What is the mean of $\lambda_1$ **given** that we know $\tau$ is less than 45? That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. 
What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\n#type your code here.\nix = tau_samples < 45\nprint(lambda_1_samples[ix].mean())\n```\n\n 17.746674012065846\n\n\n### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg).\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Salvatier, J, Wiecki TV, and Fonnesbeck C. (2016) Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55 \n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. 
# PHY321: More on Motion and Forces, begin Work and Energy discussion\n\n**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/)**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA and Department of Physics, University of Oslo, Norway\n\nDate: **Feb 3, 2021**\n\nCopyright 1999-2021, [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/). Released under CC Attribution-NonCommercial 4.0 license\n\n## Aims and Overarching Motivation\n\n### Monday\n\nWe discuss various forces and their pertinent equations of motion.\n\nRecommended reading: Taylor 2.1-2.4. Malthe-Sørenssen chapters 6-7 contain many examples.\nWe will cover in particular a falling object in two dimensions with linear air resistance, relevant for homework 3.\n\n### Wednesday\n\nWe discuss other force models, with examples such as the gravitational\nforce and a spring force. See Malthe-Sørenssen chapters 7.3-7.5. We\nstart our discussion of energy and work, see Taylor 4.1.\n\nWe also discuss exercise 5 from homework 2.\n\n### Friday\n\nWe discuss several examples of energy and work. Taylor 4.1-4.3.\n\n## Air Resistance in One Dimension\n\nLast week we considered the motion of a falling object with air\nresistance. Here we look at air resistance both quadratic and linear in the\nvelocity. But first we give a qualitative argument\nabout the mathematical expression for the air resistance we used last\nFriday.\n\nAir resistance tends to scale as the square of the velocity. This is\nin contrast to many problems chosen for textbooks, where it is linear\nin the velocity. The choice of a linear dependence is motivated by\nmathematical simplicity (it keeps the differential equation linear)\nrather than by physics. 
One can see that the force should be quadratic\nin velocity by considering the momentum imparted on the air\nmolecules. If an object sweeps through a volume $dV$ of air in time\n$dt$, the momentum imparted on the air is\n\n
\n\n$$\n\begin{equation}\ndP=\rho_m dV v,\n\label{_auto1} \tag{1}\n\end{equation}\n$$\n\nwhere $v$ is the velocity of the object and $\rho_m$ is the mass\ndensity of the air. If the molecules bounce back rather than simply stop,\nthe term doubles in size. The opposite of this\nmomentum is imparted on the object itself. Geometrically, the\ndifferential volume is\n\n
\n\n$$\n\begin{equation}\ndV=Avdt,\n\label{_auto2} \tag{2}\n\end{equation}\n$$\n\nwhere $A$ is the cross-sectional area and $vdt$ is the distance the\nobject moves in time $dt$.\n\n## Resulting Acceleration\n\nPlugging this into the expression above,\n\n
\n\n$$\n\begin{equation}\nF=-\frac{dP}{dt}=-\rho_m A v^2.\n\label{_auto3} \tag{3}\n\end{equation}\n$$\n\nThis is the force felt by the particle, and it is opposite to its\ndirection of motion. Now, because air doesn't stop when it hits an\nobject, but flows around it as best it can, the actual force is reduced\nby a dimensionless factor $c_W$, called the drag coefficient.\n\n
\n\n$$\n\begin{equation}\nF_{\rm drag}=-c_W\rho_m Av^2,\n\label{_auto4} \tag{4}\n\end{equation}\n$$\n\nand the acceleration is\n\n$$\n\begin{eqnarray}\n\frac{dv}{dt}=-\frac{c_W\rho_mA}{m}v^2.\n\end{eqnarray}\n$$\n\nFor a particle with initial velocity $v_0$, one can separate the $dt$\nto one side of the equation, and move everything with $v$s to the\nother side. We did this in our discussion of simple motion and will not repeat it here.\n\nIn more general terms,\nfor many systems, e.g. an automobile, there are multiple sources of\nresistance. In addition to wind resistance, where the force is\nproportional to $v^2$, there are dissipative effects of the tires on\nthe pavement, and in the axle and drive train. These other forces can\nhave components that scale proportionally to $v$, and components that\nare independent of $v$. Those independent of $v$, e.g. the usual\n$f=\mu_K N$ frictional force you consider in your first physics courses, only set in\nonce the object is actually moving. As speeds become higher, the $v^2$\ncomponents begin to dominate relative to the others. For automobiles\nat freeway speeds, the $v^2$ terms are largely responsible for the\nloss of efficiency. To travel a distance $L$ at fixed speed $v$, the\nenergy/work required to overcome the dissipative forces is $fL$,\nwhich for a force of the form $f=\alpha v^n$ becomes\n\n$$\n\begin{eqnarray}\nW=\int dx~f=\alpha v^n L.\n\end{eqnarray}\n$$\n\nFor $n=0$ the work is\nindependent of speed, but for the wind resistance, where $n=2$,\nslowing down is essential if one wishes to reduce fuel consumption. It\nis also important to consider that engines are designed to be most\nefficient at a chosen range of power output. 
Thus, some cars will get\nbetter mileage at higher speeds (they can be more efficient at 50 mph than at\n5 mph) despite the considerations mentioned above.\n\n## Going Ballistic, Projectile Motion or a Softer Approach, Falling Raindrops\n\nAs an example of Newton's laws we consider projectile motion (or a\nfalling raindrop, or a ball we throw up in the air) with a drag force. Even though air resistance is\nlargely proportional to the square of the velocity, we will consider\nthe drag force to be linear in the velocity, $\boldsymbol{F}=-m\gamma\boldsymbol{v}$,\nfor the purposes of this exercise.\n\nSuch a dependence can be extracted from experimental data for objects moving at low velocities; see for example Malthe-Sørenssen chapter 5.6.\n\nWe will here focus on a two-dimensional problem.\n\n## Two-dimensional falling object\n\nThe acceleration for a projectile moving upwards,\n$\boldsymbol{a}=\boldsymbol{F}/m$, becomes\n\n$$\n\begin{eqnarray}\n\frac{dv_x}{dt}=-\gamma v_x,\\\n\nonumber\n\frac{dv_y}{dt}=-\gamma v_y-g,\n\end{eqnarray}\n$$\n\nand $\gamma$ has dimensions of inverse time.\n\nIf you on the other hand have a falling raindrop, how do these equations change? See for example Figure 2.1 in Taylor.\nLet us stay with a ball which is thrown up in the air at $t=0$.\n\n## Ways of solving these equations\n\nWe will go over two different ways to solve this equation: first\nby direct integration, and second as a differential equation. To\ndo this by direct integration, one simply multiplies both sides of the\nequations above by $dt$, then divides by the appropriate factors so\nthat the $v$s are all on one side of the equation and the $dt$ is on\nthe other. 
For the $x$ motion one finds an easily integrable equation,\n\n$$\n\begin{eqnarray}\n\frac{dv_x}{v_x}&=&-\gamma dt,\\\n\nonumber\n\int_{v_{0x}}^{v_{x}}\frac{dv_x}{v_x}&=&-\gamma\int_0^{t}dt,\\\n\nonumber\n\ln\left(\frac{v_{x}}{v_{0x}}\right)&=&-\gamma t,\\\n\nonumber\nv_{x}(t)&=&v_{0x}e^{-\gamma t}.\n\end{eqnarray}\n$$\n\nThis is very much the result you would have written down\nby inspection. For the $y$-component of the velocity,\n\n$$\n\begin{eqnarray}\n\frac{dv_y}{v_y+g/\gamma}&=&-\gamma dt,\\\n\nonumber\n\ln\left(\frac{v_{y}+g/\gamma}{v_{0y}+g/\gamma}\right)&=&-\gamma t,\\\n\nonumber\nv_{y}(t)&=&-\frac{g}{\gamma}+\left(v_{0y}+\frac{g}{\gamma}\right)e^{-\gamma t}.\n\end{eqnarray}\n$$\n\nWhereas $v_x$ starts at some value and decays\nexponentially to zero, $v_y$ decays exponentially to the terminal\nvelocity, $v_t=-g/\gamma$.\n\n## Solving as differential equations\n\nAlthough this direct integration is simpler than the method we invoke\nbelow, the method below will come in useful for some slightly more\ndifficult differential equations in the future. The differential\nequation for $v_x$ is straightforward to solve. Because it is first\norder there is one arbitrary constant, $A$, and by inspection the\nsolution is\n\n
\n\n$$\n\begin{equation}\nv_x=Ae^{-\gamma t}.\n\label{_auto5} \tag{5}\n\end{equation}\n$$\n\nThe arbitrary constants for equations of motion are usually determined\nby the initial conditions, or more generally boundary conditions. By\ninspection $A=v_{0x}$, the initial $x$ component of the velocity.\n\n## Differential Equations, contd.\n\nThe differential equation for $v_y$ is a bit more complicated due to\nthe presence of $g$. Differential equations where all the terms are\nlinearly proportional to a function, in this case $v_y$, or to\nderivatives of the function, e.g., $v_y$, $dv_y/dt$,\n$d^2v_y/dt^2\cdots$, are called linear differential equations. If\nthere are terms proportional to $v^2$, as would happen if the drag\nforce were proportional to the square of the velocity, the\ndifferential equation is no longer linear. Because this expression\nhas only one derivative in $v$ it is a first-order linear differential\nequation. If a term were added proportional to $d^2v/dt^2$ it would be\na second-order differential equation. In this case we have a term\ncompletely independent of $v$, the gravitational acceleration $g$, and\nthe usual strategy is to first rewrite the equation with all the\nlinear terms on one side of the equal sign,\n\n
\n\n$$\n\begin{equation}\n\frac{dv_y}{dt}+\gamma v_y=-g.\n\label{_auto6} \tag{6}\n\end{equation}\n$$\n\n## Splitting into two parts\n\nNow, the solution to the equation can be broken into two\nparts. Because this is a first-order differential equation we know\nthat there will be one arbitrary constant. Physically, the arbitrary\nconstant will be determined by setting the initial velocity, though it\ncould be determined by setting the velocity at any given time. Like\nmost differential equations, this one is not so much \"solved\". Instead,\none guesses at a form, then shows the guess is correct. For these\ntypes of equations, one first tries to find a single solution,\ni.e. one with no arbitrary constants. This is called the *particular*\nsolution, $y_p(t)$, though it should really be called\n\"a\" particular solution because there are an infinite number of such\nsolutions. One then finds a solution to the *homogeneous* equation,\nwhich is the equation with zero on the right-hand side,\n\n
\n\n$$\n\begin{equation}\n\frac{dv_{y,h}}{dt}+\gamma v_{y,h}=0.\n\label{_auto7} \tag{7}\n\end{equation}\n$$\n\nHomogeneous solutions will have arbitrary constants.\n\nThe particular solution will solve the same equation as the original\ngeneral equation\n\n
\n\n$$\n\begin{equation}\n\frac{dv_{y,p}}{dt}+\gamma v_{y,p}=-g.\n\label{_auto8} \tag{8}\n\end{equation}\n$$\n\nHowever, we don't need to find one with arbitrary constants. Hence, it is\ncalled a **particular** solution.\n\nThe sum of the two,\n\n
\n\n$$\n\begin{equation}\nv_y=v_{y,p}+v_{y,h},\n\label{_auto9} \tag{9}\n\end{equation}\n$$\n\nis a solution of the total equation because of the linear nature of\nthe differential equation. One has now found a *general* solution\nencompassing all solutions, because it both satisfies the general\nequation (like the particular solution), and has an arbitrary constant\nthat can be adjusted to fit any initial condition (like the homogeneous\nsolution). If the equations were not linear, that is if there were terms\nsuch as $v_y^2$ or $v_y\dot{v}_y$, this technique would not work.\n\n## More details\n\nReturning to the example above, the homogeneous solution is the same as\nthat for $v_x$, because there was no gravitational acceleration in\nthat case,\n\n
\n\n$$\n\begin{equation}\nv_{y,h}=Be^{-\gamma t}.\n\label{_auto10} \tag{10}\n\end{equation}\n$$\n\nIn this case a particular solution is one with constant velocity,\n\n
\n\n$$\n\begin{equation}\nv_{y,p}=-g/\gamma.\n\label{_auto11} \tag{11}\n\end{equation}\n$$\n\nNote that this is the terminal velocity of a particle falling from a\ngreat height. The general solution is thus,\n\n
\n\n$$\n\begin{equation}\nv_y=Be^{-\gamma t}-g/\gamma,\n\label{_auto12} \tag{12}\n\end{equation}\n$$\n\nand one can find $B$ from the initial velocity,\n\n
\n\n$$\n\begin{equation}\nv_{0y}=B-g/\gamma,~~~B=v_{0y}+g/\gamma.\n\label{_auto13} \tag{13}\n\end{equation}\n$$\n\nPlugging in the expression for $B$ gives the $y$ motion given the initial velocity,\n\n
\n\n$$\n\begin{equation}\nv_y=(v_{0y}+g/\gamma)e^{-\gamma t}-g/\gamma.\n\label{_auto14} \tag{14}\n\end{equation}\n$$\n\nIt is easy to see that this solution has $v_y=v_{0y}$ when $t=0$ and\n$v_y=-g/\gamma$ when $t\rightarrow\infty$.\n\nOne can also integrate the two equations to find the coordinates $x$\nand $y$ as functions of $t$,\n\n$$\n\begin{eqnarray}\nx&=&\int_0^t dt'~v_{x}(t')=\frac{v_{0x}}{\gamma}\left(1-e^{-\gamma t}\right),\\\n\nonumber\ny&=&\int_0^t dt'~v_{y}(t')=-\frac{gt}{\gamma}+\frac{v_{0y}+g/\gamma}{\gamma}\left(1-e^{-\gamma t}\right).\n\end{eqnarray}\n$$\n\nIf the question were to find the position at a time $t$, we would be\nfinished. However, the more common goal in a projectile\nproblem is to find the range, i.e. the distance $x$ at which $y$\nreturns to zero. For the case without a drag force this was much\nsimpler. The solution for the $y$ coordinate would have been\n$y=v_{0y}t-gt^2/2$. One would solve for $t$ to make $y=0$, which would\nbe $t=2v_{0y}/g$, then plug that value for $t$ into $x=v_{0x}t$ to\nfind $x=2v_{0x}v_{0y}/g=v_0^2\sin(2\theta_0)/g$. One follows the same\nsteps here, except that the expression for $y(t)$ is more\ncomplicated. Setting $y=0$ and solving for the time, we get\n\n
\n\n$$\n\\begin{equation}\n0=-\\frac{gt}{\\gamma}+\\frac{v_{0y}+g/\\gamma}{\\gamma}\\left(1-e^{-\\gamma t}\\right).\n\\label{_auto15} \\tag{15}\n\\end{equation}\n$$\n\nThis cannot be inverted into a simple expression $t=\\cdots$. Such\nexpressions are known as \"transcendental equations\", and are not the\nrare instance, but are the norm. In the days before computers, one\nmight plot the right-hand side of the above graphically as\na function of time, then find the point where it crosses zero.\n\nNow, the most common way to solve for an equation of the above type\nwould be to apply Newton's method numerically. This involves the\nfollowing algorithm for finding solutions of some equation $F(t)=0$.\n\n1. First guess a value for the time, $t_{\\rm guess}$.\n\n2. Calculate $F$ and its derivative, $F(t_{\\rm guess})$ and $F'(t_{\\rm guess})$. \n\n3. Unless you guessed perfectly, $F\\ne 0$, and assuming that $\\Delta F\\approx F'\\Delta t$, one would choose \n\n4. $\\Delta t=-F(t_{\\rm guess})/F'(t_{\\rm guess})$.\n\n5. Now repeat step 1, but with $t_{\\rm guess}\\rightarrow t_{\\rm guess}+\\Delta t$.\n\nIf the $F(t)$ were perfectly linear in $t$, one would find $t$ in one\nstep. Instead, one typically finds a value of $t$ that is closer to\nthe final answer than $t_{\\rm guess}$. One breaks the loop once one\nfinds $F$ within some acceptable tolerance of zero. 
A program to do\nthis will be added shortly.\n\n## Motion in a Magnetic Field\n\n\nAnother example of a velocity-dependent force is magnetism,\n\n$$\n\begin{eqnarray}\n\boldsymbol{F}&=&q\boldsymbol{v}\times\boldsymbol{B},\\\n\nonumber\nF_i&=&q\sum_{jk}\epsilon_{ijk}v_jB_k.\n\end{eqnarray}\n$$\n\nFor a uniform field in the $z$ direction $\boldsymbol{B}=B\hat{z}$, the force can only have $x$ and $y$ components,\n\n$$\n\begin{eqnarray}\nF_x&=&qBv_y\\\n\nonumber\nF_y&=&-qBv_x.\n\end{eqnarray}\n$$\n\nThe differential equations are\n\n$$\n\begin{eqnarray}\n\dot{v}_x&=&\omega_c v_y,\qquad \omega_c\equiv qB/m,\\\n\nonumber\n\dot{v}_y&=&-\omega_c v_x.\n\end{eqnarray}\n$$\n\nOne can solve the equations by taking the time derivative of either equation, then substituting into the other equation,\n\n$$\n\begin{eqnarray}\n\ddot{v}_x&=&\omega_c\dot{v}_y=-\omega_c^2v_x,\\\n\nonumber\n\ddot{v}_y&=&-\omega_c\dot{v}_x=-\omega_c^2v_y.\n\end{eqnarray}\n$$\n\nThe solution to these equations can be seen by inspection,\n\n$$\n\begin{eqnarray}\nv_x&=&A\sin(\omega_ct+\phi),\\\n\nonumber\nv_y&=&A\cos(\omega_ct+\phi).\n\end{eqnarray}\n$$\n\nOne can integrate the velocities to find the positions as functions of time. Absorbing the constants of integration into the coordinates $x_0$ and $y_0$ of the center of the orbit,\n\n$$\n\begin{eqnarray}\nx&=&x_0-\frac{A}{\omega_c}\cos(\omega_ct+\phi),\\\n\nonumber\ny&=&y_0+\frac{A}{\omega_c}\sin(\omega_ct+\phi).\n\end{eqnarray}\n$$\n\nThe trajectory is a circle centered at $x_0,y_0$ with radius $A/\omega_c$, traversed in the clockwise direction.\n\nThe equations of motion for the $z$ motion are\n\n\n
\n\n$$\n\\begin{equation}\n\\dot{v_z}=0,\n\\label{_auto16} \\tag{16}\n\\end{equation}\n$$\n\nwhich leads to\n\n\n
\n\n$$\n\\begin{equation}\nz-z_0=V_zt.\n\\label{_auto17} \\tag{17}\n\\end{equation}\n$$\n\nAdded onto the circle, the motion is helical.\n\nNote that the kinetic energy,\n\n\n
\n\n$$\n\\begin{equation}\nT=\\frac{1}{2}m(v_x^2+v_y^2+v_z^2)=\\frac{1}{2}m(\\omega_c^2A^2+V_z^2),\n\\label{_auto18} \\tag{18}\n\\end{equation}\n$$\n\nis constant. This is because the force is perpendicular to the\nvelocity, so that in any differential time element $dt$ the work done\non the particle $\\boldsymbol{F}\\cdot{dr}=dt\\boldsymbol{F}\\cdot{v}=0$.\n\nOne should think about the implications of a velocity dependent\nforce. Suppose one had a constant magnetic field in deep space. If a\nparticle came through with velocity $v_0$, it would undergo cyclotron\nmotion with radius $R=v_0/\\omega_c$. However, if it were still its\nmotion would remain fixed. Now, suppose an observer looked at the\nparticle in one reference frame where the particle was moving, then\nchanged their velocity so that the particle's velocity appeared to be\nzero. The motion would change from circular to fixed. Is this\npossible?\n\nThe solution to the puzzle above relies on understanding\nrelativity. Imagine that the first observer believes $\\boldsymbol{B}\\ne 0$ and\nthat the electric field $\\boldsymbol{E}=0$. If the observer then changes\nreference frames by accelerating to a velocity $\\boldsymbol{v}$, in the new\nframe $\\boldsymbol{B}$ and $\\boldsymbol{E}$ both change. If the observer moved to the\nframe where the charge, originally moving with a small velocity $v$,\nis now at rest, the new electric field is indeed $\\boldsymbol{v}\\times\\boldsymbol{B}$,\nwhich then leads to the same acceleration as one had before. If the\nvelocity is not small compared to the speed of light, additional\n$\\gamma$ factors come into play,\n$\\gamma=1/\\sqrt{1-(v/c)^2}$. Relativistic motion will not be\nconsidered in this course.\n\n\n\n## Sliding Block tied to a Wall\n\nAnother classical case is that of simple harmonic oscillations, here\nrepresented by a block sliding on a horizontal frictionless\nsurface. The block is tied to a wall with a spring. 
If the spring is\nnot compressed or stretched too far, the force on the block at a given\nposition $x$ is\n\n$$\nF=-kx.\n$$\n\n## Back and Forth, Sliding Block with no friction\n\nThe negative sign means that the force acts to restore the object to an equilibrium position. Newton's equation of motion for this idealized system is then\n\n$$\nm\\frac{d^2x}{dt^2}=-kx,\n$$\n\nor we could rephrase it as\n\n\n
\n\n$$\n\\frac{d^2x}{dt^2}=-\\frac{k}{m}x=-\\omega_0^2x,\n\\label{eq:newton1} \\tag{19}\n$$\n\nwith the angular frequency $\\omega_0^2=k/m$. \n\nWe will derive the above force when we start studying **harmonic oscillations**. \n\n## Final rewrite\n\nWith the position $x(t)$ and the velocity $v(t)=dx/dt$ we can reformulate Newton's equation in the following way\n\n$$\n\\frac{dx(t)}{dt}=v(t),\n$$\n\nand\n\n$$\n\\frac{dv(t)}{dt}=-\\omega_0^2x(t).\n$$\n\nWith initial conditions $x(t_0)=x_0$ and $v(t_0)=v_0$ we can in turn solve the differential equations. \n\n## Analytical Solution\n\nThe above differential equation has the advantage that it can be\nsolved analytically with general solutions on the form\n\n$$\nx(t)=A\\cos{\\omega_0t}+B\\sin{\\omega_0t},\n$$\n\nand\n\n$$\nv(t)=-\\omega_0 A\\sin{\\omega_0t}+\\omega_0 B\\cos{\\omega_0t},\n$$\n\nwhere $A$ and $B$ are constants to be determined from the initial conditions.\n\nThis provides in turn an important test for the numerical solution and\nthe development of a program for more complicated cases which cannot\nbe solved analytically.\n\nWe will discuss the above equations in more detail when we discuss harmonic oscillations.\n\n\n\n## Summarizing the various motion problems 1\n\nThe examples we have discussed above were included in order to\nillustrate various methods (which depend on the specific problem) to\nfind the solutions of the equations of motion.\nWe have solved the equations of motion in the following ways:\n\n**Solve the differential equations analytically.**\n\nWe did this for example with the following object in one or two dimensions or the sliding block. \nHere we had for example an equation set like\n\n$$\n\\frac{dv_x}{dt}=-\\gamma v_x,\n$$\n\nand\n\n$$\n\\frac{dv_y}{dt}=-\\gamma v_y-g,\n$$\n\nand $\\gamma$ has dimension of inverse time.\n\n\n\n\n## Summarizing the various motion problems 2\n\n\n**Integrate the equations.**\n\nWe could also in case we can separate the degrees of freedom integrate. 
Take for example one of the equations in the previous slide\n\n$$\n\frac{dv_x}{dt}=-\gamma v_x,\n$$\n\nwhich we can rewrite in terms of a left-hand side which depends only on the velocity and a right-hand side which depends only on time\n\n$$\n\frac{dv_x}{v_x}=-\gamma dt.\n$$\n\nIntegrating we have (since we can separate $v_x$ and $t$)\n\n$$\n\int_{v_0}^{v_f}\frac{dv_x}{v_x}=-\int_{t_0}^{t_f}\gamma dt,\n$$\n\nwhere $v_f$ is the velocity at the final time $t_f$.\nIntegrating both sides and setting $t_0=0$, we found\n\n$$\nv_f=v_0\exp{(-\gamma t_f)}.\n$$\n\n## Summarizing the various motion problems 3\n\n\n**Solve the differential equations numerically.**\n\n\nFinally, using for example Euler's method, we can solve the\ndifferential equations numerically. If we can compare our numerical\nsolutions with analytical solutions, we have an extra check of our\nnumerical approaches.\n\nThe example code on the next slide is relevant for homework 3. Here we deal with a falling object in two dimensions.
Whereas the derivations above used an\nair resistance which is linear in the velocity, homework 3 uses a quadratic velocity dependence.\n\n\n\n## Code example using Euler's method\n\n**Note**: this code needs some additional expressions and will not run\n\n\n```python\n%matplotlib inline\n\n# Common imports\nimport numpy as np\nimport pandas as pd\nfrom math import *\nimport matplotlib.pyplot as plt\nimport os\nfrom pylab import plt, mpl\nplt.style.use('seaborn')\nmpl.rcParams['font.family'] = 'serif'\n\n# define the gravitational acceleration\ng = 9.80665 # m/s^2\n# The mass and the drag constant D\nD = 0.00245 # mass/length kg/m\nm = 0.2 # kg, mass of falling object\nDeltaT = 0.001\n# set up final time, here just a number we have chosen\ntfinal = 1.0\n# set up number of points for all variables\nn = ceil(tfinal/DeltaT)\n# set up arrays for t, a, v, and y and arrays for analytical results\n# Note the brute force setting up of arrays for x and y, vx, vy, ax and ay\n# For hw3 you should think of using the 2-dim vectors you used in homework 2\nt = np.zeros(n)\nvy = np.zeros(n)\ny = np.zeros(n)\nvx = np.zeros(n)\nx = np.zeros(n)\n# Initial conditions\nvx[0] = 10.0 # m/s\nvy[0] = 0.0 # m/s\ny[0] = 10.0 # m\nx[0] = 0.0 # m\n# Start integrating using Euler's method\nfor i in range(n-1):\n    # expressions for the acceleration; you need to set them up\n    # ax = you need to set up the expression for the force and thereby the acceleration in the x-direction\n    # ay = you need to set up the expression for the force and thereby the acceleration in the y-direction\n    # update velocity and position\n    vx[i+1] = vx[i] + DeltaT*ax\n    x[i+1] = x[i] + DeltaT*vx[i]\n    vy[i+1] = vy[i] + DeltaT*ay\n    y[i+1] = y[i] + DeltaT*vy[i]\n    # update time to next time step and compute analytical answer\n    t[i+1] = t[i] + DeltaT\n    # Here you need to set up the analytical solutions yanalytic(t) and xanalytic(t)\n    # stop the integration when the object hits the ground\n    if y[i+1] < 0.0:\n        break\ndata = {'t[s]': t,\n        'Relative error in y': abs((y-yanalytic)/yanalytic),\n        'vy[m/s]': vy,\n        'Relative error in x': abs((x-xanalytic)/xanalytic),\n        'vx[m/s]': vx\n}\nNewData = pd.DataFrame(data)\ndisplay(NewData)\n# save to file (outfile must be defined first)\nNewData.to_csv(outfile, index=False)\n# then plot\nfig, axs = plt.subplots(4, 1)\naxs[0].plot(t, y)\naxs[0].set_xlim(0, tfinal)\naxs[0].set_ylabel('y')\naxs[1].plot(t, vy)\naxs[1].set_ylabel('vy[m/s]')\naxs[1].set_xlabel('time[s]')\naxs[2].plot(t, x)\naxs[2].set_xlim(0, tfinal)\naxs[2].set_ylabel('x')\naxs[3].plot(t, vx)\naxs[3].set_ylabel('vx[m/s]')\naxs[3].set_xlabel('time[s]')\nfig.tight_layout()\nplt.show()\n```\n\n## Work, Energy, Momentum and Conservation laws\n\nThe previous three cases have shown us how to use Newton\u2019s laws of\nmotion to determine the motion of an object based on the forces acting\non it. For two of the cases there is an underlying assumption that we can find an analytical solution to a continuous problem.\nWith a continuous problem we mean a problem where the various variables can take any value within a finite or infinite interval. \n\nUnfortunately, in many cases we\ncannot find an exact solution to the equations of motion we get from\nNewton\u2019s second law. The numerical approach, where we discretize the continuous problem, allows us however to study a much richer set of problems.\nFor problems involving Newton's laws and the various equations of motion we encounter, solving the equations numerically is the standard approach.\n\nIt allows us to focus on the underlying forces. Often we end up using the same numerical algorithm for different problems.\n\nHere we introduce a commonly used technique that allows us to find the\nvelocity as a function of position without finding the position as a\nfunction of time\u2014an alternate form of Newton\u2019s second law. The method\nis based on a simple principle: instead of solving the equations of\nmotion directly, we integrate the equations of motion. Such a method\nis called an integration method.
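As a concrete illustration of checking a numerical integration against a closed-form result, the sketch below applies Euler's method to the linear-drag equation $dv_y/dt=-\gamma v_y-g$ from earlier and compares with the analytical solution $v_y(t)=(v_{0y}+g/\gamma)e^{-\gamma t}-g/\gamma$. The parameter values are assumptions chosen for illustration:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the text)
g, gamma = 9.81, 0.5
v0y = 10.0
DeltaT = 0.001
n = 2000                     # integrate over 2 seconds

t = np.zeros(n)
v = np.zeros(n)
v[0] = v0y
for i in range(n - 1):
    # Euler step for dv/dt = -gamma*v - g
    v[i+1] = v[i] + DeltaT*(-gamma*v[i] - g)
    t[i+1] = t[i] + DeltaT

# analytical solution derived earlier
v_exact = (v0y + g/gamma)*np.exp(-gamma*t) - g/gamma
max_err = np.max(np.abs(v - v_exact))
print(f"maximum Euler error: {max_err:.2e}")
```

The error shrinks linearly with the step size, as expected for Euler's method.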
\n\nThis allows us also to introduce the **work-energy** theorem. This\ntheorem allows us to find the velocity as a function of position for\nan object even in cases when we cannot solve the equations of\nmotion. This introduces us to the concepts of work and kinetic energy,\nan energy related to the motion of an object.\n\nAnd finally, we will link the work-energy theorem with the principle of conservation of energy.\n\n## The Work-Energy Theorem\n\nLet us define the kinetic energy $K$ of an object with a given velocity $\boldsymbol{v}$ as\n\n$$\nK=\frac{1}{2}mv^2,\n$$\n\nwhere $m$ is the mass of the object we are considering.\nWe assume also that there is a force $\boldsymbol{F}$ acting on the given object\n\n$$\n\boldsymbol{F}=\boldsymbol{F}(\boldsymbol{r},\boldsymbol{v},t),\n$$\n\nwith $\boldsymbol{r}$ the position and $t$ the time.\nIn general we assume the force is a function of all these variables.\nMany of the central forces in Nature, however, depend only on the\nposition. Examples are the gravitational force and the force derived\nfrom the Coulomb potential in electromagnetism.\n\n## Rewriting the Kinetic Energy\n\nLet us study the derivative of the kinetic energy with respect to time $t$.
Its continuous form is\n\n$$\n\frac{dK}{dt}=\frac{1}{2}m\frac{d(\boldsymbol{v}\cdot\boldsymbol{v})}{dt}.\n$$\n\nUsing our results from exercise 3 of homework 1, we can write the derivative of a vector dot product as\n\n$$\n\frac{dK}{dt}=\frac{1}{2}m\frac{d(\boldsymbol{v}\cdot\boldsymbol{v})}{dt}= \frac{1}{2}m\left(\frac{d\boldsymbol{v}}{dt}\cdot\boldsymbol{v}+\boldsymbol{v}\cdot\frac{d\boldsymbol{v}}{dt}\right)=m\frac{d\boldsymbol{v}}{dt}\cdot\boldsymbol{v}.\n$$\n\nWe know also that the acceleration is defined as\n\n$$\n\boldsymbol{a}=\frac{\boldsymbol{F}}{m}=\frac{d\boldsymbol{v}}{dt}.\n$$\n\nWe can then rewrite the equation for the derivative of the kinetic energy as\n\n$$\n\frac{dK}{dt}=m\frac{d\boldsymbol{v}}{dt}\cdot\boldsymbol{v}=\boldsymbol{F}\cdot\frac{d\boldsymbol{r}}{dt},\n$$\n\nwhere we defined the velocity as the derivative of the position with respect to time.\n\n## Discretizing\n\nLet us now discretize the above equation by letting the instantaneous terms be replaced by discrete quantities, that is\nwe let $dK\rightarrow \Delta K$, $dt\rightarrow \Delta t$, $d\boldsymbol{r}\rightarrow \Delta \boldsymbol{r}$ and $d\boldsymbol{v}\rightarrow \Delta \boldsymbol{v}$.\n\nWe have then\n\n$$\n\frac{\Delta K}{\Delta t}=m\frac{\Delta \boldsymbol{v}}{\Delta t}\cdot\boldsymbol{v}=\boldsymbol{F}\cdot\frac{\Delta \boldsymbol{r}}{\Delta t},\n$$\n\nor by multiplying out $\Delta t$ we have\n\n$$\n\Delta K=\boldsymbol{F}\cdot\Delta \boldsymbol{r}.\n$$\n\nWe define this quantity as the **work** done by the force $\boldsymbol{F}$ during the displacement $\Delta \boldsymbol{r}$.
If we study the dimensionality of this quantity, we have mass times length squared divided by time squared, that is, the dimension of energy.\n\n\n## Difference in kinetic energy\n\nIf we now have a series of such displacements $\Delta \boldsymbol{r}_i$, $i=0,1,\dots,n$, we have a difference in kinetic energy between a final position $\boldsymbol{r}_n$ and an \ninitial position $\boldsymbol{r}_0$ given by\n\n$$\n\Delta K=\frac{1}{2}mv_n^2-\frac{1}{2}mv_0^2=\sum_{i=0}^n\boldsymbol{F}_i\cdot\Delta \boldsymbol{r}_i,\n$$\n\nwhere $\boldsymbol{F}_i$ are the forces acting at every position $\boldsymbol{r}_i$.\n\nThe work done by acting with a force on a set of displacements can\nthen be expressed as the difference between the final and initial\nkinetic energies.\n\nThis defines the **work-energy** theorem.\n\n## From the discrete version to the continuous version\n\nIf we take the limit $\Delta \boldsymbol{r}_i\rightarrow 0$, we can rewrite the sum over the various displacements in terms of an integral, that is\n\n$$\n\Delta K=\frac{1}{2}mv_n^2-\frac{1}{2}mv_0^2=\sum_{i=0}^n\boldsymbol{F}_i\cdot\Delta \boldsymbol{r}_i\rightarrow \int_{\boldsymbol{r}_0}^{\boldsymbol{r}_n}\boldsymbol{F}(\boldsymbol{r},\boldsymbol{v},t)\cdot d\boldsymbol{r}.\n$$\n\nThis integral defines a path integral since it will depend on the given path we take between the two end points. We will replace the limits with the symbol $c$ in order to indicate that we take a specific contour in space when the force acts on the system.
That is the work $W_{n0}$ between two points $\boldsymbol{r}_n$ and $\boldsymbol{r}_0$ is labeled as\n\n$$\nW_{n0}=\frac{1}{2}mv_n^2-\frac{1}{2}mv_0^2=\int_{c}\boldsymbol{F}(\boldsymbol{r},\boldsymbol{v},t)\cdot d\boldsymbol{r}.\n$$\n\nNote that if the force is perpendicular to the displacement, then the force does not affect the kinetic energy.\n\nLet us now study some examples of forces and how to find the velocity from the integration over a given path.
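A minimal numerical sketch of the work-energy theorem for the simplest possible path, an object dropped through a constant gravitational force; the mass and drop height are assumptions chosen for illustration:

```python
import numpy as np

# Illustrative values (assumptions)
m, g = 0.2, 9.81
h = 10.0                          # drop height in meters

# path from y_0 = h down to y_n = 0, and the constant force F_y = -m*g
y = np.linspace(h, 0.0, 1001)
Fy = -m*g*np.ones_like(y)

# W = sum_i F_i . Delta r_i, here as a trapezoidal sum over the displacements
W = np.sum(0.5*(Fy[1:] + Fy[:-1])*(y[1:] - y[:-1]))

# change in kinetic energy from the known free-fall speed v^2 = 2*g*h
dK = 0.5*m*(2*g*h) - 0.0
print(W, dK)   # both equal m*g*h
```

The discrete sum reproduces $\Delta K$ exactly here because the force is constant along the path.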
"lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4416730056646256, "lm_q2_score": 0.33458943461801643, "lm_q1q2_score": 0.14777912125136705}} {"text": "# Plotting with Matplotlib\n\n## Prepare for action\n\n\n```\nimport numpy as np\nimport scipy as sp\nimport sympy\n\n# Pylab combines the pyplot functionality (for plotting) with the numpy\n# functionality (for mathematics and for working with arrays) in a single namespace\n# aims to provide a closer MATLAB feel (the easy way). Note that his approach\n# should only be used when doing some interactive quick and dirty data inspection.\n# DO NOT USE THIS FOR SCRIPTS\n#from pylab import *\n\n# the convienient Matplotib plotting interface pyplot (the tidy/right way)\n# use this for building scripts. The examples here will all use pyplot.\nimport matplotlib.pyplot as plt\n\n# for using the matplotlib API directly (the hard and verbose way)\n# use this when building applications, and/or backends\nimport matplotlib as mpl\n```\n\nHow would you like the IPython notebook show your plots? In order to use the\nmatplotlib IPython magic youre IPython notebook should be launched as\n\n ipython notebook --matplotlib=inline\n\nMake plots appear as a pop up window, chose the backend: 'gtk', 'inline', 'osx', 'qt', 'qt4', 'tk', 'wx'\n \n %matplotlib qt\n \nor inline the notebook (no panning, zooming through the plot). Not working in IPython 0.x\n \n %matplotib inline\n \n\n\n```\n# activate pop up plots\n#%matplotlib qt\n# or change to inline plots\n#%matplotlib inline\n%matplotlib\n```\n\n Using matplotlib backend: module://IPython.kernel.zmq.pylab.backend_inline\n\n\n### Matplotlib documentation\n\nFinding your own way (aka RTFM). 
Hint: there is a search box available!\n\n* http://matplotlib.org/contents.html\n\nThe Matplotlib API docs:\n\n* http://matplotlib.org/api/index.html\n\nPyplot, object oriented plotting:\n\n* http://matplotlib.org/api/pyplot_api.html\n* http://matplotlib.org/api/pyplot_summary.html\n\nExtensive gallery with examples:\n\n* http://matplotlib.org/gallery.html\n\n### Tutorials for those who want to start playing\n\nIf reading manuals is too much for you, there is a very good tutorial available here:\n\n* http://nbviewer.ipython.org/github/jrjohansson/scientific-python-lectures/blob/master/Lecture-4-Matplotlib.ipynb\n\nNote that this tutorial uses\n\n    from pylab import *\n\nwhich is usually not advised in more advanced script environments. When using\n \n    import matplotlib.pyplot as plt\n\nyou need to precede all plotting commands as used in the above tutorial with\n \n    plt.\n\n\nGive me more!\n\n[EuroScipy 2012 Matplotlib tutorial](http://www.loria.fr/~rougier/teaching/matplotlib/). Note that here the author uses ```from pylab import * ```.
When using ```import matplotlib.pyplot as plt``` the plotting commands need to be preceded by ```plt.```\n\n\n## Plotting template starting point\n\n\n```\n# some sample data\nx = np.arange(-10,10,0.1)\n```\n\nTo change the default plot configuration values.\n\n\n```\n# create a figure instance, note that figure size is given in inches!\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8,6))\n# set the big title (note alignment relative to figure)\nfig.suptitle(\"suptitle 16, figure alignment\", fontsize=16)\n\n# actual plotting\nax.plot(x, x**2, label=\"label 12\")\n\n\n# set axes title (note alignment relative to axes)\nax.set_title(\"title 14, axes alignment\", fontsize=14)\n\n# axes labels\nax.set_xlabel('xlabel 12')\nax.set_ylabel(r'$y_{\alpha}$ 12')\n\n# legend\nax.legend(fontsize=12, loc=\"best\")\n\npage_width_cm = 13\ndpi = 200\ninch = 2.54 # inch in cm\n# setting global plot configuration using the RC configuration style\nplt.rc('font', family='serif')\nplt.rc('xtick', labelsize=20) # tick labels\nplt.rc('ytick', labelsize=20) # tick labels\nplt.rc('axes', labelsize=20) # axes labels\n# If you don\u2019t need LaTeX, don\u2019t use it. It is slower to plot, and text\n\n# saving the figure in different formats\nfig.savefig('figure-%03i.png' % dpi, dpi=dpi)\nfig.savefig('figure.svg')\nfig.savefig('figure.eps')\n```\n\n\n```\n\n```\n\n\n```\nax.grid(True)\nfig.canvas.draw()\n```\n\n\n```\n# following steps are only relevant when using figures as pop up windows (with %matplotlib qt)\n# to update a figure which has been modified\nfig.canvas.draw()\n# show a figure\nfig.show()\n```\n\n## Exercise\n\nThe current section is about you trying to figure out how to do several plotting features. You should use the previously mentioned resources to find out how to do that.
In many cases, google is your friend!\n\n* add a grid to the plot\n\n\n\n```\nfrom matplotlib.ticker import MultipleLocator  # needed for the locators below\n\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8,6))\nax.plot(x,x**2)\n# Write code to show grid in plot here\nax.set_xlim(0,10)\nax.set_ylim(0,100)\nax.xaxis.set_major_locator(MultipleLocator(1.0))\nax.xaxis.set_minor_locator(MultipleLocator(0.2))\nax.yaxis.set_major_locator(MultipleLocator(10.0))\nax.yaxis.set_minor_locator(MultipleLocator(2))\nax.grid(which='major', axis='x', linewidth=0.75, linestyle='-', color='0.75')\nax.grid(which='minor', axis='x', linewidth=0.25, linestyle='-', color='0.75')\nax.grid(which='major', axis='y', linewidth=0.75, linestyle='-', color='0.75')\nax.grid(which='minor', axis='y', linewidth=0.25, linestyle='-', color='0.75')\n\n# setting global plot configuration using the RC configuration style\nplt.rc('font', family='serif')\nplt.rc('xtick', labelsize=20) # tick labels\nplt.rc('ytick', labelsize=20) # tick labels\nplt.rc('axes', labelsize=20) # axes labels\n\nfig.show()\n```\n\n* change the location of the legend to different places\n\n\n\n```\nplt.plot(x,x**2, label=\"label 12\")\nplt.legend(fontsize=12, loc=\"best\")\nplt.legend(fontsize=12, loc=\"lower left\")\nplt.legend(fontsize=12, loc=\"upper center\")\nplt.legend(fontsize=12, loc=\"right\")\nplt.legend(fontsize=12, loc=\"center\")\nplt.legend(fontsize=12, loc=\"center right\")\nplt.legend(fontsize=12, loc=\"best\")\n\n# setting global plot configuration using the RC configuration style\nplt.rc('font', family='serif')\nplt.rc('xtick', labelsize=20) # tick labels\nplt.rc('ytick', labelsize=20) # tick labels\nplt.rc('axes', labelsize=20) # axes labels\n```\n\n* find a way to control the line type and color, marker type and color, control the frequency of the marks (`markevery`).
See plot options at: http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot \n\n\n\n```\nplt.plot(x,x**2,lw=0.5,ls=\"-\",c=\"red\",marker=\"*\",mec=\"black\",mew=0.2,mfc=\"white\",markevery=10)\n\n# setting global plot configuration using the RC configuration style\nplt.rc('font', family='serif')\nplt.rc('xtick', labelsize=20) # tick labels\nplt.rc('ytick', labelsize=20) # tick labels\nplt.rc('axes', labelsize=20) # axes labels\n```\n\n* add different sub-plots\n\n\n\n```\nfig = plt.figure(figsize=(14,4))\nn_row = 4\nfor i_row in range(n_row):\n    if i_row != 0:\n        ax = fig.add_subplot(1, n_row-1, i_row)\n        ax.plot(x,x**2)\n        ax.set_xlabel('x_'+str(i_row))\n        ax.set_ylabel('y_'+str(i_row))\n\n# setting global plot configuration using the RC configuration style\nplt.rc('font', family='serif')\nplt.rc('xtick', labelsize=20) # tick labels\nplt.rc('ytick', labelsize=20) # tick labels\nplt.rc('axes', labelsize=20) # axes labels\n\nplt.tight_layout()\n\n```\n\n* size the figure such that when included on an A4 page the fonts are given in their true size\n\n\n\n```\n# Direct input \nplt.rcParams['text.latex.preamble']=[r\"\usepackage{lmodern}\"]\n# Options\nparams = {'text.usetex' : True,\n          'font.size' : 11,\n          'font.family' : 'lmodern',\n          'text.latex.unicode': True,\n          }\nplt.rcParams.update(params) \n\n\n# setting global plot configuration using the RC configuration style\nplt.rc('font', family='serif')\nplt.rc('xtick', labelsize=20) # tick labels\nplt.rc('ytick', labelsize=20) # tick labels\nplt.rc('axes', labelsize=20) # axes labels\n\nfig = plt.figure(figsize=(14,4))\nfig.subplots_adjust(bottom=0.025, left=0.025, top = 0.975, right=0.975)\nn_row = 4\nfor i_row in range(n_row):\n    if i_row != 0:\n        ax = fig.add_subplot(1, n_row-1, i_row)\n        ax.plot(x,x**2)\n        ax.set_xlabel('x_'+str(i_row))\n        ax.set_ylabel('y_'+str(i_row))\n\nplt.tight_layout()\n\n```\n\n* make a contour plot\n\n\n\n```\nX, Y = np.meshgrid(x,x)\n\nplt.contourf(X, Y, X*(Y**2), 10, alpha=.75, cmap=plt.cm.hot)\nC = plt.contour(X, Y, X*(Y**2), 10, colors='black', linewidth=.5)\nplt.clabel(C, inline=1, fontsize=10)\n\nplt.xticks([]), plt.yticks([])\nplt.show()\n```\n\n* use twinx() to create a second axis on the right for the second plot\n\n\n\n```\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8,6))\nax.plot(x,x**2)\nax2 = ax.twinx() \nax2.plot(x,x**4, 'r')\n```\n\n* add horizontal and vertical lines using axvline(), axhline()\n\n\n\n```\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8,6))\nax.plot(x,x**2)\nax = plt.gca()\nax.axvline(x=0, c='r')\nax.axhline(y=50, c='r')\n\n```\n\n* autoformat dates for nice printing on the x-axis using fig.autofmt_xdate()\n\n\n```\nimport datetime\ndates = np.array([datetime.datetime.now() + datetime.timedelta(days=i) for i in range(24)])\nx = np.array([(i)**2 for i in range(24)])\nfig, ax = plt.subplots(nrows=1, ncols=1)\nax.plot(dates,x)\nfig.autofmt_xdate(bottom=0.2, rotation=90)\n```\n\n## Advanced exercises\n\nWe are going to play a bit with regression\n\n* Create a vector x of equally spaced numbers between $x \in [0, 5\pi]$ of 1000 points (keyword: linspace)\n\n\n```\nx = np.linspace(0, 5*np.pi, 1000)\n```\n\n* create a vector y, so that y=sin(x) with some random noise\n\n\n```\ny = np.sin(x) + np.random.rand(x.size)\n```\n\n* plot it like this: \n\n\n```\nplt.plot(x,y,lw=0.5,ls=\"\",c=\"black\",marker=\"o\",mec=\"blue\",mew=1,mfc=\"white\")\n\nplt.legend(fontsize=12, loc=\"best\")\n\n# setting global plot configuration using the RC configuration style\nplt.rc('font', family='serif')\nplt.rc('xtick', labelsize=20) # tick labels\nplt.rc('ytick', labelsize=20) # tick labels\nplt.rc('axes', labelsize=20) # axes labels\n```\n\nTry to do a polynomial fit on y(x) with different polynomial degree (Use numpy.polyfit to obtain coefficients)\n\nPlot it like this (use np.poly1d(coef)(x) to plot polynomials) \n\n\n\n```\nz = {}\ny_fit = {}\nfor i in range(10):\n    z[i] = np.polyfit(x, y, i)\n    p = np.poly1d(z[i])\n    y_fit[i] = p(x)
\n```\n\n\n```\nfig = plt.figure(figsize=(10,4))\nax = plt.subplot(111)\nax.plot(x,y,lw=0.5,ls=\"\",c=\"black\",marker=\"o\",mec=\"blue\",mew=0.5,mfc=\"white\",label=\"noisy\")\np = {}\nfor i in range(10):\n    ax.plot(x,y_fit[i],lw=1, label=\"line \"+str(i))\n\n# setting global plot configuration using the RC configuration style\nplt.rc('font', family='serif')\nplt.rc('xtick', labelsize=20) # tick labels\nplt.rc('ytick', labelsize=20) # tick labels\nplt.rc('axes', labelsize=20) # axes labels\n\n# Shrink current axis by 20%\nbox = ax.get_position()\nax.set_position([box.x0, box.y0, box.width * 0.8, box.height])\n\n# Put a legend to the right of the current axis\nax.legend(loc='center left', bbox_to_anchor=(1, 0.5)) \n```\n
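As a hypothetical follow-up to the fitting exercise above (not part of the original exercise), one can quantify how the fit improves with the polynomial degree through the root-mean-square residual; since the least-squares spaces are nested, the residual cannot increase with the degree:

```python
import numpy as np

# regenerate the noisy data with a fixed seed so the numbers are reproducible
x = np.linspace(0, 5*np.pi, 1000)
rng = np.random.default_rng(1)
y = np.sin(x) + rng.random(x.size)

rms = {}
for deg in (1, 3, 5, 9):
    coef = np.polyfit(x, y, deg)
    resid = y - np.poly1d(coef)(x)
    rms[deg] = np.sqrt(np.mean(resid**2))
    print(f"degree {deg}: rms residual {rms[deg]:.3f}")
```

The residual eventually levels off at the noise floor set by the added random noise.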
"max_forks_count": 24, "max_forks_repo_forks_event_min_datetime": "2015-06-26T14:44:07.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-07T18:36:52.000Z", "avg_line_length": 463.6860606061, "max_line_length": 91693, "alphanum_fraction": 0.9290455141, "converted": true, "num_tokens": 3014, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.48047867804790706, "lm_q2_score": 0.30735801686526387, "lm_q1q2_score": 0.1476789736308483}} {"text": "\n# Computational Physics Lectures: Introduction to programming (C++ and Fortran)\n\n **Morten Hjorth-Jensen**, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University\n\nDate: **2016**\n\n## Extremely useful tools, strongly recommended\n\n**and discussed at the lab sessions the first two weeks.**\n\n * GIT for version control, discussed at the lab this week (and next week as well)\n\n * ipython notebook, mentioned this week \n\n * QTcreator for editing and mastering computational projects\n\n * Armadillo as a useful numerical library for C++, highly recommended\n\n * Unit tests\n\n\n\n## A structured programming approach\n\n * Before writing a single line, have the algorithm clarified and understood. It is crucial to have a logical structure of e.g., the flow and organization of data before one starts writing.\n\n * Always try to choose the simplest algorithm. Computational speed can be improved upon later.\n\n * Try to write a as clear program as possible. Such programs are easier to debug, and although it may take more time, in the long run it may save you time. If you collaborate with other people, it reduces spending time on debuging and trying to understand what the codes do. 
A clear program will also allow you to remember better what the program really does!\n\n\n\n## A structured programming approach\n\n * The planning of the program should proceed from top to bottom, trying to keep the flow as linear as possible. Avoid jumping back and forth in the program. First you need to arrange the major tasks to be achieved. Then try to break the major tasks into subtasks. These can be represented by functions or subprograms. They should accomplish limited tasks and as far as possible be independent of each other. That will allow you to use them in other programs as well.\n\n * Always try to find some cases where an analytical solution exists or where simple test cases can be applied. If possible, devise different algorithms for solving the same problem. If you get the same answers, you may have coded things correctly or made the same error twice or more.\n\n\n\n## Getting Started\n\n**Compiling and linking, without QTcreator.**\n\nIn order to obtain an executable file for a C++ program, the following\ninstructions under Linux/Unix can be used\n\n c++ -c -Wall myprogram.cpp\n c++ -o myprogram myprogram.o\n\n\nwhere the compiler is called through the command c++/g++. The compiler\noption -Wall makes the compiler issue warnings about questionable or\nnon-standard constructs. The executable file is in this case `myprogram`. 
The option\n`-c` is for compilation only, where the program is translated into machine code,\nwhile the `-o` option links the produced object file `myprogram.o`\nand produces the executable `myprogram`.\n\nFor Fortran2008 we use the Intel compiler, replace `c++` with `ifort`.\nAlso, to speed up the code use compile options like\n\n c++ -O3 -c -Wall myprogram.cpp\n\n\n## Makefiles and simple scripts\n\nUnder Linux/Unix it is often convenient to create a\nso-called makefile, which is a script which includes possible\ncompiling commands.\n\n # Comment lines\n # General makefile for c - choose PROG = name of given program\n # Here we define compiler option, libraries and the target\n CC= g++ -Wall\n PROG= myprogram\n # this is the math library in C, not necessary for C++\n LIB = -lm\n # Here we make the executable file\n ${PROG} : ${PROG}.o\n ${CC} ${PROG}.o ${LIB} -o ${PROG}\n # whereas here we create the object file\n ${PROG}.o : ${PROG}.c\n ${CC} -c ${PROG}.c\n\n\nIf you name your file `makefile`, simply type the command\n`make` and Linux/Unix executes all of the statements in the above\nmakefile. Note that C++ files have the extension `.cpp`.\n\n## [Hello world](https://github.com/CompPhysics/ComputationalPhysicsMSU/blob/master/doc/Programs/LecturePrograms/programs/IntroProgramming/cpp/program1.cpp)\n\n**The C encounter.**\n\nHere we present first the C version.\n\n /* comments in C begin like this and end with */\n #include <stdlib.h> /* atof function */\n #include <math.h> /* sine function */\n #include <stdio.h> /* printf function */\n int main (int argc, char* argv[])\n {\n double r, s; /* declare variables */\n r = atof(argv[1]); /* convert the text argv[1] to double */\n s = sin(r);\n printf(\"Hello, World! 
sin(%g)=%g\\n\", r, s);\n return 0; /* successful execution of the program */\n }\n\n\n## Hello World, dissecting the code\n\n**Dissection I.**\n\nThe compiler must see a declaration of a function before you can\ncall it (the compiler checks the argument and return types).\nThe declaration of library functions appears\nin so-called \"header files\" that must be included in the program, e.g.,\n\n #include <stdlib.h> /* atof function */\n\n\nWe call three functions (atof, sin, printf)\nand these are declared in three different header files.\nThe main program is a function called main\nwith a return value set to an integer, int (0 if success).\nThe operating system stores the return value,\nand other programs/utilities can check whether\nthe execution was successful or not.\nThe command-line arguments are transferred to the main function through\n\n int main (int argc, char* argv[])\n\n\n## Hello World, more dissection\n\n**Dissection II.**\n\nThe command-line arguments are transferred to the main function through\n\n int main (int argc, char* argv[])\n\n\nThe integer `argc` is the number of command-line arguments, while\n`argv` is a vector of strings containing the command-line arguments\nwith `argv[0]` containing the name of the program\nand `argv[1]`, `argv[2]`, ... being the command-line arguments proper, i.e., the\nlines of input to the program.\nHere we define floating points, see also below,\nthrough the keywords `float` for single precision real numbers and\n`double` for double precision. 
The function\n`atof` transforms a text (`argv[1]`) to a float.\nThe sine function is declared in math.h, a library which\nis not automatically included and needs to be linked when computing\nan executable file.\n\nWith the command `printf` we obtain a formatted printout.\nThe `printf` syntax is used for formatting output\nin many C-inspired languages (Perl, Python, Awk, partly C++).\n\n\n\n## Hello World\n\n**Now in C++.**\n\nHere we present first the C++ version.\n\n // A comment line begins like this in C++ programs\n // Standard ANSI-C++ include files\n #include <iostream> // input and output\n #include <cmath> // math functions\n #include <cstdlib> // atof function\n using namespace std;\n int main (int argc, char* argv[])\n {\n // convert the text argv[1] to double using atof:\n double r = atof(argv[1]); /* convert the text argv[1] to double */\n double s = sin(r);\n cout << \"Hello, World! sin(\" << r << \") = \" << s << endl;\n return 0; /* successful execution of the program */\n }\n\n\n## C++ Hello World\n\n**Dissection I.**\n\nWe have replaced the call to `printf` with the standard C++ output stream\n`cout`. 
The header file `<iostream>` is then needed.\nIn addition, we don't need to\ndeclare variables like `r` and `s` at the beginning of the program.\nI personally prefer\nhowever to declare all variables at the beginning of a function, as this\ngives *me* a feeling of greater readability.\n\n\n\n## Brief summary\n\n**C/C++ program.**\n\n * A C/C++ program begins with include statements of header files (libraries, intrinsic functions, etc.)\n\n * Functions which are used are normally defined at top (details next week)\n\n * The main program is set up as an integer, it returns 0 (everything correct) or 1 (something went wrong)\n\n * Standard `if`, `while` and `for` statements as in Java, Fortran, Python...\n\n * Integers have a very limited range.\n\n\n\n## Brief summary\n\n**Arrays.**\n\n * A C/C++ array begins by indexing at 0!\n\n * Array allocations are done by size, not by the final index value. If you allocate an array with 10 elements, you should index them from $0,1,\\dots, 9$.\n\n * Always initialize an array before a computation.\n\n\n\n## Serious problems and representation of numbers\n\n**Integer and Real Numbers.**\n\n\n * Overflow\n\n * Underflow\n\n * Roundoff errors\n\n * Loss of precision\n\n\n\n## Limits, you must declare variables\n\n\n**C++ and Fortran declarations.**\n\n\n
| type in C/C++ and Fortran2008 | bits | range |
|-------------------------------|------|-------|
| int/INTEGER (2)               | 16   | -32768 to 32767 |
| unsigned int                  | 16   | 0 to 65535 |
| signed int                    | 16   | -32768 to 32767 |
| short int                     | 16   | -32768 to 32767 |
| unsigned short int            | 16   | 0 to 65535 |
| signed short int              | 16   | -32768 to 32767 |
| int/long int/INTEGER (4)      | 32   | -2147483648 to 2147483647 |
| signed long int               | 32   | -2147483648 to 2147483647 |
| float/REAL(4)                 | 32   | $3.4\\times 10^{-44}$ to $3.4\\times 10^{+38}$ |
| double/REAL(8)                | 64   | $1.7\\times 10^{-322}$ to $1.7\\times 10^{+308}$ |
| long double                   | 64   | $1.7\\times 10^{-322}$ to $1.7\\times 10^{+308}$ |
\n\n\n\n\n## From decimal to binary representation\n**How to do it.**\n\n$$\na_n2^n+a_{n-1}2^{n-1} +a_{n-2}2^{n-2} +\\dots +a_{0}2^{0}.\n$$\n\nIn binary notation we have thus $(417)_{10} =(110100001)_2$\nsince we have\n\n$$\n\\begin{align*}\n(110100001)_2\n&=1\\times 2^{8}+1\\times 2^{7}+0\\times 2^{6}+1\\times 2^{5}+0\\times 2^{4}\\\\\n&+0\\times 2^{3}+0\\times 2^{2}+0\\times 2^{1}+1\\times 2^{0}.\n\\end{align*}\n$$\n\n## From decimal to binary representation, the actual operation\nTo see this, we have performed the following divisions by 2\n\n\n
| division  | remainder | coefficient |
|-----------|-----------|-------------|
| 417/2=208 | 1         | coefficient of $2^{0}$ is 1 |
| 208/2=104 | 0         | coefficient of $2^{1}$ is 0 |
| 104/2=52  | 0         | coefficient of $2^{2}$ is 0 |
| 52/2=26   | 0         | coefficient of $2^{3}$ is 0 |
| 26/2=13   | 0         | coefficient of $2^{4}$ is 0 |
| 13/2=6    | 1         | coefficient of $2^{5}$ is 1 |
| 6/2=3     | 0         | coefficient of $2^{6}$ is 0 |
| 3/2=1     | 1         | coefficient of $2^{7}$ is 1 |
| 1/2=0     | 1         | coefficient of $2^{8}$ is 1 |
\n\n\n\n## [From decimal to binary representation](https://github.com/CompPhysics/ComputationalPhysicsMSU/blob/master/doc/Programs/LecturePrograms/programs/IntroProgramming/cpp/program2.cpp)\n\n**Integer numbers.**\n\n #include <iostream>\n #include <iomanip>\n #include <cmath>\n #include <cstdlib>\n using namespace std;\n int main (int argc, char* argv[])\n {\n int i; \n int terms[32]; // storage of a0, a1, etc, up to 32 bits\n int save;\n int number = atoi(argv[1]); \n save = number;\n // initialise the term a0, a1 etc\n for (i=0; i < 32 ; i++){ terms[i] = 0;}\n for (i=0; i < 32 ; i++){ \n terms[i] = number%2;\n number /= 2;\n }\n // write out results\n cout << \"Number of bytes used= \" << sizeof(number) << endl;\n for (i=0; i < 32 ; i++){ \n cout << \" Term nr: \" << i << \" Value= \" << terms[i];\n cout << endl;\n }\n return 0; \n }\n\n\n## [From decimal to binary representation](https://github.com/CompPhysics/ComputationalPhysicsMSU/blob/master/doc/Programs/LecturePrograms/programs/IntroProgramming/Fortran/program2.f90)\n\n**Integer numbers, Fortran.**\n\n PROGRAM binary_integer\n IMPLICIT NONE\n INTEGER i, number, terms(0:31) ! storage of a0, a1, etc, up to 32 bits\n \n WRITE(*,*) 'Give a number to transform to binary notation'\n READ(*,*) number\n ! Initialise the terms a0, a1 etc\n terms = 0\n ! Fortran takes only integer loop variables\n DO i=0, 31\n terms(i) = MOD(number,2)\n number = number/2\n ENDDO\n ! 
write out results\n WRITE(*,*) 'Binary representation '\n DO i=0, 31\n WRITE(*,*)' Term nr and value', i, terms(i)\n ENDDO\n \n END PROGRAM binary_integer\n\n\n## [Representing Integer Numbers](https://github.com/CompPhysics/ComputationalPhysicsMSU/blob/master/doc/Programs/LecturePrograms/programs/IntroProgramming/cpp/program3.cpp)\n\n**Possible Overflow for Integers.**\n\n // A comment line begins like this in C++ programs\n // Program to calculate 2**n\n // Standard ANSI-C++ include files\n #include <iostream>\n #include <cmath>\n using namespace std;\n int main()\n {\n int int1, int2, int3;\n // print to screen\n cout << \"Read in the exponential N for 2^N =\" << endl;\n // read from screen\n cin >> int2;\n int1 = (int) pow(2., (double) int2);\n cout << \" 2^N * 2^N = \" << int1*int1 << endl;\n int3 = int1 - 1;\n cout << \" 2^N*(2^N - 1) = \" << int1 * int3 << endl;\n cout << \" 2^N- 1 = \" << int3 << endl;\n return 0;\n } // End: program main()\n\n\n## Loss of Precision\n\n**Machine Numbers.**\n\nIn the decimal system we would write a number like $9.90625$\nin what is called the normalized scientific notation.\n\n$$\n9.90625=0.990625\\times 10^{1},\n$$\n\nand a real non-zero number could be generalized as\n\n\n
\n\n$$\n\\begin{equation}\n x=\\pm r\\times 10^{{\\mbox{n}}},\n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\nwith $r$ a number in the range $1/10 \\le r < 1$.\nIn a similar way we can represent a binary number in\nscientific notation as\n\n\n
\n\n$$\n\\begin{equation}\n x=\\pm q\\times 2^{{\\mbox{m}}},\n\\label{_auto2} \\tag{2}\n\\end{equation}\n$$\n\nwith $q$ a number in the range $1/2 \\le q < 1$.\nThis means that the mantissa of a binary number would be represented by\nthe general formula\n\n\n
\n\n$$\n\\begin{equation}\n(0.a_{-1}a_{-2}\\dots a_{-n})_2=a_{-1}\\times 2^{-1}\n+a_{-2}\\times 2^{-2}+\\dots+a_{-n}\\times 2^{-n}.\n\\label{_auto3} \\tag{3}\n\\end{equation}\n$$\n\n## Loss of Precision\n\n**Machine Numbers.**\n\nIn a typical computer, floating-point numbers are represented\nin the way described above, but with certain restrictions\non $q$ and $m$ imposed by the available word length.\nIn the machine, our\nnumber $x$ is represented as\n\n\n
\n\n$$\n\\begin{equation}\n x=(-1)^s\\times {\\mbox{mantissa}}\\times 2^{{\\mbox{exponent}}},\n\\label{_auto4} \\tag{4}\n\\end{equation}\n$$\n\nwhere $s$ is the sign bit, and the exponent gives the available range.\nWith a single-precision word, 32 bits, 8 bits would typically be reserved\nfor the exponent, 1 bit for the sign and 23 for the mantissa.\n\n\n\n## Loss of Precision\n\n**Machine Numbers.**\n\nA modification of the scientific notation for binary numbers is to\nrequire that the leading binary digit 1 appears to the left of the binary point.\nIn this case the representation of the mantissa $q$ would be\n$(1.f)_2$ and $ 1 \\le q < 2$. This form is rather useful when storing\nbinary numbers in a computer word, since we can always assume that the leading\nbit 1 is there. One bit of space can then be saved meaning that a 23 bits\nmantissa has actually 24 bits. This means explicitely that a binary number with 23 bits\nfor the mantissa reads\n\n\n
\n\n$$\n\\begin{equation}\n(1.a_{-1}a_{-2}\\dots a_{-23})_2=1\\times 2^0+a_{-1}\\times 2^{-1}\n+a_{-2}\\times 2^{-2}+\\dots+a_{-23}\\times 2^{-23}.\n\\label{_auto5} \\tag{5}\n\\end{equation}\n$$\n\n## Loss of Precision, example\nAs an example, consider the 32-bit binary number\n\n$$\n(10111110111101000000000000000000)_2,\n$$\n\nwhere the first bit is reserved for the sign, 1 in this case yielding a\nnegative sign. The exponent $m$ is given by the next 8 binary numbers\n$01111101$ resulting in 125 in the decimal system.\n\n\n\n## Loss of Precision\n\n**Machine Numbers.**\n\nSince the exponent has eight bits, it can represent $2^8=256$ numbers.\nWith the stored exponent interpreted with a bias of 127, our final\nexponent is $125-127=-2$ resulting in $2^{-2}$.\nInserting the sign and the mantissa yields the final number in the decimal representation as\n\n$$\n-2^{-2}\\left(1\\times 2^0+1\\times 2^{-1}+\n1\\times 2^{-2}+1\\times 2^{-3}+0\\times 2^{-4}+1\\times 2^{-5}\\right)=\n$$\n\n$$\n(-0.4765625)_{10}.\n$$\n\nIn this case we have an exact machine representation with 32 bits (actually, we need less than\n23 bits for the mantissa).\n\n\n\n## Loss of Precision, consequences\nIf our number $x$ can be exactly represented in the machine, we call\n$x$ a machine number. Unfortunately, most numbers cannot and are thereby\nonly approximated in the machine. When such a number occurs as the result\nof reading some input data or of a computation, an inevitable error\nwill arise in representing it as accurately as possible by\na machine number.\n\n\n\n## Loss of Precision\n\n**Machine Numbers.**\n\nA floating number x, labelled $fl(x)$ will therefore always be represented as\n\n\n
\n\n$$\n\\begin{equation}\n fl(x) = x(1\\pm \\epsilon_x),\n\\label{_auto6} \\tag{6}\n\\end{equation}\n$$\n\nwith $x$ the exact number and the error $|\\epsilon_x| \\le |\\epsilon_M|$, where\n$\\epsilon_M$ is the precision assigned. A number like $1/10$ has no exact binary representation\nwith single or double precision. Since the mantissa\n\n$$\n\\left(1.a_{-1}a_{-2}\\dots a_{-n}\\right)_2\n$$\n\nis always truncated at some stage $n$ due to its limited number of bits, there is only a\nlimited number of real binary numbers. The spacing between every real binary number is given by the\nchosen machine precision.\nFor a 32 bit word this number is approximately\n$ \\epsilon_M \\sim 10^{-7}$ and for double precision (64 bits) we have\n$ \\epsilon_M \\sim 10^{-16}$, or in terms of a binary base\nas $2^{-23}$ and $2^{-52}$ for single and double precision, respectively.\n\n\n\n## Loss of Precision\n\n**Machine Numbers.**\n\nIn the machine a number is represented as\n\n\n
\n\n$$\n\\begin{equation}\n fl(x)= x(1+\\epsilon)\n\\label{_auto7} \\tag{7}\n\\end{equation}\n$$\n\nwhere $|\\epsilon| \\leq \\epsilon_M$ and $\\epsilon$ is given by the\nspecified precision, $10^{-7}$ for single and $10^{-16}$ for double\nprecision, respectively.\n$\\epsilon_M$ is the given precision.\nIn case of a subtraction $a=b-c$, we have\n\n\n
\n\n$$\n\\begin{equation}\n fl(a)=fl(b)-fl(c) = a(1+\\epsilon_a),\n\\label{_auto8} \\tag{8}\n\\end{equation}\n$$\n\nor\n\n\n
\n\n$$\n\\begin{equation}\n fl(a)=b(1+\\epsilon_b)-c(1+\\epsilon_c),\n\\label{_auto9} \\tag{9}\n\\end{equation}\n$$\n\n## Loss of Precision\nThe above means that\n\n\n
\n\n$$\n\\begin{equation}\n fl(a)/a=1+\\epsilon_b\\frac{b}{a}- \\epsilon_c\\frac{c}{a},\n\\label{_auto10} \\tag{10}\n\\end{equation}\n$$\n\nand if $b\\approx c$ we see that there is a potential for an increased\nerror in $fl(a)$.\n\n\n\n## Loss of Precision\n\n**Machine Numbers.**\n\nDefine\nthe absolute error as\n\n\n
\n\n$$\n\\begin{equation}\n |fl(a)-a|,\n\\label{_auto11} \\tag{11}\n\\end{equation}\n$$\n\nwhereas the relative error is\n\n\n
\n\n$$\n\\begin{equation}\n \\frac{ |fl(a)-a|}{a} \\le \\epsilon_a.\n\\label{_auto12} \\tag{12}\n\\end{equation}\n$$\n\n## Loss of Precision\nThe above subtraction is thus\n\n\n
\n\n$$\n\\begin{equation}\n \\frac{ |fl(a)-a|}{a}=\\frac{ |fl(b)-fl(c)-(b-c)|}{a},\n\\label{_auto13} \\tag{13}\n\\end{equation}\n$$\n\nyielding\n\n\n
\n\n$$\n\\begin{equation}\n \\frac{ |fl(a)-a|}{a}=\\frac{ |b\\epsilon_b- c\\epsilon_c|}{a}.\n\\label{_auto14} \\tag{14}\n\\end{equation}\n$$\n\nThe relative error\nis the quantity of interest in scientific work. Information about the\nabsolute error is normally of little use in the absence of the magnitude\nof the quantity being measured.\n\n\n\n## Loss of numerical precision\n\nSuppose we wish to evaluate the function\n\n$$\nf(x)=\\frac{1-\\cos(x)}{\\sin(x)},\n$$\n\nfor small values of $x$. Five leading digits. If we multiply the denominator and numerator\nwith $1+\\cos(x)$ we obtain the equivalent expression\n\n$$\nf(x)=\\frac{\\sin(x)}{1+\\cos(x)}.\n$$\n\nIf we now choose $x=0.007$ (in radians) our choice of precision results in\n\n$$\n\\sin(0.007)\\approx 0.69999\\times 10^{-2},\n$$\n\nand\n\n$$\n\\cos(0.007)\\approx 0.99998.\n$$\n\n## Loss of numerical precision\n\nThe first expression for $f(x)$ results in\n\n$$\nf(x)=\\frac{1-0.99998}{0.69999\\times 10^{-2}}=\\frac{0.2\\times 10^{-4}}{0.69999\\times 10^{-2}}=0.28572\\times 10^{-2},\n$$\n\nwhile the second expression results in\n\n$$\nf(x)=\\frac{0.69999\\times 10^{-2}}{1+0.99998}=\n\\frac{0.69999\\times 10^{-2}}{1.99998}=0.35000\\times 10^{-2},\n$$\n\nwhich is also the exact result. In the first expression, due to our\nchoice of precision, we have\nonly one relevant digit in the numerator, after the\nsubtraction. 
This leads to a loss of precision and a wrong result due to\na cancellation of two nearly equal numbers.\nIf we had chosen a precision of six leading digits, both expressions\nyield the same answer.\n\n## Loss of numerical precision\n\nIf we were to evaluate $x\\sim \\pi$, then the second expression for $f(x)$\ncan lead to potential losses of precision due to cancellations of nearly\nequal numbers.\n\nThis simple example demonstrates the loss of numerical precision due\nto roundoff errors, where the number of leading digits is lost\nin a subtraction of two near equal numbers.\nThe lesson to be drawn is that we cannot blindly compute a function.\nWe will always need to carefully analyze our algorithm in the search for\npotential pitfalls. There is no magic recipe however, the only guideline\nis an understanding of the fact that a machine cannot represent\ncorrectly *all* numbers.\n\n## Loss of precision can cause serious problems\n\n**Real Numbers.**\n\n\n * **Overflow**: When the positive exponent exceeds the max value, e.g., 308 for `DOUBLE PRECISION` (64 bits). Under such circumstances the program will terminate and some compilers may give you the warning `OVERFLOW`.\n\n * **Underflow**: When the negative exponent becomes smaller than the min value, e.g., -308 for `DOUBLE PRECISION`. Normally, the variable is then set to zero and the program continues. Other compilers (or compiler options) may warn you with the `UNDERFLOW` message and the program terminates.\n\n\n\n## Loss of precision, real numbers\n\n\n\n### Roundoff errors\n\nA floating point number like\n\n\n
\n\n$$\n\\begin{equation}\n x= 1.234567891112131468 = 0.1234567891112131468\\times 10^{1}\n\\label{_auto15} \\tag{15}\n\\end{equation}\n$$\n\nmay be stored in the following way. The exponent is small\nand is stored in full precision. However,\nthe mantissa is not stored fully. In double precision (64 bits), digits\nbeyond the\n15th are lost since the mantissa is normally stored in two words,\none which is the most significant one representing\n123456 and the least significant one containing 789111213. The digits\nbeyond 3 are lost. Clearly, if we are summing alternating series\nwith large numbers, subtractions between two large numbers may lead\nto roundoff errors, since not all relevant digits are kept.\nThis leads eventually to the next problem, namely\n\n## More on loss of precision\n\n**Real Numbers.**\n\n\n * **Loss of precision**: When one has to e.g., multiply two large numbers where one suspects that the outcome may be beyond the bounds imposed by the variable declaration, one could represent the numbers by logarithms, or rewrite the equations to be solved in terms of dimensionless variables. When dealing with problems in e.g., particle physics or nuclear physics where distance is measured in fm ($10^{-15}$ m), it can be quite convenient to redefine the variables for distance in terms of a dimensionless variable of the order of unity. To give an example, suppose you work with single precision and wish to perform the addition $1+10^{-8}$. In this case, the information contained in $10^{-8}$ is simply lost in the addition. Typically, when performing the addition, the computer first equates the exponents of the two numbers to be added. For $10^{-8}$ this has however catastrophic consequences since in order to obtain an exponent equal to $10^0$, bits in the mantissa are shifted to the right. 
At the end, all bits in the mantissa are zeros.\n\n\n\n## A problematic case\n\n**Three ways of computing $e^{-x}$.**\n\nBrute force:\n\n$$\n\\exp{(-x)}=\\sum_{n=0}^{\\infty}(-1)^n\\frac{x^n}{n!}\n$$\n\nRecursion relation for\n\n$$\n\\exp{(-x)}=\\sum_{n=0}^{\\infty}s_n=\\sum_{n=0}^{\\infty}(-1)^n\\frac{x^n}{n!}\n$$\n\n$$\ns_n=-s_{n-1}\\frac{x}{n},\n$$\n\nand finally, summing the series for $\\exp{(x)}$, where all terms are positive, and inverting:\n\n$$\n\\exp{(x)}=\\sum_{n=0}^{\\infty}s_n\n$$\n\n$$\n\\exp{(-x)}=\\frac{1}{\\exp{(x)}}\n$$\n\n## [Program](https://github.com/CompPhysics/ComputationalPhysicsMSU/blob/master/doc/Programs/LecturePrograms/programs/IntroProgramming/cpp/program4.cpp) to compute $\\exp{(-x)}$\n\n**Brute Force.**\n\n // Program to calculate function exp(-x)\n // using straightforward summation with differing precision\n #include <cstdio>\n #include <cstdlib>\n #include <cmath>\n using namespace std;\n // type float: 32 bits precision\n // type double: 64 bits precision\n #define TYPE double\n #define PHASE(a) (1 - 2 * (abs(a) % 2))\n #define TRUNCATION 1.0E-10\n // function declaration\n TYPE factorial(int);\n\n\n## Program to compute $\\exp{(-x)}$\n\n**Still Brute Force.**\n\n int main()\n {\n int n;\n TYPE x, term, sum;\n for(x = 0.0; x < 100.0; x += 10.0) {\n sum = 0.0; //initialization\n n = 0;\n term = 1;\n while(fabs(term) > TRUNCATION) {\n term = PHASE(n) * (TYPE) pow((TYPE) x,(TYPE) n)\n / factorial(n);\n sum += term;\n n++;\n } // end of while() loop\n\n\n## Program to compute $\\exp{(-x)}$\n\n**Oh, it never ends!**\n\n printf(\"\\nx = %4.1f exp = %12.5E series = %12.5E\"\n \" number of terms = %d\",\n x, exp(-x), sum, n);\n } // end of for() loop\n \n printf(\"\\n\"); // a final line shift on output\n return 0;\n } // End: function main()\n // The function factorial()\n // calculates and returns n!\n TYPE factorial(int n)\n {\n int loop;\n TYPE fac;\n for(loop = 1, fac = 1.0; loop <= n; loop++) {\n fac *= loop;\n }\n return fac;\n } // End: function factorial()
| $x$   | $\\exp{(-x)}$ | Series        | Number of terms in series |
|-------|---------------|---------------|---------------------------|
| 0.0   | 0.100000E+01  | 0.100000E+01  | 1   |
| 10.0  | 0.453999E-04  | 0.453999E-04  | 44  |
| 20.0  | 0.206115E-08  | 0.487460E-08  | 72  |
| 30.0  | 0.935762E-13  | -0.342134E-04 | 100 |
| 40.0  | 0.424835E-17  | -0.221033E+01 | 127 |
| 50.0  | 0.192875E-21  | -0.833851E+05 | 155 |
| 60.0  | 0.875651E-26  | -0.850381E+09 | 171 |
| 70.0  | 0.397545E-30  | NaN           | 171 |
| 80.0  | 0.180485E-34  | NaN           | 171 |
| 90.0  | 0.819401E-39  | NaN           | 171 |
| 100.0 | 0.372008E-43  | NaN           | 171 |
\n\n\n\n\n## [Program](https://github.com/CompPhysics/ComputationalPhysicsMSU/blob/master/doc/Programs/LecturePrograms/programs/IntroProgramming/cpp/program5.cpp) to compute $\\exp{(-x)}$\n\n // program to compute exp(-x) without exponentials\n using namespace std\n #include \n #include \n #define TRUNCATION 1.0E-10\n \n int main()\n {\n int loop, n;\n double x, term, sum;\n for(loop = 0; loop <= 100; loop += 10)\n {\n x = (double) loop; // initialization\n sum = 1.0;\n term = 1;\n n = 1;\n\n\n## Program to compute $\\exp{(-x)}$\n\n**Last statements.**\n\n while(fabs(term) > TRUNCATION)\n {\n \t term *= -x/((double) n);\n \t sum += term;\n \t n++;\n } // end while loop\n cout << \"x = \" << x << \" exp = \" << exp(-x) <<\"series = \"\n << sum << \" number of terms =\" << n << endl;\n } // end of for() loop\n \n cout << endl; // a final line shift on output\n \n } /* End: function main() */\n\n\n## Results $\\exp{(-x)}$\n\n**More Problems.**\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
| $x$        | $\\exp{(-x)}$  | Series          | Number of terms in series |
|------------|----------------|-----------------|---------------------------|
| 0.000000   | 0.10000000E+01 | 0.10000000E+01  | 1   |
| 10.000000  | 0.45399900E-04 | 0.45399900E-04  | 44  |
| 20.000000  | 0.20611536E-08 | 0.56385075E-08  | 72  |
| 30.000000  | 0.93576230E-13 | -0.30668111E-04 | 100 |
| 40.000000  | 0.42483543E-17 | -0.31657319E+01 | 127 |
| 50.000000  | 0.19287498E-21 | 0.11072933E+05  | 155 |
| 60.000000  | 0.87565108E-26 | -0.33516811E+09 | 182 |
| 70.000000  | 0.39754497E-30 | -0.32979605E+14 | 209 |
| 80.000000  | 0.18048514E-34 | 0.91805682E+17  | 237 |
| 90.000000  | 0.81940126E-39 | -0.50516254E+22 | 264 |
| 100.000000 | 0.37200760E-43 | -0.29137556E+26 | 291 |
\n\n\n\n## Most used formula for derivatives\n\n**3 point formulae.**\n\nFirst derivative ($f_0 = f(x_0)$, $f_{-h}=f(x_0-h)$ and $f_{h}=f(x_0+h)$)\n\n$$\n\\frac{f_h-f_{-h}}{2h}=f'_0+\\sum_{j=1}^{\\infty}\\frac{f_0^{(2j+1)}}{(2j+1)!}h^{2j}.\n$$\n\nSecond derivative\n\n$$\n\\frac{ f_h -2f_0 +f_{-h}}{h^2}=f_0''+2\\sum_{j=1}^{\\infty}\\frac{f_0^{(2j+2)}}{(2j+2)!}h^{2j}.\n$$\n\n## Error Analysis\n\n$$\n\\epsilon=\\log_{10}\\left(\\left|\\frac{f''_{\\mbox{computed}}-f''_{\\mbox{exact}}}\n {f''_{\\mbox{exact}}}\\right|\\right),\n$$\n\n$$\n\\epsilon_{\\mbox{tot}}=\\epsilon_{\\mbox{approx}}+\\epsilon_{\\mbox{ro}}.\n$$\n\nFor the computed second derivative we have\n\n$$\nf_0''=\\frac{ f_h -2f_0 +f_{-h}}{h^2}-2\\sum_{j=1}^{\\infty}\\frac{f_0^{(2j+2)}}{(2j+2)!}h^{2j},\n$$\n\nand the truncation or approximation error goes like\n\n$$\n\\epsilon_{\\mbox{approx}}\\approx \\frac{f_0^{(4)}}{12}h^{2}.\n$$\n\n## Error Analysis\n\nIf we were not to worry about loss of precision, we could in principle\nmake $h$ as small as possible.\nHowever, due to the computed expression in the above program example\n\n$$\nf_0''=\\frac{ f_h -2f_0 +f_{-h}}{h^2}=\\frac{ (f_h -f_0) +(f_{-h}-f_0)}{h^2},\n$$\n\nwe reach fairly quickly a limit for where loss of precision due to the subtraction\nof two nearly equal numbers becomes crucial.\n\nIf $(f_{\\pm h} -f_0)$ are very close, we have\n$(f_{\\pm h} -f_0)\\approx \\epsilon_M$, where $|\\epsilon_M|\\le 10^{-7}$ for single and\n$|\\epsilon_M|\\le 10^{-15}$ for double precision, respectively.\n\nWe have then\n\n$$\n\\left|f_0''\\right|=\n \\left|\\frac{ (f_h -f_0) +(f_{-h}-f_0)}{h^2}\\right|\\le \\frac{ 2 \\epsilon_M}{h^2}.\n$$\n\n## Error Analysis\n\nOur total error becomes\n\n\n
\n\n$$\n\\left|\\epsilon_{\\mbox{tot}}\\right|\\le \\frac{2 \\epsilon_M}{h^2} +\n \\frac{f_0^{(4)}}{12}h^{2}.\n\\label{eq:experror} \\tag{16}\n$$\n\nIt is then natural to ask which value of $h$ yields the smallest\ntotal error. Taking the derivative of $\\left|\\epsilon_{\\mbox{tot}}\\right|$\nwith respect to $h$ results in\n\n$$\nh= \\left(\\frac{ 24\\epsilon_M}{f_0^{(4)}}\\right)^{1/4}.\n$$\n\nWith double precision and $x=10$ we obtain\n\n$$\nh\\approx 10^{-4}.\n$$\n\nBeyond this value, it is essentially the loss of numerical precision\nwhich takes over.\n\n\n\n## Error Analysis\n\nDue to the subtractive cancellation in the expression\nfor $f''$ there is a pronounced deterioration in accuracy as $h$ is made smaller\nand smaller.\n\nIt is instructive in this analysis to rewrite the numerator of\nthe computed derivative as\n\n$$\n(f_h -f_0) +(f_{-h}-f_0)=(e^{x+h}-e^{x}) + (e^{x-h}-e^{x}),\n$$\n\nthat is, as\n\n$$\n(f_h -f_0) +(f_{-h}-f_0)=e^x(e^{h}+e^{-h}-2),\n$$\n\nsince it is the difference $(e^{h}+e^{-h}-2)$ which causes\nthe loss of precision.\n\n\n\n## Error Analysis\n\n
| $x$ | $h=0.01$   | $h=0.001$  | $h=0.0001$ | $h=0.0000001$ | Exact      |
|-----|------------|------------|------------|---------------|------------|
| 0.0 | 1.000008   | 1.000000   | 1.000000   | 1.010303      | 1.000000   |
| 1.0 | 2.718304   | 2.718282   | 2.718282   | 2.753353      | 2.718282   |
| 2.0 | 7.389118   | 7.389057   | 7.389056   | 7.283063      | 7.389056   |
| 3.0 | 20.085704  | 20.085539  | 20.085537  | 20.250467     | 20.085537  |
| 4.0 | 54.598605  | 54.598155  | 54.598151  | 54.711789     | 54.598150  |
| 5.0 | 148.414396 | 148.413172 | 148.413161 | 150.635056    | 148.413159 |
\n\n\n\n## Error Analysis\n\nThe results for $x=10$ are shown in the Table\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
| $h$        | $e^{h}+e^{-h}$     | $e^{h}+e^{-h}-2$ |
|------------|--------------------|------------------|
| $10^{-1}$  | 2.0100083361116070 | $1.0008336111607230\\times 10^{-2}$ |
| $10^{-2}$  | 2.0001000008333358 | $1.0000083333605581\\times 10^{-4}$ |
| $10^{-3}$  | 2.0000010000000836 | $1.0000000834065048\\times 10^{-6}$ |
| $10^{-4}$  | 2.0000000099999999 | $1.0000000050247593\\times 10^{-8}$ |
| $10^{-5}$  | 2.0000000001000000 | $9.9999897251734637\\times 10^{-11}$ |
| $10^{-6}$  | 2.0000000000010001 | $9.9997787827987850\\times 10^{-13}$ |
| $10^{-7}$  | 2.0000000000000098 | $9.9920072216264089\\times 10^{-15}$ |
| $10^{-8}$  | 2.0000000000000000 | $0.0000000000000000\\times 10^{0}$ |
| $10^{-9}$  | 2.0000000000000000 | $1.1102230246251565\\times 10^{-16}$ |
| $10^{-10}$ | 2.0000000000000000 | $0.0000000000000000\\times 10^{0}$ |
\n\n\n\n## Technical Matter in C/C++: Pointers\n\nA pointer specifies where a value resides in the computer's memory (like a house number specifies where a particular family resides on a street).\n\nA pointer points to an address, not to a data container of any kind!\n\nSimple example declarations:\n\n using namespace std; // note use of namespace\n int main()\n {\n // what are the differences?\n int var;\n cin >> var;\n int *p, q;\n int *s, *t;\n int *a = new int[var]; // dynamic memory allocation\n delete [] a;\n }\n\n\n## Technical Matter in C/C++: [Pointer example I](https://github.com/CompPhysics/ComputationalPhysicsMSU/blob/master/doc/Programs/LecturePrograms/programs/IntroProgramming/cpp/program7.cpp)\n\n using namespace std; // note use of namespace\n int main()\n {\n int var;\n int *p;\n p = &var;\n var = 421;\n printf(\"Address of integer variable var : %p\\n\",&var);\n printf(\"Its value: %d\\n\", var);\n printf(\"Value of integer pointer p : %p\\n\",p);\n printf(\"The value p points at : %d\\n\",*p);\n printf(\"Address of the pointer p : %p\\n\",&p);\n return 0;\n }\n\n\n## Dissection: Pointer example I\n\n**Discussion.**\n\n int main()\n {\n int var; // Define an integer variable var\n int *p; // Define a pointer to an integer\n p = &var; // Extract the address of var\n var = 421; // Change content of var\n printf(\"Address of integer variable var : %p\\n\", &var);\n printf(\"Its value: %d\\n\", var); // 421\n printf(\"Value of integer pointer p : %p\\n\", p); // = &var\n // The content of the variable pointed to by p is *p\n printf(\"The value p points at : %d\\n\", *p);\n // Address where the pointer is stored in memory\n printf(\"Address of the pointer p : %p\\n\", &p);\n return 0;\n }\n\n\n## [Pointer example II](https://github.com/CompPhysics/ComputationalPhysicsMSU/blob/master/doc/Programs/LecturePrograms/programs/IntroProgramming/cpp/program8.cpp)\n\n int matr[2];\n int *p;\n p = &matr[0];\n matr[0] = 321;\n matr[1] = 322;\n printf(\"\\nAddress of 
matrix element matr[1]: %p\",&matr[0]);\n printf(\"\\nValue of the matrix element matr[1]: %d\",matr[0]);\n printf(\"\\nAddress of matrix element matr[2]: %p\",&matr[1]);\n printf(\"\\nValue of the matrix element matr[2]: %d\\n\", matr[1]);\n printf(\"\\nValue of the pointer p: %p\",p);\n printf(\"\\nThe value p points to: %d\",*p);\n printf(\"\\nThe value that (p+1) points to %d\\n\",*(p+1));\n printf(\"\\nAddress of pointer p : %p\\n\",&p);\n\n\n## Dissection: Pointer example II\n\n int matr[2]; // Define integer array with two elements\n int *p; // Define pointer to integer\n p = &matr[0]; // Point to the address of the first element in matr\n matr[0] = 321; // Change the first element\n matr[1] = 322; // Change the second element\n printf(\"\\nAddress of matrix element matr[1]: %p\", &matr[0]);\n printf(\"\\nValue of the matrix element matr[1]: %d\", matr[0]);\n printf(\"\\nAddress of matrix element matr[2]: %p\", &matr[1]);\n printf(\"\\nValue of the matrix element matr[2]: %d\\n\", matr[1]);\n printf(\"\\nValue of the pointer p: %p\", p);\n printf(\"\\nThe value p points to: %d\", *p);\n printf(\"\\nThe value that (p+1) points to %d\\n\", *(p+1));\n printf(\"\\nAddress of pointer p : %p\\n\", &p);\n\n\n## Output of Pointer example II\n\n Address of the matrix element matr[1]: 0xbfffef70\n Value of the matrix element matr[1]; 321\n Address of the matrix element matr[2]: 0xbfffef74\n Value of the matrix element matr[2]: 322\n Value of the pointer: 0xbfffef70\n The value pointer points at: 321\n The value that (pointer+1) points at: 322\n Address of the pointer variable : 0xbfffef6c\n\n\n## File handling; C-way\n\n using namespace std;\n #include <cstdio>\n #include <cstdlib>\n int main(int argc, char *argv[])\n {\n FILE *in_file, *out_file;\n if( argc < 3) {\n printf(\"The program has the following structure :\\n\");\n printf(\"write in the name of the input and output files \\n\");\n exit(0);\n }\n in_file = fopen( argv[1], \"r\");// returns pointer to the input file\n if( in_file == 
    NULL ) { // NULL means that the file is missing
    printf("Can't find the input file %s\n", argv[1]);
    exit(0);
    }


## File handling; C way cont.

    out_file = fopen( argv[2], "w"); // returns a pointer to the output file
    if( out_file == NULL ) { // can't find the file
    printf("Can't find the output file %s\n", argv[2]);
    exit(0);
    }
    fclose(in_file);
    fclose(out_file);
    return 0;
    }


## File handling, C++-way

    #include <fstream>

    // input and output file as global variable
    ofstream ofile;
    ifstream ifile;


## File handling, C++-way

    int main(int argc, char* argv[])
    {
    char *outfilename;
    // Read in output file, abort if there are too
    // few command-line arguments
    if( argc <= 1 ){
    cout << "Bad Usage: " << argv[0] <<
    " read also output file on same line" << endl;
    exit(1);
    }
    else{
    outfilename=argv[1];
    }
    ofile.open(outfilename);
    .....
    ofile.close(); // close output file


## File handling, C++-way

    void output(double r_min , double r_max, int max_step,
    double *d)
    {
    int i;
    ofile << "RESULTS:" << endl;
    ofile << setiosflags(ios::showpoint | ios::uppercase);
    ofile << "R_min = " << setw(15) << setprecision(8) << r_min << endl;
    .....
    }


## File handling, C++-way

    ifile >> a >> b >> c; // skips white space in between

    // Can test on success of reading:

    if (!(ifile >> a >> b >> c)) ok = 0;


## Call by value or reference

C++ allows the programmer to use solely call by reference (note that call by reference is implemented as pointers). 
To see the difference between C and C++, consider the following simple examples.
In C we would write

    int n; n = 8;
    func(&n); /* &n is a pointer to n */
    ....
    void func(int *i)
    {
    *i = 10; /* n is changed to 10 */
    ....
    }


whereas in C++ we would write

    int n; n = 8;
    func(n); // just transfer n itself
    ....
    void func(int& i)
    {
    i = 10; // n is changed to 10
    ....
    }


## Call by value or reference
The reason why we emphasize the difference between call by value and call
by reference is that it allows the programmer to avoid pitfalls
like unwanted changes of variables. However, many people feel that this
reduces the readability of the code.


## Call by value and reference, F90/95

In Fortran we can use `INTENT(IN)`, `INTENT(OUT)`, `INTENT(INOUT)` to let the
program know which values should or should not be changed.

    SUBROUTINE coulomb_integral(np,lp,n,l,coulomb)
    USE effective_interaction_declar
    USE energy_variables
    USE wave_functions
    IMPLICIT NONE
    INTEGER, INTENT(IN) :: n, l, np, lp
    INTEGER :: i
    REAL(KIND=8), INTENT(INOUT) :: coulomb
    REAL(KIND=8) :: z_rel, oscl_r, sum_coulomb
    ...


This hinders unwanted changes and increases readability.


## [Example codes in c++, dynamic memory allocation](https://github.com/CompPhysics/ComputationalPhysicsMSU/blob/master/doc/Programs/LecturePrograms/programs/Classes/cpp/program4.cpp)

    #include <iostream>
    #include <cmath>
    #include <cstdlib> // for atoi
    using namespace std; // note use of namespace 
    int main (int argc, char* argv[])
    {
    int i = atoi(argv[1]); 
    // Dynamic memory allocation: need to declare -a- as a pointer
    // You can use double *a = new double[i]; or 
    double *a;
    a = new double[i];
    // the first element of a, a[0], and its address is the 
    // value of the pointer. 
    /* This is a longer comment:
    if we want a static memory allocation 
    this is the way to do it
    */
    cout << " bytes for i=" << sizeof(i) << endl;
    for (int j = 0; j < i; j++) {
    a[j] = j*exp(2.0);
    cout << "a=" << a[j] << endl;
    }
    // freeing memory
    delete [] a;
    // to check for memory leaks, use the software called -valgrind-
    return 0; /* successful execution of the program */
    }


## [Example codes in c++, writing to file and dynamic allocation for arrays](https://github.com/CompPhysics/ComputationalPhysicsMSU/blob/master/doc/Programs/LecturePrograms/programs/Classes/cpp/program5.cpp)

    #include <iostream>
    #include <fstream>
    #include <iomanip>
    #include <cmath>
    #include <cstdlib> // for atoi
    using namespace std; // note use of namespace 

    // output file as global variable

    ofstream ofile; 

    // Begin of main program 

    int main(int argc, char* argv[])
    {
    char *outfilename;
    // Read in output file, abort if there are too few command-line arguments
    if( argc <= 2 ){
    cout << "Bad Usage: " << argv[0] << 
    " read also output file and number of elements on same line" << endl;
    exit(1);
    }
    else{
    outfilename=argv[1];
    }

    // opening a file for the program
    ofile.open(outfilename); 
    int i = atoi(argv[2]); 
    // int *a;
    // a = new int[i];
    double *a = new double[i]; 
    cout << " bytes for i=" << sizeof(i) << endl;
    for (int j = 0; j < i; j++) {
    a[j] = j*exp(2.0);
    // ofile instead of cout
    ofile << setw(15) << setprecision(8) << "a=" << a[j] << endl;
    }
    delete [] a; // free memory
    ofile.close(); // close output file
    return 0;
    }


## [Example codes in c++, transfer of data using call by value and call by reference](https://github.com/CompPhysics/ComputationalPhysicsMSU/blob/master/doc/Programs/LecturePrograms/programs/Classes/cpp/program6.cpp)

    #include <iostream>
    using namespace std;
    // Declare functions before main
    void func(int, int*);
    int main(int argc, char *argv[]) 
    {
    int a; 
    int *b;
    a = 10;
    b = new int[10];
    for(int i = 0; 
    i < 10; i++) {
    b[i] = i;
    cout << b[i] << endl;
    }
    // the variable a is transferred by call by value. This means
    // that the function func cannot change a in the calling function
    func( a,b);

    delete [] b; 
    return 0;
    } // End: function main()

    void func( int x, int *y) 
    {
    // a becomes locally x and it can be changed locally
    x += 7;
    // func gets the address of the first element of y (b)
    // it changes y[0] to 10 and when returning control to main
    // it changes also b[0]. Call by reference
    *y += 10; // *y = *y+10;
    // explicit element 
    y[6] += 10;
    // in this function y[0] and y[6] have been changed and when returning 
    // control to main this means that b[0] and b[6] are changed. 
    return;
    } // End: function func()


## [Example codes in c++, operating on several arrays and printing time used](https://github.com/CompPhysics/ComputationalPhysicsMSU/blob/master/doc/Programs/LecturePrograms/programs/Classes/cpp/program7.cpp)

    #include <cstdlib>
    #include <iostream>
    #include <cmath>
    #include <iomanip>
    #include "time.h" 

    using namespace std; // note use of namespace 
    int main (int argc, char* argv[])
    {
    int i = atoi(argv[1]); 
    double *a, *b, *c;
    a = new double[i]; 
    b = new double[i]; 
    c = new double[i]; 

    clock_t start, finish;
    start = clock();
    for (int j = 0; j < i; j++) {
    a[j] = cos(j*1.0);
    b[j] = sin(j+3.0);
    c[j] = 0.0;
    }
    for (int j = 0; j < i; j++) {
    c[j] = a[j]+b[j];
    }
    finish = clock();
    double timeused = (double) (finish - start)/(CLOCKS_PER_SEC );
    cout << setiosflags(ios::showpoint | ios::uppercase);
    cout << setprecision(10) << setw(20) << "Time used for vector addition=" << timeused << endl;
    delete [] a;
    delete [] b;
    delete [] c;
    return 0; /* successful execution of the program */
    }
```python
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)

# Toggle cell visibility

from IPython.display import HTML
tag = HTML('''
Toggle cell visibility here.''')
display(tag)
```


Toggle cell visibility here.


```python
%matplotlib notebook
import numpy as np
import control as control
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from ipywidgets import widgets
from ipywidgets import interact
import scipy.signal as signal
import sympy as sym
```

## Mechanical systems

#### Mass-spring-damper model
> A mass-spring-damper model generally consists of discrete mass nodes distributed throughout the system and interconnected by springs and dampers. The model is used to describe systems with complex properties such as nonlinearity and viscoelasticity (source: [Wikipedia](https://en.wikipedia.org/wiki/Mass-spring-damper_model "Mass-spring-model"))
#### Quarter-car model
> The quarter-car model is used to analyze ride quality for different automotive suspension systems. The mass $m_1$ is the so-called sprung mass, which represents one quarter of the car's mass and is supported by the suspension system. The mass $m_2$ is the so-called unsprung mass, i.e. the combined mass of the wheel, the half-axle assembly and the suspension strut. The stiffness and damping of the suspension system are modeled by an ideal spring constant $k_1$ and a damping coefficient $B$. The stiffness of the tire is modeled by the spring constant $k_2$. (source: [Chegg Study](https://www.chegg.com/homework-help/questions-and-answers/figure-p230-shows-1-4-car-model-used-analyze-ride-quality-automotive-suspension-systems-ma-q26244005 "1/4 car model"))

---

### How to use this interactive example?

1. You can switch between the *mass-spring-damper* system and the *quarter-car model* by selecting the corresponding button.
2. For the input function $F$ you can choose between a *step function*, an *impulse function*, a *ramp* and a *sine function*.
3. By moving the sliders you can change the values of the masses ($m$; $m_1$ and $m_2$), the spring constants ($k$; $k_1$ and $k_2$), the damping coefficient ($B$), the gain of the input signal and the initial conditions ($x_0$, $\dot{x}_0$, $y_0$, $\dot{y}_0$).

*(Figures in the original notebook: the mass-spring-damper system and the quarter-car model.)*
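The interactive cell below builds these models as transfer functions with the `control` package. As a dependency-free cross-check, the mass-spring-damper equation $m\ddot{x} + B\dot{x} + kx = F$ can also be integrated directly; the parameter values here are illustrative choices close to the slider defaults, not taken from the notebook:

```python
def simulate_msd(m, b, k, F, dt=0.001, t_end=25.0):
    """Semi-implicit Euler integration of m*x'' + b*x' + k*x = F (constant force)."""
    x, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = (F - b * v - k * x) / m  # acceleration from Newton's second law
        v += a * dt                  # update velocity first (semi-implicit scheme)
        x += v * dt                  # then position
    return x

# With damping, a constant force settles at the static deflection F/k
x_final = simulate_msd(m=0.1, b=0.5, k=1.0, F=1.0)
print(round(x_final, 2))
```

The same steady-state value $F/k$ is what the interactive step response converges to, which makes this a quick sanity check on the widget output.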
\n\n\n```python\n# create figure\nfig = plt.figure(figsize=(9.8, 4),num='Mehanski sistemi')\n\n# add sublot\nax = fig.add_subplot(111)\nax.set_title('\u010casovni odziv')\nax.set_ylabel('vhod, izhod')\nax.set_xlabel('$t$ [s]')\n\nax.grid(which='both', axis='both', color='lightgray')\n\ninputf, = ax.plot([], [])\nresponsef, = ax.plot([], [])\nresponsef2, = ax.plot([], [])\narrowf, = ax.plot([],[])\n\nstyle = {'description_width': 'initial','button_width':'180px'}\n\nselectSystem=widgets.ToggleButtons(\n options=[('masa-vzmet-du\u0161ilka',0),('\u010detrtinski model avtomobila',1)],\n description='Izberi sistem: ', style=style) # define toggle buttons\nselectForce = widgets.ToggleButtons(\n options=[('kora\u010dna funkcija', 0), ('impulzna funkcija', 1), ('rampa', 2), ('sinusna funkcija', 3)],\n description='Izberi $F$: ', style=style)\ndisplay(selectSystem)\ndisplay(selectForce)\n\ndef build_model(M,K,B,M1,M2,B1,K1,K2,amp,x0,xpika0,y0,ypika0,select_System,index):\n \n num_of_samples = 1000\n total_time = 25\n t = np.linspace(0, total_time, num_of_samples) # time for which response is calculated (start, stop, step)\n \n global inputf, responsef, responsef2, arrowf\n \n if select_System==0:\n \n system0 = control.TransferFunction([1], [M, B, K])\n \n if index==0:\n inputfunc = np.ones(len(t))*amp\n inputfunc[0]=0\n time, response, xx = control.forced_response(system0, t, inputfunc, X0=[xpika0,x0]) \n \n elif index==1:\n inputfunc=signal.unit_impulse(1000, 0)*amp\n time, response, xx = control.forced_response(system0, t, inputfunc, X0=[xpika0,x0]) \n \n elif index==2:\n inputfunc=t;\n time, response, xx = control.forced_response(system0, t, inputfunc, X0=[xpika0,x0]) \n \n elif index==3:\n inputfunc=np.sin(t)*amp\n time, response, xx = control.forced_response(system0, t, inputfunc, X0=[xpika0,x0]) \n \n elif select_System==1:\n \n system1=control.TransferFunction([M2, B1, K1+K2], [M1*M2, M1*B1+M2*B1, M2*K1+M1*(K1+K2), K2*B1, K1*K2])\n 
system2=control.TransferFunction([B1*K1*M2**2, B1**2*K1*M2, B1*K1**2*M2 + 2*B1*K1*K2*M2,\n B1**2*K1*K2, B1*K1**2*K2 + B1*K1*K2**2],\n [M1**2*M2**2, B1*M1**2*M2 + 2*B1*M1*M2**2, \n B1**2*M1*M2 + B1**2*M2**2 + K1*M1**2*M2 + 2*K1*M1*M2**2 + 2*K2*M1**2*M2 + K2*M1*M2**2,\n 2*B1*K1*M1*M2 + 2*B1*K1*M2**2 + B1*K2*M1**2 + 5*B1*K2*M1*M2 + B1*K2*M2**2,\n B1**2*K2*M1 + 2*B1**2*K2*M2 + K1**2*M1*M2 + K1**2*M2**2 + K1*K2*M1**2 + 5*K1*K2*M1*M2 + K1*K2*M2**2 + K2**2*M1**2 + 2*K2**2*M1*M2,\n 2*B1*K1*K2*M1 + 4*B1*K1*K2*M2 + 3*B1*K2**2*M1 + 2*B1*K2**2*M2,\n B1**2*K2**2 + K1**2*K2*M1 + 2*K1**2*K2*M2 + 3*K1*K2**2*M1 + 2*K1*K2**2*M2 + K2**3*M1,\n 2*B1*K1*K2**2 + B1*K2**3,\n K1**2*K2**2 + K1*K2**3])\n if index==0:\n inputfunc = np.ones(len(t))*amp\n inputfunc[0]=0 \n time, response, xx = control.forced_response(system1, t, inputfunc, X0=[0,0,xpika0,x0])\n time2, response2, xx2 = control.forced_response(system2, t, inputfunc, X0=[0,0,0,0,0,0,ypika0,y0])\n \n elif index==1:\n inputfunc=signal.unit_impulse(1000, 0)*amp\n time, response, xx = control.forced_response(system1, t, inputfunc, X0=[0,0,xpika0,x0])\n time2, response2, xx2 = control.forced_response(system2, t, inputfunc, X0=[0,0,0,0,0,0,ypika0,y0])\n \n elif index==2:\n inputfunc=t;\n time, response, xx = control.forced_response(system1, t, inputfunc, X0=[0,0,xpika0,x0])\n time2, response2, xx2 = control.forced_response(system2, t, inputfunc, X0=[0,0,0,0,0,0,ypika0,y0])\n \n elif index==3:\n inputfunc=np.sin(t)*amp\n time, response, xx = control.forced_response(system1, t, inputfunc, X0=[0,0,xpika0,x0])\n time2, response2, xx2 = control.forced_response(system2, t, inputfunc, X0=[0,0,0,0,0,0,ypika0,y0])\n\n \n ax.lines.remove(responsef)\n ax.lines.remove(inputf)\n ax.lines.remove(responsef2)\n ax.lines.remove(arrowf)\n \n inputf, = ax.plot(t,inputfunc,label='$F$',color='C0')\n responsef, = ax.plot(time, response,label='$x$',color='C3')\n \n if select_System==1:\n responsef2, = ax.plot(time, response2,label='$y$',color='C2')\n elif 
select_System==0:\n responsef2, = ax.plot([],[])\n \n if index==1:\n if amp>0:\n arrowf, = ax.plot([-0.1,0,0.1],[amp-((amp*0.05)/2),amp,amp-((amp*0.05)/2)],color='C0',linewidth=4)\n elif amp==0:\n arrowf, = ax.plot([],[])\n elif amp<0:\n arrowf, = ax.plot([-0.1,0,0.1],[amp-((amp*0.05)/2),amp,amp-(amp*(0.05)/2)],color='C0',linewidth=4)\n else:\n arrowf, = ax.plot([],[])\n \n ax.relim()\n ax.autoscale_view()\n \n ax.legend() \n \ndef update_sliders(index):\n global m1_slider, b1_slider, k1_slider, m21_slider, m22_slider, b2_slider, k21_slider, k22_slider\n global x0_slider, xpika0_slider, y0_slider, ypika0_slider\n\n m1val = [0.1,0.1,0.1,0.1]\n k1val = [1,1,1,1]\n b1val = [0.1,0.1,0.1,0.1]\n m21val = [0.1,0.1,0.1,0.1]\n m22val = [0.1,0.1,0.1,0.1]\n b2val = [0.1,0.1,0.1,0.1]\n k21val = [1,1,1,1]\n k22val = [1,1,1,1]\n x0val = [0,0,0,0]\n xpika0val = [0,0,0,0]\n y0val = [0,0,0,0]\n ypika0val = [0,0,0,0]\n \n m1_slider.value = m1val[index]\n k1_slider.value = k1val[index]\n b1_slider.value = b1val[index]\n m21_slider.value = m21val[index]\n m22_slider.value = m22val[index]\n b2_slider.value = b2val[index]\n k21_slider.value = k21val[index]\n k22_slider.value = k22val[index]\n x0_slider.value = x0val[index]\n xpika0_slider.value = xpika0val[index]\n y0_slider.value = y0val[index]\n ypika0_slider.value = ypika0val[index] \n \ndef draw_controllers(type_select,index):\n \n global m1_slider, b1_slider, k1_slider, m21_slider, m22_slider, b2_slider, k21_slider, k22_slider\n global x0_slider, xpika0_slider, y0_slider, ypika0_slider\n \n \n x0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,\n description='$x_0$ [dm]:',disabled=False,continuous_update=False,\n orientation='horizontal',readout=True,readout_format='.2f',)\n xpika0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,\n description='${\\dot{x}}_0$ [dm/s]:',disabled=False,continuous_update=False,\n orientation='horizontal',readout=True,readout_format='.2f',)\n \n if type_select==0:\n \n amp_slider 
= widgets.FloatSlider(value=1.,min=-2.,max=2.,step=0.1,\n description='oja\u010danje vstopnega signala:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',style=style)\n \n m1_slider = widgets.FloatSlider(value=.1, min=.01, max=1., step=.01,\n description='$m$ [kg]:',disabled=False,continuous_update=False,\n orientation='horizontal',readout=True,readout_format='.2f',)\n k1_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,\n description='$k$ [N/m]:',disabled=False,continuous_update=False,\n orientation='horizontal',readout=True,readout_format='.1f',)\n b1_slider = widgets.FloatSlider(value=.1,min=0.0,max=0.5,step=.01,\n description='$B$ [Ns/m]:',disabled=False,continuous_update=False,\n orientation='horizontal',readout=True,readout_format='.2f',)\n m21_slider = widgets.FloatSlider(value=.1,min=.01,max=1.,step=.01,\n description='$m_1$ [kg]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',\n )\n m22_slider = widgets.FloatSlider(value=.1,min=.0,max=1.,step=.01,\n description='$m_2$ [kg]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',\n )\n b2_slider = widgets.FloatSlider(value=.1,min=0.0,max=2,step=.01,\n description='$B$ [Ns/m]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',\n )\n k21_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,\n description='$k_1$ [N/m]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',\n )\n k22_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,\n description='$k_2$ [N/m]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',\n )\n \n y0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,\n description='$y_0$ [dm]:',disabled=True,continuous_update=False,\n 
orientation='horizontal',readout=True,readout_format='.2f',)\n ypika0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,\n description='${\\dot{y}}_0$ [dm/s]:',disabled=True,continuous_update=False,\n orientation='horizontal',readout=True,readout_format='.2f',)\n elif type_select==1:\n \n amp_slider = widgets.FloatSlider(value=1.,min=-2.,max=2.,step=0.1,\n description='oja\u010danje vstopnega signala:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',style=style)\n \n m1_slider = widgets.FloatSlider(value=.1, min=.01, max=1., step=.01,\n description='$m$ [kg]:',disabled=True,continuous_update=False,\n orientation='horizontal',readout=True,readout_format='.2f',)\n k1_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,\n description='$k$ [N/m]:',disabled=True,continuous_update=False,\n orientation='horizontal',readout=True,readout_format='.1f',)\n b1_slider = widgets.FloatSlider(value=.1,min=0.0,max=0.5,step=.01,\n description='$B$ [Ns/m]:',disabled=True,continuous_update=False,\n orientation='horizontal',readout=True,readout_format='.2f',)\n m21_slider = widgets.FloatSlider(value=.1,min=.01,max=1.,step=.01,\n description='$m_1$ [kg]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',\n )\n m22_slider = widgets.FloatSlider(value=.1,min=.0,max=1.,step=.01,\n description='$m_2$ [kg]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',\n )\n b2_slider = widgets.FloatSlider(value=.1,min=0.0,max=2,step=.01,\n description='$B$ [Ns/m]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',\n )\n k21_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,\n description='$k_1$ [N/m]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',\n )\n k22_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,\n 
description='$k_2$ [N/m]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',\n )\n \n y0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,\n description='$y_0$ [dm]:',disabled=False,continuous_update=False,\n orientation='horizontal',readout=True,readout_format='.2f',)\n ypika0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,\n description='${\\dot{y}}_0$ [dm/s]:',disabled=False,continuous_update=False,\n orientation='horizontal',readout=True,readout_format='.2f',) \n input_data = widgets.interactive_output(build_model, {'M':m1_slider, 'K':k1_slider, 'B':b1_slider, 'M1':m21_slider,\n 'M2':m22_slider, 'B1':b2_slider, 'K1':k21_slider, 'K2':k22_slider, 'amp':amp_slider,\n 'x0':x0_slider,'xpika0':xpika0_slider,'y0':y0_slider,'ypika0':ypika0_slider, \n 'select_System':selectSystem,'index':selectForce}) \n \n input_data2 = widgets.interactive_output(update_sliders, {'index':selectForce})\n \n box_layout = widgets.Layout(border='1px solid black',\n width='auto',\n height='',\n flex_flow='row',\n display='flex')\n\n buttons1=widgets.HBox([widgets.VBox([amp_slider],layout=widgets.Layout(width='auto')),\n widgets.VBox([x0_slider,xpika0_slider]),\n widgets.VBox([y0_slider,ypika0_slider])],layout=box_layout)\n display(widgets.VBox([widgets.Label('Izberi vrednosti oja\u010danja vstopnega signala in za\u010detnih pogojev:'), buttons1]))\n display(widgets.HBox([widgets.VBox([m1_slider,k1_slider,b1_slider], layout=widgets.Layout(width='45%')),\n widgets.VBox([m21_slider,m22_slider,k21_slider,k22_slider,b2_slider], layout=widgets.Layout(width='45%'))]), input_data)\n \nwidgets.interactive_output(draw_controllers, {'type_select':selectSystem,'index':selectForce})\n```\n\n\n \n\n\n\n\n\n\n\n ToggleButtons(description='Izberi sistem: ', options=(('masa-vzmet-du\u0161ilka', 0), ('\u010detrtinski model avtomobila\u2026\n\n\n\n ToggleButtons(description='Izberi $F$: ', options=(('kora\u010dna funkcija', 0), 
('impulzna funkcija', 1), ('rampa'…


    Output()


```python

```


```python
%matplotlib inline
```


Word Embeddings: Encoding Lexical Semantics
===========================================

Word embeddings are dense vectors of real numbers, one per word in your
vocabulary. 
In NLP, it is almost always the case that your features are\nwords! But how should you represent a word in a computer? You could\nstore its ascii character representation, but that only tells you what\nthe word *is*, it doesn't say much about what it *means* (you might be\nable to derive its part of speech from its affixes, or properties from\nits capitalization, but not much). Even more, in what sense could you\ncombine these representations? We often want dense outputs from our\nneural networks, where the inputs are $|V|$ dimensional, where\n$V$ is our vocabulary, but often the outputs are only a few\ndimensional (if we are only predicting a handful of labels, for\ninstance). How do we get from a massive dimensional space to a smaller\ndimensional space?\n\nHow about instead of ascii representations, we use a one-hot encoding?\nThat is, we represent the word $w$ by\n\n\\begin{align}\\overbrace{\\left[ 0, 0, \\dots, 1, \\dots, 0, 0 \\right]}^\\text{|V| elements}\\end{align}\n\nwhere the 1 is in a location unique to $w$. Any other word will\nhave a 1 in some other location, and a 0 everywhere else.\n\nThere is an enormous drawback to this representation, besides just how\nhuge it is. It basically treats all words as independent entities with\nno relation to each other. What we really want is some notion of\n*similarity* between words. Why? Let's see an example.\n\nSuppose we are building a language model. Suppose we have seen the\nsentences\n\n* The mathematician ran to the store.\n* The physicist ran to the store.\n* The mathematician solved the open problem.\n\nin our training data. Now suppose we get a new sentence never before\nseen in our training data:\n\n* The physicist solved the open problem.\n\nOur language model might do OK on this sentence, but wouldn't it be much\nbetter if we could use the following two facts:\n\n* We have seen mathematician and physicist in the same role in a sentence. 
Somehow they
  have a semantic relation.
* We have seen mathematician in the same role in this new unseen sentence
  as we are now seeing physicist.

and then infer that physicist is actually a good fit in the new unseen
sentence? This is what we mean by a notion of similarity: we mean
*semantic similarity*, not simply having similar orthographic
representations. It is a technique to combat the sparsity of linguistic
data, by connecting the dots between what we have seen and what we
haven't. This example of course relies on a fundamental linguistic
assumption: that words appearing in similar contexts are related to each
other semantically. This is called the `distributional
hypothesis <https://en.wikipedia.org/wiki/Distributional_semantics>`__.


### Getting Dense Word Embeddings

How can we solve this problem? That is, how could we actually encode
semantic similarity in words? Maybe we think up some semantic
attributes. For example, we see that both mathematicians and physicists
can run, so maybe we give these words a high score for the "is able to
run" semantic attribute. 
Think of some other attributes, and imagine\nwhat you might score some common words on those attributes.\n\nIf each attribute is a dimension, then we might give each word a vector,\nlike this:\n\n\\begin{align}q_\\text{mathematician} = \\left[ \\overbrace{2.3}^\\text{can run},\n \\overbrace{9.4}^\\text{likes coffee}, \\overbrace{-5.5}^\\text{majored in Physics}, \\dots \\right]\\end{align}\n\n\\begin{align}q_\\text{physicist} = \\left[ \\overbrace{2.5}^\\text{can run},\n \\overbrace{9.1}^\\text{likes coffee}, \\overbrace{6.4}^\\text{majored in Physics}, \\dots \\right]\\end{align}\n\nThen we can get a measure of similarity between these words by doing:\n\n\\begin{align}\\text{Similarity}(\\text{physicist}, \\text{mathematician}) = q_\\text{physicist} \\cdot q_\\text{mathematician}\\end{align}\n\nAlthough it is more common to normalize by the lengths:\n\n\\begin{align}\\text{Similarity}(\\text{physicist}, \\text{mathematician}) = \\frac{q_\\text{physicist} \\cdot q_\\text{mathematician}}\n {\\| q_\\text{physicist} \\| \\| q_\\text{mathematician} \\|} = \\cos (\\phi)\\end{align}\n\nWhere $\\phi$ is the angle between the two vectors. That way,\nextremely similar words (words whose embeddings point in the same\ndirection) will have similarity 1. Extremely dissimilar words should\nhave similarity -1.\n\n\nYou can think of the sparse one-hot vectors from the beginning of this\nsection as a special case of these new vectors we have defined, where\neach word basically has similarity 0, and we gave each word some unique\nsemantic attribute. These new vectors are *dense*, which is to say their\nentries are (typically) non-zero.\n\nBut these new vectors are a big pain: you could think of thousands of\ndifferent semantic attributes that might be relevant to determining\nsimilarity, and how on earth would you set the values of the different\nattributes? 
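The similarity formula above can be checked numerically with the hand-crafted vectors; this is a plain-Python sketch that uses only the three attribute values listed in the text (the trailing dots are dropped):

```python
from math import sqrt

def cosine_similarity(q1, q2):
    """Dot product of q1 and q2, normalized by their Euclidean lengths."""
    dot = sum(a * b for a, b in zip(q1, q2))
    norm1 = sqrt(sum(a * a for a in q1))
    norm2 = sqrt(sum(a * a for a in q2))
    return dot / (norm1 * norm2)

# Attribute order: [can run, likes coffee, majored in Physics]
q_mathematician = [2.3, 9.4, -5.5]
q_physicist = [2.5, 9.1, 6.4]

sim = cosine_similarity(q_mathematician, q_physicist)
print(sim)  # positive: the two words share most attributes
```

The value is positive but well below 1: the first two attributes agree, while the "majored in Physics" dimension pulls the vectors apart.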
Central to the idea of deep learning is that the neural\nnetwork learns representations of the features, rather than requiring\nthe programmer to design them herself. So why not just let the word\nembeddings be parameters in our model, and then be updated during\ntraining? This is exactly what we will do. We will have some *latent\nsemantic attributes* that the network can, in principle, learn. Note\nthat the word embeddings will probably not be interpretable. That is,\nalthough with our hand-crafted vectors above we can see that\nmathematicians and physicists are similar in that they both like coffee,\nif we allow a neural network to learn the embeddings and see that both\nmathematicians and physicists have a large value in the second\ndimension, it is not clear what that means. They are similar in some\nlatent semantic dimension, but this probably has no interpretation to\nus.\n\n\nIn summary, **word embeddings are a representation of the *semantics* of\na word, efficiently encoding semantic information that might be relevant\nto the task at hand**. You can embed other things too: part of speech\ntags, parse trees, anything! The idea of feature embeddings is central\nto the field.\n\n\n### Word Embeddings in Pytorch\n\nBefore we get to a worked example and an exercise, a few quick notes\nabout how to use embeddings in Pytorch and in deep learning programming\nin general. Similar to how we defined a unique index for each word when\nmaking one-hot vectors, we also need to define an index for each word\nwhen using embeddings. These will be keys into a lookup table. That is,\nembeddings are stored as a $|V| \\times D$ matrix, where $D$\nis the dimensionality of the embeddings, such that the word assigned\nindex $i$ has its embedding stored in the $i$'th row of the\nmatrix. 
In all of my code, the mapping from words to indices is a dictionary named `word_to_ix`.

The module that allows you to use embeddings is `torch.nn.Embedding`, which takes two arguments: the vocabulary size, and the dimensionality of the embeddings.

To index into this table, you must use `torch.LongTensor` (since the indices are integers, not floats).


```python
# Author: Robert Guthrie

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)
```


```python
word_to_ix = {"hello": 0, "world": 1}
embeds = nn.Embedding(2, 5)  # 2 words in vocab, 5 dimensional embeddings
lookup_tensor = torch.tensor([word_to_ix["hello"]], dtype=torch.long)
hello_embed = embeds(lookup_tensor)
print(hello_embed)
```

    tensor([[ 0.6614,  0.2669,  0.0617,  0.6213, -0.4519]],
           grad_fn=<EmbeddingBackward>)


### An Example: N-Gram Language Modeling

Recall that in an n-gram language model, given a sequence of words $w$, we want to compute

\begin{align}P(w_i | w_{i-1}, w_{i-2}, \dots, w_{i-n+1} )\end{align}

where $w_i$ is the ith word of the sequence.

In this example, we will compute the loss function on some training examples and update the parameters with backpropagation.


```python
CONTEXT_SIZE = 2
EMBEDDING_DIM = 10

# We will use Shakespeare Sonnet 2
test_sentence = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()
# we should tokenize the input, but we will ignore that for now
# build a list of tuples.  Each tuple is ([ word_i-2, word_i-1 ], target word)
trigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])
            for i in range(len(test_sentence) - 2)]
# print the first 3, just so you can see what they look like
print(trigrams[:3])

vocab = set(test_sentence)
word_to_ix = {word: i for i, word in enumerate(vocab)}


class NGramLanguageModeler(nn.Module):

    def __init__(self, vocab_size, embedding_dim, context_size):
        super(NGramLanguageModeler, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear1 = nn.Linear(context_size * embedding_dim, 128)
        self.linear2 = nn.Linear(128, vocab_size)

    def forward(self, inputs):
        embeds = self.embeddings(inputs).view((1, -1))
        out = F.relu(self.linear1(embeds))
        out = self.linear2(out)
        log_probs = F.log_softmax(out, dim=1)
        return log_probs


losses = []
loss_function = nn.NLLLoss()
model = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)
optimizer = optim.SGD(model.parameters(), lr=0.001)

for epoch in range(10):
    total_loss = 0
    for context, target in trigrams:

        # Step 1. Prepare the inputs to be passed to the model (i.e., turn the words
        # into integer indices and wrap them in tensors)
        context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)

        # Step 2. Recall that torch *accumulates* gradients. Before passing in a
        # new instance, you need to zero out the gradients from the old
        # instance
        model.zero_grad()

        # Step 3. Run the forward pass, getting log probabilities over next
        # words
        log_probs = model(context_idxs)

        # Step 4. Compute your loss function. (Again, Torch wants the target
        # word wrapped in a tensor)
        loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))

        # Step 5. Do the backward pass and update the gradient
        loss.backward()
        optimizer.step()

        # Get the Python number from a 1-element Tensor by calling tensor.item()
        total_loss += loss.item()
    losses.append(total_loss)
print(losses)  # The loss decreased every iteration over the training data!
```

    [(['When', 'forty'], 'winters'), (['forty', 'winters'], 'shall'), (['winters', 'shall'], 'besiege')]

    [523.2164900302887, 520.5518922805786, 517.9080045223236, 515.2833998203278, 512.6777231693268, 510.08880162239075, 507.51598358154297, 504.9569056034088, 502.41165256500244, 499.8801546096802]


### Exercise: Computing Word Embeddings: Continuous Bag-of-Words

The Continuous Bag-of-Words model (CBOW) is frequently used in NLP deep learning. It is a model that tries to predict words given the context of a few words before and a few words after the target word. This is distinct from language modeling, since CBOW is not sequential and does not have to be probabilistic. Typically, CBOW is used to quickly train word embeddings, and these embeddings are used to initialize the embeddings of some more complicated model. Usually, this is referred to as *pretraining embeddings*. It almost always helps performance a couple of percent.

The CBOW model is as follows. Given a target word $w_i$ and an $N$ context window on each side, $w_{i-1}, \dots, w_{i-N}$ and $w_{i+1}, \dots, w_{i+N}$, referring to all context words collectively as $C$, CBOW tries to minimize

\begin{align}-\log p(w_i | C) = -\log \text{Softmax}(A(\sum_{w \in C} q_w) + b)\end{align}

where $q_w$ is the embedding for word $w$.

Implement this model in PyTorch by filling in the class below. Some tips:

* Think about which parameters you need to define.
* Make sure you know what shape each operation expects. Use `.view()` if you need to reshape.


```python
CONTEXT_SIZE = 2  # 2 words to the left, 2 to the right
raw_text = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells.""".split()

# By deriving a set from `raw_text`, we deduplicate the array
vocab = set(raw_text)
vocab_size = len(vocab)

word_to_ix = {word: i for i, word in enumerate(vocab)}
data = []
for i in range(2, len(raw_text) - 2):
    context = [raw_text[i - 2], raw_text[i - 1],
               raw_text[i + 1], raw_text[i + 2]]
    target = raw_text[i]
    data.append((context, target))
print(data[:5])


class CBOW(nn.Module):

    def __init__(self, embedding_dim, context_size, vocab_size, dropout=0.2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.dropout = nn.Dropout(dropout)
        self.linear = nn.Linear(embedding_dim, vocab_size)

    def forward(self, inputs):
        emb = self.embedding(inputs)
        mean = emb.mean(dim=1)
        out = self.linear(self.dropout(mean))
        log_probs = F.log_softmax(out, dim=1)
        return log_probs

# create your model and train.  here are some functions to help you make
# the data ready for use by your module


def make_context_vector(context, word_to_ix):
    idxs = [word_to_ix[w] for w in context]
    return torch.tensor(idxs, dtype=torch.long)

cbow = CBOW(embedding_dim=128,
            context_size=CONTEXT_SIZE,
            vocab_size=vocab_size,
            dropout=0.3)

# make_context_vector(data[0][0], word_to_ix)  # example

losses = []
loss_function = nn.NLLLoss()
optimizer = optim.SGD(cbow.parameters(), lr=0.01)

for epoch in range(40):
    total_loss = 0
    accuracy = 0

    for context, target in data:

        # Step 1. Prepare the inputs to be passed to the model (i.e., turn the words
        # into integer indices and wrap them in tensors)
        context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
        context_idxs.unsqueeze_(0)
        target_idx = word_to_ix[target]

        # Step 2. Recall that torch *accumulates* gradients. Before passing in a
        # new instance, you need to zero out the gradients from the old
        # instance
        cbow.zero_grad()

        # Step 3. Run the forward pass, getting log probabilities over next
        # words
        log_probs = cbow(context_idxs)

        # Step 4. Compute your loss function. (Again, Torch wants the target
        # word wrapped in a tensor)
        loss = loss_function(log_probs, torch.tensor([target_idx], dtype=torch.long))

        # Step 5. Do the backward pass and update the gradient
        loss.backward()
        optimizer.step()

        # Get the Python number from a 1-element Tensor by calling tensor.item()
        total_loss += loss.item()
        accuracy += int(log_probs[0, target_idx].item() == log_probs.max().item())
    print(epoch, total_loss, accuracy / len(data))
# losses.append(total_loss)
# print(losses)  # The loss decreased every iteration over the training data!
```

    [(['We', 'are', 'to', 'study'], 'about'), (['are', 'about', 'study', 'the'], 'to'), (['about', 'to', 'the', 'idea'], 'study'), (['to', 'study', 'idea', 'of'], 'the'), (['study', 'the', 'of', 'a'], 'idea')]
    0 229.44551348686218 0.034482758620689655
    1 212.36824202537537 0.15517241379310345
    2 198.85704636573792 0.25862068965517243
    3 180.80183136463165 0.41379310344827586
    4 166.52524602413177 0.5344827586206896
    5 152.50107324123383 0.6379310344827587
    6 139.02809083461761 0.7413793103448276
    7 127.54271537065506 0.7758620689655172
    8 117.17048120498657 0.7586206896551724
    9 105.72454649209976 0.8793103448275862
    10 96.6032275557518 0.9137931034482759
    11 93.59943437576294 0.896551724137931
    12 83.87378299236298 0.9482758620689655
    13 79.1644244492054 0.9137931034482759
    14 70.42980808019638 0.9655172413793104
    15 65.1251906901598 0.9310344827586207
    16 61.93721008300781 0.9655172413793104
    17 57.70122781395912 0.9655172413793104
    18 52.533589869737625 0.9827586206896551
    19 50.15311957895756 1.0
    20 45.04634390771389 0.9827586206896551
    21 41.83073575794697 1.0
    22 41.64148707687855 1.0
    23 36.46293383836746 1.0
    24 38.303029738366604 1.0
    25 34.484288811683655 1.0
    26 32.70081126317382 0.9827586206896551
    27 31.872529692947865 1.0
    28 32.101080395281315 1.0
    29 27.523721009492874 1.0
    30 27.361480586230755 0.9827586206896551
    31 25.17284446209669 1.0
    32 23.987557873129845 1.0
    33 23.426055818796158 1.0
    34 23.307042837142944 0.9827586206896551
    35 23.03582063317299 1.0
    36 18.611751787364483 1.0
    37 19.797190058976412 1.0
    38 18.018043760210276 1.0
    39 21.121414752677083 1.0


```python
import pandas as pd

with torch.no_grad():
    context, target = data[0]
    probs = cbow.forward(make_context_vector(context, word_to_ix).unsqueeze(0)).exp()
    probs = pd.Series(probs[0, :], index=pd.Series(word_to_ix).sort_values().index)
    probs.sort_values(ascending=False, inplace=True)
    print(context, target)
    print(probs.head())
```

    ['We', 'are', 'to', 'study'] about
    about       0.369349
    to          0.060599
    direct      0.046123
    study       0.044743
    abstract    0.035333
    dtype: float32


```python
cv = make_context_vector(data[0][0], word_to_ix)
cv
```

    tensor([47,  9, 17, 26])


```python
x = torch.stack((torch.tensor([0, 1, 2, 3], dtype=torch.long), torch.tensor([1, 2, 3, 4], dtype=torch.long)))
x
```

    tensor([[0, 1, 2, 3],
            [1, 2, 3, 4]])


```python
cbow.forward(x)
```

    tensor([[-4.4479, -4.2165, -4.5860, -4.5395, -3.7475, -4.5314, -3.7213, -4.0955,
             -4.8205, -4.0684, -3.6109, -3.5902, -3.8569, -3.9649, -4.1915, -3.2609,
             -3.0084, -3.5542, -3.7935, -3.9100, -4.0027, -3.7444, -3.4058, -4.6186,
             -4.1976, -4.5493, -4.2081, -3.8594, -4.0403, -3.9142, -4.2514, -3.1725,
             -3.4688, -3.8114, -4.1353, -4.5739, -3.9951, -3.8958, -4.5362, -3.5117,
             -4.3187, -4.1432, -3.3839, -3.8355, -3.2985, -4.3572, -4.1291, -3.7987,
             -4.4616],
            [-4.4558, -4.0547, -4.5895, -4.1631, -3.5207, -4.3959, -3.6764, -4.1521,
             -4.5325, -3.9598, -3.7226, -3.6905, -3.6675, -3.5535, -4.5920, -3.3486,
             -2.9139, -3.3886, -3.7945, -3.9487, -4.0804, -3.6835, -3.5717, -4.4634,
             -4.1868, -4.5007, -4.1608, -4.0200, -3.9595, -4.2383, -4.4310, -3.4103,
             -3.4146, -3.5726, -4.1631, -4.5518, -3.9424, -4.0735, -4.5185, -3.6759,
             -3.9827, -4.4178, -3.5523, -4.0912, -3.3532, -4.3526, -4.5390, -3.5197,
             -4.5749]], grad_fn=<LogSoftmaxBackward>)


```python
e = cbow.embedding(x)
```


```python
e[0,], e[1,]
```

    (tensor([[-0.4304, -0.6461,  1.0047],
             [ 0.1560,  1.4759,  1.0045],
             [-1.5860, -0.7552,  1.1779],
             [ 0.2058,  1.3878,  0.4951]], grad_fn=<SelectBackward>),
     tensor([[ 0.1560,  1.4759,  1.0045],
             [-1.5860, -0.7552,  1.1779],
             [ 0.2058,  1.3878,  0.4951],
             [ 1.4094, -0.0457, -0.9737]], grad_fn=<SelectBackward>))


```python
e.mean(dim=1)
```

    tensor([[-0.4136,  0.3656,  0.9205],
            [ 0.0463,  0.5157,  0.4259]], grad_fn=<MeanBackward1>)
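The exploration above pulls embedding rows out of the trained table directly. A natural follow-up is to ask which words have ended up with similar embeddings. Below is a minimal sketch using cosine similarity; the helper `nearest_neighbors` and the toy vocabulary are illustrative additions (not part of the original tutorial), and with the exercise's trained model you would pass `cbow.embedding` and its `word_to_ix` instead:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(1)

def nearest_neighbors(word, embedding, word_to_ix, k=3):
    """Return the k words whose embedding vectors are closest (by cosine) to `word`."""
    vecs = embedding.weight.detach()              # (vocab_size, embedding_dim)
    query = vecs[word_to_ix[word]].unsqueeze(0)   # (1, embedding_dim)
    sims = F.cosine_similarity(query, vecs, dim=1)  # (vocab_size,)
    ix_to_word = {i: w for w, i in word_to_ix.items()}
    order = sims.argsort(descending=True)
    # position 0 is the query word itself (similarity 1.0); skip it
    return [ix_to_word[i.item()] for i in order[1:k + 1]]

# toy vocabulary and a freshly initialized embedding table, for demonstration only
word_to_ix = {w: i for i, w in enumerate(["we", "study", "computers", "processes", "data"])}
emb = nn.Embedding(len(word_to_ix), 8)
print(nearest_neighbors("computers", emb, word_to_ix, k=2))
```

With randomly initialized embeddings the neighbors are arbitrary; after CBOW pretraining, words that occur in similar contexts move closer together.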
# GEOS 657 Microwave Remote Sensing

## Lab 3: SAR Imaging Theory and Processing Methods -- [20 Points]

**Assignment Due Date:** March 05, 2019

**Authors:** Paul A. Rosen, with modifications by Franz J. Meyer

**Date:** Feb 14, 2021

---

**THIS NOTEBOOK INCLUDES A HOMEWORK ASSIGNMENT!**

The homework assignments in this lab are indicated by markdown fields with red background. Please complete these assignments in a separate Word / LaTeX / PDF document and submit your completed assignment via the GEOS 657 Blackboard page.

Contact me at fjmeyer@alaska.edu should you run into any problems.

---

```python
import url_widget as url_w
notebookUrl = url_w.URLWidget()
display(notebookUrl)
```


```python
from IPython.display import Markdown
from IPython.display import display

notebookUrl = notebookUrl.value
user = !echo $JUPYTERHUB_USER
env = !echo $CONDA_PREFIX
if env[0] == '':
    env[0] = 'Python 3 (base)'
if env[0] != '/home/jovyan/.local/envs/rtc_analysis':
    display(Markdown(f'WARNING:'))
    display(Markdown(f'This notebook should be run using the "rtc_analysis" conda environment.'))
    display(Markdown(f'It is currently using the "{env[0].split("/")[-1]}" environment.'))
    display(Markdown(f'Select the "rtc_analysis" from the "Change Kernel" submenu of the "Kernel" menu.'))
    display(Markdown(f'If the "rtc_analysis" environment is not present, use Create_OSL_Conda_Environments.ipynb to create it.'))
    display(Markdown(f'Note that you must restart your server after creating a new environment before it is usable by notebooks.'))
```

# Prepare the Notebook


```python
import warnings
warnings.filterwarnings('ignore')

from IPython.display import Latex  # used by makeplot() below

bShowInline = True  # Set = False for document generation
%matplotlib inline
import matplotlib.pyplot as plt

params = {'legend.fontsize': 'x-large',
          'figure.figsize': (15, 5),
          'axes.labelsize': 'x-large',
          'axes.titlesize': 'x-large',
          'xtick.labelsize': 'x-large',
          'ytick.labelsize': 'x-large'}
plt.rcParams.update(params)

def makeplot(plt, figlabel, figcaption):
    figname = figlabel + '.png'

    plt.savefig(figname)

    if bShowInline:
        plt.show()
    else:
        plt.close()

    strLatex = """
    \\begin{figure}[b]
    \centering
    \includegraphics[totalheight=10.0cm]{%s}
    \caption{%s}
    \label{fig:%s}
    \end{figure}""" % (figname, figcaption, figlabel)
    return display(Latex(strLatex))

def sinc_interp(x, s, u):
    # x is the vector to be interpolated
    # s is a vector of sample points of x
    # u is a vector of the output sample points for the interpolation

    if len(x) != len(s):
        raise ValueError('x and s must be the same length')

    # Find the period
    T = s[1] - s[0]

    sincM = np.tile(u, (len(s), 1)) - np.tile(s[:, np.newaxis], (1, len(u)))
    y = np.dot(x, np.sinc(sincM/T))
    return y

%matplotlib widget
plt.rcParams.update({'font.size': 11})
```

# Overview

In this notebook, we will demonstrate the generation of a raw synthetic aperture radar data set for a collection of point scatterers on an otherwise dark background, and then demonstrate two methods of processing the data: simple back projection and range-Doppler processing.

1.0 [Background](#section-1)  
> 1.1 [SAR Geometry](#section-1.1)  
> 1.2 [Antenna Patterns](#section-1.2)  
> 1.3 [Beamwidth and Swath](#section-1.3)  
> 1.4 [Phase and Doppler Frequency in the synthetic aperture](#section-1.4)  
> 1.5 [Resolution of the synthetic aperture](#section-1.5)  
> 1.6 [The Radar Equation](#section-1.6)  

2.0 [Simulating SAR data with point targets](#section-2)  
> 2.1 [Simulating the transmitted pulse](#section-2.1)  
> 2.2 [Simulating the Received Echoes](#section-2.2)  

3.0 [Focusing SAR data - Range](#section-3)  
> 3.1 [Correlation to achieve fine range resolution - time domain](#section-3.1)  
> 3.2 [Correlation to achieve fine range resolution - frequency domain](#section-3.2)  

4.0 [Focusing SAR data - Azimuth](#section-4)  
> 4.1 [Azimuth reference function](#section-4.1)  
> 4.2 [Correlation to achieve fine azimuth resolution - time domain](#section-4.2)  
> 4.3 [Correlation to achieve fine azimuth resolution - frequency domain](#section-4.3)  
> 4.4 [Backprojection](#section-4.4)

# 1.0 Background <a id="section-1"></a>

## 1.1 SAR Geometry <a id="section-1.1"></a>

To simplify the problem, we assume a spacecraft flying at fixed altitude $h_{sc}$ and constant velocity $v_{sc}$, observing points on a flat earth. The geometry of the observation is depicted in Figure 1. The radar antenna is assumed to be a flat rectangular aperture with dimensions of length $L_a$ in the along-track dimension (also known as "azimuth" for historical reasons), and width $W_a$ in the cross-track dimension (also known as the elevation dimension). The range $\rho$ is the distance from the spacecraft antenna to a point on the ground. The "range vector" or "look vector" is the vector pointing in this direction, with magnitude $\rho$. At this range, the look angle, defined as the angle from nadir to the range vector, is $\theta$. At the antenna boresight, which is the direction where the antenna pattern has its peak gain, we define the boresight reference range $\rho_l$ and corresponding look angle $\theta_l$. Figure 2 illustrates the case where the antenna is pointed forward toward the velocity vector. In this configuration, we define the squint angle $\theta_{sq}$ as the angle of rotation about the nadir vector in the ground plane.

Table 1 lists the assumed spacecraft, radar, and surface point target characteristics.

*Figure 1. Basic SAR Geometry. Figure 2. Squinted SAR Geometry. (Figure graphics not included in this version.)*

**Table 1. Radar and Spacecraft Parameters**

| Parameter | Symbol | Value | Comment |
| --- | --- | --- | --- |
| Wavelength | $\lambda$ | 0.24 m | (L-band) |
| Antenna Length | $L_a$ | 10 m | |
| Antenna Width | $W_a$ | 2 m | |
| Off-nadir boresight angle | $\theta_l$ | 30$^\circ$ | |
| Azimuth squint of boresight angle | $\theta_{sq}$ | 0$^\circ$ | |
| Spacecraft Velocity | $v_{sc}$ | 7,500 m/s | Assumed constant |
| Spacecraft Altitude | $h_{sc}$ | 750,000 m | Assumed constant |
| Radar Range Bandwidth | $B_r$ | 20 MHz | |
| Radar Pulse Duration | $\tau_r$ | 10 $\mu$s | Determines average power |
| Nominal Pulse Rate | $f_p$ | 1600 Hz | Determines average power and ambiguity levels |
| Peak Power on Transmit | $P_T$ | 4,000 W | Determines SNR |
| Radar Noise Temperature | $T_r$ | 300 K | Determines SNR |
| Corner Reflector Dimension | $L_{cr}$ | 2.4 m | Determines SNR |


```python
import numpy as np
Lambda = 0.24
L_a = 10.
W_a = 2.
theta_l = 30. * np.pi/180.
theta_sq = 0. * np.pi/180.
v_sc = 7500.
h_sc = 750000.
B_r = 20.e6
tau_r = 10.e-6
f_p = 1600.
P_T = 4000.
T_r = 300.
L_cr = 2.4
```

**Table 2. Other Constants**

| Parameter | Symbol | Value | Comment |
| --- | --- | --- | --- |
| Speed of light | $c$ | 299792458 m/s | |
| Boltzmann constant | $k$ | 1.38064852 $\times$ 10$^{-23}$ m$^2$ kg s$^{-2}$ K$^{-1}$ | -228.6 dB |
| Gravitational Constant | $G$ | 6.672 $\times$ 10$^{-11}$ m$^3$ kg$^{-1}$ s$^{-2}$ | |
| Earth's Mass | $M_E$ | 5.9742 $\times$ 10$^{24}$ kg | |


```python
c = 299792458
k = 1.38064852e-23
G = 6.672e-11
M_E = 5.9742e24
```

---

**ASSIGNMENT #1: Calculate Expected Range Resolution**

Based on the variables shown in Tables 1 and 2 you can calculate the expected range resolution $\rho_r$ of this simulated SAR data set.

Please answer the following questions:

1. **Question 1.1:** List the variables (with their values) from Tables 1 and 2 that are needed to calculate the range resolution $\rho_r$.
2. **Question 1.2:** Provide the equation for calculating $\rho_r$ and provide your calculated value in units of [meters].

---

## 1.2 The Antenna and Its Radiation Pattern <a id="section-1.2"></a>

The radar antenna directs the radar signal toward a particular area on the ground. Generally, the larger the antenna, the more the energy is directed toward a particular direction. Most SAR antennas are rectangular planar antennas, though there are exceptions. A simple model for a planar antenna's power radiation pattern is a $\sin x / x$ function:

\begin{equation}
S(\theta_{az}, \theta_{el}; \theta_l, \theta_{sq}) =
\bigg [\frac{
\sin \pi\big (\frac{\theta_{az}-\theta_{sq}}{\theta_{L_a}}\big )
}
{
\pi \big (\frac{\theta_{az}-\theta_{sq}}{\theta_{L_a}}\big )
} \bigg ]^2
\bigg [\frac{
\sin \pi\big (\frac{\theta_{el}-\theta_{l}}{\theta_{W_a}}\big )
}
{
\pi\big (\frac{\theta_{el}-\theta_{l}}{\theta_{W_a}}\big )
} \bigg ]^2
\end{equation}

The "half-power beamwidths" of the antenna with respect to its length (along the velocity vector) and with respect to its width (the direction perpendicular to the velocity vector and to the off-nadir boresight) are

\begin{equation}
\theta_{L_a} = 0.87 \frac{\lambda}{L_a}
\end{equation}

\begin{equation}
\theta_{W_a} = 0.87 \frac{\lambda}{W_a}
\end{equation}

At these angles, the power of the signal has been reduced by half, or 3 dB. This is also called the 3-dB beamwidth.

A SAR antenna points to one side of the flight track or the other, usually with an angle greater than 20$^\circ$. This is to ensure a unique relationship between the time of return and the distance from the spacecraft to the ground. If the energy from the radar illuminates both sides of the radar track, there will be a left-right ambiguity in range for a given time. In practice the antenna sidelobes in the $\sin x / x$ pattern will lead to some energy everywhere, but the farther off-nadir the antenna is pointed, the lower this energy is from unwanted directions. Angles larger than 20$^\circ$ also help avoid excessive foreshortening of the observations.


```python
theta_L_a = 0.866 * Lambda/L_a
theta_W_a = 0.866 * Lambda/W_a
```

We can see the values in degrees in these two dimensions:


```python
print(" Along track half-power beamwidth =", "{:.2f}".format(theta_L_a * 180. / np.pi), "degrees")
print(" Elevation half-power beamwidth =", "{:.2f}".format(theta_W_a * 180. / np.pi), "degrees")
```


```python
theta_az = np.linspace(-np.pi/32., np.pi/32., 400)
theta_el = np.linspace(-np.pi/8., np.pi/8., 400) + theta_l
Saz = (np.sinc((theta_az-theta_sq)/theta_L_a))**2
Sel = (np.sinc((theta_el-theta_l)/theta_W_a))**2
```


```python
def S_p(th_az, th_el):
    return (np.sinc((th_az-theta_sq)/theta_L_a))**2 * (np.sinc((th_el-theta_l)/theta_W_a))**2

Theta_az, Theta_el = np.meshgrid(theta_az, theta_el)
S = S_p(Theta_az, Theta_el)
```


```python
plt.style.use('seaborn-whitegrid')

plt.figure(figsize=(13, 5))
plt.subplot(1, 2, 1)
plt.plot(180.*theta_az/np.pi, Saz, label='az')
plt.plot(180.*theta_el/np.pi, Sel, label='el')
plt.legend(loc='best')
plt.title("Idealized 1-d El/Az \n Radiation Patterns of a Planar Antenna")
plt.xlabel("Beam Angles $(^\circ)$")
plt.ylabel("Power")

plt.subplot(1, 2, 2)
plt.contourf(180.*Theta_az/np.pi, 180.*Theta_el/np.pi, 10.*np.log10(S), 20, cmap='magma')
plt.colorbar(label='Power (dB)')
plt.title("Idealized 2-d \nRadiation Pattern of a Planar Antenna")
plt.xlabel("Along track Beam Angle $ (^\circ)$")
plt.ylabel("Elevation Beam Angle $ (^\circ)$")
#plt.subplots_adjust(left=-0.1,top=0.9)
```

## 1.3 Beam Extent and Swath <a id="section-1.3"></a>

From the elevation beamwidth and other geometric parameters described above, we can calculate the range $\rho$ and ground range $\rho_g$ where the boresight and the 3-dB beam edges intersect the flat Earth. 
Specifically, we define:

| Parameter | Symbol |
| --- | --- |
| Generic Range | $\rho$ |
| Generic Ground Range | $\rho_g$ |
| Range at Boresight | $\rho_l$ |
| Ground Range at Boresight | $\rho_{l,g}$ |
| Range at Near Beam Edge | $\rho_n$ |
| Ground Range at Near Beam Edge | $\rho_{n,g}$ |
| Range at Far Beam Edge | $\rho_f$ |
| Ground Range at Far Beam Edge | $\rho_{f,g}$ |
| Reference azimuth for calculations | $s_0$ |
| Reference range for calculations | $\rho_0$ |


```python
rho_l = h_sc / np.cos(theta_l)
rho_lg = h_sc * np.sin(theta_l)
rho_n = h_sc / np.cos(theta_l-theta_W_a/2)
rho_ng = h_sc * np.sin(theta_l-theta_W_a/2)
rho_f = h_sc / np.cos(theta_l+theta_W_a/2)
rho_fg = h_sc * np.sin(theta_l+theta_W_a/2)
rho_sw = rho_fg - rho_ng
Delta_rho = c / (2. * B_r)
Delta_rho_ng = Delta_rho / np.sin(theta_l-theta_W_a/2)
Delta_rho_fg = Delta_rho / np.sin(theta_l+theta_W_a/2)
n_rs = int(np.round(rho_sw/Delta_rho))
rho_v = np.linspace(rho_n, rho_f, n_rs)
s_0 = 0.  # reference azimuth for defining calculations
rho_0 = rho_l
```

From these ranges, we can calculate the swath extent in meters on the ground, $\rho_{f,g}-\rho_{n,g}$.


```python
print("Boresight range: ", "{:.2f}".format(rho_l), "m")
print("Range swath: ", "{:.2f}".format(rho_sw), "m")
```

The along track beam extent on the ground in meters is given by $\rho \, \theta_{L_a}$, where $\rho$ varies across the swath. In the near range, the azimuth beam extent is


```python
print("Near range azimuth beam extent: ", "{:.2f}".format(rho_n * theta_L_a), "m")
```

while in the far range, the azimuth beam extent is


```python
print("Far range azimuth beam extent: ", "{:.2f}".format(rho_f * theta_L_a), "m")
```

We will use the far range azimuth beamwidth to define the simulation extent in azimuth. Let's specify an extent of 3 beamwidths to get a number of full synthetic apertures.

| Parameter | Symbol |
| --- | --- |
| Along Track Position Half Beamwidth In Advance of $s_0$ | $s_{s,{\rm hb}}$ |
| Along Track Position Half Beamwidth After $s_0$ | $s_{e,{\rm hb}}$ |
| Along Track Position At Simulation Start | $s_{s,{\rm sim}}$ |
| Along Track Position at Simulation End | $s_{e,{\rm sim}}$ |

where

\begin{equation}
\begin{array}{lcl}
s_{s,{\rm hb}} & = & s_0 - \rho_f \theta_{L_a} / 2 \\
s_{e,{\rm hb}} & = & s_0 + \rho_f \theta_{L_a} / 2 \\
s_{s,{\rm sim}} & = & s_0 - 3 \rho_f \theta_{L_a} / 2 \\
s_{e,{\rm sim}} & = & s_0 + 3 \rho_f \theta_{L_a} / 2
\end{array}
\end{equation}


```python
s_s_hb = s_0 - rho_f * theta_L_a / 2.  # half beamwidth
s_e_hb = s_0 + rho_f * theta_L_a / 2.  # half beamwidth
s_s_sim = s_0 - 3. * rho_f * theta_L_a / 2.  # total of 3 beamwidths for simulation
s_e_sim = s_0 + 3. * rho_f * theta_L_a / 2.  # total of 3 beamwidths for simulation
```

## 1.4 Phase and Doppler Frequency <a id="section-1.4"></a>

Let's pick a bright point on the ground, say at $(\rho_0, s_0)$, or equivalently $(\rho_{0g}, s_0)$. As the spacecraft flies along track and observes the point, the distance from the spacecraft to the point changes hyperbolically:

\begin{equation}
\rho(s;\rho_0,s_0) = \sqrt{(s-s_0)^2+\rho_0^2}
\end{equation}

The phase of the wave that travels from the spacecraft to the ground point and back is $-\frac{4 \pi}{\lambda} \rho(s)$. Over the extent of time that this point is illuminated, the range at which the point will appear in the echo, and the phase of the point, will vary as plotted below. To make the plot we need to understand how to properly sample the function we are plotting. Since the functions are hyperbolic, the range and phase increase quasi-quadratically. The derivative of the phase is the frequency, and this then varies quasi-linearly. 
This implies that there is bandwidth associated with the received signal in azimuth: because the radar is moving relative to the point on the ground, there is a Doppler shift of the signal that varies as the azimuth aspect angle changes. We will see later that the Doppler bandwidth is given approximately by the velocity and the azimuth antenna length: $B_d = 2 v_{sc} / L_a$.


```python
B_d = 2. * v_sc / L_a
```


```python
print("Doppler Bandwidth: ", "{:.2f}".format(B_d), "Hz")  # in Hz or cycles/second
```

Therefore, to sample the signal properly, we need to sample at this frequency for complex signals, or at twice this frequency for real signals, according to the Nyquist criterion.

The azimuth aperture time for any target is related to the azimuth beam extent on the ground: $t_a = \rho \, \theta_{L_a} / v_{sc}$.


```python
t_af = rho_f * theta_L_a / v_sc
```


```python
print("Synthetic Aperture time in far range: ", "{:.2f}".format(t_af), "sec")
```

The time-bandwidth product gives the number of points needed to adequately represent the signal over this frequency range.


```python
n_af = int(np.round(B_d * t_af))
```

To examine the function, we need to pick a point for the point target. Let's assume $s_0=0$ and $\rho_0 = \rho_l$. We also remove the large offset phase $-4 \pi \rho_0 / \lambda$, since the absolute phase is difficult to measure and arbitrary.

\begin{equation}
\phi_{az}(s;\rho_0,s_0) = -\frac{4\pi}{\lambda} (\rho(s;\rho_0,s_0) - \rho_0) = -\frac{4\pi}{\lambda} (\sqrt{(s-s_0)^2+\rho_0^2} - \rho_0)
\end{equation}

Assuming $(s-s_0) \ll \rho_0$, we can expand the square root by Taylor expansion to obtain

\begin{equation}
\phi_{az}(s;\rho_0,s_0) \approx -\frac{4\pi}{\lambda} \frac{1}{2}\frac{(s-s_0)^2}{\rho_0}
\end{equation}

which illustrates the quadratic nature of the phase to first order.

The spatial frequency in radians is then its derivative with respect to $s$

\begin{equation}
\omega_{az}(s;\rho_0,s_0) = -\frac{4\pi}{\lambda} \frac{(s-s_0)}{\rho_0}
\end{equation}

or in cycles

\begin{equation}
f_{az}(s;\rho_0,s_0) = -\frac{2}{\lambda} \frac{(s-s_0)}{\rho_0}
\end{equation}

or in Hertz

\begin{equation}
f_{az,hz}(s;\rho_0,s_0) = -\frac{2 v_{sc}}{\lambda} \frac{(s-s_0)}{\rho_0}
\end{equation}


```python
s = np.linspace(s_s_hb, s_e_hb, n_af)
phi_az = - (4. * np.pi * (np.sqrt(np.square(s-s_0)+rho_0*rho_0) / Lambda) - 4. * np.pi * rho_0 / Lambda)
phi_az_approx = -4. * np.pi * np.square(s-s_0) / (2*Lambda*rho_0)
f_az_hz = - (2. * v_sc / Lambda) * (s-s_0) / rho_0
```

In the plot on the left below, the exact and quadratic-approximation phase expressions are plotted. At this scale, the exact and approximate curves are indistinguishable. The plot on the right shows the difference on a scale where the impact of the approximation can be seen. It is a small fraction of the wavelength over the synthetic aperture. 
\n\n\n```python\nfig = plt.figure(figsize=(13, 6))\n\nax = fig.add_subplot(1,2,1)\nax.plot(s, phi_az, 'b', label=\"exact\")\nax.plot(s, phi_az_approx, 'r', label=\"quadratic approximation\")\nax.legend(loc='best')\nax.set_title(\"Along Track Phase History of an Illuminated Target\")\nax.set_xlabel(\"Along track position, s (m)\")\nax.set_ylabel(\"Phase (rad)\")\n\nax = fig.add_subplot(1,2,2)\nax.plot(s, (phi_az-phi_az_approx)/(2.*np.pi))\nax.set_title(\"Along Track Phase History Error of an Illuminated Target\")\nax.set_xlabel(\"Along track position, s (m)\")\nax.set_ylabel(\"Phase Error (wavelengths)\")\nplt.tight_layout()\n```\n\nThe Doppler bandwidth would be the range of the frequency function over this azimuth extent.\n\n\n```python\nfig = plt.figure(figsize=(13, 6))\n\nplt.title(\"Doppler History of an Illuminated Target\")\nplt.xlabel(\"Along track position, s (m)\")\nplt.ylabel(\"Doppler Frequency (Hz)\")\n\nplt.plot(s, f_az_hz)\nf_az_hz_bw = np.abs(f_az_hz[-1] - f_az_hz[0])\ndb_str = str(int(np.round(f_az_hz_bw)))\nplt.text(-7500.,-500.,\"Doppler Bandwidth = \"+db_str,fontsize=16);\n```\n\n## 1.5 Azimuth Resolution of the Synthetic Aperture\n\n\n\nGiven this bandwidth, what does this imply for resolution in azimuth? The time resolution is simply the reciprocal bandwidth: $ 1/f_{az,hz,bw}$, where $f_{az,hz,bw} = f_{az,hz}(s_{e,\\rm hb}) - f_{az,hz}(s_{s,\\rm hb})$. The spatial resolution would then be the velocity times this quantity: $ v_{sc}/f_{az,hz,bw}$.\n\n\n```python\nprint('Azimuth Resolution based on Doppler Bandwidth = ',np.round(100.*v_sc/f_az_hz_bw)/100.,' m')\n```\n\nThe theoretical resolution is typically quoted as $L_a/2$, half the antenna length in azimuth, independent of range and frequency. 
This can be seen by evaluating $f_{az,hz,bw}$ as follows:\n\n\\begin{eqnarray}\nf_{az,bw} &=& | f_{az}(s_{e,\\rm hb}) - f_{az}(s_{s,\\rm hb}) |\\\\\n &=& \\frac{2}{\\lambda} \\frac{(s_{e,\\rm hb}-s_{s,\\rm hb})}{\\rho_0}\\\\\n &=& \\frac{2}{\\lambda} \\theta_{L_a} = \\frac{2}{\\lambda} \\cdot 0.88 \\frac{\\lambda}{L_a} \\\\\n &=& 0.88 \\frac{2}{L_a}\n\\end{eqnarray}\n\n\n\n```python\nprint('Azimuth Resolution from calculation = ',np.round(100.*L_a/1.76)/100.,' m')\n```\n\nThe \"$L_a/2$\" azimuth resolution rule is an approximation that depends on the exact shape of the antenna pattern, but it is a good first approximation.\n\n## 1.6 The Radar Equation \n\n\n\nBefore we proceed with simulating an image with point targets, let's take a small diversion to develop an intuition about imaging performance for a radar with particular characteristics. This is typically accomplished through the radar equation, which calculates the signal-to-noise ratio of a system for a given scatterer on the ground. In this exercise, we'll consider a corner reflector as a scatterer.\n\nThe Radar Equation can be expressed as follows:\n\n\\begin{equation}\nP_R = P_T \\cdot G_T \\cdot \\frac{1}{4 \\pi \\rho^2} \\cdot \\sigma \\cdot \\frac{1}{4 \\pi \\rho^2} \\cdot A_a \\cdot \\epsilon\n\\end{equation}\n\nwhere the terms are defined as follows:\n\n| Parameter | Symbol | \n| --- | --- | \n| Received Power | $P_R$ |\n| Transmitted Power | $P_T$ |\n| Antenna Transmit Gain | $G_T$ |\n| Range of Target | $\\rho$ |\n| Radar Cross Section of Target | $\\sigma$ |\n| Receive Antenna Area | $A_a$ |\n| System Losses Fudge Factor | $\\epsilon$ |\n\nThis equation from left to right follows the transmitted signal through its echo path. The antenna radiates a total power of $P_T$ from the aperture. That power is directed into the antenna beam by virtue of its size, and therefore has directivity: a concentration, or gain $G_T$ in a particular direction. 
This power then propagates a distance $\\rho$, spreading out over a spherically shaped surface within the beam. At the target, the power density then is \n\n$P_T \\cdot G_T \\cdot \\frac{1}{4 \\pi \\rho^2}$.\n\nThe target presents a reflecting surface which is characterized by its radar cross section. The radar cross section is the effective area that would lead to the observed total power reflected from a target hit with an incident power density. Thus, the reflected power at the target is\n\n$P_T \\cdot G_T \\cdot \\frac{1}{4 \\pi \\rho^2} \\cdot \\sigma$. \n\nThis power then propagates back to the radar as a spherical wave, such that the power density at the radar is \n\n$P_T \\cdot G_T \\cdot \\frac{1}{4 \\pi \\rho^2} \\cdot \\sigma \\cdot \\frac{1}{4 \\pi \\rho^2}. $\n\nThis power density hits the receive aperture, which collects the power over its area $A_a$. The system losses fudge factor accounts for losses of power in the receive chain before the signal is detected and may include transmit chain losses as well, depending on the definition of $P_T$ (is it the radiated power, or the power generated by the amplifiers which then needs to work its way through the antenna system to be radiated?). Typical system losses include: circulator losses, radiation inefficiency of the antenna, and antenna feed losses. Together, these losses can reduce the overall power by several factors of two relative to what the radar power system generates. (There also are inefficiencies in getting the power from the spacecraft power system to the radar, but those are not included here.) \n\nTo calculate the received power, we need to define a target. In this tutorial, we are looking at corner reflectors. A corner reflector has a radar cross section \n\n\\begin{equation}\n\\sigma_{cr} = \\frac{4 \\pi L_{cr}^4}{3\\lambda^2}.\n\\end{equation}\n\nWe also need an expression for the gain of the antenna and the receive aperture size, which will be dependent on the look direction. 
The gain in the boresight direction $G_{Tl}$ is characterized by the beamwidths of the antenna:\n\n\\begin{equation}\nG_{Tl} = \\frac{4 \\pi}{\\theta_{L_a}\\theta_{W_a}}.\n\\end{equation}\n\nOff boresight, this gain will be reduced by the shape of the beam pattern on both transmit and receive $S(\\theta_{az}, \\theta_{el}; \\theta_l, \\theta_{sq})^2$ (as opposed to just $S$).\n\n\\begin{equation}\nG_T(\\theta_{az}, \\theta_{el}; \\theta_l, \\theta_{sq}) = G_{Tl} S^2(\\theta_{az}, \\theta_{el}; \\theta_l, \\theta_{sq}).\n\\end{equation}\n\n\n\n```python\nG_Tl = 4 * np.pi /(theta_L_a * theta_W_a)\n```\n\n\n```python\nprint (\"Transmit antenna gain = \",\"{:.2f}\".format(10.*np.log10(G_Tl)),\"dB\") # since all quantities are powers already, this is the power gain in dB.\n```\n\nFor a corner reflector target located on the ground at the boresight angle, we can calculate the receive power:\n\n\n```python\nsigma_cr = 4. * np.pi *L_cr**4/(3.*Lambda**2) \nA_a = L_a * W_a \nepsilon = 10.**(-5./10.) # assume 5 dB overall losses\nP_R = P_T * G_Tl * (1./(4.*np.pi*rho_l**2)) * sigma_cr * (1./(4.*np.pi*rho_l**2)) * A_a * epsilon\n```\n\n\n```python\nprint (\"Received power of corner reflector = \",\"{:.2f}\".format(10.*np.log10(P_R)),\"dB\") # in dB\n```\n\nIn order for the instrument to detect such a small amount of power, the noise level of the system must be commensurately small. The noise of an electronic system is given by \n\n\\begin{equation}\nP_N = k T_r B_r\n\\end{equation}\n\nwhere $k$ is the Boltzmann constant, $T_r$ is the noise temperature of the radar, and $B_r$ is the bandwidth of the radar (Skolnik, Merrill I., Radar Handbook (2nd Edition). McGraw-Hill, 1990. ISBN 978-0-07-057913-2). The noise temperature is not necessarily the physical temperature. It is a combination of noise introduced by electron motion in electronics above absolute zero temperature and other noise sources. 
A reasonable noise temperature would be around 300 K.\n\n\n```python\nP_N = k* T_r * B_r\n```\n\n\n```python\nprint (\"Noise power = \",\"{:.2f}\".format(10.*np.log10(P_N)),\"dB\")\n```\n\n\n```python\nSNR = P_R/P_N\n```\n\n\n```python\nprint(\"SNR of corner reflector in raw data = \",\"{:.2f}\".format(10.*np.log10(SNR)),\"dB\")\n```\n\nIt looks like the echo from even a bright point target is well below the noise floor in this radar, and that is in general true in the raw data. The signals from individual scatterers or resolution cells in the raw data are quite dim. It is not until we focus the image that we concentrate the energy into a single point and build adequate SNR.\n\n# 2.0 Simulating SAR data with point targets \n\n## 2.1 Simulating the transmitted pulse\n\n\n\n\nOur radar will transmit pulses of energy at a pulse rate $f_p$ sufficient to sample the Doppler spectrum, the bandwidth of which was computed above. For our purposes, the pulse rate is set slightly higher than the Doppler bandwidth, which reduces aliasing of energy from outside this region of the spectrum. For any given pulse, we transmit a pulse of duration $\\tau_r$, sweeping the frequency linearly, to generate a signal with the required bandwidth $B_r$. The time-bandwidth product determines the number of samples required in this complex signal. Note: in reality we transmit and receive real-valued waveforms as currents excited or detected on the antenna. However, radar systems are coherent by nature, and the received signals are typically converted in hardware or on the ground to complex-valued waveforms. For this tutorial, we imagine that the transmit waveform is complex for simplicity. 
The frequency-swept, or \"chirp,\" waveform can be expressed as\n\n$C_r(t) = e^{i \\phi_r(t)} {\\rm rect}\\big(\\frac{t}{\\tau_r}\\big)$ where $\\phi_r(t) = \\pi \\frac{B_r}{\\tau_r} t^2 = \\pi \\frac{B_r}{\\tau_r} (2\\rho/c)^2 = 4 \\pi \\frac{B_r}{c^2\\tau_r} \\rho^2$, and the rect function is 1 on the interval $-\\frac{1}{2}$ to $\\frac{1}{2}$, and 0 elsewhere. First, let's define these functions:\n\n\n```python\ndef rect(x):\n return np.abs(x) <= 0.5 \ndef win(x):\n return 1. # rectangular window\n# return 0.54 - 0.46 * np.cos(2.*np.pi*(x+0.5)) # hamming window defined on [-0.5,0.5] applied to suppress sidelobes.\n\ndef C_r_r(r):\n phi_r_r = 4. * np.pi * B_r / (c**2 * tau_r) * r**2\n return (np.cos(phi_r_r) + 1j * np.sin(phi_r_r)) * rect((r- c*tau_r/4.)/(c * tau_r/2.)) * win((r- c*tau_r/4.)/(c * tau_r/2.))\n```\n\nNow let's evaluate the chirp over the pulse length and examine some of its properties. The required number of samples is again the time-bandwidth product of the chirp.\n\n\n```python\nn_r = int(np.round(B_r * tau_r)) \nt_c = np.linspace(0.,tau_r,2*n_r) # create the arrays with twice the required points so real functions don't alias\nrho_c = c * t_c / 2.\nC_r = C_r_r(rho_c)\nphi_r = 4. 
* np.pi * B_r / (c**2 * tau_r) * rho_c**2\n```\n\n\n```python\nfig = plt.figure(figsize=(13, 5))\n\nax = fig.add_subplot(1,3,1)\nax.plot(rho_c,phi_r)\nax.set_title(\"Phase of the \\n Complex Chirp Signal\")\nax.set_xlabel(\"Range Distance along pulse (m)\")\nax.set_ylabel(\"Phase (Radians)\")\n\nax = fig.add_subplot(1,3,2)\nax.plot(rho_c,C_r.real)\nax.set_title(\"Real Part of the \\n Complex Chirp Signal\")\nax.set_xlabel(\"Range Distance along pulse (m)\")\nax.set_ylabel(\"Signal Magnitude\")\n\nax = fig.add_subplot(1,3,3)\nFC_r = np.fft.fft(C_r)\nfreq = np.fft.fftfreq(FC_r.shape[-1])\nnplts = int(np.round(FC_r.shape[-1]/2)) # only need the positive frequencies since it is a complex signal.\nax.plot(freq[0:nplts]*2.*B_r/1.e6, np.absolute(FC_r[0:nplts]))\nax.set_title(\"Spectrum of \\n Complex Chirp Signal\")\nax.set_xlabel(\"Frequency (MHz)\")\nax.set_ylabel(\"Magnitude\")\nplt.tight_layout()\n\nplt.show();\n```\n\nFeel free to play with the plotting limits of the array to explore the shape of the curve. Note that the complex chirp was created with twice the number of needed samples so that plotting the real or imaginary part, which is sinusoidal with increasing frequency, looks properly sampled in the plot. In reality, it is well sampled in the complex domain with half the point density.\n\n## 2.2 Simulating the Received Echoes\n\n\n\n\nNow we have a chirp signal, and when it encounters our corner reflector target, it will reflect some energy back to the radar. The echo signature of a corner reflector will be a delayed version of itself, with amplitude adjusted based on the shape of the antenna pattern and the losses as calculated in the radar equation above, and a phase shift proportional to the round-trip distance $2 \\rho$. 
Specifically, for a corner reflector at $s_{cr},\\rho_{cr}$, the received echo will be:\n\n\\begin{equation}\nE_{cr}(s,\\rho; s_{cr},\\rho_{cr}) = \\sqrt{G_T\\big(\\theta_{az,cr}(s;s_{cr},\\rho_{cr}), \\theta_{el,cr}(\\rho_{cr}); \\theta_{l}, \\theta_{sq}\\big )} e^{-i 4\\pi(\\rho_{\\rm sc-cr}-\\rho_{l})/\\lambda}C_r\\big(2(\\rho-\\rho_{\\rm sc-cr})/c\\big)\n\\end{equation}\n\nwhere \n\n$\\rho_{\\rm sc-cr}(s; s_{cr},\\rho_{cr}) = \\sqrt{(s-s_{cr})^2+\\rho_{cr}^2}$ is the distance from the spacecraft to the corner reflector,\n\n$\\theta_{az,cr} = \\sin^{-1} \\frac{s-s_{cr}}{\\rho_{\\rm sc-cr}}$ \n\nand\n\n$\\theta_{el,cr} = \\cos^{-1}\\frac{h_{sc}}{\\rho_{cr}}$\n\nand we have arbitrarily removed a large phase offset $4 \\pi \\rho_l / \\lambda$ to make the phase numbers more manageable. The rect function indicates that the chirp extent only covers the range defined by the pulse length.\n\n\n```python\ndef E_cr(sv,rhov,s_cr,rho_cr):\n rho_sc_cr = np.sqrt((sv-s_cr)**2+rho_cr**2)\n th_el_cr = np.arccos(h_sc/rho_cr)\n th_az_cr = np.arcsin((sv-s_cr)/rho_sc_cr)\n return np.sqrt(P_R) * S_p(th_az_cr,th_el_cr) * np.exp(-1j * 4. * np.pi * (rho_sc_cr-rho_l)/Lambda) * C_r_r (rhov-rho_sc_cr)\n\n```\n\nTo simulate the image, we must specify the location of the corner reflectors. First, we define the array of locations: 3 reflectors, each at different ranges and along-track positions. The along track positions will be at $s_0$ and a half beamwidth before and after $s_0$. The range positions will be at the boresight range $\\rho_l$, and an eighth of the swath before and after $\\rho_l$. 
For the purpose of speed and flexibility, the simulation allows using any or all of the three corner reflectors through an index vector.\n\n$P_{\\rm cr} = \\big [\\big (s_{s,{\\rm hb}},\\rho_l-\\frac{\\rho_f-\\rho_n}{8}\\big),(s_0,\\rho_l),\\big(s_{e,{\\rm hb}},\\rho_l+\\frac{\\rho_f-\\rho_n}{8}\\big) \\big ]$\n\n\n\n```python\nS_cr = np.array([s_s_hb,s_0,s_e_hb],dtype='float64')\nRho_cr = np.array([rho_l-(rho_f-rho_n)/8.,rho_l,rho_l+(rho_f-rho_n)/8.],dtype='float64')\nInd_cr=[1]\n```\n\nNow it is time to define the grid for computing the simulated data; this will be over a portion of the range swath that covers the corner reflectors (to save computation time) and the along-track extent defined by $s_{s,{\\rm sim}}$ and $s_{e,{\\rm sim}}$, which is specified in terms of the number of along-track beamwidths. The along track sample spacing is nominally set by the PRF $f_p$ as $\\Delta s = v_{sc}/f_p$. The range spacing is nominally set by the range bandwidth $\\Delta\\rho = c / 2B_r$. For both dimensions, we allow an oversampling factor so that we can easily examine the results without interpolation.\n\n\n```python\ns_ov = 1.\nrho_ov = 4.\nDelta_s = v_sc/f_p\nn_s_sim = int(np.round((s_e_sim-s_s_sim)*s_ov/Delta_s))\ns_sim = np.linspace(s_s_sim, s_e_sim, n_s_sim)\nrho_mean = (rho_f+rho_n)/2.\nrho_s_sim = rho_mean - (rho_f-rho_n)/4. # central half of swath\nrho_e_sim = rho_mean + (rho_f-rho_n)/4. # central half of swath\n#this is the default for range extent. 
We can narrow further to just surrounding the CRs\nRho_cr_min = rho_f\nRho_cr_max = rho_n\nfor i in range(len(Ind_cr)):\n Rho_cr_min=np.minimum(Rho_cr[Ind_cr[i]],Rho_cr_min)\nfor i in range(len(Ind_cr)):\n Rho_cr_max=np.maximum(Rho_cr[Ind_cr[i]],Rho_cr_max)\nrho_s_sim = Rho_cr_min - 4.* c* tau_r/2.\nrho_e_sim = Rho_cr_max + 4.* c* tau_r/2.\nn_rho_sim = int(np.round((rho_e_sim-rho_s_sim)*rho_ov/Delta_rho))\n\nrho_sim = np.linspace(rho_s_sim, rho_e_sim, n_rho_sim)\nS_sim, Rho_sim = np.meshgrid(s_sim,rho_sim)\n```\n\nLet's look at the corner reflectors on the grid:\n\n\n```python\nfig = plt.figure(figsize=(10, 7))\n\nplt.scatter(S_cr,Rho_cr, cmap='magma')\nplt.xlabel(\"Along track position (m)\")\nplt.ylabel(\"Range Position (m)\")\nplt.title(\"Corner Reflector Locations\")\nplt.xlim(s_s_sim,s_e_sim)\nplt.ylim(rho_s_sim,rho_e_sim);\nplt.tight_layout()\n```\n\nWith the grid defined, we can simply evaluate $E_{cr}$ over the grid.\n\n\n```python\n%%time\nE_cr_sim = np.zeros(S_sim.shape,dtype=np.complex128)\nprint (\"Field initialized\")\nfor i in range(len(Ind_cr)):\n E_cr_sim += E_cr(S_sim,Rho_sim,S_cr[Ind_cr[i]],Rho_cr[Ind_cr[i]])\n print(\"Completed CR\",i)\n```\n\n\n```python\nfig = plt.figure(figsize=(12, 6))\n#plt.contourf(S_sim, Rho_sim, 10.*np.log10(np.absolute(E_cr_sim)), 20, cmap='RdGy')\n#plt.pcolormesh(S_sim, Rho_sim, 10.*np.log10(np.absolute(E_cr_sim)),cmap='RdGy')\nextent = [s_s_sim, s_e_sim, rho_s_sim, rho_e_sim]\nplt.imshow(10.*np.log10(np.abs(E_cr_sim)), cmap='magma', extent=extent, origin='lower', aspect='auto')\nplt.colorbar(label='Power (dB)');\nplt.xlabel(\"Along track position (m)\")\nplt.ylabel(\"Range Position (m)\")\nplt.title(\"Magnitude of Simulated Echoes\");\nplt.tight_layout()\n```\n\nNow let's look at these echoes in the presence of the thermal noise signature, with a field with noise power $k T B_r$.\n\n\n\n```python\nnp.random.seed(1)\nE_cr_sim_w_noise = E_cr_sim + np.random.normal(loc=0.,scale=np.sqrt(P_N/2.),size=S_sim.shape) + 1j * 
np.random.normal(loc=0.,scale=np.sqrt(P_N/2.),size=S_sim.shape)\nprint (\"Random field calculated\")\n```\n\n\n```python\nfig = plt.figure(figsize=(12, 6))\nplt.imshow(10.*np.log10(np.abs(E_cr_sim_w_noise)), cmap='magma', extent=extent, origin='lower', aspect='auto')\nplt.colorbar(label='Power (dB)');\nplt.xlabel(\"Along track position (m)\")\nplt.ylabel(\"Range Position (m)\")\nplt.title(\"Magnitude of Simulated Echoes with Noise\");\nplt.tight_layout()\n```\n\nAs can be seen, these signals are buried in the noise. They don't pop out until we focus the image, taking advantage of the redundancy in observing any single target over a wide beam in azimuth, and a wide pulse in range. So it is time to focus the data.\n\n# 3.0 Focusing SAR Data - Range\n\n\n\nThe straightforward approach to focusing data is to develop a means for any given target to compensate the phase variations in range and azimuth induced by the range chirp and the azimuth hyperbolic range variability and then sum up that energy. For the isolated corner reflector target responses shown above this is easy to visualize, in that we are plotting the magnitude. If the phase for each of these points was a constant value, it is easy to see that integrating the points in the non-zero areas would give a big integration gain. Of course if the phase were constant everywhere, one could not distinguish one corner reflector from any other, so there would be no integration advantage. Because the phase history in range and azimuth is unique for each scatterer on the ground, the process of compensating the phase for that specific point, then integrating the energy localizes the return from that specific point. 
This can be partitioned into the phase compensation and integration in range, called \"range compression,\" or \"range correlation,\" followed by azimuth processing.\n\n## 3.1 Range Correlation - time domain\n\n\n\n\nThe range reference signal that matches the received echo of a corner reflector is simply \n\\begin{equation}\nC_r\\big(2(\\rho-\\rho_{\\rm sc-cr})/c\\big) = e^{i 4 \\pi \\frac{B_r}{c^2\\tau_r} (\\rho-\\rho_{\\rm sc-cr} )^2} {\\rm rect}\\big(\\frac{\\rho-\\rho_{\\rm sc-cr}}{c \\tau_r/2}\\big)\n\\end{equation}\n\nwhich is the same as the received echo $E_{cr}(s,\\rho; s_{cr}, \\rho_{cr})$ but without the propagation-related amplitude scale factor and phase components of the signal.\n\nTo recover the signal in range as a point, we would compute the conjugate function and integrate for each range point\n\n\n\\begin{equation}\nE_{cr,rc}(s,\\rho; s_{cr}, \\rho_{cr}) = \\displaystyle\\int E_{cr}(s,\\rho'; s_{cr}, \\rho_{cr}) C^*_r(s,\\rho'+\\rho; s_{cr}, \\rho_{cr}) d\\rho'\n\\end{equation}\n\nThis is by definition the cross correlation of these two functions. 
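The compressing effect of this cross correlation can be seen in a toy example before working through the integral analytically. The bandwidth, pulse length, and sample count below are hypothetical choices for illustration, not the simulation's parameters:

```python
import numpy as np

B, tau = 20e6, 10e-6                  # assumed bandwidth (Hz) and pulse length (s)
n = int(np.round(B * tau))            # time-bandwidth product = sample count
t = np.linspace(0.0, tau, n, endpoint=False)
chirp = np.exp(1j * np.pi * (B / tau) * t**2)   # unit-amplitude linear FM chirp

# Cross correlate the chirp with its own replica.  numpy conjugates the
# second argument, so this is exactly the matched-filter operation above.
corr = np.correlate(chirp, chirp, mode='full')
peak = int(np.argmax(np.abs(corr)))

print(peak == n - 1)          # True: the peak sits at zero lag
print(abs(corr[peak]))        # ~ n: coherent integration gain of the pulse
```

The long, low-amplitude pulse collapses into a narrow peak whose height is the number of coherently summed samples.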
For a corner reflector, this correlation integral can be evaluated as\n\n$\n\\begin{array}{ll}\nE_{cr,rc} & = & \\sqrt{G_T} e^{-i 4\\pi(\\rho_{\\rm sc-cr}-\\rho_{l})/\\lambda} \\displaystyle\\int C_r(s,\\rho'; s_{cr}, \\rho_{cr}) C^*_r(s,\\rho'+\\rho; s_{cr}, \\rho_{cr}) d\\rho' \\\\\n& = & \\sqrt{G_T} e^{-i 4\\pi(\\rho_{\\rm sc-cr}-\\rho_{l})/\\lambda} \\displaystyle\\int e^{i 4 \\pi \\frac{B_r}{c^2\\tau_r} (\\rho'-\\rho_{\\rm sc-cr} )^2} {\\rm rect}\\big(\\frac{\\rho'-\\rho_{\\rm sc-cr}}{c \\tau_r/2}\\big) e^{-i 4 \\pi \\frac{B_r}{c^2\\tau_r} (\\rho'+\\rho-\\rho_{\\rm sc-cr} )^2} {\\rm rect}\\big(\\frac{\\rho'+\\rho-\\rho_{\\rm sc-cr}}{c \\tau_r/2}\\big) d\\rho' \\\\\n& = & \\sqrt{G_T} e^{-i 4\\pi(\\rho_{\\rm sc-cr}-\\rho_{l})/\\lambda} e^{-i 4 \\pi \\frac{B_r}{c^2\\tau_r} \\rho^2} \\displaystyle\\int e^{-i 8 \\pi \\frac{B_r}{c^2\\tau_r} \\rho(\\rho'-\\rho_{\\rm sc-cr})} {\\rm rect}\\big(\\frac{\\rho'-\\rho_{\\rm sc-cr}}{c \\tau_r/2}\\big) {\\rm rect}\\big(\\frac{\\rho'+\\rho-\\rho_{\\rm sc-cr}}{c \\tau_r/2}\\big) d\\rho'\n\\end{array}\n$\n\n\n\n```python\n%%time\nn_r = int(np.round(rho_ov * B_r * tau_r)) \nrho_c_ov = np.linspace(0.,c*tau_r/2.,n_r) # properly oversample to match simulation oversampling\nC_r_ref = C_r_r(rho_c_ov) # compute the properly oversampled reference function\nrange_shape=np.correlate(E_cr_sim[:,0],C_r_ref,mode='valid').shape[0] # perform one correlation to determine the length of the output\nE_cr_rc = np.zeros((range_shape,E_cr_sim.shape[1]),dtype=np.complex128) # initialize\nfor i in range(E_cr_sim.shape[1]): # correlate\n E_cr_rc[:,i] = np.correlate(E_cr_sim[:,i],C_r_ref,mode='valid')\n```\n\nHere is the range-correlated signal over the simulation domain.\n\n\n```python\nn_rho_rc = E_cr_rc.shape[0]\nrho_s_rc = rho_s_sim\nrho_e_rc = rho_sim[n_rho_rc-1]\nextent = [s_s_sim, s_e_sim, rho_s_rc, rho_e_rc]\nfig = plt.figure(figsize=(8, 6))\nplt.imshow(20.*np.log10(np.abs(E_cr_rc)), cmap='magma', extent=extent, origin='lower', 
aspect='auto',vmax=-100.,vmin=-200.)\nplt.xlabel(\"Along track position (m)\")\nplt.ylabel(\"Range Position (m)\")\nplt.title(\"Power of Range Compressed Echoes - Using Time-domain Correlation\")\nplt.colorbar(label='Power (dB)');\n\n```\n\nLet's look at the range correlated output centered on the along-track position of the first corner reflector in the simulation list (may be different from the first defined depending on the index array above).\n\n\n```python\nind_s_cr=int(np.round((S_cr[Ind_cr[0]]-s_s_sim)*s_ov/Delta_s))\nind_rho_cr=int(np.round((Rho_cr[Ind_cr[0]]-rho_s_sim)*rho_ov/Delta_rho))\nE_cr_rc_1rl = E_cr_rc[:,ind_s_cr]\n```\n\n\n```python\nfig = plt.figure(figsize=(13, 5))\n\nax = fig.add_subplot(1,2,1)\nax.plot(rho_sim[0:E_cr_rc_1rl.shape[0]],20.*np.log10(np.abs(E_cr_rc_1rl)))\nax.set_title(\"Range Compressed Signal - One Range Line\")\nax.set_xlabel(\"Range (m)\")\nax.set_ylabel(\"Power\")\n\nax = fig.add_subplot(1,2,2)\nsr=ind_rho_cr-50\ner=ind_rho_cr+50\nax.plot(rho_sim[sr:er],20.*np.log10(np.abs(E_cr_rc[sr:er,ind_s_cr]))-np.max(20.*np.log10(np.abs(E_cr_rc[sr:er,ind_s_cr]))))\nax.set_title(\"Range Compressed Signal - One Range Line - One CR\")\nax.set_xlabel(\"Range (m)\")\nax.set_ylabel(\"Signal Power Relative to Peak\")\n\nplt.show();\nplt.tight_layout()\n```\n\n## 3.2 Range Correlation - frequency domain\n\n\n\n\nThis can also be accomplished with FFT-based circular convolution, and it runs considerably faster. To accomplish this most straightforwardly, we can create a version of the chirp that is the same length as the range vector; then when we take the FFT, both will be the same length. 
Subscript \"rl\" stands for \"range line.\" Subscript \"fd\" stands for \"frequency domain.\" First compute the reference function's spectrum:\n\n\n```python\nC_r_ref_rl = np.zeros(n_rho_sim) + 1j *np.zeros(n_rho_sim)\nC_r_ref_rl[0:n_r] = C_r_ref\nC_r_REF_rl = np.conjugate(np.fft.fft(C_r_ref_rl))\nfreq = np.fft.fftfreq(C_r_REF_rl.shape[-1])\nfig = plt.figure(figsize=(8, 5))\nplt.plot(freq,np.absolute(C_r_REF_rl))\nplt.xlabel(\"Frequency (inverse samples)\")\nplt.ylabel(\"Spectral Power\")\nplt.title(\"Power of Range Reference Function Spectrum\");\n```\n\nNow perform the correlation through circular convolution in the frequency domain.\n\n\n```python\n%%time\nE_cr_rc_fd = np.zeros((E_cr_sim.shape),dtype=np.complex128)\nfor i in range(E_cr_sim.shape[1]):\n E_cr_rc_fd[:,i] = np.fft.ifft(np.fft.fft(E_cr_sim[:,i])*C_r_REF_rl)\n```\n\n\n```python\nplt.style.use('seaborn-whitegrid')\n\nplt.figure(figsize=(13, 6))\nplt.subplot(1,2,1)\nextent = [s_s_sim, s_e_sim, rho_s_sim, rho_e_sim]\nplt.imshow(20.*np.log10(np.abs(E_cr_rc_fd)), cmap='magma', extent=extent, origin='lower', aspect='auto',vmax=-100.,vmin=-200.)\nplt.colorbar(label='Power (dB)');\nplt.xlabel(\"Along track position (m)\")\nplt.ylabel(\"Range Position (m)\")\nplt.title(\"Power of Range Compressed Echoes \\n Using FFT Correlation\");\n\nplt.subplot(1,2,2)\ntemp = 20.*np.log10(np.abs(E_cr_rc_fd[sr:er,int(np.round(E_cr_rc_fd.shape[1]/2))]))\nplt.plot(rho_sim[sr:er],temp - np.max(temp))\nplt.title(\"Range Compressed Signal \\n One Range Line - One CR - Using FFT Correlation\")\nplt.xlabel(\"Range (m)\")\nplt.ylabel(\"Signal Power Relative to Peak\");\nplt.tight_layout()\n#plt.show();\nfigname = \"RangeCompressed_FFT.png\"\nplt.savefig(figname, dpi=300, transparent=False)\n```\n\n
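The equivalence of the time-domain and frequency-domain correlations can also be checked in isolation. The data length, reference length, and chirp rate below are arbitrary illustrative choices, not the simulation's values:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=256) + 1j * rng.normal(size=256)  # stand-in range line
ref = np.exp(1j * np.pi * 0.01 * np.arange(32)**2)       # stand-in short chirp

# Time-domain correlation (numpy conjugates the second argument)
td = np.correlate(data, ref, mode='valid')

# Frequency-domain: zero-pad the reference to the data length, conjugate its
# spectrum, multiply, and inverse transform.  This is circular correlation;
# its first len(data) - len(ref) + 1 samples match the 'valid' linear result.
ref_pad = np.zeros(len(data), dtype=complex)
ref_pad[:len(ref)] = ref
fd = np.fft.ifft(np.fft.fft(data) * np.conjugate(np.fft.fft(ref_pad)))

print(np.allclose(td, fd[:len(td)]))   # True
```

This is the same recipe used above: pad the reference to the range-line length, conjugate its FFT, multiply, and invert.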
\n\nASSIGNMENT #2: Discuss Range Compression Result\n\nAnswer the following questions regarding the range-focused data set:\n\n1. Question 2.1: Download and present this figure in your document. Provide a figure caption describing what is shown in the two panels of this figure.\n2. Question 2.2: The range-compressed echo in the left panel shows how the range to the target changes significantly as the sensor passes the target along its orbit (range cell migration). Zoom into the figure and provide a rough estimate of the range cell migration within the main beam of the antenna (the main beam extends roughly from along-track position -15 km to +15 km). Please give the range cell migration in meters.\n3. Question 2.3: Measure the achieved range resolution by zooming into the panel on the right and measuring the width of the focused peak at the -3 dB power position. Please give your estimate of the range resolution in meters.\n\n
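As a sanity reference for measurements like these, the theoretical range resolution and an expected migration magnitude can be computed directly. The bandwidth, closest-approach range, and half-beam extent below are assumed illustration values, not necessarily those used in this simulation:

```python
import numpy as np

c = 3.0e8        # speed of light (m/s)
B_r = 80e6       # assumed chirp bandwidth (Hz)
rho_0 = 850e3    # assumed range of closest approach (m)
s_hb = 15e3      # assumed half-beam along-track extent (m)

# Theoretical range resolution set by the chirp bandwidth
delta_rho = c / (2.0 * B_r)

# Range cell migration: extra range at the edge of the main beam relative
# to the range of closest approach
migration = np.sqrt(rho_0**2 + s_hb**2) - rho_0

print(delta_rho)   # 1.875 m for an 80 MHz chirp
print(migration)   # ~132 m across the half beam for these numbers
```

Measured values should be compared against the same expressions evaluated with the simulation's own parameters.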
\n\n# 4.0 Focusing SAR Data - Azimuth \n\n\n\n\nNow that we understand the range correlation, it is time to do the same thing in the along-track, or azimuth direction. The complication in azimuth is that each target on the ground expresses its reflected energy at different ranges in each pulse. This can be seen easily in the 2d plots above where the bright return in the range compressed data from a single corner reflector \"migrates\" through the range as a function of azimuth position - called range migration. This is typically addressed in two ways:\n\n1. Correlate each point on the ground with its exact hyperbolic replica, calculable from the knowledge of the radar motion. This is called back-projection or time-domain processing, and can be quite computationally expensive.\n2. Approximate the time-domain approach by working in the frequency domain to compensate for the migration, then perform a circular convolution.\n\nWe'll take these two approaches one at a time, building up the processing step by step to illustrate the impacts of the approximations.\n\n\n## 4.1 Azimuth reference function \n\n\n\n\nFirst let's look at the signal in azimuth at the peak of the range response of the first corner reflector.\n\n\n```python\nrho_ind_cr = int(np.round((Rho_cr[Ind_cr[0]]-rho_s_sim)*rho_ov/Delta_rho))\n```\n\n\n```python\nplt.figure(figsize=(13, 6))\nplt.title(\"Range Compressed Signal - One Azimuth Line - One CR\")\nplt.xlabel(\"Range (m)\")\nplt.ylabel(\"Signal Power\")\nplt.plot(s_sim,20.*np.log10(np.abs(E_cr_rc[rho_ind_cr,:])));\nplt.tight_layout()\n```\n\nThe power of the received signal from a corner reflector in azimuth should follow the antenna pattern, but instead the power at a fixed range has a plateau around the corner reflector, then diminishes in an oscillatory fashion. These oscillations are a consequence of the fact that the range from the spacecraft to the corner reflector is changing with $s$. 
When the range change is such that the peak of the response migrates out of the range bin, we will begin to see the sidelobes of the range response. The observed null would occur at the $s$ location at which the range changes by the distance in range to the first null of the range sidelobes, which is $\\Delta\\rho$. Thus\n\n$s_{null} = \\sqrt{(\\rho_{cr}+\\Delta\\rho)^2 - \\rho^2_{cr}}$\n\n\n\n```python\ns_null = np.sqrt((Rho_cr[Ind_cr[0]]+Delta_rho)**2-(Rho_cr[Ind_cr[0]])**2)\nprint('Azimuth null position due to range migration =',np.round(100.*s_null)/100.,'m')\n```\n\nwhich is about where it is observed in the figure above. Let's instead track the expected location of the peak in azimuth. This is accomplished by noting that\n\n$ \\rho(s) = \\sqrt{(s-s_{cr})^2+\\rho^2_{cr}}$\n\nWe can use this to look up the range for any given $s$ in the range compressed data. The left panel below shows the range hyperbola as a function of along-track position. The central panel then shows the signal power in the range compressed signal along this hyperbola, which now looks like the antenna pattern as one would expect, indicating we are tracking the range migration well. The phase along this curve should vary hyperbolically across azimuth. The right panel plots this phase; it looks like a wrapped hyperbolic function, so all is well. 
This phase will be the basis for the azimuth reference function.\n\n\n```python\nfig = plt.figure(figsize=(13, 5))\n\nax = fig.add_subplot(1,3,1)\nrho_of_s = np.sqrt((s_sim-S_cr[Ind_cr[0]])**2 + Rho_cr[Ind_cr[0]]**2)\nax.plot(s_sim,rho_of_s)\nax.set_title(\"Range as a function of azimuth for \\n First CR\")\nax.set_xlabel(\"Azimuth (m)\")\nax.set_ylabel(\"Range (m)\")\n\nax = fig.add_subplot(1,3,2)\nind_rho_of_s = np.round((rho_of_s-rho_s_sim)*rho_ov/Delta_rho).astype(int)\nE_cr_rc_1az = np.zeros(n_s_sim) + 1j *np.zeros(n_s_sim)\nfor i_s in range(n_s_sim):\n E_cr_rc_1az[i_s] = E_cr_rc[ind_rho_of_s[i_s],i_s]\nax.plot(s_sim,20.*np.log10(np.abs(E_cr_rc_1az)))\nax.set_title(\"Range Compressed Signal \\n One Azimuth Line Following range \\n One CR\")\nax.set_xlabel(\"Azimuth (m)\")\nax.set_ylabel(\"Signal Power\")\n\nax = fig.add_subplot(1,3,3)\nss = int(n_s_sim/2)-100\nse = int(n_s_sim/2)+100\n\nax.set_title(\"Range Compressed Signal \\n One Azimuth Line Following range \\n One CR\")\nax.set_xlabel(\"Azimuth (m)\")\nax.set_ylabel(\"Phase (rad)\")\nax.plot(s_sim[ss:se],np.arctan2(np.imag(E_cr_rc_1az[ss:se]),np.real(E_cr_rc_1az[ss:se])));\nplt.tight_layout()\n```\n\n## 4.2 Azimuth Focusing - time domain\n\n### 4.2.1 Coarsest Approximation: Straight Time-domain Correlation with a Constant Reference Function\n\n\n\n\nThe idea here is to assume that the azimuth reference function is as simple and constant as the range reference function. In this case, we can just perform a simple time domain correlation in azimuth. This ignores range migration, and the variation in range migration magnitude as a function of range. For small synthetic apertures and coarse range resolution, this may be adequate. Let's see how bad it can be. Let's use the mid-range as our reference and calculate the azimuth response there over the synthetic aperture. 
We'll then use that single function to correlate each of the range bins in the simulated range compressed pulse sequence and see what happens.\n\n\n\n```python\n%%time\n\ns_s_sa = rho_l * np.tan(theta_sq-theta_L_a/2.) # half beamwidth, w/ squint, relative to s_im\ns_e_sa = rho_l * np.tan(theta_sq+theta_L_a/2.) # half beamwidth, w/ squint, relative to s_im\nn_s_sa = np.round((s_e_sa-s_s_sa)* s_ov/Delta_s).astype(int)\n\ns_sa = np.linspace(s_s_sa,s_e_sa,n_s_sa)\nrho_sa = np.sqrt(rho_l**2+(s_sa)**2)\nC_az_ref = np.exp(-1j*4.*np.pi*rho_sa/Lambda) # correlate of observed phase history\naz_shape=np.correlate(E_cr_rc[0,:],C_az_ref,mode='full').shape[0] # perform one correlation to determine the length of the output\nE_cr_rcac = np.zeros((E_cr_rc.shape[0],az_shape),dtype=np.complex128) # initialize\nfor i in range(E_cr_rc.shape[0]): # correlate\n E_cr_rcac[i,:] = np.correlate(E_cr_rc[i,:],C_az_ref,mode='full')\n```\n\n\n```python\n# starting s of correlation array will be extended by half the azimuth reference function length\nind_s_cr_ac=int(np.round((S_cr[Ind_cr[0]]-s_s_sim)*s_ov/Delta_s))+int(np.round(n_s_sa/2))\nind_rho_cr_ac=ind_rho_cr\n```\n\n\n```python\nplt.figure(figsize=(13, 6))\nplt.subplot(1,2,1)\nextent = [s_s_sim-Delta_s*n_s_sa/2./s_ov, s_e_sim+Delta_s*n_s_sa/2./s_ov, rho_s_rc, rho_e_rc]\nplt.imshow(20.*np.log10(np.abs(E_cr_rcac)), cmap='magma', extent=extent, origin='lower', aspect='auto')\nplt.colorbar(label='Power (dB)');\nplt.xlabel(\"Along track position (m)\")\nplt.ylabel(\"Range Position (m)\")\nplt.title(\"Power of Image - Using Correlation\");\n\nplt.subplot(1,2,2)\nplt.title(\"Power of Image - Using Correlation - Zoom\")\nplt.xlabel(\"Along track position (pixel)\")\nplt.ylabel(\"Range Position (pixel)\")\nplt.imshow(20.*np.log10(np.abs(E_cr_rcac[ind_rho_cr_ac-250:ind_rho_cr_ac+250,ind_s_cr_ac-500:ind_s_cr_ac+500])), cmap='magma', origin='lower', aspect='auto')\nplt.colorbar(label='Power (dB)');\n\n#plt.show();\nplt.tight_layout()\n```\n\nNot 
exactly a perfect corner reflector response, but energy was certainly concentrated. Note also the extra energy focused at 1/4 and 3/4 of the distance across the image. These are azimuth ambiguities, where the energy present outside the main lobe of the antenna is focused at an amplitude attenuated by the antenna pattern. These should occur at the azimuth location where the next stationary phase point is in the aliased signal, which is dependent on the exact sampling rate relative to the phase rate of the signal. Their appearance at ~ +/- 21 km is about right.\n\n### 4.2.2 Next Coarsest Approximation: Straight Time-domain Correlation with a Range-variable Reference Function\n\n\nHere we emphasize the notion that the reference function varies with range. Even though we are not taking into account range migration, we have a better match to the phase at a given range. We calculate the range variable reference function as a matrix, and keep it handy for the circular convolution step to come. This is a little tricky since the limits of the synthetic aperture change with range and we want the convolutions to be consistently aligned across range. So we must first create an array of the same length in azimuth as the number of pulses, then populate the array accordingly.\n\nThe synthetic aperture extents vary with range, and need to take into account any azimuth squint. 
For a point at $(s_{im},\\rho_{im})$ observed with squint $\\theta_{sq}$, the range of closest approach of the spacecraft when the boresight intersects this point is given by\n\n$ \\rho_{ca} = \\rho_{im} \\cos\\theta_{sq}$ \n\nand the along-track position of this point relative to closest approach is\n\n$s_{ca} = \\rho_{ca} \\tan \\theta_{sq}$\n\nThe limits of the synthetic aperture then are given by the angle subtended around this squint angle:\n\n$s_{s,sa} = \\rho_{ca} \\tan (\\theta_{sq} - \\theta_{L_a}/2) \\qquad s_{e,sa} = \\rho_{ca} \\tan (\\theta_{sq} + \\theta_{L_a}/2)$\n\n$\\rho_{s,sa} = \\rho_{ca} / \\cos(\\theta_{sq}-\\theta_{L_a}/2) \\qquad \\rho_{e,sa} = \\rho_{ca} / \\cos(\\theta_{sq}+\\theta_{L_a}/2)$\n\nWith these limits, the task is now to fill an array with a full set of range-dependent reference functions. The tricky part is calculating the functions on a regular grid with limits that vary with range. There is a lot of indexing and limit checking consequently.\n\n\n\n```python\n# calculate closest approach range and azimuth position for squinted geometry\n\nrho_rc = rho_sim[0:n_rho_rc]\nrho_ca = rho_rc * np.cos(theta_sq)\ns_ca = rho_ca * np.tan(theta_sq)\n\n#define the synthetic aperture extents across range\n\ns_s_sa_v_rho = rho_ca * np.tan(theta_sq-theta_L_a/2.) # half beamwidth, w/ squint, relative to s_im\ns_e_sa_v_rho = rho_ca * np.tan(theta_sq+theta_L_a/2.) 
# half beamwidth, w/ squint, relative to s_im\nn_s_sa_v_rho = np.round((s_e_sa_v_rho-s_s_sa_v_rho)* s_ov/Delta_s).astype(int)\n\nrho_s_sa_v_rho = rho_ca / np.cos(theta_sq-theta_L_a/2)\nrho_e_sa_v_rho = rho_ca / np.cos(theta_sq+theta_L_a/2)\nn_rho_sa_v_rho = np.round((rho_e_sa_v_rho-rho_s_sa_v_rho)* s_ov/Delta_s).astype(int)\n\n# calculate the indices along track that define the limits for each range.\n\nind_s_s_sa_v_rho = np.round((s_s_sa_v_rho-s_0)/(Delta_s/s_ov)).astype(int)\nind_s_e_sa_v_rho = np.round((s_e_sa_v_rho-s_0)/(Delta_s/s_ov)).astype(int)\n```\n\n\n```python\n# to define the reference function array, find the maximum extent needed\n\nn_s_sa_v_rho = (ind_s_e_sa_v_rho-ind_s_s_sa_v_rho)+1\nn_s_sa_v_rho_max = np.max(n_s_sa_v_rho)\nind_s_s_sa_v_rho_min = np.min(ind_s_s_sa_v_rho)\nind_s_e_sa_v_rho_max = np.max(ind_s_e_sa_v_rho)\n\n# now calculate the reference function placed consistently in the oversized array\n\ns_s_sa = s_0 + ind_s_s_sa_v_rho*Delta_s/s_ov\ns_e_sa = s_0 + ind_s_e_sa_v_rho*Delta_s/s_ov\n\n# initialize the reference array\n\nC_az_ref = np.zeros((n_rho_rc,n_s_sa_v_rho_max),dtype=np.complex128)\n\n# populate the reference array\n\nfor i in range(n_rho_rc): \n s_sa = np.linspace(s_s_sa[i],s_e_sa[i],n_s_sa_v_rho[i])\n rho_sa = np.sqrt(rho_ca[i]**2+(s_sa)**2)\n ssind = ind_s_s_sa_v_rho[i]-ind_s_s_sa_v_rho_min\n seind = ind_s_e_sa_v_rho[i]-ind_s_s_sa_v_rho_min\n C_az_ref[i,ssind:seind+1] = np.exp(-1j*4.*np.pi*rho_sa/Lambda) # correlate of observed phase history\n```\n\n\n```python\nplt.figure(figsize=(13, 6))\nplt.subplot(1,2,1)\nplt.imshow((np.abs(C_az_ref)), cmap='magma', origin='lower', aspect='auto')\nplt.colorbar(label='Power (dB)');\nplt.xlabel(\"Along track position (pixel)\")\nplt.ylabel(\"Range Position (pixel)\")\nplt.title(\"Power of Azimuth Reference Function Image\");\n\nindc = 
int(np.round(C_az_ref.shape[1]/2.))\n\nplt.subplot(1,2,2)\nplt.imshow(np.arctan2(np.imag(C_az_ref[:,indc-200:indc+200]),np.real(C_az_ref[:,indc-200:indc+200])), origin='lower', aspect='auto')\nplt.colorbar(label='Phase (rad)');\nplt.xlabel(\"Along track position (pixel)\")\nplt.ylabel(\"Range Position (pixel)\")\nplt.title(\"Phase of Azimuth Reference Function Image\");\n\nplt.tight_layout()\n```\n\nThe jittery appearance of this image is due to the fact that the phase reference is changing rapidly as a function of range, and given the sampling in range, the sampling is not regular relative to the phase wrapping rate. Therefore to see a smoother version of this image, one needs to unwrap the image. \n\n\n```python\n%%time\n\n# finally do the correlation\n\naz_shape=np.correlate(E_cr_rc[0,:],C_az_ref[0,:],mode='full').shape[0] # perform one correlation to determine the length of the output\nE_cr_rcac2 = np.zeros((n_rho_rc,az_shape),dtype=np.complex128) # initialize\nfor i in range(n_rho_rc): # correlate\n E_cr_rcac2[i,:] = np.correlate(E_cr_rc[i,:],C_az_ref[i,:],mode='full')\n```\n\n\n```python\nplt.figure(figsize=(13, 6))\nplt.subplot(1,2,1)\nextent = [s_s_sim-Delta_s*n_s_sa_v_rho_max/2./s_ov, s_e_sim+Delta_s*n_s_sa_v_rho_max/2./s_ov, rho_s_rc, rho_e_rc]\nplt.xlabel(\"Along track position (m)\")\nplt.ylabel(\"Range Position (m)\")\nplt.title(\"Power of Image - Using Correlation\")\nplt.imshow(20.*np.log10(np.abs(E_cr_rcac2)), cmap='magma', extent=extent, origin='lower', aspect='auto',vmax=-40.,vmin=-80.)\nplt.colorbar(label='Power (dB)')\n\nplt.subplot(1,2,2)\nplt.imshow(20.*np.log10(np.abs(E_cr_rcac2[ind_rho_cr_ac-250:ind_rho_cr_ac+250,ind_s_cr_ac-500:ind_s_cr_ac+500])), cmap='magma', origin='lower', aspect='auto',vmax=-40.,vmin=-80.)\nplt.colorbar(label='Power (dB)')\nplt.xlabel(\"Along track position (pixel)\")\nplt.ylabel(\"Range Position (pixel)\")\nplt.title(\"Power of Image - Using Correlation - Zoom\")\n\nplt.tight_layout()\n\n```\n\n## 4.3 Azimuth Focusing 
- frequency domain\n\n### 4.3.1 Preparation for Range Migration Correction\n\n\n\n\nNote we have three corner reflectors located at unique ranges and azimuth locations. It is a well-known property of Fourier Transforms that a translation of position in one domain is equivalent to a phase ramp in the other domain.\n\n$ \displaystyle\int f(t+\delta t) e^{-i \omega t} dt = \displaystyle\int f(t') e^{-i \omega (t'-\delta t)} dt = F(\omega)e^{i \omega\delta t} $\n\nTherefore, performing the Fourier transform of the range-compressed pulses in the azimuth direction will align the range-migration history *as a function of Doppler frequency* of all ground points at a given range. If we express range as a function of Doppler frequency, then we can map the energy along the range curve to a constant range, nominally the closest approach range for broadside imaging, but it could be any constant range. This will allow us to exploit the convolutional properties of Fourier transforms in the azimuth direction.\n\n\n```python\nE_cr_rc_azfd = np.zeros((E_cr_rc.shape),dtype=np.complex128)\n\nfor i in range(E_cr_rc.shape[0]): # transform each range bin\n E_cr_rc_azfd[i,:] = np.fft.fft(E_cr_rc[i,:])\n```\n\n\n```python\nplt.figure(figsize=(13, 6))\nplt.imshow(20.*np.log10(np.abs(E_cr_rc_azfd)), cmap='magma', origin='lower', aspect='auto',vmax=-60.,vmin=-100.)\nplt.colorbar(label='Power (dB)');\nplt.xlabel(\"Along track Frequency (pixel)\")\nplt.ylabel(\"Range Position (pixel)\")\nplt.title(\"Power of Azimuth Spectrum \");\n\n```\n\n### 4.3.2 Next Coarsest Approximation: Frequency-domain Correlation with a Range-variable Reference Function; no Range Migration Correction\n\n\nNow that we are in the frequency domain, we perform the equivalent of the time domain correlation by also transforming the reference function matrix, multiplying, then inverse transforming. 
In the correlation line using FFTs, we have combined many operations in one line:\n\n* Shift the azimuth chirp by half its length to center it at 0 delay; this centers the convolution properly (the roll function). \n* Perform the FFT of it to put it in the spectral domain (remember it changes with each range).\n* Multiply by the spectrum of the data at that range.\n* Inverse-FFT to bring it back to the time domain. \n\n\n\n```python\n%%time\n\n# the correlation by FFT; no range migration\n\nE_cr_rcac_fd = np.zeros((E_cr_rc.shape),dtype=np.complex128) # initialize\nC_az_REF_al = np.zeros(E_cr_rc.shape[1],dtype=np.complex128)\n\nfor i in range(n_rho_rc): # correlate\n C_az_REF_al = np.zeros(E_cr_rc.shape[1],dtype=np.complex128)\n C_az_REF_al[0:C_az_ref.shape[1]] = np.conjugate(C_az_ref[i,:])\n E_cr_rcac_fd[i,:] = np.fft.ifft(np.fft.fft(E_cr_rc[i,:])*\n np.fft.fft(np.roll(C_az_REF_al,\n -int(C_az_ref.shape[1]/2))))\n \n```\n\n\n```python\nplt.figure(figsize=(13, 6))\nplt.subplot(1,2,1)\nextent = [s_s_sim, s_e_sim, rho_s_rc, rho_e_rc]\nplt.imshow(20.*np.log10(np.abs(E_cr_rcac_fd)), cmap='magma', extent=extent, origin='lower', aspect='auto',vmax=-40.,vmin=-80.)\nplt.colorbar(label='Power (dB)');\nplt.xlabel(\"Along track position (m)\")\nplt.ylabel(\"Range Position (m)\")\nplt.title(\"Power of Image - Using FFT Correlation\")\n\nplt.subplot(1,2,2)\nplt.imshow(20.*np.log10(np.abs(E_cr_rcac_fd[ind_rho_cr-250:ind_rho_cr+250,ind_s_cr-500:ind_s_cr+500])), cmap='magma', origin='lower', aspect='auto',vmax=-40.,vmin=-80.)\nplt.colorbar(label='Power (dB)');\nplt.xlabel(\"Along track position (pixel)\")\nplt.ylabel(\"Range Position (pixel)\")\nplt.title(\"Power of Image - Using FFT Correlation - Zoom\");\nplt.tight_layout()\n```\n\n### 4.3.3 Range Migration Correction\n\n\nNow in the Fourier domain in azimuth above, we can see that there is a migration of the brightness to larger range as frequency increases (zero is at the left and right edges of the above image). 
This will be the case independent of the number and azimuth location of the corner reflectors. They all collapse back to a migration curve centered on zero frequency at the appropriate range. If there is squint, the migration curve will still be centered on zero frequency, but there will only be energy in the portion of the spectrum dictated by the antenna beam. We can use the range-Doppler relationship to shift energy at a given Doppler frequency to its appropriate range bins. As described in Eq. (6)\n\n\begin{equation}\n\phi_{az}(s;\rho_0,s_0) = -\frac{4\pi}{\lambda} \big(\rho(s;\rho_0,s_0) - \rho_0\big) = -\frac{4\pi}{\lambda} \bigg (\sqrt{(s-s_0)^2+\rho_0^2}- \rho_0\bigg)\n\end{equation}\n\nNoting that in our specialized geometry $s=v_{sc} t$, the time derivative of $\phi_{az}$ can be written\n\n\begin{equation}\n\omega_{az} = \frac{\partial}{\partial t} \phi_{az}(s;\rho_0,s_0) = -\frac{4\pi}{\lambda} \cdot \frac{1}{2}\big ((s-s_0)^2+\rho_0^2 \big)^{-1/2}\cdot 2(s-s_0) \cdot v_{sc}\n\end{equation}\n\nConsolidating terms, and noting that $(s-s_0)^2 = \rho^2-\rho_0^2$, the Doppler frequency in Hertz is:\n\n\begin{equation}\nf_{hz,az} = \frac{\omega_{az}}{2\pi} = -\frac{2}{\lambda} \cdot \frac{1}{\rho} \big (\rho^2-\rho_0^2\big )^{1/2} \cdot v_{sc}\n\end{equation}\n\nor, squaring both sides, \n\n\begin{equation}\nf^2_{hz,az} = \frac{4}{\lambda^2} \cdot \frac{1}{\rho^2} \big (\rho^2-\rho_0^2\big ) \cdot v^2_{sc}\n\end{equation}\n\nThis equation can be rearranged to solve for range as a function of Doppler frequency:\n\n\begin{equation}\n\rho(f_{hz,az}) = \rho_0 \bigg (1-\frac{\lambda^2 f^2_{hz,az}}{4 v^2_{sc}}\bigg)^{-1/2}\n\end{equation}\n\nThis can be reduced by Taylor expansion to the more familiar expression:\n\n\begin{equation}\n\rho(f_{hz,az}) \approx \rho_0 \bigg (1+\frac{\lambda^2 f^2_{hz,az}}{8 v^2_{sc}}\bigg)\n\end{equation}\n\n\n\nEquation (26) can be used to apply the correction in the 
azimuth frequency domain. For each Doppler frequency bin, we need to differentially shift the range position of each point by a range-dependent amount that brings all points on a migration curve to the same range bin. This will allow proper compression of the energy in azimuth with no range migration loss. \n\nBecause the Fourier Transform is on sampled data, the azimuth spectrum is circular, so if there is significant squint, we need to be careful to compute the range migration curve with respect to the beam edges in the Doppler domain. The reference range will be the range at beam center, $\rho_{dc}$, not the closest approach range. The Doppler centroid as a function of range is given by\n\n\begin{equation}\nf_{dc}(\rho_{dc}) = \frac{2 v_{sc}}{\lambda} \sin\theta \sin\theta_{sq} = \frac{2 v_{sc}}{\lambda} \sin\bigg(\cos^{-1}\bigg(\frac{h_{sc}}{\rho_{dc}}\bigg)\bigg) \sin\theta_{sq}\n\end{equation}\nand the limits in the azimuth spectrum where it has significant energy are:\n\n\begin{equation}\nf_{dc,\pm}(\rho_{dc}) = f_{dc}(\rho_{dc}) \pm \frac{f_{az,bw}}{2}\n\end{equation}\n\nwhereas the locations where the azimuth spectrum of the signal would have wrap points relative to the centroid are given by:\n\n\begin{equation}\nf_{dc,\pm,f_s}(\rho_{dc}) = f_{dc}(\rho_{dc}) \pm \frac{f_s}{2}\n\end{equation}\n\nNote that if there is a large squint, and the azimuth spectrum is critically sampled, then the indexing into the azimuth buffer is messy, because the wrap points must be computed in the circular array modulo the buffer length of the array, while the range migration curve must be computed in an absolute sense. Note also that the negative frequencies are in the upper half of the array, so once the index is calculated, it must be adjusted. 
If the azimuth spectrum is oversampled to begin with, one still must address the proper interpretation of positive and negative frequencies in the buffer, but wrapping of the spectrum would be avoided. For the sake of simplicity in this tutorial, we will assume that we don't need to address the spectral wrapping, i.e. that we are either at zero squint or the spectrum is oversampled. If this is not the case, the range migration correction will not be correct.\n\n\n\n```python\n# calculate centroid as a function of range\n\ndef f_dc(rho):\n return 2. * v_sc * np.sin(np.arccos(h_sc/rho)) * np.sin(theta_sq) / Lambda\n\nf_az_bw = v_sc * s_ov / Delta_s\n\n# determine the range of frequencies in the spectrogram and define the meshgrid\n# assumes that the spectrum array will be rotated to have -f_bw/2 at array index [0]\n\nf_im = np.linspace(-f_az_bw/2.,f_az_bw/2.,E_cr_rc.shape[1])\nF_im, Rho_rc = np.meshgrid(f_im,rho_rc)\n\n# calculate the ambiguity of each of the frequencies in the spectrogram and add it to the frequency of each bin\n\nF_abs = F_im + f_az_bw * np.round((f_dc(Rho_rc)-F_im)/ (v_sc * s_ov / Delta_s)) \n\n```\n\n\n```python\nplt.figure(figsize=(13, 5))\nplt.imshow(F_abs, cmap='magma', origin='lower', aspect='auto')\nplt.colorbar(label='Frequency (Hz)')\nplt.xlabel(\"Along track Spectrum bin (pixel)\")\nplt.ylabel(\"Range Position (pixel)\")\nplt.title(\"Absolute Frequency at each Frequency in Spectrogram\");\n```\n\n\n```python\n# rotate the spectrum to work more easily in this domain\nE_cr_rc_azfd_shift = np.zeros((E_cr_rc_azfd.shape),dtype=np.complex128)\nfor i in range(E_cr_rc.shape[0]): # shift each range bin\n E_cr_rc_azfd_shift[i,:] = np.fft.fftshift(E_cr_rc_azfd[i,:])\n```\n\n\n```python\nplt.figure(figsize=(13, 6))\nplt.imshow(20.*np.log10(np.abs(E_cr_rc_azfd_shift)), cmap='magma', origin='lower', aspect='auto',vmax=-60.,vmin=-100.)\nplt.colorbar(label='Power (dB)');\nplt.xlabel(\"Along track Frequency 
(pixel)\")\nplt.ylabel(\"Range Position (pixel)\")\nplt.title(\"Power of Rotated Azimuth Spectrum \");\n```\n\nNow that we have the absolute frequency calculated for each range, and the spectrum rotated to match it, we are a position to perform the range migration correction for each frequency.\n\n\n\n```python\nRho_rm = np.zeros((F_im.shape),dtype=np.int)\nRho_rm = Rho_rc * (np.cos(theta_sq) / np.sqrt(1.- Lambda**2 * F_abs**2/ (4. * v_sc**2)))\nRho_rm_nn = np.zeros((F_im.shape),dtype=np.complex128)\nRho_rm_nn = np.round((Rho_rm-rho_rc[0])*rho_ov/Delta_rho).astype(int)\n\n```\n\n\n```python\nplt.figure(figsize=(13, 6))\nplt.imshow(Rho_rm_nn, cmap='magma', origin='lower', aspect='auto')\nplt.colorbar(label='Range Migration (pixel)');\nplt.xlabel(\"Along track Frequency (pixel)\")\nplt.ylabel(\"Range Position (pixel)\")\nplt.title(\"Range Migration Shift\");\n```\n\nWe are now ready to use this range migration map to resample the spectrum. The spectral wrap cut (which will not appear in the zero squint geometry) presents a bookkeeping challenge. For this tutorial we simply ignore it and interpolate across it. This will introduce artifacts, but using a nearest neighbor interpolator will mitigate some of the edge effects. 
\n\n\n```python\n%%time\n\n# for each frequency, use a nearest-neighbor lookup to move data in range.\nE_cr_rc_azfd_rm = np.zeros((E_cr_rc_azfd.shape),dtype=np.complex128)\nfor i in range(E_cr_rc_azfd_rm.shape[0]):\n for j in range(E_cr_rc_azfd_rm.shape[1]):\n E_cr_rc_azfd_rm[i,j] = E_cr_rc_azfd_shift[min(Rho_rm_nn[i,j],E_cr_rc_azfd_rm.shape[0]-1),j]\n```\n\n\n```python\nplt.figure(figsize=(13, 6))\nplt.imshow(20.*np.log10(np.abs(E_cr_rc_azfd_rm)), cmap='magma', origin='lower', aspect='auto',vmax=-60.,vmin=-100.)\nplt.colorbar(label='Power (dB)');\nplt.xlabel(\"Along track Frequency (pixel)\")\nplt.ylabel(\"Range Position (pixel)\")\nplt.title(\"Power of Range-Migrated Azimuth Spectrum \");\n```\n\n### 4.3.4 Reference Function Application and Inverse Azimuth Transform to the Image\n\nIt is clear that the energy has been migrated to constant range, which should improve the correlation result. Let's see. \n\n\n```python\n# rotate the spectrum back to original position\nE_cr_rc_azfd_rm_shift = np.zeros((E_cr_rc_azfd.shape),dtype=np.complex128)\nfor i in range(E_cr_rc.shape[0]): # shift back each range bin\n E_cr_rc_azfd_rm_shift[i,:] = np.fft.fftshift(E_cr_rc_azfd_rm[i,:])\n```\n\n\n```python\n%%time\n\n# the correlation by FFT; with range migration\n\nE_cr_rcac_fd_rm = np.zeros((E_cr_rc.shape),dtype=np.complex128) # initialize\nC_az_REF_al = np.zeros(E_cr_rc.shape[1],dtype=np.complex128)\n\nfor i in range(n_rho_rc): # correlate\n C_az_REF_al = np.zeros(E_cr_rc.shape[1],dtype=np.complex128)\n C_az_REF_al[0:C_az_ref.shape[1]] = np.conjugate(C_az_ref[i,:])\n E_cr_rcac_fd_rm[i,:] = np.fft.ifft(E_cr_rc_azfd_rm_shift[i,:]*\n np.fft.fft(np.roll(C_az_REF_al,\n -int(C_az_ref.shape[1]/2))))\n \n```\n\n\n```python\nplt.figure(figsize=(13, 6))\nplt.subplot(1,2,1)\nextent = [s_s_sim, s_e_sim, rho_s_rc, rho_e_rc]\nplt.imshow(20.*np.log10(np.abs(E_cr_rcac_fd_rm)), cmap='magma', extent=extent, origin='lower', aspect='auto',vmax=-40.,vmin=-80.)\nplt.colorbar(label='Power (dB)');\nplt.xlabel(\"Along track 
position (m)\")\nplt.ylabel(\"Range Position (m)\")\nplt.title(\"Power of Image - Using FFT Correlation with RM\");\n\nplt.subplot(1,2,2)\nplt.imshow(20.*np.log10(np.abs(E_cr_rcac_fd_rm[ind_rho_cr-250:ind_rho_cr+250,ind_s_cr-500:ind_s_cr+500])), cmap='magma', origin='lower', aspect='auto',vmax=-40.,vmin=-80.)\nplt.colorbar(label='Power (dB)');\nplt.xlabel(\"Along track position (pixel)\")\nplt.ylabel(\"Range Position (pixel)\")\nplt.title(\"Power of Image - Using FFT Correlation with RM - Zoom\")\nplt.tight_layout()\n```\n\n\n```python\nplt.figure(figsize=(10, 8))\nplt.imshow(20.*np.log10(np.abs(E_cr_rcac_fd_rm[ind_rho_cr-100:ind_rho_cr+100,ind_s_cr-50:ind_s_cr+50])), cmap='magma', origin='lower', aspect='auto',vmax=-30.,vmin=-70.)\nplt.colorbar(label='Power (dB)');\nplt.xlabel(\"Along track position (pixel)\")\nplt.ylabel(\"Range Position (pixel)\")\nplt.title(\"Power of Image - Using FFT Correlation - Zoom\");\nfigname = \"FocusedinclRCM_FFT.png\"\nplt.savefig(figname, dpi=300, transparent='false')\n```\n\n### 4.3.5 Measuring Achieved Range and Azimuth Resolution\n\nIn the following code cell we create plots of the focused corner reflector along azimuth and range. Based on these cuts, the achieved azimuth and range resolution can be measured. 
\n\n\n```python\nazimuthcut = 20.*np.log10(np.abs(E_cr_rcac_fd_rm[ind_rho_cr,ind_s_cr-50:ind_s_cr+50]))\nrangecut = 20.*np.log10(np.abs(E_cr_rcac_fd_rm[ind_rho_cr-50:ind_rho_cr+50,ind_s_cr]))\n\nplt.figure(figsize=(13, 6))\nplt.subplot(1,2,1)\nplt.plot(S_sim[ind_rho_cr,ind_s_cr-50:ind_s_cr+50], azimuthcut-np.max(azimuthcut))\nplt.title(\"Focused Image \\n Cut through CR along Azimuth\")\nplt.xlabel(\"Azimuth (m)\")\nplt.ylabel(\"Signal Power (dB)\");\nplt.subplot(1,2,2)\nplt.plot(Rho_sim[ind_rho_cr-50:ind_rho_cr+50,ind_s_cr], rangecut-np.max(rangecut))\nplt.title(\"Focused Image \\n Cut through CR along Range\")\nplt.xlabel(\"Range (m)\")\nplt.ylabel(\"Signal Power (dB)\");\nplt.tight_layout()\n#plt.show();\n#figname = \"RangeCompressed_FFT.png\"\n#plt.savefig(figname, dpi=300, transparent='false')\n```\n\nAs can be seen, the energy is much better focused with range migration correction than without. The noise structure in the sidelobes is due to the poor nearest neighbor interpolator in azimuth migration. This would be improved with a better interpolator, such as a sinc interpolator. The sinc interpolator included in this notebook runs very slowly, however.\n\n
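For reference, a minimal sketch of what a windowed-sinc interpolator for complex-valued samples might look like is given below. The function name, kernel half-width, and Hamming window are illustrative choices, not the interpolator used elsewhere in this notebook, and the edge handling is deliberately crude.

```python
import numpy as np

def sinc_interp(samples, pos, half_width=8):
    """Windowed-sinc interpolation of a complex 1-D signal at fractional index pos."""
    n0 = int(np.floor(pos))
    k = np.arange(n0 - half_width + 1, n0 + half_width + 1)   # nearest 2*half_width taps
    k = np.clip(k, 0, len(samples) - 1)                       # crude edge handling
    taps = np.sinc(pos - k) * np.hamming(2 * half_width)      # Hamming-windowed sinc kernel
    # normalize by the tap sum so a constant signal is reproduced exactly
    return np.sum(samples[k] * taps) / np.sum(taps)

# interpolate a complex exponential halfway between samples
n = np.arange(64)
sig = np.exp(1j * 0.2 * n)
est = sinc_interp(sig, 30.5)
print(abs(est - np.exp(1j * 0.2 * 30.5)))  # small interpolation error
```

At integer positions the kernel collapses to a single tap and the sample is returned exactly; at fractional positions the error depends on the window and kernel length, trading accuracy against the run time noted above.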
\n
\n **ASSIGNMENT #3: Discussion of the Fully-Focused Image** \n\nAnswer the following questions regarding the fully-focused data set: \n
  1. Question 3.1: Download and present Figures 25 and 26 in your document. Provide figure captions describing what is shown in the respective figures.
  2. Question 3.2: Measure the achieved azimuth resolution by zooming into the left panel in the above figure and measuring the width of the focused peak at the -3 dB power position. Please provide your estimate for the azimuth resolution in units of meters. Additionally, please compare your measurement to the azimuth resolution numbers quoted in Section 1.5.
  3. Question 3.3: Measure the achieved range resolution by zooming into the panel on the right and measuring the width of the focused peak at the -3 dB power position. Please provide your estimate for the range resolution in units of meters. Additionally, please compare your measurement to the theoretical range resolution you calculated in Assignment 1.
\n
\n\n## 4.4 Optional: Back Projection Time-Domain Processing in Azimuth\n\n
\n The next section is optional. \n\nIt performs SAR focusing using a backprojection processing approach. In this approach, two-dimensional reference functions for each resolution cell are calculated, followed by a focusing step using time-domain correlation. This is the most accurate method of SAR image focusing, but also the most time-consuming. \n \nNote: Running this next step will take approximately 1 hour.\n\n
\n\n\n\n\nThe range compressed response of a corner reflector was described above as:\n\n\begin{equation}\nE_{cr,rc}(s,\rho; s_{cr}, \rho_{cr}) = K \sqrt{G_T} e^{-i 4\pi(\rho_{\rm sc-cr}-\rho_{l})/\lambda} {\rm sinc}\big(\frac{\rho-\rho_{\rm sc-cr}}{\Delta\rho}\big)\n\end{equation}\n\nwhere $\rho_{\rm sc-cr}$ would be the appropriate range for when the point was observed (including squint), and $s_{cr}$ the corresponding azimuth position. Let's focus on the complex hyperbolic phase that we explored above when discussing the phase history of a point over time.\n\n\begin{equation}\nE_{cr,rc}(s,\rho; s_{cr}, \rho_{cr}) = K \sqrt{G_T} e^{-i 4\pi\big(\sqrt{(s-s_{cr})^2+\rho_{cr}^2}-\rho_{l}\big)/\lambda} {\rm sinc}\big(\frac{\rho-\rho_{\rm sc-cr}}{\Delta\rho}\big)\n\end{equation}\n\nIf we generalize $s_{cr}, \rho_{cr}$ to any image point $[s_i,\rho_i]$, this relationship of range to azimuth remains:\n\n\begin{equation}\n\rho (s) = \sqrt{(s-s_i)^2+\rho_i^2}\n\end{equation}\n\nIn our idealized geometry, with the spacecraft flying a straight line above a flat earth, $\rho_i$ is not a function of $s_i$ (no topography, no variable distance from orbit to ground), so $\rho(s)$ is the same function for any $s_i$ at a given $\rho_i$, but varies with $\rho_i$.\n\nTherefore, to gather the energy in azimuth to focus the image at point $(s_i,\rho_i)$, we simply look up the sample in the range compressed echoes corresponding to each $s$ and $\rho$ in the synthetic aperture, compensate the propagation phase delay at each point, then sum all points in the synthetic aperture. Since the sample points are not necessarily perfectly aligned, we would ideally interpolate the echoes to get the exact values at $(s,\rho)$. However, for the purpose of this tutorial, we will just take the nearest neighbor echo sample. 
First we define the image grid to be smaller than the simulation grid sufficiently to avoid edge effects.\n\n\n\n```python\n# define the image grid - trim relative to simulation\n# in range, trim by pulse duration\n\nrho_s_im = rho_s_sim + c * tau_r/2.\nrho_e_im = rho_e_sim - c * tau_r/2.\nn_rho_im = int(np.round((rho_e_im-rho_s_im)* rho_ov/Delta_rho))\nrho_im = np.linspace(rho_s_im,rho_e_im,n_rho_im)\n\n# in azimuth, trim by half a beamwidth\ns_s_im = s_s_sim + rho_f * theta_L_a / 2.\ns_e_im = s_e_sim - rho_f * theta_L_a / 2. \nn_s_im = int(np.round((s_e_im-s_s_im)* s_ov/Delta_s)) \ns_im = np.linspace(s_s_im,s_e_im,n_s_im)\nS_im, Rho_im = np.meshgrid(s_im,rho_im)\n```\n\nNext define the synthetic aperture extents. These vary with range, and need to take into account any azimuth squint. We did this above when defining the azimuth reference function for the correlation approach. Once again, for a point at $(s_{im},\\rho_{im})$ observed with squint $\\theta_{sq}$, the range of closest approach of the spacecraft when the boresight intersects this point is given by\n\n$ \\rho_{ca} = \\rho_{im} \\cos\\theta_{sq}$ \n\nand the along-track position of this point relative to closest approach is\n\n$s_{ca-rel} = \\rho_{ca} \\tan \\theta_{sq}$\n\nThe limits of the synthetic aperture then are given by the angle subtended around this squint angle:\n\n$s_{s,sa} = \\rho_{ca} \\tan (\\theta_{sq} - \\theta_{L_a}/2) \\qquad s_{e,sa} = \\rho_{ca} \\tan (\\theta_{sq} + \\theta_{L_a}/2)$\n\n$\\rho_{s,sa} = \\rho_{ca} / \\cos(\\theta_{sq}-\\theta_{L_a}/2) \\qquad \\rho_{e,sa} = \\rho_{ca} / \\cos(\\theta_{sq}+\\theta_{L_a}/2)$\n\n\n\n```python\n# define the synthetic aperture extents across range\n# first the closest approach range and azimuth for a given squinted slant range\nrho_ca = rho_im * np.cos (theta_sq)\ns_carel = rho_ca * np.tan(theta_sq)\n\n# Next the start and end extents for each of these points relative to s_im\ns_s_sa = rho_ca * np.tan(theta_sq-theta_L_a/2.) 
# half beamwidth, w/ squint, relative to s_im\ns_e_sa = rho_ca * np.tan(theta_sq+theta_L_a/2.) # half beamwidth, w/ squint, relative to s_im\nn_s_sa = np.round((s_e_sa-s_s_sa)* s_ov/Delta_s).astype(int)\n\nrho_s_sa = rho_ca / np.cos(theta_sq-theta_L_a/2)\nrho_e_sa = rho_ca / np.cos(theta_sq+theta_L_a/2)\nn_rho_sa = np.round((rho_e_sa-rho_s_sa)* s_ov/Delta_s).astype(int)\n```\n\nFor a given image point, $(s_{im},\\rho_{im})$ in the image of points $[s_{im},\\rho_{im}]$, coordinates over which to integrate the echoes are $(s,\\rho(s)) = (s,\\sqrt{(s-s_{im})^2+\\rho^2_{im}})$. Thus the time domain back projection processing will be\n\\begin{equation}\nE_{cr,td}(s_{im},\\rho_{im}) = \\displaystyle \\int_{s_{im}+s_{s,sa}}^{s_{im}+s_{e,sa}} E_{cr,rc}(s,\\rho(s)) e^{i 4 \\pi \\rho(s) /\\lambda} ds\n\\end{equation}\n\nwith $s_{s,sa}$ and $s_{e,sa}$ defined above as image point-relative extents of the synthetic aperture. Note we can get away with calculating the $\\rho(s)$ function once per range bin because of the regular rectilinear motion with no topography. You will rapidly find if you execute the next block that it takes *forever* due to the triple loop and the python-interpreted indexing. 
You will need to interrupt the run to uncomment the limits specified around each of the corner reflectors to perform only the necessary calculations.\n\n\n```python\n%%time\n\nE_cr_im = np.zeros((n_rho_im,n_s_im),dtype=np.complex128) # initialize output grid\n\n# Calculate computable limits +/- 200 m\nbp_win = 200.\n#\nns_s_im_cr = np.round((S_cr - bp_win - s_s_im)*s_ov/Delta_s).astype(int)\nne_s_im_cr = np.round((S_cr + bp_win - s_s_im)*s_ov/Delta_s).astype(int)\nns_rho_im_cr = np.round((Rho_cr - bp_win - rho_s_im)*rho_ov/Delta_rho).astype(int)\nne_rho_im_cr = np.round((Rho_cr + bp_win - rho_s_im)*rho_ov/Delta_rho).astype(int)\nfor ncr in range(len(Ind_cr)):\n print (\"Reflector \",Ind_cr[ncr],ns_s_im_cr[Ind_cr[ncr]],\n ne_s_im_cr[Ind_cr[ncr]],ns_rho_im_cr[Ind_cr[ncr]],ne_rho_im_cr[Ind_cr[ncr]])\n\n for rho_im_b in range(ns_rho_im_cr[Ind_cr[ncr]],ne_rho_im_cr[Ind_cr[ncr]]):\n if(np.mod(rho_im_b,10)==0): print (rho_im_b)\n s_sa = np.linspace(s_s_sa[rho_im_b],s_e_sa[rho_im_b],n_s_sa[rho_im_b])\n rho_sa = np.sqrt(rho_im[rho_im_b]**2+(s_sa)**2)\n azref = np.exp(1j*4.*np.pi*rho_sa/Lambda) # conjugate of observed phase history\n for s_im_b in range(ns_s_im_cr[Ind_cr[ncr]],ne_s_im_cr[Ind_cr[ncr]]):\n E_cr_rc_bp=np.zeros(n_s_sim,dtype=np.complex128)\n azref_bp=np.zeros(n_s_sim,dtype=np.complex128)\n for sb in range(len(s_sa)):\n sb_rc = int(np.round((s_im[s_im_b]+s_sa[sb]-s_s_sim)*s_ov/Delta_s))\n rhob_rc = int(np.round((rho_sa[sb]-rho_s_sim)*rho_ov/Delta_rho))\n E_cr_rc_bp[sb_rc]=E_cr_rc[rhob_rc,sb_rc]\n azref_bp[sb_rc]=azref[sb]\n E_cr_im[rho_im_b,s_im_b] += np.dot(E_cr_rc_bp,azref_bp)\n```\n\n\n```python\nplt.figure(figsize=(8, 5))\nplt.imshow(20.*np.log10(np.abs(E_cr_im[ns_rho_im_cr[Ind_cr[ncr]]:ne_rho_im_cr[Ind_cr[ncr]],ns_s_im_cr[Ind_cr[ncr]]:ne_s_im_cr[Ind_cr[ncr]]])), cmap='magma', origin='lower', aspect='auto',vmax=-30.,vmin=-70.)\nplt.colorbar(label='Power (dB)');\nplt.xlabel(\"Along track position (pixel)\")\nplt.ylabel(\"Range Position 
(pixel)\")\nplt.title(\"Power of Image - Using Time Domain Back Projection\");\n```\n\n# Summary\n\n\n This tutorial covers the following topics:\n\n* SAR Geometry\n* Antenna Patterns\n* The Radar Equation\n* Doppler and Phase in the synthetic aperture\n* Range reference function and correlation to achieve fine range resolution\n - Range correlation in the time domain \n - Range correlation using FFTs to perform circular correlation\n* Azimuth reference function and correlation \n - Azimuth correlation in the time domain \n - Azimuth migration correction\n - Azimuth correlation using FFTs to perform circular correlation\n - Backprojection in the time domain\n \nWhile the geometry is idealized, through this step-by-step approach with python code to simulate radar echoes from point targets and a variety of methods to process the data, the notebook illustrates the meaning of the synthetic aperture, the explicit signal properties of the return echoes, and how the varying range of a target from pulse to pulse necessitates a some resampling to align the energy with a regular grid. \n\nThe notebook is designed to allow the student to adjust parameters to alter resolution, squint, geometry, radar elements such as antenna dimensions, and other factors. Some of these can be done locally, others must be done at the beginning of the notebook. Once the student is familiar with the cell dependencies, these will become clear. For example, resolution of the simulation and other geometric parameters must be set at the beginning. Plot dimensions, and processing choices once the simulation is established can be set locally. A successful learning outcome would be confidence in understanding where parameters need to change to affect the tutorial in a particular way. 
\n \n", "meta": {"hexsha": "a416c62279656207752e6dd70324c57cc9ff63a9", "size": 132481, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "2021/GEOS 657-Lab3-SARProcessing.ipynb", "max_stars_repo_name": "uafgeoteach/GEOS657_MRS", "max_stars_repo_head_hexsha": "682d9d936e058c692d3f3f1492c243e569cd0f6f", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2021-11-02T04:02:17.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-22T20:44:53.000Z", "max_issues_repo_path": "2021/GEOS 657-Lab3-SARProcessing.ipynb", "max_issues_repo_name": "uafgeoteach/GEOS657_MRS", "max_issues_repo_head_hexsha": "682d9d936e058c692d3f3f1492c243e569cd0f6f", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2021/GEOS 657-Lab3-SARProcessing.ipynb", "max_forks_repo_name": "uafgeoteach/GEOS657_MRS", "max_forks_repo_head_hexsha": "682d9d936e058c692d3f3f1492c243e569cd0f6f", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-11-30T16:12:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-22T20:02:33.000Z", "avg_line_length": 37.9167143675, "max_line_length": 1217, "alphanum_fraction": 0.5968101086, "converted": true, "num_tokens": 25387, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.46101677931231594, "lm_q2_score": 0.31742626558767584, "lm_q1q2_score": 0.14633883463036615}} {"text": "\n\n# Lambda School Data Science Module 143\n\n## Introduction to Bayesian Inference\n\n!['Detector! What would the Bayesian statistician say if I asked him whether the--' [roll] 'I AM A NEUTRINO DETECTOR, NOT A LABYRINTH GUARD. SERIOUSLY, DID YOUR BRAIN FALL OUT?' [roll] '... 
yes.'](https://imgs.xkcd.com/comics/frequentists_vs_bayesians.png)\n\n*[XKCD 1132](https://www.xkcd.com/1132/)*\n\n\n## Prepare - Bayes' Theorem and the Bayesian mindset\n\nBayes' theorem possesses a near-mythical quality - a bit of math that somehow magically evaluates a situation. But this mythicalness has more to do with its reputation and advanced applications than the actual core of it - deriving it is actually remarkably straightforward.\n\n### The Law of Total Probability\n\nBy definition, the total probability of all outcomes (events) of some variable (event space) $A$ is 1. That is:\n\n$$P(A) = \\sum_n P(A_n) = 1$$\n\nThe law of total probability takes this further, considering two variables ($A$ and $B$) and relating their marginal probabilities (their likelihoods considered independently, without reference to one another) and their conditional probabilities (their likelihoods considered jointly). A marginal probability is simply notated as e.g. $P(A)$, while a conditional probability is notated $P(A|B)$, which reads \"probability of $A$ *given* $B$\".\n\nThe law of total probability states:\n\n$$P(A) = \\sum_n P(A | B_n) P(B_n)$$\n\nIn words - the total probability of $A$ is equal to the conditional probability of $A$ on any given event $B_n$ times the probability of that event $B_n$, summed over all possible events in $B$.\n\n### The Law of Conditional Probability\n\nWhat's the probability of something conditioned on something else? To determine this we have to go back to set theory and think about the intersection of sets:\n\nThe formula for actual calculation:\n\n$$P(A|B) = \\frac{P(A \\cap B)}{P(B)}$$\n\n\n\nThink of the overall rectangle as the whole probability space, $A$ as the left circle, $B$ as the right circle, and their intersection as the red area. 
Try to visualize the ratio being described in the above formula, and how it is different from just the $P(A)$ (not conditioned on $B$).\n\nWe can see how this relates back to the law of total probability - multiply both sides by $P(B)$ and you get $P(A|B)P(B) = P(A \\cap B)$ - replaced back into the law of total probability we get $P(A) = \\sum_n P(A \\cap B_n)$.\n\nThis may not seem like an improvement at first, but try to relate it back to the above picture - if you think of sets as physical objects, we're saying that the total probability of $A$ is all the little pieces of it intersected with each $B_n$, added together. The conditional probability is then just one of those pieces, divided by the probability of $B$ itself happening in the first place.\n\n\\begin{align}\nP(A|B) &= \\frac{P(A \\cap B)}{P(B)}\\\\\n\\Rightarrow P(A|B)P(B) &= P(A \\cap B)\\\\\nP(B|A) &= \\frac{P(B \\cap A)}{P(A)}\\\\\n\\Rightarrow P(B|A)P(A) &= P(B \\cap A)\\\\\nP(A \\cap B) &= P(B \\cap A)\\\\\n\\Rightarrow P(A|B)P(B) &= P(B|A)P(A)\\\\\nP(A|B) &= \\frac{P(B|A) \\times P(A)}{P(B)}\n\\end{align}\n\n### Bayes Theorem\n\nHere it is, the seemingly magic tool:\n\n$$P(A|B) = \\frac{P(B|A)P(A)}{P(B)}$$\n\nIn words - the probability of $A$ conditioned on $B$ is the probability of $B$ conditioned on $A$, times the probability of $A$ and divided by the probability of $B$. These unconditioned probabilities are referred to as \"prior beliefs\", and the conditioned probabilities as \"updated.\"\n\nWhy is this important? Scroll back up to the XKCD example - the Bayesian statistician draws a less absurd conclusion because their prior belief in the likelihood that the sun will go nova is extremely low. 
So, even when updated based on evidence from a detector that is $35/36 = 0.972$ accurate, the prior belief doesn't shift enough to change their overall opinion.\n\nThere are many applications of Bayes' theorem - one less absurd example is [breathalyzer tests](https://www.bayestheorem.net/breathalyzer-example/). You may think that a breathalyzer test that is 100% accurate for true positives (detecting somebody who is drunk) is pretty good, but what if it also has 8% false positives (indicating somebody is drunk when they're not)? And furthermore, the rate of drunk driving (and thus our prior belief) is 1/1000.\n\nWhat is the likelihood somebody really is drunk if they test positive? Some may guess it's 92% - the difference between the true positives and the false positives. But we have a prior belief of the background/true rate of drunk driving. Sounds like a job for Bayes' theorem! Here $P(Positive)$ is expanded by the law of total probability: drunk drivers always test positive, while sober drivers test positive 8% of the time.\n\n$$\n\\begin{aligned}\nP(Drunk | Positive) &= \\frac{P(Positive | Drunk)P(Drunk)}{P(Positive)} \\\\\n&= \\frac{1 \\times 0.001}{1 \\times 0.001 + 0.08 \\times 0.999} \\\\\n&\\approx \\frac{1 \\times 0.001}{0.08} \\\\\n&= 0.0125\n\\end{aligned}\n$$\n\nIn other words, the likelihood that somebody is drunk given they tested positive with a breathalyzer in this situation is only about 1.25% - probably much lower than you'd guess. This is why, in practice, it's important to have a repeated test to confirm (the probability of two false positives in a row is $0.08 * 0.08 = 0.0064$, much lower), and Bayes' theorem has been relevant in court cases where proper consideration of evidence was important.\n\n## Live Lecture - Deriving Bayes' Theorem, Calculating Bayesian Confidence\n\nNotice that $P(A|B)$ appears in the above laws - in Bayesian terms, this is the belief in $A$ updated for the evidence $B$. So all we need to do is solve for this term to derive Bayes' theorem. 
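As a quick numerical check of the breathalyzer example above, the same numbers can be computed with the denominator fully expanded by the law of total probability; all values come from that example (a sketch):

```
# Breathalyzer example, with P(Positive) expanded by the law of
# total probability instead of the rounded 0.08 shortcut
p_drunk = 0.001              # prior: rate of drunk driving
p_pos_given_drunk = 1.0      # true positive rate
p_pos_given_sober = 0.08     # false positive rate

# P(Positive) = P(Pos|Drunk)P(Drunk) + P(Pos|Sober)P(Sober)
p_positive = p_pos_given_drunk * p_drunk + p_pos_given_sober * (1 - p_drunk)

p_drunk_given_positive = p_pos_given_drunk * p_drunk / p_positive

print(round(p_positive, 5))              # 0.08092, close to the 0.08 above
print(round(p_drunk_given_positive, 4))  # 0.0124, close to the 0.0125 above
```

The exact denominator (0.08092) barely differs from the 0.08 approximation, which is why the rounded answer of 1.25% is so close.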
Let's do it together!\n\n\n```\n# Activity 2 - Use SciPy to calculate Bayesian confidence intervals\n# https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.bayes_mvs.html#scipy.stats.bayes_mvs\n```\n\n\n```\nfrom scipy import stats\nimport numpy as np\n\n# Set Random Seed for Reproducibility\nnp.random.seed(seed=42)\n\ncoinflips = np.random.binomial(n=1, p=.5, size=10)\nprint(coinflips)\n```\n\n\n```\ndef confidence_interval(data, confidence=.95):\n n = len(data)\n mean = sum(data)/n\n data = np.array(data)\n stderr = stats.sem(data)\n interval = stderr * stats.t.ppf((1 + confidence) / 2.0, n-1)\n return (mean , mean-interval, mean+interval)\n\nconfidence_interval(coinflips)\n```\n\n\n```\nbayes_mean_CI, _, _ = stats.bayes_mvs(coinflips, alpha=.95)\n \nbayes_mean_CI\n```\n\n\n```\n??stats.bayes_mvs\n```\n\n\n```\n# stats.mvsdist returns frozen distributions we can sample from\ncoinflips_mean_dist, _, _ = stats.mvsdist(coinflips)\ncoinflips_mean_dist.rvs(1000)\n```\n\n## Assignment - Code it up!\n\nMost of the above was pure math - now write Python code to reproduce the results! This is purposefully open ended - you'll have to think about how you should represent probabilities and events. You can and should look things up, and as a stretch goal - refactor your code into helpful reusable functions!\n\nSpecific goals/targets:\n\n1. Write a function `def prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk)` that reproduces the example from lecture, and use it to calculate and visualize a range of situations\n2. Explore `scipy.stats.bayes_mvs` - read its documentation, and experiment with it on data you've tested in other ways earlier this week\n3. Create a visualization comparing the results of a Bayesian approach to a traditional/frequentist approach\n4. 
In your own words, summarize the difference between Bayesian and Frequentist statistics\n\nIf you're unsure where to start, check out [this blog post of Bayes theorem with Python](https://dataconomy.com/2015/02/introduction-to-bayes-theorem-with-python/) - you could and should create something similar!\n\nStretch goals:\n\n- Apply a Bayesian technique to a problem you previously worked on (in an assignment or project) from a frequentist (standard) perspective\n- Check out [PyMC3](https://docs.pymc.io/) (note this goes beyond hypothesis tests into modeling) - read the guides and work through some examples\n- Take PyMC3 further - see if you can build something with it!\n\n\n```\nimport random\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy import stats\n```\n\n\n```\n\ndef prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk):\n return (prob_positive_drunk*prob_drunk_prior) / prob_positive\n```\n\n\n```\nprob_drunk_given_positive(0.001, 0.08, 1)\n```\n\n\n\n\n 0.0125\n\n\n\n\n```\nlist1 = []\nfor i in range(100):\n \n num = prob_drunk_given_positive(random.uniform(0.001, 0.003), \n random.uniform(0.06, 0.1),\n random.uniform(0.95, 1))\n list1.append(num)\n \n \nplt.scatter(range(100),list1);\nplt.xlabel('Trial run number');\nplt.ylabel('Probability');\n```\n\n\n```\ncoinflips_100 = np.random.binomial(n=1, p=.5, size=100)\n\nsample_std = np.std(coinflips_100)\nsample_size = len(coinflips_100)\nstandard_error = sample_std / (sample_size**(.5))\nt = stats.t.ppf(.975 , sample_size-1)\nsample_mean = coinflips_100.mean()\nconfidence_interval = (sample_mean - t*standard_error, sample_mean + t*standard_error)\n\nbayes_mean_CI, _, _ = stats.bayes_mvs(coinflips_100, alpha=.95)\nprint(bayes_mean_CI)\n```\n\n Mean(statistic=0.54, minmax=(0.4406089327527315, 0.6393910672472686))\n\n\n\n```\nimport seaborn as sns\n\nsns.kdeplot(coinflips_100)\nplt.axvline(x=confidence_interval[0], color='red')\nplt.axvline(x=confidence_interval[1], 
color='red')\nplt.axvline(x=sample_mean, color='k');\nplt.title(\"Frequentist Approach\");\n\n```\n\n\n```\nsns.kdeplot(coinflips_100)\nplt.axvline(x=bayes_mean_CI[1][0], color='red')\nplt.axvline(x=bayes_mean_CI[1][1], color='red')\nplt.axvline(x=sample_mean, color='k');\nplt.title(\"Bayesian Approach\");\n```\n\nBayesian statistics deals with probability and uncertainty across a broader spectrum of events. Even when the events being studied are not reproducible, Bayesian statistics can still place a numerical probability on them. It also uses prior probabilities in the calculation of new ones, a practice criticized by frequentists.\n\nFrequentist statistics uses confidence intervals and p-values to describe the probability of events. These events are generally repeatable, giving a more accurate look at long-term occurrences. Frequentists prefer having concrete, repeatable evidence before making any assertions, and they do not fold prior beliefs into their calculations.\n\n## Resources\n\n- [Worked example of Bayes rule calculation](https://en.wikipedia.org/wiki/Bayes'_theorem#Examples) (helpful as it fully breaks out the denominator)\n- [Source code for mvsdist in scipy](https://github.com/scipy/scipy/blob/90534919e139d2a81c24bf08341734ff41a3db12/scipy/stats/morestats.py#L139)
Probabilistic Programming\n=====\nand Bayesian Methods for Hackers \n========\n\n#####Version 0.1\nWelcome to *Bayesian Methods for Hackers*. The full Github repository, and additional chapters, is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). We hope you enjoy the book, and we encourage any contributions!\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are a Bayesian practitioner! Bayesian inference is simply updating your beliefs after considering new evidence. 
A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem, something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n\n###The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty* about our beliefs. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. \n\nThe Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nFor this to be clearer, we consider an alternative interpretation of probability: *Frequentist* methods assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that, across all these universes, the frequency of occurrences defines the probability. \n\nBayesians, on the other hand, have a more intuitive approach. 
Bayesians interpret a probability as a measure of *belief*, or confidence, of an event occurring. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is clear how we can speak about probabilities (beliefs) of presidential election outcomes: how confident are you that candidate A will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either heads or tails. Now what is *your* belief that the coin is heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we do have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. 
There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease.\n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial evidence. Alternatively, you have to be *trained* to think like a frequentist. \n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$.:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being heads. $P(A | X):\\;\\;$ You look at the coin, observe a heads has landed, denote this information $X$, and trivially assign probability 1.0 to heads and 0.0 to tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. 
$P(A | X):\;\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*.\n\n\n\n###Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. 
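To make the function metaphor concrete, here is a minimal sketch of the two "functions". Everything here is illustrative, not a real API; the 0.5 chance that buggy code still passes a test is the value this chapter later assigns to $P(X | \sim A)$.

```
def frequentist_inference(tests_passed):
    # Returns a single answer: all tests passed, so declare the code bug-free
    return "YES" if tests_passed > 0 else "NO"

def bayesian_inference(prior_no_bug, tests_passed):
    # Returns probabilities. Bug-free code passes every test; buggy code
    # passes each test with probability 0.5 (an illustrative value).
    p_pass_if_buggy = 0.5 ** tests_passed
    p_no_bug = prior_no_bug / (prior_no_bug + p_pass_if_buggy * (1 - prior_no_bug))
    return {"YES": p_no_bug, "NO": 1 - p_no_bug}

print(frequentist_inference(4))    # YES
print(bayesian_inference(0.5, 4))  # roughly YES: 0.94, NO: 0.06
```

Note how the Bayesian version needs the extra `prior_no_bug` argument, and how its answer stays a pair of probabilities rather than a verdict.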
\n\n\n####Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally simpler frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? 
\n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools like Least Squares linear regression, LASSO regression, and the EM algorithm are all very powerful and incredibly fast. Bayesian methods are a complement, solving the problems these tools cannot, or giving further insight into the underlying system by offering more flexibility in modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead in the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\" )\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer, Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to } )\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. 
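As a quick sanity check, the identity can be verified on a small made-up joint distribution over two binary events (the numbers here are arbitrary):

```
# An arbitrary joint distribution over A in {False, True}, B in {False, True}
joint = {(False, False): 0.3, (False, True): 0.2,
         (True, False): 0.1, (True, True): 0.4}

p_a = joint[(True, False)] + joint[(True, True)]   # P(A)
p_b = joint[(False, True)] + joint[(True, True)]   # P(B)

p_a_given_b = joint[(True, True)] / p_b            # P(A|B), computed directly
p_b_given_a = joint[(True, True)] / p_a            # P(B|A)
bayes_rhs = p_b_given_a * p_a / p_b                # P(B|A)P(A) / P(B)

print(round(p_a_given_b, 6), round(bayes_rhs, 6))  # both 0.666667
```

Both sides agree, as they must for any joint distribution, Bayesian mindset or not.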
Bayesian inference merely uses it to connect prior probabilities $P(A)$ with updated posterior probabilities $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?\n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).
Try running the following code:\n\n import json\n s = json.load( open(\"../styles/bmh_matplotlibrc.json\") )\n matplotlib.rcParams.update(s)\n\n\"\"\"\n\n#the code below can be passed over, as it is currently not important.\n%pylab inline\nfigsize( 11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0,1,2,3,4,5,8,15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size = n_trials[-1] )\nx = np.linspace(0,1,100)\n\nfor k, N in enumerate(n_trials):\n sx = subplot( len(n_trials)//2, 2, k+1) # integer division, so subplot gets an int\n \n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads )\n plt.plot( x, y, label= \"observe %d tosses,\\n %d heads\"%(N,heads) )\n plt.fill_between( x, 0, y, color=\"#348ABD\", alpha = 0.4 )\n plt.vlines( 0.5, 0, 4, color = \"k\", linestyles = \"--\", lw=1 )\n \n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight = True)\n\n\nplt.suptitle( \"Bayesian updating of posterior probabilities\", \n y = 1.02,\n fontsize = 14);\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our confidence is proportional to the height of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will lump closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5. As more data accumulates, we would see more and more probability being assigned at $p=0.5$.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. 
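As an aside, the curves above can be reproduced from first principles, without the Beta-distribution shortcut: discretize the candidate values of $p$, multiply a flat prior by the likelihood of each observed flip, and renormalize. A sketch (the flips here are freshly simulated, not the data plotted above):

```
import numpy as np

np.random.seed(5)
flips = np.random.binomial(n=1, p=0.5, size=500)

p_grid = np.linspace(0, 1, 101)       # candidate values of p
posterior = np.ones_like(p_grid)      # flat prior over p
posterior /= posterior.sum()

for flip in flips:
    likelihood = p_grid if flip == 1 else 1 - p_grid
    posterior = posterior * likelihood   # posterior is prior times likelihood...
    posterior = posterior / posterior.sum()  # ...renormalized

# The posterior mass concentrates near the observed frequency of heads
print(p_grid[posterior.argmax()], flips.mean())
```

This grid approach scales poorly, but it makes the mechanics of "prior times likelihood, renormalize" explicit.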
\n\n#####Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
\n\n\n```\nfigsize(12.5,4)\np = np.linspace( 0,1, 50)\nplt.plot( p, 2*p/(1+p), color = \"#348ABD\", lw = 3 )\n#plt.fill_between( p, 2*p/(1+p), alpha = .5, facecolor = [\"#A60628\"])\nplt.scatter( 0.2, 2*(0.2)/1.2, s = 140, c =\"#348ABD\" )\nplt.xlim( 0, 1)\nplt.ylim( 0, 1)\nplt.xlabel( \"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title( \"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a graph of both the prior and the posterior probabilities. 
\n\n\n\n```\nfigsize( 12.5, 4 )\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar( [0,.7], prior ,alpha = 0.70, width = 0.25, \\\n color = colours[0], label = \"prior distribution\",\n lw = \"3\", edgecolor = colours[0])\n\n\nplt.bar( [0+0.25,.7+0.25], posterior ,alpha = 0.7, \\\n width = 0.25, color = colours[1], \n label = \"posterior distribution\",\n lw = \"3\", edgecolor = colours[1])\n\nplt.xticks( [0.20,.95], [\"Bugs Absent\", \"Bugs Present\"] )\nplt.title(\"Prior and Posterior probability of bugs present, prior = 0.2\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n##Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. 
We can divide random variables into three classifications:

- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...

- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.

- **$Z$ is mixed**: Mixed random variables assign probability to both discrete and continuous outcomes, i.e. they combine the above two categories. 

###Discrete Case
If $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability that $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$: if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear; we will introduce them as needed, but let's begin with a very useful one. We say $Z$ is *Poisson*-distributed if:

$$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots $$

What is $\lambda$? It is called the parameter of the distribution, and it describes the shape of the distribution. For the Poisson random variable, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values; conversely, by decreasing $\lambda$ we add more probability to smaller values. Unlike $\lambda$, which can be any positive number, $k$ must be a non-negative integer, i.e., $k$ must take on values 0, 1, 2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members.
If a random variable $Z$ has a Poisson mass function, we denote this by writing

$$Z \sim \text{Poi}(\lambda) $$

One very useful property of the Poisson random variable, given we know $\lambda$, is that its expected value is equal to the parameter, i.e.:

$$E\large[ \;Z\; | \; \lambda \;\large] = \lambda $$

We will use this property often, so it's something useful to remember. Below we plot the probability mass function for different $\lambda$ values. The first thing to notice is that by increasing $\lambda$ we add more probability to larger values occurring. Secondly, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.

```
figsize( 12.5, 4)

import scipy.stats as stats
a = np.arange( 16 )
poi = stats.poisson
lambda_ = [1.5, 4.25 ]

plt.bar( a, poi.pmf( a, lambda_[0]), color=colours[0],
         label = "$\lambda = %.1f$"%lambda_[0], alpha = 0.60,
         edgecolor = colours[0], lw = "3")

plt.bar( a, poi.pmf( a, lambda_[1]), color=colours[1],
         label = "$\lambda = %.1f$"%lambda_[1], alpha = 0.60,
         edgecolor = colours[1], lw = "3")

plt.xticks( a + 0.4, a )
plt.legend()
plt.ylabel("probability of $k$")
plt.xlabel("$k$")
plt.title("Probability mass function of a Poisson random variable; differing \
$\lambda$ values");
```

###Continuous Case
Instead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of a continuous random variable is one with an *exponential density*. The density function for an exponential random variable looks like:

$$f_Z(z | \lambda) = \lambda e^{-\lambda z }, \;\; z\ge 0$$

Like the Poisson random variable, an exponential random variable can only take on non-negative values.
But unlike a Poisson random variable, the exponential can take on *any* non-negative value, like 4.25 or 5.612401. This makes it a poor choice for count data, which must be integers, but a great choice for time data, or temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. Below are two probability density functions with different $\lambda$ values. 

When a random variable $Z$ has an exponential distribution with parameter $\lambda$, we say *$Z$ is exponential* and write

$$Z \sim \text{Exp}(\lambda)$$

Given a specific $\lambda$, the expected value of an exponential random variable is equal to the inverse of $\lambda$, that is:

$$E[\; Z \;|\; \lambda \;] = \frac{1}{\lambda}$$

```
a = np.linspace(0,4, 100)
expo = stats.expon
lambda_ = [0.5, 1]

for l,c in zip(lambda_,colours):
    plt.plot( a, expo.pdf( a, scale=1./l), lw=3, 
              color=c, label = "$\lambda = %.1f$"%l)
    plt.fill_between( a, expo.pdf( a, scale=1./l), color=c, alpha = .33)

plt.legend()
plt.ylabel("PDF at $z$")
plt.xlabel("$z$")
plt.title("Probability density function of an Exponential random variable;\
 differing $\lambda$");
```

###But what is $\lambda \;$?

**This question is what motivates statistics**. In the real world, $\lambda$ is hidden from us. We only see $Z$, and must go backwards to try and determine $\lambda$. The problem is so difficult because there is not a one-to-one mapping from $Z$ to $\lambda$. Many different methods have been created to solve the problem of estimating $\lambda$, but since $\lambda$ is never actually observed, no one can say for certain which method is best! 

Bayesian inference is concerned with *beliefs* about what $\lambda$ is.
Rather than try to guess $\lambda$ exactly, we can only talk about what $\lambda$ is likely to be by assigning a probability distribution to $\lambda$.

This might seem odd at first: after all, $\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to a non-random quantity? Ah, we have fallen for the frequentist interpretation. Recall that under our Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\lambda$. 

##### Example: Inferring behaviour from text-message data

Let's try to model a more interesting example, concerning text-message rates:

> You are given a series of text-message counts from a user of your system. The data, plotted over time, appears in the graph below. You are curious whether the user's text-messaging habits changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)

```
figsize( 12.5, 3.5 )
count_data = np.loadtxt("data/txtdata.csv")
n_count_data = len(count_data)
plt.bar( np.arange( n_count_data ), count_data, color ="#348ABD" )
plt.xlabel( "Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Did the user's texting habits change over time?")
plt.xlim( 0, n_count_data );
```

Before we begin: with respect to the plot above, would you say there was a change in behaviour during the time period? 

How can we start to model this? Well, as I conveniently already introduced, a Poisson random variable would be a very appropriate model for this *count* data. Denoting day $i$'s text-message count by $C_i$, 

$$ C_i \sim \text{Poisson}(\lambda) $$

We are not sure what the $\lambda$ parameter is, though.
Looking at the chart above, it appears that the rate might become higher at some later date, which is equivalent to saying that the parameter $\lambda$ increases at some later date (recall that a higher $\lambda$ means more probability on larger outcomes, that is, a higher probability of many texts).

How can we represent this mathematically? We can suppose that at some later date (call it $\tau$), the parameter $\lambda$ suddenly jumps to a higher value. So we create two $\lambda$ parameters: one for behaviour before $\tau$, and one for behaviour after. In the literature, a sudden transition like this is called a *switchpoint*:

$$
\lambda = 
\begin{cases}
\lambda_1  & \text{if } t \lt \tau \cr
\lambda_2 & \text{if } t \ge \tau
\end{cases}
$$

If, in reality, no sudden change occurred and indeed $\lambda_1 = \lambda_2$, then the $\lambda$s' posterior distributions should look about equal.

We are interested in inferring the unknown $\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\lambda$. What would be good prior probability distributions for $\lambda_1$ and $\lambda_2$? Recall that $\lambda_i, \; i=1,2,$ can be any positive number. The *exponential* random variable has a density function over the positive numbers, so it would be a good choice for modeling $\lambda_i$. But we need a parameter for this exponential distribution: call it $\alpha$.

\begin{align}
&\lambda_1 \sim \text{Exp}( \alpha ) \\
&\lambda_2 \sim \text{Exp}( \alpha )
\end{align}

$\alpha$ is called a *hyper-parameter*, or a *parent variable*: literally, a parameter that influences other parameters. The influence is not too strong, so we can choose $\alpha$ liberally.
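To build some intuition for this prior, here is a small simulation sketch (the value of $\alpha$ is made up for illustration): draws from $\text{Exp}(\alpha)$ are always positive, so every draw is a valid Poisson rate, and their mean sits near $1/\alpha$.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05  # hypothetical hyper-parameter value, for illustration only
draws = rng.exponential(scale=1.0 / alpha, size=100_000)  # Exp(alpha) samples

print(draws.min() > 0)  # True: every draw is a valid (positive) rate
print(draws.mean())     # close to 1/alpha = 20
```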
A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\lambda$ with an exponential distribution, we can use the expected value identity shown earlier to get:

$$\frac{1}{N}\sum_{i=1}^{N} \;C_i \approx E[\; \lambda \; |\; \alpha ] = \frac{1}{\alpha}$$ 

An alternative, which I encourage the reader to try, is to have two priors, one for each $\lambda_i$; creating two exponential distributions with different $\alpha$ values reflects a prior belief that the rate changed after some period.

What about $\tau$? Due to the noisiness of the data, it is too difficult to pick out a priori when $\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying

\begin{align}
& \tau \sim \text{DiscreteUniform(1,70) }\\
& \Rightarrow P( \tau = k ) = \frac{1}{70}
\end{align}

So after all this, what does our overall prior for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it would be an ugly, complicated mess involving symbols only a mathematician would love. And things would only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution. We next turn to PyMC, a Python library for performing Bayesian analysis that is agnostic to the mathematical monster we have created. 

Introducing our first hammer: PyMC
-----

PyMC is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation can be lacking in areas, especially the bridge from beginner to hacker. One of this book's main goals is to solve that problem, and also to demonstrate why PyMC is so cool.

We will model the above problem using the PyMC library.
This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random. The name arises because we create probability models using programming variables as the model's components; that is, model components are first-class primitives in this framework. 

B. Cronin [5] has a very motivating description of probabilistic programming:

> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.

Due to its poorly understood title, I'll refrain from using the name *probabilistic programming*. Instead, I'll simply use *programming*, as that is what it really is. 

The PyMC code is easy to follow: the only novel thing should be the syntax, and I will interrupt the code to explain sections. Simply remember that we are representing the model's components ($\tau, \lambda_1, \lambda_2$ ) as variables:

```
import pymc as mc

n = count_data.shape[0]

alpha = 1.0/count_data.mean()  # recall count_data is the
                               # variable that holds our txt counts

lambda_1 = mc.Exponential( "lambda_1", alpha )
lambda_2 = mc.Exponential( "lambda_2", alpha )

tau = mc.DiscreteUniform( "tau", lower = 0, upper = n )
```

In the above code, we create the PyMC variables corresponding to $\lambda_1, \; \lambda_2$.
These are PyMC's *stochastic variables*, so called because they are treated by the backend as random number generators. We can test this by calling their built-in `random()` method.

```
print("Random output:", tau.random(), tau.random(), tau.random())
```

    Random output: 13 12 54

```
@mc.deterministic
def lambda_( tau = tau, lambda_1 = lambda_1, lambda_2 = lambda_2 ):
    out = np.zeros( n ) 
    out[:tau] = lambda_1  # lambda before tau is lambda_1
    out[tau:] = lambda_2  # lambda after (and including) tau is lambda_2
    return out
```

This code creates a new function `lambda_`, but really we think of it as a random variable: the random variable $\lambda$ from above. Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet. `@mc.deterministic` is a decorator that tells PyMC this is a deterministic function, i.e., if the arguments were deterministic (which they are not), the output would be deterministic as well. 

```
observation = mc.Poisson( "obs", lambda_, value = count_data, observed = True)

model = mc.Model( [observation, lambda_1, lambda_2, tau] )
```

The variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `value` keyword. We also set `observed = True` to tell PyMC that this should stay fixed in our analysis. Finally, PyMC wants us to collect all the variables of interest and create a `Model` instance out of them. This makes our life easier when we try to retrieve the results.

The code below will be explained in Chapter 3, but it is where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov chain Monte Carlo* (MCMC), which I delay explaining until Chapter 3. It returns thousands of random variables from the posterior distributions of $\lambda_1, \lambda_2$ and $\tau$.
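As a brief aside, the deterministic `lambda_` defined above is just a piecewise-constant array. A self-contained NumPy sketch of the same switchpoint logic, with made-up values for $\tau$, $\lambda_1$, $\lambda_2$ and the number of days:

```python
import numpy as np

def switch_lambda(tau, lam_1, lam_2, n_days):
    """lam_1 for days before tau, lam_2 from day tau onward."""
    out = np.zeros(n_days)
    out[:tau] = lam_1
    out[tau:] = lam_2
    return out

print(switch_lambda(tau=3, lam_1=18.0, lam_2=23.0, n_days=6))
# [18. 18. 18. 23. 23. 23.]
```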
We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) in histograms.

```
### Mysterious code to be explained in Chapter 3.
mcmc = mc.MCMC(model)
mcmc.sample( 35000, 5000, 1 )
```

    [****************100%******************] 35000 of 35000 complete

```
lambda_1_samples = mcmc.trace( 'lambda_1' )[:]
lambda_2_samples = mcmc.trace( 'lambda_2' )[:]
tau_samples = mcmc.trace( 'tau' )[:]
```

```
figsize(12.5, 10)
# histograms of the samples:

ax = plt.subplot(311)
ax.set_autoscaley_on(False)

plt.hist( lambda_1_samples, histtype='stepfilled', bins = 30, alpha = 0.85, 
          label = "posterior of $\lambda_1$", color = "#A60628", normed = True )
plt.legend(loc = "upper left")
plt.title(r"Posterior distributions of the variables $\lambda_1,\;\lambda_2,\;\tau$")
plt.xlim([15,30])
plt.xlabel("$\lambda_1$ value")
plt.ylabel("probability")

ax = plt.subplot(312)
ax.set_autoscaley_on(False)

plt.hist( lambda_2_samples, histtype='stepfilled', bins = 30, alpha = 0.85, 
          label = "posterior of $\lambda_2$", color="#7A68A6", normed = True )
plt.legend(loc = "upper left")
plt.xlim([15,30])
plt.xlabel("$\lambda_2$ value")
plt.ylabel("probability")

plt.subplot(313)

w = 1.0 / tau_samples.shape[0] * np.ones_like( tau_samples )
plt.hist( tau_samples, bins = n_count_data, alpha = 1, 
          label = r"posterior of $\tau$",
          color="#467821", weights=w, rwidth =1. )

plt.legend(loc = "upper left")
plt.ylim([0,.75])
plt.xlim([35, len(count_data)-20])
plt.xlabel("days")
plt.ylabel("probability");
```

### Interpretation

Recall that the Bayesian methodology returns a *distribution*; hence we now have distributions to describe the unknown $\lambda$s and $\tau$. What have we gained?
Immediately we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also say what a plausible value for the parameters might be: $\lambda_1$ is around 18 and $\lambda_2$ is around 23. What other observations can you make? Look at the data again; do these seem reasonable? The distributions of the two $\lambda$s are positioned very differently, indicating that it's likely there was a change in the user's text-message behaviour.

Also notice that the posterior distributions for the $\lambda$s do not look like exponential distributions, even though we originally started modelling with exponential random variables. They are really not anything we recognize. But this is OK. This is one of the benefits of taking a computational point of view. If we had instead done this analysis mathematically, we would have been stuck with an analytically intractable (and messy) distribution. Via computation, we are agnostic to the tractability.

Our analysis also returned a distribution for what $\tau$ might be. Its posterior distribution looks a little different from the other two because it is a discrete random variable, hence it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\tau$ would have been more spread out, reflecting that many days would be plausible candidates for $\tau$. Instead, it is very peaked. 

###Why would I want samples from the posterior, anyways?

We will deal with this question for the remainder of the book, and it is an understatement to say we can perform amazingly useful things. For now, let's end this chapter with one more example. We'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \; 0 \le t \le 70$?
Recall that the expected value of a Poisson is equal to its parameter $\lambda$, so the question is equivalent to *what is the expected value of $\lambda$ at time $t$*?

In the code below, we are calculating the following: let $i$ index samples from the posterior distributions. Given a day $t$, we average over all posterior samples $\lambda_i$ for that day $t$, using $\lambda_i = \lambda_{1,i}$ if $t \lt \tau_i$ (that is, if the behaviour change hadn't occurred yet), else we use $\lambda_i = \lambda_{2,i}$. 

___________________

```
figsize( 12.5, 4)
# tau_samples, lambda_1_samples, lambda_2_samples contain
# N samples from the corresponding posterior distribution
N = tau_samples.shape[0]
expected_texts_per_day = np.zeros(n_count_data)
for day in range(0, n_count_data):
    # ix is a bool index of all tau samples for which the switchpoint
    # occurs *after* 'day', i.e. 'day' is still in the lambda_1 regime
    ix = day < tau_samples
    # Each posterior sample corresponds to a value for tau.
    # For each day, that value of tau indicates whether we're "before" (in the lambda_1 "regime")
    # or "after" (in the lambda_2 "regime") the switchpoint.
    # By taking the posterior sample of lambda_1/2 accordingly, we can average
    # over all samples to get an expected value for lambda on that day.
    # As explained, the "message count" random variable is Poisson-distributed,
    # and therefore lambda (the Poisson parameter) is the expected value of "message count".
    expected_texts_per_day[day] = (lambda_1_samples[ix].sum() 
                                   + lambda_2_samples[~ix].sum() ) / N

plt.plot( range( n_count_data), expected_texts_per_day, lw =4, color = "#E24A33" )
plt.xlim( 0, n_count_data )
plt.xlabel( "Day" )
plt.ylabel( "Expected # text-messages" )
plt.title( "Expected number of text-messages received")
#plt.ylim( 0, 35 )
plt.bar( np.arange( len(count_data) ), count_data, color ="#348ABD", alpha = 0.5,
         label="observed texts per day")

plt.legend(loc="upper 
left");
```

Our analysis shows strong support for believing the user's behavior did change ($\lambda_1$ would have been close in value to $\lambda_2$ had this not been true), and that the change was sudden rather than gradual (demonstrated by $\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-2-text subscription, or a new relationship. (The 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)

##### Exercises

1\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\lambda_1$ and $\lambda_2$?

```
#type your code here.
```

2\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.

```
#type your code here.
```

3\. Looking at the posterior distribution graph of $\tau$, why do you think there is a small number of posterior $\tau$ samples near 0? `hint:` Look at the data again.

4\. What is the mean of $\lambda_1$ **given** we know $\tau$ is less than 45? That is, suppose we have new information and know for certain that the change in behaviour occurred before day 45. What is the expected value of $\lambda_1$ now? (You do not need to redo the PyMC part; just consider all instances where `tau_samples < 45`.)

```
#type your code here.
```

### References

- [1] Gelman, Andrew. Web. 22 Jan 2013.
- [2] Norvig, Peter. 2009. [*The Unreasonable Effectiveness of Data*](http://www.csee.wvu.edu/~gidoretto/courses/2011-fall-cp/reading/TheUnreasonable EffectivenessofData_IEEE_IS2009.pdf).
- [3] Patil, A., D. Huard and C.J. Fonnesbeck. 2010. PyMC: Bayesian Stochastic Modelling in Python. Journal of Statistical Software, 35(4), pp. 1-81.
- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.
- [5] Cronin, Beau. "Why Probabilistic Programming Matters." Online posting, 24 Mar 2013.

```
from IPython.core.display import HTML
def css_styling():
    styles = open("../styles/custom.css", "r").read()
    return HTML(styles)
css_styling()
```
```python
# Mount Google Drive
from google.colab import drive  # import drive from google colab

ROOT = "/content/drive"  # default location for the drive
print(ROOT)  # print content of ROOT (Optional)

drive.mount(ROOT, force_remount=True)
```

# Neuromatch Academy: Week 1, Day 1, Tutorial 3
# Model Types: "Why" models
__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording

__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael Waskom

We would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here.

___
# Tutorial Objectives
This is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data.
In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.\n\nTo understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:\n\n- Write code to compute formula for entropy, a measure of information\n- Compute the entropy of a number of toy distributions\n- Compute the entropy of spiking activity from the Steinmetz dataset\n\n\n```python\n#@title Video 1: \u201cWhy\u201d models\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id='OOIDEr1e5Gg', width=854, height=480, fs=1)\nprint(\"Video available at https://youtube.com/watch?v=\" + video.id)\nvideo\n\n```\n\n Video available at https://youtube.com/watch?v=OOIDEr1e5Gg\n\n\n\n\n\n\n\n\n\n\n\n# Setup\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\n```\n\n\n```python\n#@title Figure Settings\nimport ipywidgets as widgets #interactive display\n\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\nplt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle\")\n\n```\n\n\n```python\n#@title Helper Functions\n\ndef plot_pmf(pmf,isi_range):\n \"\"\"Plot the probability mass function.\"\"\"\n ymax = max(0.2, 1.05 * np.max(pmf))\n pmf_ = np.insert(pmf, 0, pmf[0])\n plt.plot(bins, pmf_, drawstyle=\"steps\")\n plt.fill_between(bins, pmf_, step=\"pre\", alpha=0.4)\n plt.title(f\"Neuron {neuron_idx}\")\n plt.xlabel(\"Inter-spike interval (s)\")\n plt.ylabel(\"Probability mass\")\n plt.xlim(isi_range);\n plt.ylim([0, ymax])\n```\n\n\n```python\n#@title Download Data\nimport io\nimport requests\nr = requests.get('https://osf.io/sy5xt/download')\nif r.status_code != 200:\n print('Could not download data')\nelse:\n steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']\n```\n\n# Section 1: Optimization and Information\n\nNeurons 
can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: \n\nWhat is the optimal way for a neuron to fire in order to maximize its ability to communicate information?\n\nIn order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\n\n\\begin{align}\n H_b(X) &= -\\sum_{x\\in X} p(x) \\log_b p(x)\n\\end{align}\n\nwhere $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Appendix for a more detailed look at how this equation was derived.\n\nThe most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well e.g. when $b=e$ we call the units *nats*.\n\nFirst, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.\n\nFor our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.\n\n\n```python\nn_bins = 50 # number of points supporting the distribution\nx_range = (0, 1) # will be subdivided evenly into bins corresponding to points\n\nbins = np.linspace(*x_range, n_bins + 1) # bin edges\n\npmf = np.zeros(n_bins)\npmf[len(pmf) // 2] = 1.0 # middle point has all the mass\n\n# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not\n# suitable. 
Instead, we directly plot the PMF as a step function to visualize \n# the histogram:\npmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges\nplt.plot(bins, pmf_, drawstyle=\"steps\")\n# `fill_between` provides area shading\nplt.fill_between(bins, pmf_, step=\"pre\", alpha=0.4)\nplt.xlabel(\"x\")\nplt.ylabel(\"p(x)\")\nplt.xlim(x_range)\nplt.ylim(0, 1);\n```\n\nIf we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.\n\nHow much entropy is contained in a deterministic distribution? 0\n\n## Exercise 1: Computing Entropy\n\nYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. \n\nRecall that $\\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `np.nan` (\"Not a Number\"). By convention, these undefined terms\u2014 which correspond to points in the distribution with zero mass\u2014are excluded from the sum that computes the entropy.\n\n\n```python\ndef entropy(pmf):\n \"\"\"Given a discrete distribution, return the Shannon entropy in bits.\n \n This is a measure of information in the distribution. 
For a totally \n deterministic distribution, where samples are always found in the same bin,\n samples from the distribution give no more information and the entropy\n is 0.\n\n For now this assumes `pmf` arrives as a well-formed distribution (that is, \n `np.sum(pmf)==1` and `not np.any(pmf < 0)`)\n\n Args:\n pmf (np.ndarray): The probability mass function for a discrete distribution\n represented as an array of probabilities.\n Returns:\n h (number): The entropy of the distribution in `pmf`.\n\n \"\"\"\n ############################################################################\n # Exercise for students: compute the entropy of the provided PMF \n # 1. Exclude the points in the distribution with no mass (where `pmf==0`).\n # Hint: this is equivalent to including only the points with `pmf>0`.\n # 2. Implement the equation for Shannon entropy (in bits).\n # When ready to test, comment or remove the next line\n #raise NotImplementedError(\"Exercise: implement the equation for entropy\")\n ############################################################################\n\n # keep only the non-zero entries to avoid an error from log2(0)\n pmf = pmf[pmf>0]\n\n # implement the equation for Shannon entropy (in bits)\n h = -np.sum(np.multiply(pmf,np.log2(pmf)))\n\n # return the absolute value (avoids getting a -0 result)\n return np.abs(h)\n\n# Uncomment to test your entropy function\nprint(f\"{entropy(pmf):.2f} bits\")\n```\n\n 0.00 bits\n\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_55c07dc8.py)\n\n\n\nWe expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be\n\n$-1 \\cdot \\log_2 1 = -0 = 0$\n\nNote that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. 
A single peak is deterministic regardless of which point it sits on.\n\n\n```python\npmf = np.zeros(n_bins)\npmf[2] = 1.0 # arbitrary point has all the mass\n\npmf_ = np.insert(pmf, 0, pmf[0])\nplt.plot(bins, pmf_, drawstyle=\"steps\")\nplt.fill_between(bins, pmf_, step=\"pre\", alpha=0.4)\nplt.xlabel(\"x\")\nplt.ylabel(\"p(x)\")\nplt.xlim(x_range)\nplt.ylim(0, 1);\n```\n\nWhat about a distribution with mass split equally between two points?\n\n\n```python\npmf = np.zeros(n_bins)\npmf[len(pmf) // 3] = 0.5 \npmf[2 * len(pmf) // 3] = 0.5\n\npmf_ = np.insert(pmf, 0, pmf[0])\nplt.plot(bins, pmf_, drawstyle=\"steps\")\nplt.fill_between(bins, pmf_, step=\"pre\", alpha=0.4)\nplt.xlabel(\"x\")\nplt.ylabel(\"p(x)\")\nplt.xlim(x_range)\nplt.ylim(0, 1);\n```\n\nHere, the entropy calculation is\n\n$-(0.5 \\log_2 0.5 + 0.5\\log_2 0.5)=1$\n\nThere is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. \n\nLikewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other:\n\n\n\n$-(0.2 \\log_2 0.2 + 0.8\\log_2 0.8)\\approx 0.72$\n\nTry changing the number and weighting of the peaks, and see how the entropy varies.\n\nIf we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\n\n\\begin{align}\n -\\sum_i p_i \\log_b p_i&= -\\sum_i^N \\frac{1}{N} \\log_b \\frac{1}{N}\\\\\n &= -N \\cdot \\frac{1}{N} \\log_b \\frac{1}{N}\\\\\n &= -\\log_b \\frac{1}{N} \\\\\n &= \\log_b N\n\\end{align}\n\nIf we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\\log_b N$. 
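As a quick numerical sanity check of this claim, here is a minimal, self-contained sketch using only NumPy (`entropy_bits` is a hypothetical helper that mirrors the `entropy` function from Exercise 1):

```python
import numpy as np

def entropy_bits(pmf):
    """Shannon entropy in bits of a discrete distribution (mirrors Exercise 1)."""
    pmf = pmf[pmf > 0]  # drop zero-mass points, where log2 is undefined
    return float(-np.sum(pmf * np.log2(pmf)))

# For several N, the uniform distribution attains exactly log2(N) bits,
# and moving mass between bins can only lower the entropy.
for n in (2, 10, 50):
    uniform = np.ones(n) / n
    assert np.isclose(entropy_bits(uniform), np.log2(n))

    lopsided = uniform.copy()
    lopsided[0] += lopsided[1]  # shift one bin's mass onto another
    lopsided[1] = 0.0
    assert entropy_bits(lopsided) < np.log2(n)

print(f"uniform over 50 bins: {entropy_bits(np.ones(50) / 50):.2f} bits")
```

For 50 bins this prints 5.64 bits, matching the $\log_2 50 \approx 5.64$ figure used in this section.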
This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \\log_b N]$.\n\n\n\n```python\npmf = np.ones(n_bins) / n_bins # [1/N] * N\n\npmf_ = np.insert(pmf, 0, pmf[0])\nplt.plot(bins, pmf_, drawstyle=\"steps\")\nplt.fill_between(bins, pmf_, step=\"pre\", alpha=0.4)\nplt.xlabel(\"x\")\nplt.ylabel(\"p(x)\")\nplt.xlim(x_range);\nplt.ylim(0, 1);\n```\n\nHere, there are 50 points and the entropy of the uniform distribution is $\\log_2 50\\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\\log_2 50$, something must be wrong with our implementation of the discrete entropy computation.\n\n# Section 2: Information, neurons, and spikes\n\n\n```python\n#@title Video 2: Entropy of different distributions\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id='o6nyrx3KH20', width=854, height=480, fs=1)\nprint(\"Video available at https://youtube.com/watch?v=\" + video.id)\nvideo\n```\n\n Video available at https://youtube.com/watch?v=o6nyrx3KH20\n\n\n\n\n\n\n\n\n\n\n\nRecall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? \n\nWe'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:\n\n1. Deterministic\n2. Uniform\n3. Exponential\n\nFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. 
In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?\n\nLet's construct our three distributions and see how their entropies differ.\n\n\n```python\nfrom scipy import stats # used below for the exponential PDF\n\nn_bins = 50\nmean_isi = 0.025\nisi_range = (0, 0.25)\n\nbins = np.linspace(*isi_range, n_bins + 1)\nmean_idx = np.searchsorted(bins, mean_isi)\n\n# 1. all mass concentrated on the ISI mean\npmf_single = np.zeros(n_bins)\npmf_single[mean_idx] = 1.0\n\n# 2. mass uniformly distributed about the ISI mean\npmf_uniform = np.zeros(n_bins)\npmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)\n\n# 3. mass exponentially distributed about the ISI mean\npmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)\npmf_exp /= np.sum(pmf_exp)\n```\n\n\n```python\n#@title\n#@markdown Run this cell to plot the three PMFs\nfig, axes = plt.subplots(ncols=3, figsize=(18, 5))\n\ndists = [# (subplot title, pmf, ylim)\n (\"(1) Deterministic\", pmf_single, (0, 1.05)), \n (\"(2) Uniform\", pmf_uniform, (0, 1.05)), \n (\"(3) Exponential\", pmf_exp, (0, 1.05))]\n\nfor ax, (label, pmf_, ylim) in zip(axes, dists):\n pmf_ = np.insert(pmf_, 0, pmf_[0])\n ax.plot(bins, pmf_, drawstyle=\"steps\")\n ax.fill_between(bins, pmf_, step=\"pre\", alpha=0.4)\n ax.set_title(label)\n ax.set_xlabel(\"Inter-spike interval (s)\")\n ax.set_ylabel(\"Probability mass\")\n ax.set_xlim(isi_range);\n ax.set_ylim(ylim);\n```\n\n\n```python\nprint(\n f\"Deterministic: {entropy(pmf_single):.2f} bits\",\n f\"Uniform: {entropy(pmf_uniform):.2f} bits\",\n f\"Exponential: {entropy(pmf_exp):.2f} bits\",\n sep=\"\\n\",\n)\n```\n\n Deterministic: 0.00 bits\n Uniform: 3.32 bits\n Exponential: 3.77 bits\n\n\nThe exponential PMF has higher entropy than the uniform PMF here because this uniform distribution only spans the bins up to twice the mean ISI, while the exponential distribution spreads its mass across the full range of bins.\n\n\n```python\n#@title Video 3: Probabilities from histogram\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id='e2U_-07O9jo', width=854, height=480, 
fs=1)\nprint(\"Video available at https://youtube.com/watch?v=\" + video.id)\nvideo\n```\n\n Video available at https://youtube.com/watch?v=e2U_-07O9jo\n\n\n\n\n\n\n\n\n\n\n\nIn the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons?\n\nOne way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\n\n\\begin{align}\np_i = \\frac{n_i}{\\sum\\nolimits_{i}n_i}\n\\end{align}\n\nwhere $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval.\n\n## Exercise 2: Probability Mass Function\n\nYour second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts.\n\nTo verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.\n\n\n```python\nneuron_idx = 283\n\nisi = np.diff(steinmetz_spikes[neuron_idx])\nbins = np.linspace(*isi_range, n_bins + 1)\ncounts, _ = np.histogram(isi, bins)\n```\n\n\n```python\ndef pmf_from_counts(counts):\n \"\"\"Given counts, normalize by the total to estimate probabilities.\"\"\"\n ###########################################################################\n # Exercise: Compute the PMF. 
Remove the next line to test your function\n #raise NotImplementedError(\"Student exercise: compute the PMF from ISI counts\")\n ###########################################################################\n\n pmf = counts/np.sum(counts)\n\n return pmf\n\n# Uncomment when ready to test your function\npmf = pmf_from_counts(counts)\nplot_pmf(pmf,isi_range)\n```\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_49231923.py)\n\n*Example output:*\n\n\n\n\n\n# Section 3: Calculate entropy from a PMF\n\n\n```python\n#@title Video 4: Calculating entropy from pmf\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id='Xjy-jj-6Oz0', width=854, height=480, fs=1)\nprint(\"Video available at https://youtube.com/watch?v=\" + video.id)\nvideo\n```\n\n Video available at https://youtube.com/watch?v=Xjy-jj-6Oz0\n\n\n\n\n\n\n\n\n\n\n\nNow that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.\n\n\n```python\nprint(f\"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits\")\n```\n\n Entropy for Neuron 283: 3.36 bits\n\n\n## Interactive Demo: Entropy of neurons\n\nWe can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.\n\n\n\n\n\n```python\n#@title\n#@markdown **Run the cell** to enable the sliders. 
\n\ndef _pmf_from_counts(counts):\n \"\"\"Given counts, normalize by the total to estimate probabilities.\"\"\"\n pmf = counts / np.sum(counts)\n return pmf\n\ndef _entropy(pmf):\n \"\"\"Given a discrete distribution, return the Shannon entropy in bits.\"\"\"\n # keep only the non-zero entries to avoid an error from log2(0)\n pmf = pmf[pmf > 0]\n h = -np.sum(pmf * np.log2(pmf))\n # absolute value applied to avoid getting a -0 result\n return np.abs(h)\n\n@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))\ndef steinmetz_pmf(neuron):\n \"\"\" Given a neuron from the Steinmetz data, compute its PMF and entropy \"\"\"\n isi = np.diff(steinmetz_spikes[neuron])\n bins = np.linspace(*isi_range, n_bins + 1)\n counts, _ = np.histogram(isi, bins)\n pmf = _pmf_from_counts(counts)\n\n plot_pmf(pmf,isi_range)\n plt.title(f\"Neuron {neuron}: H = {_entropy(pmf):.2f} bits\")\n\n```\n\n\n interactive(children=(IntSlider(value=0, description='neuron', max=733), Output()), _dom_classes=('widget-inte\u2026\n\n\n---\n# Summary\n\n\n\n```python\n#@title Video 5: Summary of model types\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id='X4K2RR5qBK8', width=854, height=480, fs=1)\nprint(\"Video available at https://youtube.com/watch?v=\" + video.id)\nvideo\n```\n\n Video available at https://youtube.com/watch?v=X4K2RR5qBK8\n\n\n\n\n\n\n\n\n\n\n\nCongratulations! You've finished your first NMA tutorial. In this 3 part tutorial series, we used different types of models to understand the spiking behavior of neurons recorded in the Steinmetz data set. 
\n\n - We used \"what\" models to discover that the ISI distribution of real neurons is closest to an exponential distribution\n - We used \"how\" models to discover that balanced excitatory and inhibitory inputs, coupled with a leaky membrane, can give rise to neuronal spiking that exhibits such an exponential ISI distribution\n - We used \"why\" models to discover that exponential ISI distributions contain the most information when the mean firing rate is constrained\n\n\n\n# Bonus\n\n### The foundations for Entropy\n\nIn his foundational [1948 paper](https://en.wikipedia.org/wiki/A_Mathematical_Theory_of_Communication) on information theory, Claude Shannon began with three criteria for a function $H$ defining the entropy of a discrete distribution of probability masses $p_i\\in p(X)$ over the points $x_i\\in X$:\n1. $H$ should be continuous in the $p_i$. \n - That is, $H$ should change smoothly in response to smooth changes to the mass $p_i$ on each point $x_i$.\n2. If all the points have equal shares of the probability mass, $p_i=1/N$, $H$ should be a non-decreasing function of $N$. \n - That is, if $X_N$ is the support with $N$ discrete points and $p(x\\in X_N)$ assigns constant mass to each point, then $H(X_1) < H(X_2) < H(X_3) < \\dots$\n3. $H$ should be preserved by (invariant to) the equivalent (de)composition of distributions.\n - For example (from Shannon's paper) if we have a discrete distribution over three points with masses $(\\frac{1}{2},\\frac{1}{3},\\frac{1}{6})$, then their entropy can be represented in terms of a direct choice between the three and calculated $H(\\frac{1}{2},\\frac{1}{3},\\frac{1}{6})$. However, it could also be represented in terms of a series of two choices: \n 1. either we sample the point with mass $1/2$ or not (_not_ is the other $1/2$, whose subdivisions are not given in the first choice), \n 2. 
if (with probability $1/2$) we _don't_ sample the first point, we sample one of the two remaining points, whose unconditional masses $1/3$ and $1/6$ renormalize to the conditional masses $2/3$ and $1/3$.\n \n Thus in this case we require that $H(\\frac{1}{2},\\frac{1}{3},\\frac{1}{6})=H(\\frac{1}{2},\\frac{1}{2}) + \\frac{1}{2}H(\\frac{2}{3}, \\frac{1}{3})$\n\nThere is a unique function (up to a linear scaling factor) which satisfies these 3 requirements: \n\n\\begin{align}\n H_b(X) &= -\\sum_{x\\in X} p(x) \\log_b p(x)\n\\end{align}\n\nwhere the base of the logarithm $b>1$ controls the units of entropy. The two most common cases are $b=2$ for units of _bits_, and $b=e$ for _nats_.\n\nWe can view this function as the expectation of the self-information over a distribution:\n\n$$H_b(X) = \\mathbb{E}_{x\\in X} \\left[I_b(x)\\right]$$\n\n$$I_b(x)=-\\log_b p(x)$$\n\nSelf-information is just the negative logarithm of probability, and is a measure of how surprising an event sampled from the distribution would be. Events with $p(x)=1$ are certain to occur, and their self-information is zero (as is the entropy of the distribution they compose) meaning they are totally unsurprising. The smaller the probability of an event, the higher its self-information, and the more surprising the event would be to observe. 
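Shannon's decomposition example can also be checked numerically. Below is a minimal sketch using only NumPy (`H` is a hypothetical helper defined here, not part of the tutorial code); note that the second-stage masses enter as the conditional distribution $(2/3, 1/3)$, i.e. the remaining masses $1/3$ and $1/6$ renormalized:

```python
import numpy as np

def H(*masses):
    """Shannon entropy in bits of the given probability masses."""
    p = np.array(masses, dtype=float)
    p = p[p > 0]  # log2(0) is undefined; zero-mass terms are excluded
    return float(-np.sum(p * np.log2(p)))

# Direct three-way choice...
direct = H(1/2, 1/3, 1/6)
# ...equals a 50/50 choice, plus (with probability 1/2) a second choice
# between the remaining points, renormalized to masses 2/3 and 1/3.
staged = H(1/2, 1/2) + (1/2) * H(2/3, 1/3)

assert np.isclose(direct, staged)
print(f"H(1/2, 1/3, 1/6) = {direct:.4f} bits")
```

Both routes give roughly 1.46 bits, which is what criterion 3 demands.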
\n\n\n# IPython Notebook\n\nIPython started out as a Python console project specialized for scientific applications.\nOver time the project has broadened, and it now includes a whole set of utilities for scientific computing, among them the notebook.\n\nThe IPython notebook is an interactive worksheet, inspired by Maple and Mathematica, that runs as a web service. 
When a notebook starts up, IPython launches an application server running an IPython kernel, and opens your OS's default browser on the localhost address 127.0.0.1. \n\nThis client/server mode of operation notably makes it possible to run computations by connecting to a supercomputer, or to a data center such as Amazon Web Services, Microsoft Azure or Google App Engine.\n\n\nThe notebook is an interactive computing environment complementary to Spyder.\nAs an IDE, Spyder is better suited to developing functions and applications.\nThe notebook is better suited to carrying out studies or processing data.\n\n## How the IPython Notebook works\n\nIn IPython versions >2.x, the notebook is used by constantly switching between 2 modes: \n\n* cell command mode: \n * Reached by pressing ESC\n * Create (A and B), delete (X), copy (C) and paste (V) cells\n * Change the cell type: code (Y) or markdown (M)\n\n* edit mode\n * Reached by pressing ENTER\n * Run the cell's code (SHIFT ENTER) or render a markdown cell\n\nAll of the commands are available in the toolbar (edit). 
All of the keyboard shortcuts are listed in the interface's help menu (help -> keyboard shortcuts), along with a guided tour of the interface (help -> user interface tour).\n\nFor more details, refer to the official documentation: \n\nhttp://ipython.org/ipython-doc/2/notebook/notebook.html#structure-of-a-notebook-document\n\nA gallery of notebooks is available online via the nbviewer web application, which renders .ipynb files hosted on Github: \nhttp://nbviewer.ipython.org/\n\n# Numpy, Scipy and Matplotlib\n\nNumpy, Scipy and Matplotlib are Python's most important numerical computing libraries.\n\n## Numpy \n\nIncludes: \n\n* the full set of mathematical functions\n* operations on vectors and matrices\nhttp://wiki.scipy.org/NumPy_for_Matlab_Users\n\nIt covers roughly all of Matlab's core functionality. \n\n\n```python\nimport numpy as np # np is the conventional import alias\nprint np.array([0,1,2])\n```\n\n [0 1 2]\n\n\nThe numpy array is the equivalent of the Matlab vector: \n\n\n```python\nx = np.arange(0,10)\nprint x\n```\n\n [0 1 2 3 4 5 6 7 8 9]\n\n\n\n```python\nprint x**2 # raising to the power 2. 
All operations are element-wise by default\nprint x+x\nprint x.dot(x) # dot product\n```\n\n [ 0 1 4 9 16 25 36 49 64 81]\n [ 0 2 4 6 8 10 12 14 16 18]\n 285\n\n\n\n```python\nx2 = np.array([[0,1],[1,0]]) # matrix\nprint x2 \n```\n\n [[0 1]\n [1 0]]\n\n\n```python\nx3 = np.random.randint(low=0,high=10,size=[3,3])\nu3 = np.array([2,3,4])\nprint x3\nprint x3.T # transpose\nprint x3*u3 # element-wise multiplication of each row by the vector u3\nprint x3.dot(u3) # matrix product\n```\n\n [[1 6 5]\n [7 3 8]\n [8 7 2]]\n [[1 7 8]\n [6 3 7]\n [5 8 2]]\n [[ 2 18 20]\n [14 9 32]\n [16 21 8]]\n [40 55 45]\n\n\nGenerally speaking, purely matrix-oriented manipulations are often heavier, syntax-wise, than in Matlab\u00ae (MATrix LABoratory), which specializes in this kind of object. However, performance is similar. \n\nA benchmark of this kind of manipulation, carried out by NASA across Matlab, Python, Java and Fortran, is available online: https://modelingguru.nasa.gov/docs/DOC-1762 .\n\nIn short, Matlab and numpy have similar performance; Fortran (with the Intel compiler) is always better.\n\n\n## Matplotlib \n\nThe most popular plotting library for 2D graphs (Mayavi for 3D).\nThe online gallery comes with many code examples: \n\nhttp://matplotlib.org/gallery.html\n\nA matplotlib tutorial: http://nbviewer.ipython.org/github/cmiller8/PythonforBuildingAnalysts/blob/master/0_PythonBaseLibraries/3_MatplotlibLibrary.ipynb\n\n\n```python\nimport matplotlib.pylab as plt\nimport seaborn as sns\n%matplotlib inline\n```\n\nThe previous line is a command specific to IPython. 
It is not part of the Python language.\nCommands starting with **%** or **%%** are **magics** that configure the notebook's behavior.\n**%matplotlib inline** means that plots are embedded inline in the notebook (and saved with it).\nInteractive plots are obtained with the **%matplotlib qt** command.\n\n\n```python\nplt.plot([0,1,2,3],[0,2,1,2],'s-')\n```\n\nCreation of 100 points from 0 to $5\\pi$ in the vector $x_4$, then computation of \n$$y_4 = \\sin(x_4)$$\n\n\n```python\nx4 = np.linspace(0,5*np.pi,100) \ny4 = np.sin(x4)\n\nfig = plt.figure() # create a figure window\nax1 = fig.add_subplot(121) # left plot of 2 plots laid out horizontally\nax2 = fig.add_subplot(122) # right plot of 2 plots laid out horizontally \nax1.plot(x4,y4)\nax2.plot(y4,x4)\nax1.set_title(r'$sin(x)$')\nfig.savefig('test.png',dpi=150)\n```\n\nThe following code draws a sample of size 1000 from the multivariate normal distribution: \n\n$$x_5 \\sim \\mathcal{N}(\\mu,\\Sigma)$$\nWith $$\\mu= \\left(\\begin{array}{c} 20 \\\\ 30 \\\\ 100\\end{array} \\right) $$\n\n$$\\Sigma= \\left(\\begin{array}{ccc} 10&-5&0.3\\\\-5&10&0.2\\\\0.3&0.2&1000\\end{array} \\right)$$\n\nWe then plot only the first 2 components of the vector in a scatter plot. The last component is used for the size and the color of the points.\n\n\n```python\nx5 = np.random.multivariate_normal(mean=[20,30,100],cov=[[10,-5,0.3],[-5,10,0.2],[0.3,0.2,1000]],size=1000)\nplt.scatter(x5[:,0],x5[:,1],s=x5[:,2],c=x5[:,2],edgecolor='none',alpha=0.15,cmap='hot')\n```\n\n\n```python\nres = plt.hist(x5.flatten(),bins=50) # what does the flatten method do? 
\n```\n\n## Scipy \n\nContains the equivalent of Matlab's main toolboxes.\n\n\n```python\nfrom scipy import linalg, optimize, sparse # for example: linear algebra, optimization and sparse matrix handling\nfrom copy import copy\n```\n\n# Lab: matrix manipulation\n\nhttp://en.wikibooks.org/wiki/LaTeX/Mathematics\n\n### Exercise: \n* Using numpy's diag function, write the function that builds the following tridiagonal matrix of size $N_x$:\n\n$$ A = \\left( \\begin{array}{cccc}\n1+2Fo & -Fo & & 0\\\\\n-Fo & 1+2Fo & \\ddots & \\\\\n\\\\\n& \\ddots & 1+2Fo & -Fo \\\\\n0 & & -Fo & 1+2Fo \\end{array} \\right)$$\n\nWhere \n\n$$Fo = \\frac{\\alpha \\Delta t}{\\Delta x^2}$$\n\nAnd the vector:\n\n$$b^{i-1} = \\left(\\begin{array}{c} \\theta^{i-1}_1+Fo\\theta^{i-1}_{0} \\\\ \\vdots \\\\ \\theta^{i-1}_k\\\\ \\vdots \\\\ \\theta^{i-1}_{N_x}+Fo\\theta^{i-1}_{N_x+1} \\end{array} \\right) $$\n\n\n* Using the solve function from the scipy.linalg package, find $T^i$ such that: \n\n$$ AT^i = b^{i-1} + \\frac{\\Delta t}{\\rho C_p}S^{i-1}$$\n\nWhere $S$ is a source term in $W.m^{-3}.$\n\n** NB: This is the solution of the transient 1D heat equation with a centered implicit scheme.**\n\n**$\\theta^{i-1}_{0}$ and $\\theta^{i-1}_{N_x+1}$ are the left and right boundary conditions**\n\n**Fo is the mesh Fourier number. With this implicit numerical scheme, the solution is unconditionally stable, but accuracy is degraded for Fo > 1.**\n\n**Exercise: **\n\n* Compute the temperature evolution over $N_t$ time iterations and $N_x$ spatial cells. 
\n* Store the result in a matrix, either by assigning the values into a matrix defined in advance, or by concatenating the vectors with the np.hstack function\n* Use the **%%timeit** command to check the performance of your code\n \n\n**Definition of the system's physical parameters**\n\n\n```python\nalpha = 0.54e-6 # m2.s-1\nrho = 2.4e3 # kg.m-3\nCp = 0.88e3 #J.kg-1.K-1 \n\ndt = 10*60. # s\ndx = 0.01 # m (i.e. 1 cm)\n\nL = 0.5 # m \nduration = 3*24*3600. # 3 days\n\nNx = int(L/dx)\nNt = int(duration/dt)\n\nFo = alpha*dt/(dx**2) # mesh Fourier number. Gives a feel for the accuracy and stability of the numerical scheme\n```\n\n**Definition of the matrix $A$**\n\n\n```python\nA = np.diag(-Fo*np.ones(Nx-1),-1)+\\\n np.diag(1+2*Fo*np.ones(Nx))+\\\n np.diag(-Fo*np.ones(Nx-1),1)\n```\n\n\n```python\nprint A\n```\n\n [[ 7.48 -3.24 0. ..., 0. 0. 0. ]\n [-3.24 7.48 -3.24 ..., 0. 0. 0. ]\n [ 0. -3.24 7.48 ..., 0. 0. 0. ]\n ..., \n [ 0. 0. 0. ..., 7.48 -3.24 0. ]\n [ 0. 0. 0. ..., -3.24 7.48 -3.24]\n [ 0. 0. 0. ..., 0. 
-3.24 7.48]]\n\n\n**Definition of the vector $b$**\n\n\n```python\ndef rhs(Ti,Tg,Td,Fo):\n \"\"\"\n rhs: right hand side of the equation\n \n Compute the vector b such that Ax = b \n \n Ti solution vector at time step i\n Tg left boundary condition temperature (scalar)\n Td right boundary condition temperature (scalar)\n Fo: mesh Fourier number\n \"\"\"\n b = copy(Ti)\n b[0] += Fo*Tg \n b[-1] += Fo*Td \n return b \n```\n\n**Solving the system for $N_t$ time steps**\n\n\n```python\n#%%timeit\nTi = 15*np.ones([Nx,1]) # Initial condition at 15\u00b0C\nt = np.linspace(0,duration,Nt) # time vector\nTg = 10 + 2*np.sin(t/(24*3600)*2*np.pi) # Sinusoidal left boundary condition centered around 10\u00b0C\nTT = [] # empty list to which the temperature profiles are appended at each time step\nTT.append([Ti])\n\nfor t in range(Nt): \n b = rhs(Ti,Tg[t],19,Fo)\n Ti = np.linalg.solve(A,b) # Solve a linear system of equations AX=b\n TT.append([Ti])\n\nT = np.vstack(TT)\nT = np.reshape(T,[Nt+1,Nx])\n```\n\n### Exercise: \n\n* Plot, superimposed, the temperature profile $\\theta$ for the first 10 time steps.\n\n* Draw the iso-temperature curves, annotated in black, on top of the temperature field rendered with the \"hot\" colormap\n\n\n```python\nwith sns.color_palette(\"Blues_r\",12): # set the color cycle of the curves\n ax = plt.plot(T[0:10,:].T)\n```\n\n\n```python\nX,Y = np.meshgrid(np.arange(Nx),np.arange(Nt+1)) # Build the X and Y matrices for the contour function\n```\n\n\n```python\nprint X\nprint \nprint Y\n```\n\n [[ 0 1 2 ..., 47 48 49]\n [ 0 1 2 ..., 47 48 49]\n [ 0 1 2 ..., 47 48 49]\n ..., \n [ 0 1 2 ..., 47 48 49]\n [ 0 1 2 ..., 47 48 49]\n [ 0 1 2 ..., 47 48 49]]\n \n [[ 0 0 0 ..., 0 0 0]\n [ 1 1 1 ..., 1 1 1]\n [ 2 2 2 ..., 2 2 2]\n ..., \n [430 
430 430 ..., 430 430 430]\n [431 431 431 ..., 431 431 431]\n [432 432 432 ..., 432 432 432]]\n\n\n```python\nax = plt.imshow(T.T,cmap='RdBu_r',aspect='auto',interpolation='nearest') # draw the temperature map\n\n# Create the contours\nCS = plt.contour(Y,X,T,10,\n colors='k',linewidths=1) # negative contours will be dashed by default\n \nplt.clabel(CS, fontsize=10, inline=0.5) # draw the labels\n\nf = ax.get_figure() # get the figure object that \"wraps\" the ax object\n\nf.savefig('Chronogram_Temperature_1D.png',dpi=150) # Save the plot\n```\n\n### Saving your results: \n\nSave the matrix T in csv format with numpy's savetxt function\n\n\n```python\nnp.savetxt('temperature_1D_implicit.csv',T,delimiter=';')\n```\n\n## Bonus: symbolic computation with Sympy\n\n\n```python\nfrom sympy import *\ninit_printing() # formats outputs with Latex\n```\n\n\n```python\nX = Symbol('X')\nx, y, z, t, i = symbols('x y z t i')\nk, m, n = symbols('k m n', integer=True)\nf, g, h = symbols('f g h', cls=Function)\n```\n\n\n```python\nexpand((X+1)*(X+2))\n```\n\n\n```python\nfactor(x**2+3*x+2)\n```\n\n\n```python\n(1/cos(x)).series(x, 0, 10) # series expansion around 0 to order 10\n```\n\n\n```python\nsummation(1/2**i, (i, 0, oo)) # infinite sum of 1/2^n \n```\n\n\n```python\nintegrate(exp(-y**2)*erf(y), y) # integration\n```\n\n\n```python\nf = Function('f')\nf(x).diff(x, x) + f(x)\n```\n\n\n```python\ndsolve(f(x).diff(x, x) + f(x), f(x)) # solving differential equations\n```\n\n\n```python\nsolve([x + 5*y - 2, -3*x + 6*y - 15], [x, y]) # solving a linear system\n```\n\n C:\\Anaconda\\lib\\site-packages\\IPython\\core\\formatters.py:239: FormatterWarning: Exception in image/png formatter: \n \\begin{Bmatrix}x : -3, & y : 1\\end{Bmatrix}\n ^\n Unknown symbol: \\begin (at char 0), (line:1, col:1)\n 
FormatterWarning,\n\n\n\n\n$$\\begin{Bmatrix}x : -3, & y : 1\\end{Bmatrix}$$\n\n\n\n\n```python\nimport IPython\nimport sympy\nfrom statsmodels import version as sm_version\nprint \"Sympy version \\t:\\t %s\"%sympy.__version__\nprint \"IPython version \\t:\\t %s\"%IPython.__version__\nprint \"numpy version \\t:\\t %s\"%np.__version__\nprint \"statsmodels version \\t:\\t %s\"%sm_version.full_version\nprint \"matplotlib version \\t:\\t %s\"%plt.__version__\n```\n\n Sympy version \t:\t 0.7.5\n IPython version \t:\t 2.4.1\n numpy version \t:\t 1.9.2\n statsmodels version \t:\t 0.5.0\n matplotlib version \t:\t 1.9.2\n
"Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3849121444839335, "lm_q2_score": 0.3775406687981454, "lm_q1q2_score": 0.14531998845699262}} {"text": "[](https://colab.research.google.com/github/tuankhoin/COMP30027-Practical-Solutions/blob/main/Week%202.ipynb)\n\n\n```python\n%%html\nWELCOME TO COMP30027!\n```\n\n\nWELCOME TO COMP30027!\n\n\n\n## Boring stuff\n- My full name is Tuan Khoi Nguyen, and you can call me Khoi\n- Master of Engineering (Mechatronics)\n- Email: `tuankhoi@unimelb.edu.au` || `tuankhoin@student.unimelb.edu.au`\n\n## Facts\n- May swear during teaching (sorry in advance)\n- Can recommend good food in Footscray (Nguyen gang)\n- Hobbies: a bit of hiking and photography\n\n\n### The University of Melbourne, School of Computing and Information Systems\n# COMP30027 Machine Learning, 2022\n\n## Week 2 - Introduction\n- Jupyter Notebook\n- `numpy`\n\n### What are we using?\n- Python 3.x\n- Jupyter Notebook\n\nJust so?\n- `numpy`\n- `pandas`\n- `matplotlib`\n- `scikit-learn`\n- `scipy`\n\n### Do you even Notebook?\n- `pip`/`apt-get`/`brew`\n- Anaconda\n- VSCode & friends\n- Google Colab\n- What else?\n\n## Cells\n\nNotebooks are made up cells: *markdown cells* and *code cells*. \n\nMarkdown cells can contain text, tables, images, equations, etc. \n(see the Markdown guide under the _Help_ menu for more info). \n\nNext are some code cells. \nYou can evaluate them individually, using the button or by hitting `+`. \nOften, you'll want to run all cells in the notebook, or below a certain point. The functions for doing this are in the _Cell_ menu.\n\n``: Run the cell\n\n`` + ...\n- `A`: New cell above\n- `B`: New cell below\n- `M`: Markdown mode\n- `Y`: Code mode\n- `D D`: Delete\n- `L`: Show line numbers\n\nMore? 
Google 'Jupyter notebook shortcuts'\n\n\n```python\nmessage = \"Wassup world!\"\nprint(message)\n```\n\n Wassup world!\n\n\nPlease ensure that the scipy, numpy, matplotlib, and sklearn packages are installed (although we won\u2019t be\nusing the latter two today).\n\n\n```python\nimport scipy\nimport numpy as np \nimport matplotlib as mpl\nimport sklearn\n```\n\n(You might wish to examine the installation instructions at http://scipy.org/install.html\nif you are considering using your local machine.)\n\n## NumPy Basics\n\nThe main numpy object is a so-called \u201chomogeneous multidimensional array\u201d \u2014 note that this is a little less flexible than using a list or tuple, but it allows mathematical operations to be performed much faster. (And we\u2019ll be doing a fair bit of number-crunching this semester, so this is an important property.) The following is an introduction to NumPy functions and properties.\n\n### Creating Arrays\n\n\n```python\na = np.array([0, 1, 2, 3, 4])\nb = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]], dtype = float)\nc = np.array([[1, 6], [2, 7], [3, 8], [4, 9], [5, 10]], dtype = int)\n```\n\n\n```python\nnp.arange(0, 10) # array of evenly spaced values\n\n\n\n\nnp.linspace(0,9,10).astype(int)\n```\n\n\n\n\n array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\n\n\n\n```python\nnp.zeros((2, 3)) # array of zeros with the given shape\n```\n\n\n\n\n array([[0., 0., 0.],\n [0., 0., 0.]])\n\n\n\n\n```python\nnp.ones(2) # array of ones with the given shape\n```\n\n\n\n\n array([1., 1.])\n\n\n\n\n```python\nnp.empty((2, 3)) # empty array (arbitrary values)\n```\n\n\n\n\n array([[0., 0., 0.],\n [0., 0., 0.]])\n\n\n\n\n```python\nnp.eye(3) # identity matrix\n```\n\n\n\n\n array([[1., 0., 0.],\n [0., 1., 0.],\n [0., 0., 1.]])\n\n\n\n\n```python\nnp.full((2, 3), 3) # fill new array with given shape\n```\n\n\n\n\n array([[3, 3, 3],\n [3, 3, 3]])\n\n\n\n### Inspecting an array\n\n\n```python\nb.size # number of elements in the array\n```\n\n\n\n\n 
10\n\n\n\n\n```python\nb.ndim # number of dimensions\n```\n\n\n\n\n 2\n\n\n\n\n```python\nb.shape # lengths of each dimension\n```\n\n\n\n\n (2, 5)\n\n\n\n\n```python\nb.dtype # data type of array elements\n```\n\n\n\n\n dtype('float64')\n\n\n\n### Numpy Basic operations\n\n\nNumpy supports vector (and matrix) operations, like addition, subtraction, and scalar multiplication.\n\nYou need to be very, very careful about manipulating arrays of different sizes. numpy typically won\u2019t throw exceptions. Instead, it will do \"something\": that something might be very intelligent, like automatically increasing the dimensionality of the smaller array to match the larger array \u2014 but if you aren\u2019t expecting it, the errors can be very difficult to find.\n\n\n```python\na1 = np.array([0,1,2,3,4])\na2 = np.array([1,3,-2,0,4])\n```\n\n\n```python\nprint(a1 + a2) # element-wise addition (or np.add)\nprint(a1 - a2) # element-wise subtraction (or np.subtract)\nprint(a1 * a2) # element-wise multiplication (or np.multiply)\nprint(a1 / a2) # element-wise division (or np.divide)\n```\n\n [1 4 0 3 8]\n [-1 -2 4 3 0]\n [ 0 3 -4 0 16]\n [ 0. 0.33333333 -1. inf 1. ]\n\n\n C:\\Users\\HP OMEN 15\\AppData\\Local\\Programs\\Python\\Python36\\lib\\site-packages\\ipykernel_launcher.py:4: RuntimeWarning: divide by zero encountered in true_divide\n after removing the cwd from sys.path.\n\n\n\n```python\nb.sum() # sum elements\nb.min() # minimum element\nb.max() # maximum element\nb.mean() # mean of elements\na == a # element-wise comparison\n```\n\n\n\n\n array([ True, True, True, True, True])\n\n\n\n#### Question 1\nHow can we add (element-wise) arrays `b` and `c`? \n\n\n```python\nprint(b)\nprint(c)\n```\n\n [[0. 1. 2. 3. 4.]\n [5. 6. 7. 8. 9.]]\n [[ 1 6]\n [ 2 7]\n [ 3 8]\n [ 4 9]\n [ 5 10]]\n\n\n#### Answer\nWe cannot! ;) The two arrays `b` and `c` have different shapes. 
`b.shape` is `(2, 5)`, while `c.shape` is `(5,2)`, so none of the basic numpy operations (addition, subtraction, and scalar multiplication) can happen between them. \n\n\n```python\n# b+c --> ValueError: operands could not be broadcast together with shapes (2,5) (5,2) \n```\n\n#### Question 2\nWhat do you think would be the result of comparison `b < 2`? How about `a1 == a2`?\n\n\n```python\nb < 2 # element-wise comparison\n```\n\n\n\n\n array([[ True, True, False, False, False],\n [False, False, False, False, False]])\n\n\n\n\n```python\na1 == a2 # element-wise comparison\n```\n\n\n\n\n array([False, False, False, False, True])\n\n\n\n#### Question 3\nHow can we check whether arrays have the same shape and elements?\n\n\n```python\nnp.array_equal(a, b) # check whether arrays have the same shape and elements\n```\n\n\n\n\n False\n\n\n\nBonus: What if we know that `a.shape == b.shape` ?\n\n\n```python\n(a==b).all()\nnp.allclose(a,b) # float comparison\n```\n\n\n\n\n False\n\n\n\n### Using Numpy arrays\n\nNumpy arrays can be indexed, sliced, and iterated over, similarly to lists. \n\n#### Exercise 1\nWrite a function to calculate the **Euclidean distance** between $\\vec{a}$ and $\\vec{b}$. \n\n\\begin{align}\n E_d(\\vec{a},\\vec{b})= \\sqrt{\\sum_{i=1}^n (a_i-b_i)^2}\n\\end{align}\n\n\n```python\n# Original solution\ndef my_euclidean_dist(a, b):\n assert len(a)==len(b), \"Arrays are of different sizes!\"\n return np.sqrt(sum([(a[i]-b[i])*(a[i]-b[i]) for i in range(len(a))]))\n\ndef my_euclidean_dist_2(a, b):\n assert len(a)==len(b), \"Arrays are of different sizes!\"\n return sum((a-b)**2)**0.5\n```\n\nUse this function to calculate the Euclidean distance between `a1` and `a2`.\n\n\n```python\nprint(my_euclidean_dist(a1, a2))\nprint(my_euclidean_dist_2(a1, a2))\n```\n\n 5.477225575051661\n 5.477225575051661\n\n\n### Numpy and Matrices\n\nMatrices can be made in numpy by wrapping a list of lists. 
For example, the matrices M and N can be modeled in Numpy by using the following code.\n$$\n\\begin{align}\n \\mathbf{M} = \\begin{pmatrix} \n 1 & 2 & 3 \\\\ 4 & 2 & 1 \\\\ 6 & 2 & 0 \n \\end{pmatrix} \n \\quad \\text{and} \\quad \n \\mathbf{N} = \\begin{pmatrix} \n 0 & 3 & 1 \\\\ 1 & 1 & 4 \\\\ 2 & 0 & 3 \n \\end{pmatrix}\n\\end{align}\n$$\n\n\n```python\nM = np.array([[1,2,3],[4,2,1],[6,2,0]])\nN = np.array([[0,3,1],[1,1,4],[2,0,3]])\n```\n\nYou can use Numpy to perform all kinds of **Linear Algebra** operations on these matrices. Such as:\n\n\n```python\nnp.transpose(M) # transpose of the matrix M\n\n\n\nM.T\n```\n\n\n\n\n array([[1, 4, 6],\n [2, 2, 2],\n [3, 1, 0]])\n\n\n\n\n```python\nnp.dot(M,N) # Calculate the dot product of M and N (Does it?)\n\n\n\nM @ N # (Python 3.5+)\n```\n\n\n\n\n array([[ 8, 5, 18],\n [ 4, 14, 15],\n [ 2, 20, 14]])\n\n\n\n\n```python\nnp.linalg.inv(M) # matrix inverse\n\n\n\nnp.matrix(M).I\n```\n\n\n\n\n matrix([[ 1. , -3. , 2. ],\n [-3. , 9. , -5.5],\n [ 2. , -5. , 3. ]])\n\n\n\n\n```python\nnp.reshape(M,9) # reshape the matrix to a 1D row\n```\n\n\n\n\n array([1, 2, 3, 4, 2, 1, 6, 2, 0])\n\n\n\n\n```python\narr = np.reshape(range(9),(3,3)) # reshape the 1D row (range(9)) to a 3x3 matrix\nprint(arr)\n```\n\n [[0 1 2]\n [3 4 5]\n [6 7 8]]\n\n\n\n```python\nnp.delete(arr, 0, 1) # remove the first column of the matrix\n```\n\n\n\n\n array([[1, 2],\n [4, 5],\n [7, 8]])\n\n\n\n\n```python\nnp.delete(arr, 2, 0) # remove the third row of the matrix\n```\n\n\n\n\n array([[0, 1, 2],\n [3, 4, 5]])\n\n\n\n#### Exercise 2\n\nWrite a function that calculates the dot product between two vectors: A \u00b7 B = $\\sum_{i} a_ib_i$. Find the dot product between the first row of matrix `M` and the first column of matrix `N`. 
\n\n\n```python\n# Original solution\ndef my_dot(a,b):\n assert len(a)==len(b), \"Arrays are of different sizes!\"\n return sum([a[i]*b[i] for i in range(len(a))])\n\ndef my_dot_2(a,b):\n assert len(a)==len(b), \"Arrays are of different sizes!\"\n return sum(a*b)\n\ndef my_dot_3(a,b):\n assert len(a)==len(b), \"Arrays are of different sizes!\"\n return np.einsum('i,i->',a,b)\n```\n\n\n```python\nrow = M[0] #copies the first row from Matric M\ncolumn = N[:,0] #copies the first column from Matric N\nprint(my_dot(row,column))\nprint(my_dot_2(row,column))\nprint(my_dot_3(row,column))\n```\n\n 8\n 8\n 8\n\n\n#### Exercise 3\nWrite a short script to compare:\n1. M * N and np.dot(M, N)\n2. N * M and np.dot(N, M)\n3. M * M and M**2 and np.dot(M, M)\n\n\n```python\nprint(\"M*N\\n\",M*N)\nprint(\"M.N\\n\",M @ N)\nprint(\"N*M\\n\",N*M)\nprint(\"N.M\\n\",N @ M)\nprint(\"M*M\\n\",M*M)\nprint(\"M**2\\n\",M**2)\nprint(\"M.M\\n\",M @ M)\n```\n\n M*N\n [[ 0 6 3]\n [ 4 2 4]\n [12 0 0]]\n M.N\n [[ 8 5 18]\n [ 4 14 15]\n [ 2 20 14]]\n N*M\n [[ 0 6 3]\n [ 4 2 4]\n [12 0 0]]\n N.M\n [[18 8 3]\n [29 12 4]\n [20 10 6]]\n M*M\n [[ 1 4 9]\n [16 4 1]\n [36 4 0]]\n M**2\n [[ 1 4 9]\n [16 4 1]\n [36 4 0]]\n M.M\n [[27 12 5]\n [18 14 14]\n [14 16 20]]\n\n\nYou can probably see that the multiplication (and exponentiation) operation happens element-wise, namely:\n$MN[i][j] = M[i][j] * N[i][j]$\n\nThis is actually convenient in certain contexts, but is certainly not how we typically wish to multiply matrices!\n\n#### Exercise 4\nConsider the matrix **T1** as\n$$\n\\begin{align}\n \\mathbf{T1} = \\begin{pmatrix} \n 1 & 2 & 3 \\\\ 4 & 5 & 6 \n \\end{pmatrix} \n\\end{align}\n$$\nand arrays T2 & T3 as **T2** = `<7, 8, 9>` **T3** = `<10, 20, 30>`. Write a script to do the following:\n1. add **T2** as the third row to **T1**; \n2. 
add **T3** as the fourth column to the result matrix\n\n\n```python\nT1 = np.reshape(range(1,7), (2,3))\nT2 = np.array([range(7,10)])\nT3 = np.array([[10, 20, 30]])\n\nR1 = np.concatenate((T1, T2), axis=0)\nprint(\"R1\\n\",R1)\n\nR2 = np.concatenate((R1, T3.T), axis=1)\nprint(\"R2\\n\",R2)\n```\n\n R1\n [[1 2 3]\n [4 5 6]\n [7 8 9]]\n R2\n [[ 1 2 3 10]\n [ 4 5 6 20]\n [ 7 8 9 30]]\n\n\n## Getting Help\nConfused about a particular function / method? Putting a question mark `?` after the object in question will return the docstring.\n\n\n```python\nnp.random.normal?\n```\n\n \u001b[1;31mDocstring:\u001b[0m\n normal(loc=0.0, scale=1.0, size=None)\n \n Draw random samples from a normal (Gaussian) distribution.\n \n The probability density function of the normal distribution, first\n derived by De Moivre and 200 years later by both Gauss and Laplace\n independently [2]_, is often called the bell curve because of\n its characteristic shape (see the example below).\n \n The normal distributions occurs often in nature. For example, it\n describes the commonly occurring distribution of samples influenced\n by a large number of tiny, random disturbances, each with its own\n unique distribution [2]_.\n \n .. note::\n New code should use the ``normal`` method of a ``default_rng()``\n instance instead; please see the :ref:`random-quick-start`.\n \n Parameters\n ----------\n loc : float or array_like of floats\n Mean (\"centre\") of the distribution.\n scale : float or array_like of floats\n Standard deviation (spread or \"width\") of the distribution. Must be\n non-negative.\n size : int or tuple of ints, optional\n Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n ``m * n * k`` samples are drawn. 
If size is ``None`` (default),\n a single value is returned if ``loc`` and ``scale`` are both scalars.\n Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.\n \n Returns\n -------\n out : ndarray or scalar\n Drawn samples from the parameterized normal distribution.\n \n See Also\n --------\n scipy.stats.norm : probability density function, distribution or\n cumulative density function, etc.\n Generator.normal: which should be used for new code.\n \n Notes\n -----\n The probability density for the Gaussian distribution is\n \n .. math:: p(x) = \\frac{1}{\\sqrt{ 2 \\pi \\sigma^2 }}\n e^{ - \\frac{ (x - \\mu)^2 } {2 \\sigma^2} },\n \n where :math:`\\mu` is the mean and :math:`\\sigma` the standard\n deviation. The square of the standard deviation, :math:`\\sigma^2`,\n is called the variance.\n \n The function has its peak at the mean, and its \"spread\" increases with\n the standard deviation (the function reaches 0.607 times its maximum at\n :math:`x + \\sigma` and :math:`x - \\sigma` [2]_). This implies that\n normal is more likely to return samples lying close to the mean, rather\n than those far away.\n \n References\n ----------\n .. [1] Wikipedia, \"Normal distribution\",\n https://en.wikipedia.org/wiki/Normal_distribution\n .. [2] P. R. Peebles Jr., \"Central Limit Theorem\" in \"Probability,\n Random Variables and Random Signal Principles\", 4th ed., 2001,\n pp. 51, 51, 125.\n \n Examples\n --------\n Draw samples from the distribution:\n \n >>> mu, sigma = 0, 0.1 # mean and standard deviation\n >>> s = np.random.normal(mu, sigma, 1000)\n \n Verify the mean and the variance:\n \n >>> abs(mu - np.mean(s))\n 0.0 # may vary\n \n >>> abs(sigma - np.std(s, ddof=1))\n 0.1 # may vary\n \n Display the histogram of the samples, along with\n the probability density function:\n \n >>> import matplotlib.pyplot as plt\n >>> count, bins, ignored = plt.hist(s, 30, density=True)\n >>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *\n ... 
np.exp( - (bins - mu)**2 / (2 * sigma**2) ),\n ... linewidth=2, color='r')\n >>> plt.show()\n \n Two-by-four array of samples from N(3, 6.25):\n \n >>> np.random.normal(3, 2.5, size=(2, 4))\n array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random\n [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random\n \u001b[1;31mType:\u001b[0m builtin_function_or_method\n\n\n## Interrupting/restarting the kernel\n\nCode is run in the kernel process. You can interrupt the kernel by pressing the stop button in the toolbar. Try it out below.\n\n\n```python\nimport time\ntime.sleep(10)\n```\n\nOccasionally you may want to restart the kernel (e.g. to clear the namespace). You can do this by pressing the restart button in the toolbar. You can find more options under the _Kernel_ menu.\n", "meta": {"hexsha": "35c37dd859fc4e4dfb03c917a9eb5f45397c1675", "size": 31423, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Week 2.ipynb", "max_stars_repo_name": "tuankhoin/COMP30027-Practical-Solutions", "max_stars_repo_head_hexsha": "f5a407d0c820276426932e08c2f2c2acc0538a5f", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-27T02:09:45.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-27T02:09:45.000Z", "max_issues_repo_path": "Week 2.ipynb", "max_issues_repo_name": "tuankhoin/COMP30027-Practical-Solutions", "max_issues_repo_head_hexsha": "f5a407d0c820276426932e08c2f2c2acc0538a5f", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Week 2.ipynb", "max_forks_repo_name": "tuankhoin/COMP30027-Practical-Solutions", "max_forks_repo_head_hexsha": "f5a407d0c820276426932e08c2f2c2acc0538a5f", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": 
null, "avg_line_length": 24.2836166924, "max_line_length": 386, "alphanum_fraction": 0.48127168, "converted": true, "num_tokens": 5076, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.49609382947091946, "lm_q2_score": 0.2909808785120009, "lm_q1q2_score": 0.1443538183238309}} {"text": "```python\n%%HTML\n\n```\n\n\n\n\n\n\n# Metody Numeryczne\n\n## Ca\u0142kowanie numeryczne\n\n\n\n### dr hab. in\u017c. Jerzy Baranowski, Prof.AGH\n\n\n## Ca\u0142ka oznaczona\n\n$$\n\\int_a^b f(x) \\mathrm{d}x\n$$\n\nFormalnie interesuje nas ca\u0142ka Riemanna, czyli granica ci\u0105gu sum Riemanna, niezale\u017cna od ci\u0105gu podzia\u0142\u00f3w przedzia\u0142u.\n\nW praktyce mo\u017cemy my\u015ble\u0107 o ca\u0142ce wzgl\u0119dem jednej zmiennej jako polu pod krzyw\u0105.\n\nZ tego powodu wzory na wyliczanie ca\u0142ek nazywamy kwadraturami.\n\n## Zastosowania ca\u0142kowania numerycznego\n\n- Ca\u0142kowanie w czasie rzeczywistym.\n- Ca\u0142kowanie funkcji jednej zmiennej.\n- Ca\u0142kowanie funkcji wielu zmiennych.\n\n\n## Ca\u0142kowanie w czasie rzeczywistym\n\nPodstawowe zastosowanie ca\u0142kowania w automatyce - regulacja PI, PID\n\nWraz z pojawieniem si\u0119\u00a0nowego pomiaru warto\u015b\u0107 ca\u0142ki musi zosta\u0107 zaktualizowana.\n\n## Jak praktycznie to liczy\u0107?\n\nW zasadzie mamy dost\u0119pne trzy wzory - wzory prostok\u0105t\u00f3w i wz\u00f3r trapez\u00f3w.\n\nDla ustalenia uwagi interesuje nas ca\u0142ka\n$$\n\\int_{t_{k-1}}^{t_k} f(t) \\mathrm{d}t\n$$\n\n## Wz\u00f3r prostok\u0105t\u00f3w w prz\u00f3d\nInaczej metoda Eulera w prz\u00f3d\n\n$$\n\\int_{t_{k-1}}^{t_k} f(t) \\mathrm{d}t\\approx f(t_{k-1})(t_k-t_{k-1})\n$$\n\ndla pomiar\u00f3w r\u00f3wnoodleg\u0142ych (sta\u0142y krok) $t_k=kh$\n$$\n\\int_{t_{k-1}}^{t_k} f(t) \\mathrm{d}t\\approx f((k-1)h)h\n$$\n\nCo odpowiada transmitancji\n$$\n\\frac{1}{s}\\approx \\frac{h}{z-1}\n$$\n\n## Wz\u00f3r prostok\u0105t\u00f3w w ty\u0142\nInaczej metoda r\u00f3\u017cnic 
wstecznych\n\n$$\n\\int_{t_{k-1}}^{t_k} f(t) \\mathrm{d}t\\approx f(t_{k})(t_k-t_{k-1})\n$$\n\ndla pomiar\u00f3w r\u00f3wnoodleg\u0142ych (sta\u0142y krok) $t_k=kh$\n$$\n\\int_{t_{k-1}}^{t_k} f(t) \\mathrm{d}t\\approx f((kh)h\n$$\n\nCo odpowiada transmitancji\n$$\n\\frac{1}{s}\\approx \\frac{hz}{z-1}\n$$\n\n## Wz\u00f3r trapez\u00f3w\n$$\n\\int_{t_{k-1}}^{t_k} f(t) \\mathrm{d}t\\approx \\left(\\frac{f(t_{k})+f(t_{k-1})}{2}\\right)(t_k-t_{k-1})\n$$\n\ndla pomiar\u00f3w r\u00f3wnoodleg\u0142ych (sta\u0142y krok) $t_k=kh$\n$$\n\\int_{t_{k-1}}^{t_k} f(t) \\mathrm{d}t\\approx \\frac{h}{2}(f(kh)+f((k-1)h))\n$$\n\nCo odpowiada transmitancji (tzw. aproksymacja Tustina)\n$$\n\\frac{1}{s}\\approx \\frac{h}{2}\\frac{z+1}{z-1}\n$$\n\n\n## R\u00f3\u017cnice\nMetoda prostok\u0105t\u00f3w w ty\u0142 i trapez\u00f3w s\u0105 generalnie dobre. Jedna i druga metoda jest stabilna (o tym na nast\u0119pnym wyk\u0142adzie). Metoda trapez\u00f3w jest bardziej dok\u0142adna jej b\u0142\u0105d:\n\n$$\n\\text{error} \\leq \\frac{(h)^3}{12} M_2\n$$\ngdzie $M_2$ to maksimum modu\u0142u drugiej pochodnej $f$ na przedziale $(kh, (k-1)h)$, za\u015b b\u0142\u0105d metody prostok\u0105t\u00f3w w ty\u0142\n\n$$\n\\text{error} \\leq -\\frac{(h)^2}{2} M_1\n$$\ngdzie $M_1$ to maksimum modu\u0142u pierwszej pochodnej $f$ na przedziale $(kh, (k-1)h)$.\n\n\n## Liczenie ca\u0142ki z funkcji\nPodobnie jak w przypadku wszystkich innych metod konieczne jest stworzenie modelu (np. wielomianu), z kt\u00f3rego jest \u0142atwiej policzy\u0107 ca\u0142k\u0119.\n\n## Kwadratury interpolacyjne\nKwadratury interpolacyjne bazuj\u0105 na przybli\u017ceniu funkcji wielomianem interpolacyjnym. 
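The rectangle and trapezoid error bounds above are easy to check numerically with the composite trapezoid rule, the simplest quadrature of this kind. A minimal sketch in Python (the test function, interval, and step counts are arbitrary choices of ours, not from the lecture):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal steps of size h = (b - a) / n."""
    t = np.linspace(a, b, n + 1)
    h = (b - a) / n
    # h * (f0/2 + f1 + ... + f_{n-1} + fn/2)
    return h * (np.sum(f(t)) - 0.5 * (f(a) + f(b)))

exact = np.e - 1.0  # integral of exp(t) over [0, 1]
err10 = abs(trapezoid(np.exp, 0.0, 1.0, 10) - exact)
err20 = abs(trapezoid(np.exp, 0.0, 1.0, 20) - exact)
print(err10 / err20)  # close to 4: halving h cuts the composite error by ~4x
```

Per step the error is $O(h^3)$, as in the bound above; over a fixed interval the $n$ steps accumulate to $O(h^2)$, which is why halving $h$ divides the total error by roughly four.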
\n\nWe distinguish:\n\n- Newton-Cotes quadratures (on equidistant nodes) (up to degree 4 they have names - the trapezoid rule, Simpson's rule, Boole's rule)\n- The Clenshaw-Curtis quadrature (on Chebyshev nodes)\n- The Gauss-Legendre quadrature (on Legendre nodes)\n\n## Form of a quadrature\nIn practice a quadrature has the form\n$$\n\\int_{-1}^{1}f(x)\\mathrm{d} x\\approx\\sum_{i=0}^n w_i f(x_i)\n$$\n\nwhere the $x_i$ are the quadrature nodes and the $w_i$ are the so-called quadrature weights.\n\n## Assessing the quality of a quadrature\n\nFor a quadrature with $n$ nodes we say that it is of polynomial order $N$ if it gives the exact value of the integral for polynomials of degree $N$ and lower.\n\n## Theorem on the polynomial order of quadratures\n\nFor every $n\\geq0$, an interpolatory quadrature on $n+1$ nodes is exact for polynomials of degree $n$; moreover, the Gauss-Legendre quadrature is exact for polynomials of degree $2n+1$.\n\n**Proof**\n\nExactness for polynomials of degree $n$ is obvious. \nFor definiteness, the Legendre polynomials are the polynomials orthogonal with respect to the scalar product \n$$\nf\\circ g=\\int_{-1}^{1}f(x)g(x)\\mathrm{d} x\n$$\nand the Legendre nodes are the roots of these polynomials.\n\n\n## Proof cont.\nLet the function $f$ be a polynomial of degree $2n+1$. Such a function can be written in the form\n$$f(x)=P_{n+1}(x)q_n(x)+r_n(x)$$ where $P_{n+1}$ is the Legendre polynomial of degree $n+1$ and $q_n$ and $r_n$ are polynomials of degree $n$.\n\n$$\nI=\\int_{-1}^{1}f(x)\\mathrm{d} x=\\int_{-1}^{1}P_{n+1}(x)q_n(x)\\mathrm{d} x+\\int_{-1}^{1}r_n(x)\\mathrm{d} x\n$$\n\nSince the Legendre polynomials are orthogonal, the first integral equals 0, so\n$$\nI=\\int_{-1}^{1}r_n(x)\\mathrm{d} x\n$$\nSince the quadrature nodes are the zeros of $P_{n+1}$, we have $f(x_k)=r_n(x_k)$. Hence the values of the interpolatory quadrature for $f$ and for $r_n$ are equal. And since $r_n$ is of degree $n$, the value of this quadrature is exactly equal to the value of the integral.\n\n## Additional remarks\n- The Gauss-Legendre quadrature is the most accurate for polynomials, but in practical integration its advantage over the Clenshaw-Curtis quadrature (on Chebyshev nodes) is not that large. \n\n- The exact result is too complicated to discuss here, but for small $n$ the quadratures behave the same in practice; only for very large $n$ does the GL quadrature become twice as fast. For irregular (non-analytic) functions the difference may be unnoticeable within machine precision.\n\n## The Clenshaw-Curtis quadrature \n\nIt is implemented via the FFT - Matlab code:\n\n```\nfunction I = clenshaw_curtis(f,n) % (n+1)-pt C-C quadrature of f\nx = cos(pi*(0:n)'/n); % Chebyshev points\nfx = feval(f,x)/(2*n); % f evaluated at these points\ng = real(fft(fx([1:n+1 n:-1:2]))); % fast Fourier transform\na = [g(1); g(2:n)+g(2*n:-1:n+2); g(n+1)]; % Chebyshev coeffs\nw = 0*a'; w(1:2:end) = 2./(1-(0:2:n).^2); % weight vector\nI = w*a; % the integral\n```\n\n(General-purpose quadrature in Python, ```scipy.integrate.quad```, is based on QUADPACK rather than on Clenshaw-Curtis.)\n\n\n## The Gauss-Legendre quadrature\nImplementation via the Golub-Welsch algorithm:\nThe roots of the Legendre polynomial satisfy the equation \n$$ J\\tilde{P} = x\\tilde{P} - p_n(x) \\times \\mathbf{e}_n$$\nwhere $\\tilde{P} = \\begin{bmatrix} p_0(x) & p_1(x) & \\ldots & p_{n-1}(x) \\end{bmatrix}^\\mathsf{T}$, $\\mathbf{e}_n = \\begin{bmatrix} 0 & \\ldots & 0 & 1 \\end{bmatrix}^\\mathsf{T}$ and $J$ is the Jacobi matrix\n$$\n\\mathbf{J}=\\begin{pmatrix}\n a_0 & 1 & 0 & \\ldots & \\ldots & \\ldots \\\\\n b_1 & a_1 & 1 & 0 & \\ldots & \\ldots \\\\\n 0 & b_2 & a_2 & 1 & 0 & \\ldots \\\\\n 0 & \\ldots & \\ldots & \\ldots & \\ldots & 0 \\\\\n \\ldots & \\ldots & 0 & b_{n-2} & a_{n-2} & 1 \\\\\n \\ldots & \\ldots & \\ldots & 0 & b_{n-1} & a_{n-1}\n\\end{pmatrix}\n$$\nThe eigenvalues of $J$ are the roots of the Legendre polynomials.\n\n\n## Computing the quadrature weights\nWe use a slightly modified Jacobi matrix\n$$ \\begin{align}\n \\mathcal{J}_{i,i} = J_{i,i} &= a_{i-1} && i=1,\\ldots,n \\\\ \n \\mathcal{J}_{i-1,i} = \\mathcal{J}_{i,i-1} = \\sqrt{J_{i,i-1}J_{i-1,i}} &= \\sqrt{b_{i-1}} && i=2,\\ldots,n.\n\\end{align}\n$$\nand then the quadrature weights follow directly from the eigenvectors of the matrix.\n\n## Matlab code\n```\nfunction I = gauss(f,n) % (n+1)-pt Gauss quadrature of f\nbeta = .5./sqrt(1-(2*(1:n)).^(-2)); % 3-term recurrence coeffs \nT = diag(beta ,1) + diag(beta ,-1); % Jacobi matrix\n[V,D] = eig(T); % eigenvalue decomposition\nx = diag(D); \n[x,i] = sort(x); % nodes (= Legendre points)\nw = 2*V(1,i).^2; % weights\nI = w*feval(f,x); % the integral\n```\n\n## Other quadratures\nIn general the Gauss-Legendre quadrature is the most accurate.\n\nThe problem is that if we are not sure which order to use, we cannot reuse the same function values multiple times.\n\nA potential problem is also that the Legendre nodes do not start at the endpoints of the interval, so when splitting the integration into subintervals continuity between them is not preserved.\n\n## The Lobatto quadrature\n\n- A modification of the Gauss-Legendre quadrature so that the first and last nodes lie at the endpoints of the interval.\n\n- For $n$ nodes it is exact for polynomials of degree $2n-3$ \n\n## The Gauss-Kronrod quadrature\n\n- Based on the concept of so-called nesting. \n\n- The idea is, when computing a Gauss-Legendre quadrature, to additionally evaluate the function at a certain number of extra nodes of the interval, so that all the values together yield a second approximation. In this way we obtain two approximations at once and can estimate the error.\n\n- For a GL quadrature of order $n$ (approximation order $2n-1$), an additional $n+1$ nodes are computed, yielding an approximation of order $3n+1$\n\n\n## Adaptive quadratures\n\n- In practical computations, when we do not know which quadrature order to choose, we use adaptive quadratures.\n- On the same interval we simultaneously compute quadratures of two different orders. \n- If the difference between the values is small, we accept the higher-order result as the solution.\n- If the difference exceeds a set threshold, we split the integration interval into subintervals (between the already computed nodes) and apply the same two quadratures in each of them.\n- The process is repeated until the tolerance is reached in every subinterval\n\n## Implementation of adaptive quadratures\n\n- In Matlab, quadratures based on low-order Lobatto quadratures developed by W. Gautschi are implemented (the ```quad``` function)\n- In Python, adaptive Gauss-Kronrod quadratures from the Fortran package QUADPACK are used (```scipy.integrate.quad```)\n\n\n```python\n\n```\n", "meta": {"hexsha": "86dd036c2b82646228ca3f5403e92deeddfbe270", "size": 15908, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Metody Numeryczne/Lecture 5 (integrals)/Lecture.ipynb", "max_stars_repo_name": "ggaallzz/public_lectures", "max_stars_repo_head_hexsha": "b8fae55938e2a9732a1b0f4a6bcd77511caadf2b", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Metody Numeryczne/Lecture 5 (integrals)/Lecture.ipynb", "max_issues_repo_name": "ggaallzz/public_lectures", "max_issues_repo_head_hexsha": "b8fae55938e2a9732a1b0f4a6bcd77511caadf2b", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Metody Numeryczne/Lecture 5 (integrals)/Lecture.ipynb", "max_forks_repo_name": "ggaallzz/public_lectures", "max_forks_repo_head_hexsha": "b8fae55938e2a9732a1b0f4a6bcd77511caadf2b", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.2426470588, "max_line_length": 317, "alphanum_fraction": 0.5557581091, "converted": true, "num_tokens": 3796, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.3849121444839335, "lm_q2_score": 0.3738758297482025, "lm_q1q2_score": 0.14390934739909064}} {"text": "# Probability Theory Review\n\n### Preliminaries\n\n- Goal \n - Review of probability theory as a theory for rational/logical reasoning with uncertainties (i.e., a Bayesian interpretation)\n- Materials \n - Mandatory\n - These lecture notes\n - [Ariel Caticha - 2012 - Entropic Inference and the Foundations of Physics](https://github.com/bertdv/BMLIP/blob/master/lessons/notebooks/files/Caticha-2012-Entropic-Inference-and-the-Foundations-of-Physics.pdf), pp.7-26 (sections 2.1 through 2.5), on deriving probability theory. You may skip section 2.3.4: Cox's proof (pp.15-18). \n - The assignment is only meant to appreciate how this line of \"axiomatic derivation\" of the rules of PT goes. I will not ask questions about any details of the derivations at the exam. \n \n - Optional\n - the pre-recorded video guide and live class of 2020\n - [Ariel Caticha - 2012 - Entropic Inference and the Foundations of Physics](https://github.com/bertdv/BMLIP/blob/master/lessons/notebooks/files/Caticha-2012-Entropic-Inference-and-the-Foundations-of-Physics.pdf), pp.7-56 (ch.2: probability)\n - Great introduction to probability theory, in particular w.r.t. its correct interpretation as a state-of-knowledge.\n - Absolutely worth your time to read the whole chapter!\n - [Edwin Jaynes - 2003 - Probability Theory -- The Logic of Science](https://archive.org/details/ProbabilityTheoryTheLogicOfScience). \n - Brilliant book on Bayesian view on probability theory. Just for fun, scan the annotated bibliography and references.\n - Bishop pp. 
12-24\n\n### Example Problem: Disease Diagnosis\n\n- **Problem**: Given a disease with prevalence of 1% and a test procedure with sensitivity ('true positive' rate) of 95% and specificity ('true negative' rate) of 85%, what is the chance that somebody who tests positive actually has the disease?\n\n- **Solution**: Use probabilistic inference, to be discussed in this lecture. \n\n### The Design of Probability Theory\n\n- Define an **event** (or \"proposition\") $A$ as a statement, whose truth can be contemplated by a person, e.g., \n\n$$A = \\texttt{'there is life on Mars'}$$\n\n- If we assume the fact $$I = \\texttt{'All known life forms require water'}$$ and a new piece of information $$x = \\texttt{'There is water on Mars'}$$ becomes available, how _should_ our degree of belief in event $A$ be affected (if we were rational)? \n\n- [Richard T. Cox, 1946](https://aapt.scitation.org/doi/10.1119/1.1990764) developed a **calculus for rational reasoning** about how to represent and update the degree of _belief_ about the truth value of events when faced with new information. \n\n- In developing this calculus, only some very agreeable assumptions were made, e.g.,\n - (Transitivity). If the belief in $A$ is greater than the belief in $B$, and the belief in $B$ is greater than the belief in $C$, then the belief in $A$ must be greater than the belief in $C$.\n - (Consistency). If the belief in an event can be inferred in two different ways, then the two ways must agree on the resulting belief.\n\n- This effort resulted in confirming that the [sum and product rules of Probability Theory](#PT-calculus) are the **only** proper rational way to process belief intensities. 
\n\n- $\\Rightarrow$ Probability theory (PT) provides _the_ **theory of optimal processing of incomplete information** (see [Cox theorem](https://en.wikipedia.org/wiki/Cox%27s_theorem), and [Caticha](https://github.com/bertdv/BMLIP/blob/master/lessons/notebooks/files/Caticha-2012-Entropic-Inference-and-the-Foundations-of-Physics.pdf), pp.7-24), and as such provides the (only) proper quantitative framework for drawing conclusions from a finite (read: incomplete) data set.\n\n### Why Probability Theory for Machine Learning?\n\n- Machine learning concerns drawing conclusions about model parameter settings from (a finite set of) data and therefore PT provides the _optimal calculus for machine learning_. \n\n- In general, nearly all interesting questions in machine learning can be stated in the following form (a conditional probability):\n\n$$p(\\texttt{whatever-we-want-to-know}\\, | \\,\\texttt{whatever-we-do-know})$$\n\nwhere $p(a|b)$ means the probability that $a$ is true, given that $b$ is true.\n\n- Examples\n - Predictions\n $$p(\\,\\texttt{future-observations}\\,|\\,\\texttt{past-observations}\\,)$$\n - Classify a received data point $x$ \n $$p(\\,x\\texttt{-belongs-to-class-}k \\,|\\,x\\,)$$\n - Update a model based on a new observation\n $$p(\\,\\texttt{model-parameters} \\,|\\,\\texttt{new-observation},\\,\\texttt{past-observations}\\,)$$\n\n### Frequentist vs. Bayesian Interpretation of Probabilities\n\n- The interpretation of a probability as a **degree-of-belief** about the truth value of an event is also called the **Bayesian** interpretation. \n\n- In the **Bayesian** interpretation, the probability is associated with a **state-of-knowledge** (usually held by a person). 
\n - For instance, in a coin tossing experiment, $p(\\texttt{tail}) = 0.4$ should be interpreted as the belief that there is a 40% chance that $\\texttt{tail}$ comes up if the coin were tossed.\n - Under the Bayesian interpretation, PT calculus (sum and product rules) **extends boolean logic to rational reasoning with uncertainty**. \n\n- The Bayesian interpretation contrasts with the **frequentist** interpretation of a probability as the relative frequency that an event would occur under repeated execution of an experiment.\n\n - For instance, if the experiment is tossing a coin, then $p(\\texttt{tail}) = 0.4$ means that in the limit of a large number of coin tosses, 40% of outcomes turn up as $\\texttt{tail}$. \n\n- The Bayesian viewpoint is more generally applicable than the frequentist viewpoint, e.g., it is hard to apply the frequentist viewpoint to events like '$\\texttt{it will rain tomorrow}$'. \n\n- The Bayesian viewpoint is clearly favored in the machine learning community. (In this class, we also strongly favor the Bayesian interpretation). \n\n### Probability Theory Notation\n\n##### events\n- Define an **event** $A$ as a statement, whose truth can be contemplated by a person, e.g.,\n\n$$A = \\text{'it will rain tomorrow'}$$\n \n\n- We write the denial of $A$, i.e. the event **not**-A, as $\\bar{A}$. \n\n- Given two events $A$ and $B$, we write the **conjunction** \"$A \\wedge B$\" as \"$A,B$\" or \"$AB$\". The conjunction $AB$ is true only if both $A$ and $B$ are true. \n\n- We will write the **disjunction** \"$A \\lor B$\" as \"$A + B$\", which is true if either $A$ or $B$ is true or both $A$ and $B$ are true. \n\n- Note that, if $X$ is a variable, then an assignment $X=x$ (with $x$ a value, e.g., $X=5$) can be interpreted as an event. 
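The event algebra above can be made concrete by counting on a small, equally likely sample space. Below is a quick sketch in Python (this lesson's own code cells use Julia); the die-roll events are our own illustration, not part of the lecture notes:

```python
# Events as subsets of a finite, equally likely sample space (one die roll).
omega = {1, 2, 3, 4, 5, 6}     # sample space
A = {2, 4, 6}                  # event 'the roll is even'
B = {4, 5, 6}                  # event 'the roll is at least 4'

def p(E):
    """Probability of event E by counting equally likely outcomes."""
    return len(E) / len(omega)

not_A   = omega - A            # denial of A (A-bar)
A_and_B = A & B                # conjunction A,B: both true
A_or_B  = A | B                # disjunction A+B: at least one true

print(p(A), p(not_A))          # 0.5 0.5
print(p(A_and_B), p(A_or_B))   # 1/3 and 2/3, respectively
```

Counting also shows that $p(A+B) = p(A) + p(B) - p(A,B)$ holds here ($4/6 = 3/6 + 3/6 - 2/6$), anticipating the sum rule introduced below.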
\n\n##### probabilities\n\n- For any event $A$, with background knowledge $I$, the **conditional probability of $A$ given $I$**, is written as \n$$p(A|I)\\,.$$\n\n- All probabilities are in principle conditional probabilities of the type $p(A|I)$, since there is always some background knowledge. \n\n##### Unfortunately, PT notation is usually rather sloppy :(\n\n- We often write $p(A)$ rather than $p(A|I)$ if the background knowledge $I$ is assumed to be obviously present. E.g., $p(A)$ rather than $p(\\,A\\,|\\,\\text{the-sun-comes-up-tomorrow}\\,)$.\n\n- (In the context of random variable assignments) we often write $p(x)$ rather than $p(X=x)$, assuming that the reader understands the context. \n\n- In an apparent effort to further abuse notational conventions, $p(X)$ denotes the full distribution over random variable $X$, i.e., the distribution for all assignments for $X$. \n\n- If $X$ is a *discretely* valued variable, then $p(X=x)$ is a probability *mass* function (PMF) with $0\\le p(X=x)\\le 1$ and normalization $\\sum_x p(x) =1$. \n\n- If $X$ is *continuously* valued, then $p(X=x)$ is a probability *density* function (PDF) with $p(X=x)\\ge 0$ and normalization $\\int_x p(x)\\mathrm{d}x=1$. \n - Note that if $X$ is continuously valued, then the value of the PDF $p(x)$ is not necessarily $\\le 1$. E.g., a uniform distribution on the continuous domain $[0,.5]$ has value $p(x) = 2$.\n \n\n- Often, we do not bother to specify if $p(x)$ refers to a continuous or discrete variable. \n\n### Probability Theory Calculus\n \n \n\n- Let $p(A|I)$ indicate the belief in event $A$, given that $I$ is true. 
\n\n- The following product and sum rules are also known as the **axioms of probability theory**, but as discussed above, under some mild assumptions, they can be derived as the unique rules for *rational reasoning under uncertainty* ([Cox theorem, 1946](https://en.wikipedia.org/wiki/Cox%27s_theorem), and [Caticha, 2012](https://github.com/bertdv/BMLIP/blob/master/lessons/notebooks/files/Caticha-2012-Entropic-Inference-and-the-Foundations-of-Physics.pdf), pp.7-26).\n\n- **Sum rule**. The disjunction for two events $A$ and $B$ given background $I$ is given by\n$$ \\boxed{p(A+B|I) = p(A|I) + p(B|I) - p(A,B|I)}$$\n \n\n- **Product rule**. The conjunction of two events $A$ and $B$ with given background $I$ is given by \n$$ \\boxed{p(A,B|I) = p(A|B,I)\\,p(B|I)}$$\n \n \n\n- **All legitimate probabilistic relations can be derived from the sum and product rules!**\n\n### Independent and Mutually Exclusive Events\n\n- Two events $A$ and $B$ are said to be **independent** if the probability of one is not altered by information about the truth of the other, i.e., $p(A|B) = p(A)$.\n - $\\Rightarrow$ If $A$ and $B$ are independent, given $I$, then the product rule simplifies to $$p(A,B|I) = p(A|I) p(B|I)$$\n\n- Two events $A_1$ and $A_2$ are said to be **mutually exclusive** if they cannot be true simultaneously, i.e., if $p(A_1,A_2)=0$.\n - $\\Rightarrow$ For mutually exclusive events, the sum rule simplifies to\n $$p(A_1+A_2) = p(A_1) + p(A_2)$$\n \n\n- A set of events $A_1, A_2, \\ldots, A_N$ is said to be **collectively exhaustive** if one of the statements is necessarily true, i.e., $A_1+A_2+\\cdots +A_N=\\mathrm{TRUE}$, or equivalently \n$$p(A_1+A_2+\\cdots +A_N)=1$$\n\n\n- Note that, if $A_1, A_2, \\ldots, A_N$ are both **mutually exclusive** and **collectively exhaustive** (MECE) events, then\n $$\\sum_{n=1}^N p(A_n) = p(A_1 + \\ldots + A_N) = 1$$\n - More generally, if $\\{A_n\\}$ are MECE events, then $\\sum_{n=1}^N p(A_n,B) = p(B)$\n\n### The Sum Rule and 
Marginalization\n\n- We mentioned that every inference problem in PT can be evaluated through the sum and product rules. Next, we present two useful corollaries: (1) _Marginalization_ and (2) _Bayes rule_ \n\n- If $X \\in \\mathcal{X}$ and $Y \\in \\mathcal{Y}$ are random variables over finite domains, then it follows from the above considerations about MECE events that \n$$\n\\sum_{Y\\in \\mathcal{Y}} p(X,Y) = p(X) \\,.\n$$\n\n- Summing $Y$ out of a joint distribution $p(X,Y)$ is called **marginalization** and the result $p(X)$ is sometimes referred to as the **marginal probability**. \n\n- Note that this is just a **generalized sum rule**. In fact, Bishop (p.14) (and some other authors as well) calls this the sum rule.\n\n\n- Of course, in the continuous domain, the (generalized) sum rule becomes\n$$p(X)=\\int p(X,Y) \\,\\mathrm{d}Y$$\n\n### The Product Rule and Bayes Rule\n\n- Consider two variables $D$ and $\\theta$; it follows from symmetry arguments that \n$$p(D,\\theta)=p(D|\\theta)p(\\theta)=p(\\theta|D)p(D)$$ \nand hence that\n$$ p(\\theta|D) = \\frac{p(D|\\theta) }{p(D)}p(\\theta)\\,.$$ \n\n- This formula is called **Bayes rule** (or Bayes theorem). While Bayes rule is always true, a particularly useful application occurs when $D$ refers to an observed data set and $\\theta$ is a set of model parameters. In that case,\n\n - the **prior** probability $p(\\theta)$ represents our **state-of-knowledge** about proper values for $\\theta$, before seeing the data $D$.\n - the **posterior** probability $p(\\theta|D)$ represents our state-of-knowledge about $\\theta$ after we have seen the data.\n\n$\\Rightarrow$ Bayes rule tells us how to update our knowledge about model parameters when facing new data. Hence, \n\n
\n
\nBayes rule is the fundamental rule for learning from data!\n
\n
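Both corollaries can be exercised on the running disease-diagnosis example from the start of this lecture (prevalence 1%, sensitivity 95%, specificity 85%). A quick numerical sketch, written here in Python rather than this notebook's Julia:

```python
# Disease-diagnosis example: D (disease) and T (test outcome) are binary.
p_D = {1: 0.01, 0: 0.99}                    # prior: prevalence of 1%
p_T_given_D = {(1, 1): 0.95, (0, 1): 0.05,  # p(T=t|D=d), keyed as (t, d)
               (1, 0): 0.15, (0, 0): 0.85}  # specificity 0.85 => p(T=1|D=0)=0.15

# Evidence by marginalization: p(T=1) = sum_d p(T=1|D=d) p(D=d)
p_T1 = sum(p_T_given_D[(1, d)] * p_D[d] for d in (0, 1))

# Posterior by Bayes rule: p(D=1|T=1) = p(T=1|D=1) p(D=1) / p(T=1)
posterior = p_T_given_D[(1, 1)] * p_D[1] / p_T1

print(round(posterior, 4))  # → 0.0601
```

Despite the positive test, the posterior is only about 6%, because the 1% prior prevalence dominates; the same numbers are worked out by hand later in this lecture.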
\n\n### Bayes Rule Nomenclature\n- Some nomenclature associated with Bayes rule:\n$$\n\\underbrace{p(\\theta | D)}_{\\text{posterior}} = \\frac{\\overbrace{p(D|\\theta)}^{\\text{likelihood}} \\times \\overbrace{p(\\theta)}^{\\text{prior}}}{\\underbrace{p(D)}_{\\text{evidence}}}\n$$\n\n- Note that the evidence (a.k.a. _marginal likelihood_ ) can be computed from the numerator through marginalization since\n$$ p(D) = \\int p(D,\\theta) \\,\\mathrm{d}\\theta = \\int p(D|\\theta)\\,p(\\theta) \\,\\mathrm{d}\\theta$$\n\n- Hence, having access to likelihood and prior is in principle sufficient to compute both the evidence and the posterior. To emphasize that point, Bayes rule is sometimes written as a transformation:\n\n$$ \\underbrace{\\underbrace{p(\\theta|D)}_{\\text{posterior}}\\cdot \\underbrace{p(D)}_{\\text{evidence}}}_{\\text{this is what we want to compute}} = \\underbrace{\\underbrace{p(D|\\theta)}_{\\text{likelihood}}\\cdot \\underbrace{p(\\theta)}_{\\text{prior}}}_{\\text{this is available}}$$ \n\n\n- For a given data set $D$, the posterior probabilities of the parameters scale relatively against each other as\n\n$$\np(\\theta|D) \\propto p(D|\\theta) p(\\theta)\n$$\n\n- $\\Rightarrow$ All that we can learn from the observed data is contained in the likelihood function $p(D|\\theta)$. This is called the **likelihood principle**.\n\n### The Likelihood Function vs the Sampling Distribution\n\n- Consider a distribution $p(D|\\theta)$, where $D$ relates to variables that are observed (i.e., a \"data set\") and $\\theta$ are model parameters.\n\n- In general, $p(D|\\theta)$ is just a function of the two variables $D$ and $\\theta$. We distinguish two interpretations of this function, depending on which variable is observed (or given by other means). \n\n- The **sampling distribution** (a.k.a. 
the **data-generating** distribution) $$p(D|\\theta=\\theta_0)$$ (which is a function of $D$ only) describes a probability distribution for data $D$, assuming that it is generated by the given model with parameters fixed at $\\theta = \\theta_0$.\n\n- In a machine learning context, often the data is observed, and $\\theta$ is the free variable. In that case, for given observations $D=D_0$, the **likelihood function** (which is a function only of the model parameters $\\theta$) is defined as $$\\mathrm{L}(\\theta) \\triangleq p(D=D_0|\\theta)$$\n\n- Note that $\\mathrm{L}(\\theta)$ is not a probability distribution for $\\theta$ since in general $\\sum_\\theta \\mathrm{L}(\\theta) \\neq 1$.\n\n### Code Example: Sampling Distribution and Likelihood Function for the Coin Toss\n\nConsider the following simple model for the outcome (head or tail) $y \\in \\{0,1\\}$ of a biased coin toss with parameter $\\theta \\in [0,1]$:\n\n$$\\begin{align*}\np(y|\\theta) &\\triangleq \\theta^y (1-\\theta)^{1-y}\\\\\n\\end{align*}$$\n\nWe can plot both the sampling distribution $p(y|\\theta=0.5)$ and the likelihood function $L(\\theta) = p(y=1|\\theta)$, matching the parameter and observation used in the code below.\n\n\n```julia\nusing Pkg; Pkg.activate(\"probprog/workspace\");Pkg.instantiate();\nIJulia.clear_output();\n```\n\n\n```julia\nusing PyPlot\n#using Plots\np(y,\u03b8) = \u03b8.^y .* (1 .- \u03b8).^(1 .- y)\nf = figure()\n\n\u03b8 = 0.5 # Set parameter\n# Plot the sampling distribution\nsubplot(221); stem([0,1], p([0,1],\u03b8)); \ntitle(\"Sampling distribution\");\nxlim([-0.5,1.5]); ylim([0,1]); xlabel(\"y\"); ylabel(\"p(y|\u03b8=$(\u03b8))\");\n\nsubplot(222);\n_\u03b8 = 0:0.01:1\ny = 1.0 # Plot p(y=1 | \u03b8)\nplot(_\u03b8,p(y,_\u03b8))\ntitle(\"Likelihood function\"); \nxlabel(\"\u03b8\"); \nylabel(\"L(\u03b8) = p(y=$y|\u03b8)\");\n\n\n```\n\nThe (discrete) sampling distribution is a valid probability distribution. 
\nHowever, the likelihood function $L(\\theta)$ clearly isn't, since $\\int_0^1 L(\\theta) \\mathrm{d}\\theta \\neq 1$. \n\n\n### Probabilistic Inference\n\n- **Probabilistic inference** refers to computing\n$$\np(\\,\\text{whatever-we-want-to-know}\\, | \\,\\text{whatever-we-already-know}\\,)\n$$\n - For example: \n $$\\begin{align*}\n p(\\,\\text{Mr.S.-killed-Mrs.S.} \\;&|\\; \\text{he-has-her-blood-on-his-shirt}\\,) \\\\\n p(\\,\\text{transmitted-codeword} \\;&|\\;\\text{received-codeword}\\,) \n \\end{align*}$$\n\n- This can be accomplished by repeated application of sum and product rules.\n\n- In particular, consider a joint distribution $p(X,Y,Z)$. Assume we are interested in $p(X|Z)$:\n$$\\begin{align*}\np(X|Z) \\stackrel{p}{=} \\frac{p(X,Z)}{p(Z)} \\stackrel{s}{=} \\frac{\\sum_Y p(X,Y,Z)}{\\sum_{X,Y} p(X,Y,Z)} \\,,\n\\end{align*}$$\nwhere the 's' and 'p' above the equality sign indicate whether the sum or product rule was used. \n\n- In the rest of this course, we'll encounter many long probabilistic derivations. For each manipulation, you should be able to associate an 's' (for sum rule), a 'p' (for product or Bayes rule) or an 'm' (for a simplifying model assumption) above any equality sign.\n\n### Working out the example problem: Disease Diagnosis\n\n- **Problem**: Given a disease $D$ with prevalence of $1\\%$ and a test procedure $T$ with sensitivity ('true positive' rate) of $95\\%$ and specificity ('true negative' rate) of $85\\%$, what is the chance that somebody who tests positive actually has the disease?\n\n- **Solution**: The given data are $p(D=1)=0.01$, $p(T=1|D=1)=0.95$ and $p(T=0|D=0)=0.85$. 
Then according to Bayes rule,\n\n$$\\begin{align*}\np( D=1 &| T=1) \\\\\n&\\stackrel{p}{=} \\frac{p(T=1|D=1)p(D=1)}{p(T=1)} \\\\\n&\\stackrel{s}{=} \\frac{p(T=1|D=1)p(D=1)}{p(T=1|D=1)p(D=1)+p(T=1|D=0)p(D=0)} \\\\\n&= \\frac{0.95\\times0.01}{0.95\\times0.01 + 0.15\\times0.99} = 0.0601\n\\end{align*}$$\n\n### Inference Exercise: Bag Counter\n\n- **Problem**: A bag contains one ball, known to be either white or black. A white ball is put in, the bag is shaken,\n and a ball is drawn out, which proves to be white. What is now the\n chance of drawing a white ball?\n \n\n- **Solution**: Again, use Bayes and marginalization to arrive at $p(\\text{white}|\\text{data})=2/3$, see the [Exercises](https://nbviewer.org/github/bertdv/BMLIP/blob/master/lessons/exercises/Exercises-Probability-Theory-Review.ipynb) notebook.\n\n- $\\Rightarrow$ Note that probabilities describe **a person's state of knowledge** rather than a 'property of nature'.\n\n### Inference Exercise: Causality?\n\n- **Problem**: A dark bag contains five red balls and seven green ones. (a) What is the probability of drawing a red ball on the first draw? Balls are not returned to the bag after each draw. (b) If you know that on the second draw the ball was a green one, what is now the probability of drawing a red ball on the first draw?\n\n- **Solution**: (a) $5/12$. (b) $5/11$, see the [Exercises](https://nbviewer.org/github/bertdv/BMLIP/blob/master/lessons/exercises/Exercises-Probability-Theory-Review.ipynb) notebook.\n\n- $\\Rightarrow$ Again, we conclude that conditional probabilities reflect **implications for a state of knowledge** rather than temporal causality.\n\n### Moments of the PDF\n\n- Consider a distribution $p(x)$. 
The **expected value** or **mean** is defined as \n$$\\mu_x = \\mathbb{E}[x] \\triangleq \\int x \\,p(x) \\,\\mathrm{d}{x}$$ \n\n- The **variance** of $x$ is defined as \n$$\\Sigma_x \\triangleq \\mathbb{E} \\left[(x-\\mu_x)(x-\\mu_x)^T \\right]$$ \n\n- The **covariance** matrix between _vectors_ $x$ and $y$ is defined as\n$$\\begin{align*}\n \\Sigma_{xy} &\\triangleq \\mathbb{E}\\left[ (x-\\mu_x) (y-\\mu_y)^T \\right]\\\\\n &= \\mathbb{E}\\left[ (x-\\mu_x) (y^T-\\mu_y^T) \\right]\\\\\n &= \\mathbb{E}[x y^T] - \\mu_x \\mu_y^T\n\\end{align*}$$\n - Clearly, if $x$ and $y$ are independent, then $\\Sigma_{xy} = 0$, since $\\mathbb{E}[x y^T] = \\mathbb{E}[x] \\mathbb{E}[y^T] = \\mu_x \\mu_y^T$.\n\n\n\n### Linear Transformations \n\n- Consider an arbitrary distribution $p(X)$ with mean $\\mu_x$ and variance $\\Sigma_x$ and the linear transformation $$Z = A X + b \\,.$$ \n\n- No matter the specification of $p(X)$, we can derive that (see [Exercises](https://nbviewer.org/github/bertdv/BMLIP/blob/master/lessons/exercises/Exercises-Probability-Theory-Review.ipynb) notebook)\n$$\\begin{align}\n\\mu_z &= A\\mu_x + b \\tag{SRG-3a}\\\\\n\\Sigma_z &= A\\,\\Sigma_x\\,A^T \\tag{SRG-3b}\n\\end{align}$$\n - (The tag (SRG-3a) refers to the corresponding eqn number in Sam Roweis [Gaussian identities](https://github.com/bertdv/BMLIP/blob/master/lessons/notebooks/files/Roweis-1999-gaussian-identities.pdf) notes.)\n\n\n\n\n### PDF for the Sum of Two Variables\n\n\n- Given eqs SRG-3a and SRG-3b (previous cell), you should now be able to derive the following: for any distribution of variable $X$ and $Y$ and sum $Z = X+Y$ (proof by [Exercise](https://nbviewer.org/github/bertdv/BMLIP/blob/master/lessons/exercises/Exercises-Probability-Theory-Review.ipynb))\n\n$$\\begin{align*}\n \\mu_z &= \\mu_x + \\mu_y \\\\\n \\Sigma_z &= \\Sigma_x + \\Sigma_y + 2\\Sigma_{xy} \n\\end{align*}$$\n\n- Clearly, it follows that if $X$ and $Y$ are **independent**, then\n\n$$\\Sigma_z = \\Sigma_x + \\Sigma_y 
$$\n\n- More generally, given two **independent** variables\n$X$ and $Y$ with PDFs $p_x(x)$ and $p_y(y)$, the PDF $p_z(z)$ for $Z=X+Y$ is given by the **convolution**\n\n$$\np_z (z) = \\int_{ - \\infty }^\\infty {p_x (x)p_y (z - x)\\,\\mathrm{d}{x}}\n$$ \n\n- **Proof**: Let $p_z(z)$ be the probability that $Z$ has value $z$. This occurs if $X$ has some value $x$ and at the same time $Y=z-x$, with joint probability $p_x(x)p_y(z-x)$. Since $x$ can be any value, we sum over all possible values for $x$ to get\n$\np_z (z) = \\int_{ - \\infty }^\\infty {p_x (x)p_y (z - x)\\,\\mathrm{d}{x}}\n$ \n \n - Note that $p_z(z) \\neq p_x(x) + p_y(y)\\,$ !!\n \n\n- [https://en.wikipedia.org/wiki/List_of_convolutions_of_probability_distributions](https://en.wikipedia.org/wiki/List_of_convolutions_of_probability_distributions) shows how these convolutions work out for a few common probability distributions. \n\n- In linear stochastic systems theory, the Fourier Transform of a PDF (i.e., the characteristic function) plays an important computational role. Why?\n\n### Code Example: Sum of Two Gaussian Distributed Variables\n\n- Consider the PDF of the sum of two independent Gaussian distributed $X$ and $Y$:\n\n$$\\begin{align*}\np_X(x) &= \\mathcal{N}(\\,x\\,|\\,\\mu_X,\\sigma_X^2\\,) \\\\ \np_Y(y) &= \\mathcal{N}(\\,y\\,|\\,\\mu_Y,\\sigma_Y^2\\,) \n\\end{align*}$$\n\n- Let $Z = X + Y$. 
Performing the convolution (nice exercise) yields a Gaussian PDF for $Z$: \n\n$$\np_Z(z) = \\mathcal{N}(\\,z\\,|\\,\\mu_X+\\mu_Y,\\sigma_X^2+\\sigma_Y^2\\,).\n$$\n\n\n```julia\nusing PyPlot, Distributions\n\u03bcx = 2.\n\u03c3x = 1.\n\u03bcy = 2.\n\u03c3y = 0.5\n\u03bcz = \u03bcx+\u03bcy; \u03c3z = sqrt(\u03c3x^2 + \u03c3y^2)\nx = Normal(\u03bcx, \u03c3x)\ny = Normal(\u03bcy, \u03c3y)\nz = Normal(\u03bcz, \u03c3z)\nrange_min = minimum([\u03bcx-2*\u03c3x, \u03bcy-2*\u03c3y, \u03bcz-2*\u03c3z])\nrange_max = maximum([\u03bcx+2*\u03c3x, \u03bcy+2*\u03c3y, \u03bcz+2*\u03c3z])\nrange_grid = range(range_min, stop=range_max, length=100)\nplot(range_grid, pdf.(x,range_grid), \"k-\")\nplot(range_grid, pdf.(y,range_grid), \"b-\")\nplot(range_grid, pdf.(z,range_grid), \"r-\")\nlegend([L\"p_X\", L\"p_Y\", L\"p_Z\"])\ngrid()\n```\n\n### PDF for the Product of Two Variables\n\n- For two continuous random **independent** variables\n$X$ and $Y$, with PDF's $p_x(x)$ and $p_y(y)$, the PDF of \n$Z = X Y $ is given by \n$$\np_z(z) = \\int_{-\\infty}^{\\infty} p_x(x) \\,p_y(z/x)\\, \\frac{1}{|x|}\\,\\mathrm{d}x\n$$\n\n- For proof, see [https://en.wikipedia.org/wiki/Product_distribution](https://en.wikipedia.org/wiki/Product_distribution)\n\n- Generally, this integral does not lead to an analytical expression for $p_z(z)$. 
For example, [**the product of two independent variables that are both normally (Gaussian) distributed does not lead to a normal distribution**](https://nbviewer.jupyter.org/github/bertdv/BMLIP/blob/master/lessons/notebooks/The-Gaussian-Distribution.ipynb#product-of-gaussians).\n - Exception: the distribution of the product of two variables that both have [log-normal distributions](https://en.wikipedia.org/wiki/Log-normal_distribution) is again a log-normal distribution.\n - (If $X$ has a normal distribution, then $Y=\\exp(X)$ has a log-normal distribution.)\n\n### Variable Transformations\n\n- Suppose $x$ is a **discrete** random variable with probability **mass** function $P_x(x)$, and $y = h(x)$ is a one-to-one function with $x = g(y) = h^{-1}(y)$. Then\n\n$$\nP_y(y) = P_x(g(y))\\,.\n$$\n\n- **Proof**: $P_y(\\hat{y}) = P(y=\\hat{y}) = P(h(x)=\\hat{y}) = P(x=g(\\hat{y})) = P_x(g(\\hat{y})). \\,\\square$\n\n- If $x$ is defined on a **continuous** domain, and $p_x(x)$ is a probability **density** function, then probability mass is represented by the area under a (density) curve. Let $a=g(c)$ and $b=g(d)$. Then\n$$\\begin{align*}\nP(a \u2264 x \u2264 b) &= \\int_a^b p_x(x)\\mathrm{d}x \\\\\n &= \\int_{g(c)}^{g(d)} p_x(x)\\mathrm{d}x \\\\\n &= \\int_c^d p_x(g(y))\\mathrm{d}g(y) \\\\\n &= \\int_c^d \\underbrace{p_x(g(y)) g^\\prime(y)}_{p_y(y)}\\mathrm{d}y \\\\ \n &= P(c \u2264 y \u2264 d)\n\\end{align*}$$\n\n- Equating the two probability masses leads to identification of the relation \n$$p_y(y) = p_x(g(y)) g^\\prime(y)\\,,$$ \nwhich is also known as the [Change-of-Variable theorem](https://en.wikipedia.org/wiki/Probability_density_function#Function_of_random_variables_and_change_of_variables_in_the_probability_density_function). (The derivation above assumes $g$ is monotonically increasing; for a decreasing $g$, use $|g^\\prime(y)|$ instead.) \n\n- If the transformation $y = h(x)$ is not invertible, then $x=g(y)$ does not exist. In that case, you can still work out the transformation by equating equivalent probability masses in the two domains. 
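The change-of-variable relation can be checked numerically by transforming samples. The sketch below (Python/NumPy; the choice $y = \exp(x)$ with $x \sim \mathcal{N}(0,1)$, which makes $y$ log-normal, is our own illustration) compares an empirical density of the transformed samples with $p_y(y) = p_x(g(y))\,g^\prime(y)$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)  # x ~ N(0, 1)
y = np.exp(x)                       # y = h(x), so x = g(y) = log(y), g'(y) = 1/y > 0

def p_x(x):
    """Standard normal pdf."""
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def p_y(y):
    """Change-of-variable prediction: p_x(g(y)) * g'(y), i.e. the log-normal pdf."""
    return p_x(np.log(y)) / y

# Empirical density of y near y0, estimated from a narrow histogram bin
y0, half_width = 1.5, 0.01
empirical = np.mean(np.abs(y - y0) < half_width) / (2 * half_width)

print(empirical, p_y(y0))  # the two estimates should be close
```

Since $g(y) = \log(y)$ is monotonically increasing, $g^\prime(y) = 1/y$ is positive and no absolute value is needed in this case.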
\n\n### Example: Transformation of a Gaussian Variable\n\n- Let $p_x(x) = \\mathcal{N}(x|\\mu,\\sigma^2)$ and $y = \\frac{x-\\mu}{\\sigma}$. \n\n- **Problem**: What is $p_y(y)$? \n\n- **Solution**: Note that $h(x)$ is invertible with $x = g(y) = \\sigma y + \\mu$. The change-of-variable formula leads to\n$$\\begin{align*}\np_y(y) &= p_x(g(y)) \\cdot g^\\prime(y) \\\\\n &= p_x(\\sigma y + \\mu) \\cdot \\sigma \\\\\n &= \\frac{1}{\\sigma\\sqrt{2 \\pi}} \\exp\\left( - \\frac{(\\sigma y + \\mu - \\mu)^2}{2\\sigma^2}\\right) \\cdot \\sigma \\\\\n &= \\frac{1}{\\sqrt{2 \\pi}} \\exp\\left( - \\frac{y^2 }{2}\\right)\\\\\n &= \\mathcal{N}(y|0,1) \n\\end{align*}$$\n\n### Summary\n\n- Probabilities should be interpreted as degrees of belief, i.e., a state-of-knowledge, rather than a property of nature.\n\n- We can do everything with only the **sum rule** and the **product rule**. In practice, **Bayes rule** and **marginalization** are often very useful for inference, i.e., for computing\n\n$$p(\\,\\text{what-we-want-to-know}\\,|\\,\\text{what-we-already-know}\\,)\\,.$$\n\n- Bayes rule $$ p(\\theta|D) = \\frac{p(D|\\theta)p(\\theta)} {p(D)} $$ is the fundamental rule for learning from data!\n\n- For a variable $X$ with distribution $p(X)$ with mean $\\mu_x$ and variance $\\Sigma_x$, the mean and variance of the **Linear Transformation** $Z = AX +b$ are given by \n$$\\begin{align}\n\\mu_z &= A\\mu_x + b \\tag{SRG-3a}\\\\\n\\Sigma_z &= A\\,\\Sigma_x\\,A^T \\tag{SRG-3b}\n\\end{align}$$\n\n- That's really about all you need to know about probability theory, but you need to _really_ know it, so do the [Exercises](https://nbviewer.org/github/bertdv/BMLIP/blob/master/lessons/exercises/Exercises-Probability-Theory-Review.ipynb).\n\n\n```julia\nopen(\"../../styles/aipstyle.html\") do f\n display(\"text/html\", read(f,String))\nend\n```\n
# YOUR PROJECT TITLE\n\n> **Note the following:** \n> 1. This is *not* meant to be an example of an actual **model analysis project**, just an example of how to structure such a project.\n> 1. Remember the general advice on structuring and commenting your code from [lecture 5](https://numeconcopenhagen.netlify.com/lectures/Workflow_and_debugging).\n> 1. Remember this [guide](https://www.markdownguide.org/basic-syntax/) on markdown and (a bit of) latex.\n> 1. Turn on automatic numbering by clicking on the small icon on top of the table of contents in the left sidebar.\n> 1. 
The `modelproject.py` file includes a function which could be used multiple times in this notebook.\n\nImports and set magics:\n\n\n```python\nimport numpy as np\nfrom scipy import optimize\nimport sympy as sm\n\n# autoreload modules when code is run\n%load_ext autoreload\n%autoreload 2\n\n# local modules\nimport modelproject\n```\n\n# Model description\n\n**Write out the model in equations here.** \n\nMake sure you explain the purpose of the model well, and comment your code so that other students who may not have seen it before can follow. \n\n## Analytical solution\n\nIf your model allows for an analytical solution, you should provide it here.\n\nYou may use Sympy for this. Then you can characterize the solution as a function of a parameter of the model.\n\nTo characterize the solution, first derive a steady state equation as a function of a parameter using Sympy.solve and then turn it into a python function by Sympy.lambdify. See the lecture notes for details. \n\n## Numerical solution\n\nYou can always solve a model numerically. \n\nDefine first the set of parameters you need. \n\nThen choose one of the optimization algorithms that we have gone through in the lectures based on what you think is most fitting for your model.\n\nAre there any problems with convergence? Does the model converge for all starting values? Do plenty of testing to figure these things out. \n\n# Further analysis\n\nMake detailed visualizations of how your model changes with parameter values. \n\nTry to make an extension of the model. \n\n# Conclusion\n\nAdd a concise conclusion. 
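To make the analytical-solution workflow mentioned above concrete (derive a steady state with `Sympy.solve`, then convert it with `Sympy.lambdify`), here is a minimal sketch; the Solow-style steady-state condition $s k^{\alpha} = \delta k$ is only an illustrative stand-in for your own model's equation:

```python
import sympy as sm

# Illustrative steady-state condition s*k**alpha = delta*k (Solow-style);
# substitute your own model's steady-state equation here.
k, alpha, s, delta = sm.symbols('k alpha s delta', positive=True)
steady_state_eq = sm.Eq(s * k**alpha, delta * k)

# 1. characterize the solution analytically with Sympy.solve
k_star = sm.solve(steady_state_eq, k)[0]   # closed form in (s, delta, alpha)

# 2. turn it into a fast numerical function with Sympy.lambdify
k_star_func = sm.lambdify((s, delta, alpha), k_star)

print(k_star)                      # symbolic steady-state capital
print(k_star_func(0.2, 0.1, 1/3))  # numeric check: (0.2/0.1)**1.5 ≈ 2.83
```

Declaring the symbols `positive=True` lets `solve` discard the trivial $k=0$ root and return the interior steady state directly.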
Probabilistic Programming\n=====\nand Bayesian Methods for Hackers \n========\n\n##### Version 0.1\n\n`Original content created by Cam Davidson-Pilon`\n\n`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`\n___\n\n\nWelcome to *Bayesian Methods for Hackers*. 
The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. 
\n\nThe Bayesian world-view interprets probability as measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nFor this to be clearer, we consider an alternative interpretation of probability: *Frequentist*, known as the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying across all these realities, the frequency of occurrences defines the probability. \n\nBayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. 
Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you that candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probabilities is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. By contrast, you have to be *trained* to think like a frequentist. 
\n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. 
This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. 
As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. 
Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\")\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to })\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. 
Bayesian inference merely uses it to connect the prior probability $P(A)$ with the updated posterior probability $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?\n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).\n\n\n```python\n"""\nThe book uses a custom matplotlibrc file, which provides the unique styles for\nmatplotlib plots. If executing this book, and you wish to use the book's\nstyling, provided are two options:\n 1. Overwrite your own matplotlibrc file with the rc-file provided in the\n book's styles/ dir. See http://matplotlib.org/users/customizing.html\n 2. Also in the styles is bmh_matplotlibrc.json file. This can be used to\n update the styles in only this notebook. Try running the following code:\n\n import json\n s = json.load(open("../styles/bmh_matplotlibrc.json"))\n matplotlib.rcParams.update(s)\n\n"""\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. 
LOOK AT PICTURE, MICHAEL!\n%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 101)\n\n# For the already prepared, I'm using Binomial's conj. prior.\nfor k, N in enumerate(n_trials):\n sx = plt.subplot(len(n_trials)//2, 2, k+1)\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials)-1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). 
As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
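Before plotting the full curve, we can sanity-check the algebra with a few lines of standalone Python (a quick numeric sketch, not part of the original analysis; the helper name `posterior_no_bugs` is ours):

```python
# Bayes' theorem for the debugging example:
#   P(A|X) = P(X|A) * p / (P(X|A) * p + P(X|~A) * (1 - p))
# with P(X|A) = 1 and P(X|~A) = 0.5, which should simplify to 2p / (1 + p).

def posterior_no_bugs(p, p_pass_given_bugs=0.5):
    """Posterior probability that the code is bug-free, given all tests pass."""
    return p / (p + p_pass_given_bugs * (1 - p))

# The direct computation and the closed form agree across the prior range.
for p in [0.0, 0.1, 0.2, 0.5, 0.9, 1.0]:
    assert abs(posterior_no_bugs(p) - 2 * p / (1 + p)) < 1e-12

print(round(posterior_no_bugs(0.2), 4))  # a prior of 0.2 updates to about 0.3333
```
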
\n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e., they are a combination of the above two categories. \n\n### Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots $$\n\n$\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. 
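To make the formula concrete, here is a small standalone check (our own sketch, not from the original text): evaluated directly, the mass function's probabilities sum to 1 over the non-negative integers, and its mean works out to $\lambda$ itself, a property the text returns to shortly.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(Z = k) = lam**k * exp(-lam) / k!, for k = 0, 1, 2, ..."""
    return lam**k * exp(-lam) / factorial(k)

lam = 4.25

# The probabilities over the non-negative integers sum to 1 (the tail
# beyond k = 100 is negligibly small for this lambda)...
total = sum(poisson_pmf(k, lam) for k in range(100))

# ...and the distribution's mean equals the intensity parameter lambda.
mean = sum(k * poisson_pmf(k, lam) for k in range(100))

print(round(total, 6), round(mean, 6))  # 1.0 4.25
```
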
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\");\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. 
But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. 
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(-1, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. 
Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. 
Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC3\n-----\n\nPyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. 
One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.\n\nWe will model the problem above using PyMC3. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC3 code is easy to read. The only novel thing should be the syntax. 
Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables.\n\n\n```python\nimport pymc3 as pm\nimport theano.tensor as tt\n\nwith pm.Model() as model:\n alpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\n lambda_1 = pm.Exponential(\"lambda_1\", alpha)\n lambda_2 = pm.Exponential(\"lambda_2\", alpha)\n \n tau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data - 1)\n```\n\nIn the code above, we create the PyMC3 variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC3's *stochastic variables*, so-called because they are treated by the back end as random number generators.\n\n\n```python\nwith model:\n idx = np.arange(n_count_data) # Index\n lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`.\n\nNote that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n\n```python\nwith model:\n observation = pm.Poisson(\"obs\", lambda_, observed=count_data)\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword. \n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. 
This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```python\n### Mysterious code to be explained in Chapter 3.\nwith model:\n step = pm.Metropolis()\n trace = pm.sample(10000, tune=5000,step=step)\n```\n\n :4: FutureWarning: In v4.0, pm.sample will return an `arviz.InferenceData` object instead of a `MultiTrace` by default. You can pass return_inferencedata=True or return_inferencedata=False to be safe and silence this warning.\n trace = pm.sample(10000, tune=5000,step=step)\n Multiprocess sampling (2 chains in 2 jobs)\n CompoundStep\n >Metropolis: [tau]\n >Metropolis: [lambda_2]\n >Metropolis: [lambda_1]\n\n\n\n\n
\n\n\n\n Sampling 2 chains for 5_000 tune and 10_000 draw iterations (10_000 + 20_000 draws total) took 9 seconds.\n The number of effective samples is smaller than 25% for some parameters.\n\n\n\n```python\nlambda_1_samples = trace['lambda_1']\nlambda_2_samples = trace['lambda_2']\ntau_samples = trace['tau']\n```\n\n\n```python\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", density=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", density=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. 
The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? 
Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. \n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum() + \n lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper 
left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\n#type your code here.\n```\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```python\n#type your code here.\n```\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45. That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\n#type your code here.\n```\n\n### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg).\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Salvatier, J, Wiecki TV, and Fonnesbeck C. (2016) Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55 \n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. 
Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" Online posting, Google, 24 Mar 2013.\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n
Probabilistic Programming\n=====\nand Bayesian Methods for Hackers \n========\n\n##### Version 0.1\n\n`Original content created by Cam Davidson-Pilon`\n\n`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`\n___\n\n\nWelcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. 
Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. \n\nThe Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nTo make this clearer, we consider an alternative interpretation of probability: the *frequentist* interpretation, known as the more *classical* version of statistics, assumes that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that across all these realities, the frequency of occurrences defines the probability. \n\nBayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. 
Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the plane-accident example above: having observed the frequency of plane accidents, an individual's belief should equal that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you that candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs about events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. 
\n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. \n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). 
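This re-weighting becomes plain arithmetic once numbers are attached. A minimal sketch of example 2 with assumed values (a prior of 0.2 that the code is bug-free, and an assumed 0.5 chance that buggy code still passes the tests — these numbers are illustrative, not from any dataset):

```python
# Bayesian re-weighting of a prior by evidence (assumed, illustrative numbers)
p_A = 0.2             # prior: P(code is bug-free)
p_X_given_A = 1.0     # bug-free code always passes the tests
p_X_given_notA = 0.5  # assumed: buggy code still passes half the time

# Bayes' rule: P(A|X) = P(X|A) P(A) / P(X)
p_X = p_X_given_A * p_A + p_X_given_notA * (1 - p_A)
p_A_given_X = p_X_given_A * p_A / p_X

print(p_A_given_X)  # 1/3: the prior 0.2 is re-weighted upward, not replaced
```

The passing tests pull the prior of 0.2 up to 1/3; the evidence re-weighted our belief rather than overwriting it.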
\n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. 
For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. 
Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\")\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to })\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. 
Bayesian inference merely uses it to connect prior probabilities $P(A)$ with updated posterior probabilities $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?\n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).\n\n\n```python\n\"\"\"\nThe book uses a custom matplotlibrc file, which provides the unique styles for\nmatplotlib plots. If executing this book, and you wish to use the book's\nstyling, provided are two options:\n 1. Overwrite your own matplotlibrc file with the rc-file provided in the\n book's styles/ dir. See http://matplotlib.org/users/customizing.html\n 2. Also in the styles is bmh_matplotlibrc.json file. This can be used to\n update the styles in only this notebook. Try running the following code:\n\n import json\n s = json.load(open(\"../styles/bmh_matplotlibrc.json\"))\n matplotlib.rcParams.update(s)\n\n\"\"\"\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. 
LOOK AT PICTURE, MICHAEL!\n%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj. prior.\nfor k, N in enumerate(n_trials):\n sx = plt.subplot(len(n_trials)//2, 2, k+1)\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials)-1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason they should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). 
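This extreme-data case can be checked directly. The plotting code above uses the Binomial's conjugate prior, a flat Beta(1, 1), so the posterior after observing `heads` heads in `N` tosses is Beta(1 + heads, 1 + N - heads). A quick sketch for 8 tosses with only 1 head:

```python
# Posterior for an extreme sample under a flat Beta(1, 1) prior
# (same conjugate update as in the plotting code above)
import scipy.stats as stats

heads, N = 1, 8
posterior = stats.beta(1 + heads, 1 + N - heads)  # Beta(2, 8)

print(posterior.mean())    # 0.2 -- well below 0.5
print(posterior.cdf(0.5))  # ~0.98: almost all posterior mass lies below 0.5
```

With so few tosses, a bet on a fair coin looks shaky: the posterior is biased well away from 0.5.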
As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
\n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. they are a combination of the above two categories. \n\n### Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots $$\n\n$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\\lambda$ can be any positive number. By increasing $\\lambda$, we add more probability to larger values, and conversely by decreasing $\\lambda$ we add more probability to smaller values. One can describe $\\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. 
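As a quick check (this cell is not in the original text), we can confirm numerically that the mass function above defines a valid probability distribution, i.e. that its values sum to 1 over the non-negative integers:

```python
# Sketch (not from the original notebook): evaluate the closed-form Poisson
# mass function over enough integers that the truncated tail is negligible.
from math import exp, factorial

lam = 4.25  # the "intensity"; any positive number works
pmf = [lam**k * exp(-lam) / factorial(k) for k in range(100)]

assert abs(sum(pmf) - 1.0) < 1e-9  # probabilities sum to (numerically) 1
```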
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \;Z\; | \; \\lambda \;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\");\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of a continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \;\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. 
But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. 
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. 
Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. 
Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC3\n-----\n\nPyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. 
One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.\n\nWe will model the problem above using PyMC3. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC3 code is easy to read. The only novel thing should be the syntax. 
Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables.\n\n\n```python\nimport pymc3 as pm\nimport theano.tensor as tt\n\nwith pm.Model() as model:\n alpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\n lambda_1 = pm.Exponential(\"lambda_1\", alpha)\n lambda_2 = pm.Exponential(\"lambda_2\", alpha)\n \n tau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data - 1)\n```\n\nIn the code above, we create the PyMC3 variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC3's *stochastic variables*, so-called because they are treated by the back end as random number generators.\n\n\n```python\nwith model:\n idx = np.arange(n_count_data) # Index\n lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`.\n\nNote that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n\n```python\nwith model:\n observation = pm.Poisson(\"obs\", lambda_, observed=count_data)\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword. \n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. 
This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```python\n### Mysterious code to be explained in Chapter 3.\nwith model:\n step = pm.Metropolis()\n trace = pm.sample(10000, tune=5000, step=step)\n```\n\n    Multiprocess sampling (4 chains in 4 jobs)\n    CompoundStep\n    >Metropolis: [tau]\n    >Metropolis: [lambda_2]\n    >Metropolis: [lambda_1]\n\n\n\n\n
\n\n\n\n Sampling 4 chains for 5_000 tune and 10_000 draw iterations (20_000 + 40_000 draws total) took 7 seconds.\n The number of effective samples is smaller than 25% for some parameters.\n\n\n\n```python\nlambda_1_samples = trace['lambda_1']\nlambda_2_samples = trace['lambda_2']\ntau_samples = trace['tau']\n```\n\n\n```python\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", density=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", density=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. 
The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? 
Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. \n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper 
left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\n#type your code here.\n\n```\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```python\n#type your code here.\n```\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45. That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\n#type your code here.\n```\n\n### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg).\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Salvatier, J, Wiecki TV, and Fonnesbeck C. (2016) Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55 \n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. 
Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" Online posting to Google+, 24 Mar 2013.\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n
Probabilistic Programming\n=====\nand Bayesian Methods for Hackers \n========\n\n##### Version 0.1\n\n`Original content created by Cam Davidson-Pilon`\n\n`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`\n___\n\n\nWelcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. 
Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. \n\nThe Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nFor this to be clearer, we consider an alternative interpretation of probability: *Frequentists*, practitioners of the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that, across all these realities, the frequency of occurrences defines the probability. \n\nBayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. 
Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. 
\n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. \n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). 
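This re-weighting can be made concrete with a few lines of arithmetic. Below is a minimal sketch based on the disease example; all of the prior and likelihood numbers are hypothetical, invented purely for illustration. Each prior belief is multiplied by how well that disease explains the observed evidence, and the results are renormalized so the beliefs again sum to one.

```python
# Hypothetical prior beliefs over three candidate diseases (they sum to 1).
priors = [0.5, 0.3, 0.2]

# Hypothetical likelihoods: how probable the observed evidence X is under
# each disease. The third disease cannot produce the evidence at all.
likelihoods = [0.9, 0.4, 0.0]

# Re-weight each prior by its likelihood, then renormalize.
unnormalized = [p * l for p, l in zip(priors, likelihoods)]
total = sum(unnormalized)
posteriors = [u / total for u in unnormalized]
```

The ruled-out disease ends with posterior belief 0, while the remaining beliefs are re-weighted but not discarded, exactly the behaviour described above.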
\n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. 
For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. 
Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\")\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to })\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. 
Bayesian inference merely uses it to connect prior probabilities $P(A)$ with updated posterior probabilities $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data? \n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).\n\n\n```python\n\"\"\"\nThe book uses a custom matplotlibrc file, which provides the unique styles for\nmatplotlib plots. If executing this book, and you wish to use the book's\nstyling, provided are two options:\n    1. Overwrite your own matplotlibrc file with the rc-file provided in the\n       book's styles/ dir. See http://matplotlib.org/users/customizing.html\n    2. Also in the styles directory is the bmh_matplotlibrc.json file. This can\n       be used to update the styles in only this notebook. Try running the\n       following code:\n\n        import json\n        s = json.load(open(\"../styles/bmh_matplotlibrc.json\"))\n        matplotlib.rcParams.update(s)\n\n\"\"\"\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. 
\n%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using the Binomial's conjugate prior.\nfor k, N in enumerate(n_trials):\n    # integer division keeps the subplot index an int under Python 3\n    sx = plt.subplot(len(n_trials)//2, 2, k+1)\n    plt.xlabel(\"$p$, probability of heads\") \\\n        if k in [0, len(n_trials)-1] else None\n    plt.setp(sx.get_yticklabels(), visible=False)\n    heads = data[:N].sum()\n    y = dist.pdf(x, 1 + heads, 1 + N - heads)\n    plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n    plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n    plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n    leg = plt.legend()\n    leg.get_frame().set_alpha(0.4)\n    plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n             y=1.02,\n             fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason they should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). 
As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
\n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make their values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous outcomes, i.e., they combine the above two categories. \n\n### Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability that $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$; that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's begin with the first, and very useful, one. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots $$\n\n$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\\lambda$ can be any positive number. By increasing $\\lambda$, we add more probability to larger values, and conversely by decreasing $\\lambda$ we add more probability to smaller values. One can describe $\\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0, 1, 2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. 
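As a quick sanity check of the mass function above, here is a standalone sketch (standard library only) confirming that, although every non-negative integer receives positive probability, the probabilities still sum to one, and that a larger $\lambda$ shifts probability toward larger values of $k$:

```python
import math

def poisson_pmf(k, lam):
    # P(Z = k) = lam**k * exp(-lam) / k!, for k = 0, 1, 2, ...
    return lam**k * math.exp(-lam) / math.factorial(k)

# Summing over enough integers recovers (essentially) all the probability.
total = sum(poisson_pmf(k, 4.25) for k in range(60))

# Increasing lambda moves probability mass toward larger k: compare the
# probability of observing 8 or more events under two intensities.
tail_small = sum(poisson_pmf(k, 1.5) for k in range(8, 60))
tail_large = sum(poisson_pmf(k, 4.25) for k in range(8, 60))
```

Here `total` is 1 up to floating-point error, and `tail_large` exceeds `tail_small`, matching the description of $\lambda$ as the distribution's intensity.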
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\");\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. 
But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. 
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. 
Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. 
Our initial guess at $\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\frac{1}{N}\sum_{i=0}^{N-1} \;C_i \approx E[\; \lambda \; |\; \alpha ] = \frac{1}{\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\lambda_i$. Creating two exponential distributions with different $\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\begin{align}\n& \tau \sim \text{DiscreteUniform(1,70) }\\\\\n& \Rightarrow P( \tau = k ) = \frac{1}{70}\n\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC3\n-----\n\nPyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. 
One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.\n\nWe will model the problem above using PyMC3. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC3 code is easy to read. The only novel thing should be the syntax. 
Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables.\n\n\n```python\nimport pymc3 as pm\nimport theano.tensor as tt\n\nwith pm.Model() as model:\n alpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\n lambda_1 = pm.Exponential(\"lambda_1\", alpha)\n lambda_2 = pm.Exponential(\"lambda_2\", alpha)\n \n tau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data - 1)\n```\n\nIn the code above, we create the PyMC3 variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC3's *stochastic variables*, so-called because they are treated by the back end as random number generators.\n\n\n```python\nwith model:\n idx = np.arange(n_count_data) # Index\n lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`.\n\nNote that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n\n```python\nwith model:\n observation = pm.Poisson(\"obs\", lambda_, observed=count_data)\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword. \n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. 
This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```python\n### Mysterious code to be explained in Chapter 3.\nwith model:\n step = pm.Metropolis()\n trace = pm.sample(10000, tune=5000,step=step)\n```\n\n 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 15000/15000 [00:06<00:00, 2447.84it/s]\n\n\n\n```python\nlambda_1_samples = trace['lambda_1']\nlambda_2_samples = trace['lambda_2']\ntau_samples = trace['tau']\n```\n\n\n```python\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n 
color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. 
Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. 
\n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples for which the\n # switchpoint has not yet occurred by 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate about what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. 
(In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\n#type your code here.\n```\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```python\n#type your code here.\n```\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45? That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\n#type your code here.\n```\n\n### References\n\n\n- [1] Gelman, Andrew. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg). Web. 22 Jan 2013.\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Salvatier J., Wiecki T.V., and Fonnesbeck C. (2016) Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55.\n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" Online posting. Web. 24 Mar 2013. 
\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n
Probabilistic Programming\n=====\nand Bayesian Methods for Hackers \n========\n\n##### Version 0.1\n\nOriginal content created by Cam Davidson-Pilon\n\nPorted to [Pyro](http://pyro.ai/) by Carlos Souza (souza@gatech.edu), with help from the [Pyro community](https://forum.pyro.ai/).\n___\n\n\nWelcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. 
Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. \n\nThe Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nFor this to be clearer, we consider an alternative interpretation of probability: *frequentists*, who practice the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that, across all these realities, the frequency of occurrences defines the probability. \n\nBayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, in an event occurring. Simply put, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. 
Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. 
\n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. \n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). 
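This re-weighting can be made concrete in a few lines of plain Python. The helper below is our own sketch (the name `bayes_update` is not part of the original notebook); the numbers for the buggy-code case — a prior of 0.2 and a 0.5 chance of passing the tests despite a bug — are the same illustrative values used later in this chapter.

```python
def bayes_update(prior, like_given_A, like_given_not_A):
    """Return P(A|X) from P(A), P(X|A) and P(X|~A) via Bayes' rule."""
    evidence = like_given_A * prior + like_given_not_A * (1 - prior)
    return like_given_A * prior / evidence

# Example 2: the code passes all tests (X). Bug-free code always passes,
# so P(X|A) = 1; assume tests pass half the time even with a bug present.
posterior = bayes_update(prior=0.2, like_given_A=1.0, like_given_not_A=0.5)
print(posterior)  # ~1/3: the passing tests re-weighted our belief upward from 0.20
```

Note that when the two likelihoods are equal, the evidence carries no information and the posterior equals the prior — exactly the "no re-weighting" case.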
\n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to })\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. Bayesian inference merely uses it to connect prior probabilities $P(A)$ with an updated posterior probabilities $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example, I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data? 
More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?\n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).\n\n\n```python\n%matplotlib inline\n\nimport torch\nimport pyro\nimport pyro.distributions as dist\nfrom pyro.infer import MCMC, NUTS, HMC\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\n```\n\n\n```python\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\nx = np.linspace(0, 1, 100)\ndata = pyro.sample('coin_toss', pyro.distributions.Bernoulli(0.5), [n_trials[-1]])\n```\n\n\n```python\nfigsize(11, 9)\n\n# For the already prepared, I'm using the Binomial's conjugate prior.\nfor k, N in enumerate(n_trials):\n sx = plt.subplot(len(n_trials) // 2, 2, k+1)  # subplot rows must be an integer\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials)-1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n d = pyro.distributions.Beta(1 + heads, 1 + N - heads)\n y = torch.exp(d.log_prob(torch.tensor(x)))\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. 
There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. 
Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$?\n\n\n```python\nfigsize(12.5, 4)\np = torch.linspace(0, 1, 50)\nplt.plot(p, 2 * p / (1 + p), color=\"#348ABD\", lw=3)\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become more clear when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variable can take on arbitrarily exact values. For example, temperature, speed, time, color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. it is a combination of the above two categories. \n\n### Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots $$\n\n$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\\lambda$ can be any positive number. By increasing $\\lambda$, we add more probability to larger values, and conversely by decreasing $\\lambda$ we add more probability to smaller values. One can describe $\\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. 
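As a quick numerical sanity check — our own sketch in plain Python, separate from the notebook's torch-based cells — the mass function above can be evaluated directly; summing it over a generous range of $k$ recovers essentially all of the probability, and the weighted sum recovers the mean:

```python
import math

def poisson_pmf(k, lam):
    """P(Z = k) = lam**k * exp(-lam) / k!  for k = 0, 1, 2, ..."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = 4.25
total = sum(poisson_pmf(k, lam) for k in range(100))   # tail beyond 100 is negligible
mean = sum(k * poisson_pmf(k, lam) for k in range(100))
print(round(total, 6), round(mean, 6))  # 1.0 4.25
```

The second number previews a property used heavily below: the expected value of a Poisson random variable equals its parameter $\lambda$.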
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nlambda_ = [1.5, 4.25]\n\nd = dist.Poisson(torch.tensor(lambda_))\nx = torch.arange(16).float().view(-1, 1)\ny = torch.exp(d.log_prob(x))\n\ncolours = [\"#348ABD\", \"#A60628\"]\nplt.bar(x.flatten(), y[:, 0], color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(x.flatten(), y[:, 1], color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(x.flatten().numpy() + 0.4, x.flatten().numpy().astype(int))\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\");\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. 
The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\nlambda_ = [0.5, 1]\nd = dist.Exponential(torch.tensor(lambda_))\n\nx = torch.linspace(0, 4, 100).view(-1, 1)\ny = torch.exp(d.log_prob(x))\n```\n\n\n```python\nfor i, c in enumerate(colours):\n plt.plot(x.flatten(), y[:, i], lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % lambda_[i])\n plt.fill_between(x.flatten(), y[:, i], color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. 
Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = torch.from_numpy(np.loadtxt(\"data/txtdata.csv\"))\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? 
Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the posterior distributions of the two $\\lambda$s should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. 
Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=1}^{N} C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform}(1, 70) \\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to [Pyro](https://pyro.ai/), a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. 
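As a quick check of the rule of thumb above, here is a small numpy sketch (the counts are synthetic stand-ins for `count_data`, not the real data set):

```python
import numpy as np

rng = np.random.default_rng(42)
count_data = rng.poisson(lam=18.0, size=70)  # fake daily text counts

# rule of thumb: hyper-parameter alpha = 1 / mean of the count data
alpha = 1.0 / count_data.mean()

# the prior Exp(alpha) then has expected value 1/alpha = the data mean,
# so the prior on lambda is centered where the data actually lives
print(alpha, 1.0 / alpha)
```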
\n\n\n## Introducing our first hammer: Pyro\n\nPyro is a Python library for Bayesian analysis and probabilistic programming. It is intended for data scientists, statisticians, machine learning practitioners, and scientists. Since it is built on the PyTorch stack, it brings the runtime benefits of PyTorch to Bayesian analysis. These include write-once run-many (ability to run your development model in production) and speedups via state-of-the-art hardware (GPUs and TPUs). \n\nSince Pyro is relatively new, the Pyro community is actively developing documentation, \nespecially docs and examples that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why Pyro is so cool.\n\nWe will model the problem above using Pyro. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. \n\nB. Cronin [[5]](#scrollTo=nDdph0r1ABCn) has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward direction, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. 
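Cronin's "forward direction" can be made concrete without any inference machinery. The sketch below (plain numpy, with made-up hyper-parameters) runs the switchpoint story forward once, generating a fake data set from sampled parameters; inference is the backward problem of recovering $\lambda_1$, $\lambda_2$ and $\tau$ from such counts:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, alpha = 70, 1.0 / 20.0  # assumed values for the demo

# sample the latent parameters from their priors
lambda_1 = rng.exponential(scale=1.0 / alpha)
lambda_2 = rng.exponential(scale=1.0 / alpha)
tau = rng.integers(1, n_days + 1)  # DiscreteUniform(1, 70)

# generate one fake data set from the sampled parameters
lam = np.where(np.arange(n_days) < tau, lambda_1, lambda_2)
fake_counts = rng.poisson(lam)
print(tau, fake_counts[:5])
```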
\n \nPyro code is easy to read. The only novel thing should be the syntax. Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$) as variables.\n\n\n```python\nimport pyro\nimport pyro.distributions as dist\nimport torch\n\ndef model(data):\n alpha = 1.0 / data.mean()\n lambda_1 = pyro.sample(\"lambda_1\", dist.Exponential(alpha))\n lambda_2 = pyro.sample(\"lambda_2\", dist.Exponential(alpha))\n \n tau = pyro.sample(\"tau\", dist.Uniform(0, 1))\n lambda1_size = (tau * data.size(0) + 1).long()\n lambda2_size = data.size(0) - lambda1_size\n lambda_ = torch.cat([lambda_1.expand((lambda1_size,)),\n lambda_2.expand((lambda2_size,))])\n\n with pyro.plate(\"data\", data.size(0)):\n pyro.sample(\"obs\", dist.Poisson(lambda_), obs=data)\n```\n\n\n```python\nfrom pyro.infer import MCMC, NUTS\n\nkernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True, max_tree_depth=3)\nposterior = MCMC(kernel, num_samples=5000, warmup_steps=500)\nposterior.run(count_data);\n```\n\n Sample: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5500/5500 [00:43, 126.11it/s, step size=2.45e-01, acc. 
prob=0.802]\n\n\n\n```python\nhmc_samples = {k: v.detach().cpu().numpy() for k, v in posterior.get_samples().items()}\nlambda_1_samples = hmc_samples['lambda_1']\nlambda_2_samples = hmc_samples['lambda_2']\ntau_samples = (hmc_samples['tau'] * count_data.size(0) + 1).astype(int)\n```\n\n\n```python\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", density=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", density=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. 
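Statements like "$\lambda_1$ is around 18" are just summaries of the sample arrays. A hedged sketch of how such summaries are computed (the samples below are synthetic stand-ins for `lambda_1_samples`, not the actual MCMC output):

```python
import numpy as np

rng = np.random.default_rng(7)
lambda_1_samples = rng.normal(18.0, 0.7, size=5000)  # fake posterior draws

post_mean = lambda_1_samples.mean()
lo, hi = np.quantile(lambda_1_samples, [0.03, 0.97])  # 94% credible interval
print(post_mean, (lo, hi))
```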
The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? 
Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. \n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples for which the switchpoint\n # occurs after 'day' (i.e. 'day' is still in the lambda_1 regime)\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper 
left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\n# Your code here.\n```\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```python\n# Your code here.\n```\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45. That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\n# Your code here.\n```\n\n### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg).\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Salvatier, J, Wiecki TV, and Fonnesbeck C. (2016) Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55 \n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. 
Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" Google+ post. Web. 24 Mar 2013.\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n
NO", "lm_q1_score": 0.4416730056646256, "lm_q2_score": 0.3208213138121609, "lm_q1q2_score": 0.14169811395269116}} {"text": "```python\nfrom IPython.core.display import HTML\nHTML(\"\"\"\n\n\"\"\")\n```\n\n\n\n\n\n\n\n\n\n\n# *Circuitos El\u00e9tricos I - Semana 11*\n\n### A integral de Laplace\n\nSeja $f(t)$ uma fun\u00e7\u00e3o definida no intervalo $0\\leq t \\leq \\infty$, com $t$ e $f(t)$ reais, ent\u00e3o a fun\u00e7\u00e3o $F(s)$, definida pela integral de Laplace\n\n$$\\large\n\\begin{equation}\nF(s)=\\mathcal{L}\\{f(t)\\}=\\int_{0}^{\\infty} f(t) e^{-s t} dt,\\;\\; s \\in \\mathbb{C},\n\\end{equation}\n$$\n\n\u00e9 conhecida como a transformada de Laplace de $f(t)$.\n\nPara informa\u00e7\u00f5es sobre como utilizar o Sympy para o c\u00e1lculo da transformada de Laplace:\n\nhttps://dynamics-and-control.readthedocs.io/en/latest/1_Dynamics/3_Linear_systems/Laplace%20transforms.html\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport sympy as sp\nfrom utils import round_expr, symdisp, symplot\n\nfrom sympy.polys.partfrac import apart\n\n# temp workaround\nimport warnings\nfrom matplotlib import MatplotlibDeprecationWarning\nwarnings.filterwarnings('ignore', category=MatplotlibDeprecationWarning)\n\nplt.rcParams['figure.figsize'] = 6, 4\nplt.rcParams['legend.fontsize'] = 13\nplt.rcParams['lines.linewidth'] = 2\nplt.rcParams['axes.grid'] = False\n```\n\n\n```python\n# transformada de Laplace\ndef L(f,t,s):\n return sp.laplace_transform(f, t, s, noconds=True)\n\n# transformada inversa de Laplace\ndef invL(F,s,t):\n return sp.re(sp.inverse_laplace_transform(F, s, t, noconds=True))\n\n# fun\u00e7\u00f5es para aux\u00edlio na expans\u00e3o em fra\u00e7\u00f5es parciais\ndef adjustCoeff(expr): \n coeff = expr.as_numer_denom()\n c0 = sp.poly(coeff[1].cancel()).coeffs()[0]\n \n return (coeff[0].cancel()/c0)/(coeff[1].cancel()/c0)\n\ndef partFrac(expr, Ndigits):\n expr = expr.cancel()\n expr = apart(adjustCoeff(expr), s, full=True).doit()\n \n 
return sp.N(expr, Ndigits)\n\nsp.init_printing()\n```\n\n#### Defining some symbolic variables of interest\n\n\n```python\nt, s = sp.symbols('t, s')\na = sp.symbols('a', real=True, positive=True)\nomega = sp.symbols('omega', real=True)\n```\n\n## Generate your own table of transforms\n\n\n```python\nfunc = [1,\n t,\n sp.exp(-a*t),\n t*sp.exp(-a*t),\n t**2*sp.exp(-a*t),\n sp.sin(omega*t),\n sp.cos(omega*t),\n 1 - sp.exp(-a*t),\n sp.exp(-a*t)*sp.sin(omega*t),\n sp.exp(-a*t)*sp.cos(omega*t),\n ]\nfunc\n```\n\n\n```python\nFs = [L(f,t,s) for f in func]\nFs\n```\n\n### Problem 1\n\nNo energy is stored in the circuit of the figure below at the moment the current source is switched on.\n\n\n\na. Find $I_a(s)$ and $I_b(s)$.\\\nb. Find $i_a(t)$ and $i_b(t)$.\\\nc. Find $V_a(s)$, $V_b(s)$ and $V_c(s)$.\\\nd. Find $v_a(t)$, $v_b(t)$ and $v_c(t)$.\n\n\n\na. Finding $I_a(s)$ and $I_b(s)$:\n\n\n```python\nI2, I3, s = sp.symbols('I2, I3, s')\n\n# define the system of mesh equations\neq1 = sp.Eq((4 + s)*I2 - 2*I3, 4) \neq2 = sp.Eq(-2*s*I2 + s*(4 + s)*I3, 8) \n\n# solve the system\nsoluc = sp.solve([eq1, eq2],[I2, I3], dict=True)\nsoluc\n\nI2 = [sol[I2] for sol in soluc]\nI3 = [sol[I3] for sol in soluc]\n\nI1 = 4/s\nI2 = I2[0]\nI3 = I3[0]\n\nprint('Mesh currents in the Laplace domain: \\n')\nsymdisp('I_1(s) =', I1, 'As')\nsymdisp('I_2(s) =', I2, 'As')\nsymdisp('I_3(s) =', I3, 'As')\n```\n\n Mesh currents in the Laplace domain: \n \n\n\n\n$\\displaystyle I_1(s) =\\frac{4}{s}\\;As$\n\n\n\n$\\displaystyle I_2(s) =\\frac{4 s + 8}{s^{2} + 6 s}\\;As$\n\n\n\n$\\displaystyle I_3(s) =\\frac{16}{s^{2} + 6 s}\\;As$\n\n\n\n```python\n# compute Ia\nIa = I1-I2\nIa = Ia.simplify()\n\nsymdisp('I_a(s) =', Ia, 'As')\n```\n\n\n$\\displaystyle I_a(s) =\\frac{16}{s \\left(s + 6\\right)}\\;As$\n\n\n\n```python\n# compute Ib\nIb = I2\n\nsymdisp('I_b(s) =', Ib, 'As')\n```\n\n\n$\\displaystyle I_b(s) =\\frac{4 s + 8}{s^{2} + 
6 s}\\;As$\n\n\nb. Finding $i_a(t)$ and $i_b(t)$\n\n\n```python\nsymdisp('I_a(s) =', Ia.apart(), 'As')\n```\n\n\n$\\displaystyle I_a(s) =- \\frac{8}{3 \\left(s + 6\\right)} + \\frac{8}{3 s}\\;As$\n\n\n\n```python\nt = sp.symbols('t',real=True)\n\nia = invL(Ia.apart(),s,t)\n\nsymdisp('i_a(t) =', ia, 'A')\n```\n\n\n$\\displaystyle i_a(t) =\\frac{8 \\left(e^{6 t} - 1\\right) e^{- 6 t} \\theta\\left(t\\right)}{3}\\;A$\n\n\n\n```python\nsymdisp('I_b(s) =', Ib.apart(), 'As')\n```\n\n\n$\\displaystyle I_b(s) =\\frac{8}{3 \\left(s + 6\\right)} + \\frac{4}{3 s}\\;As$\n\n\n\n```python\nib = invL(Ib,s,t)\n\nsymdisp('i_b(t) =', ib, 'A')\n```\n\n\n$\\displaystyle i_b(t) =\\frac{4 \\left(e^{6 t} + 2\\right) e^{- 6 t} \\theta\\left(t\\right)}{3}\\;A$\n\n\nc. Finding $V_a(s)$, $V_b(s)$ and $V_c(s)$.\n\n\n```python\nVa = (100/s)*I2\nVb = (100/s)*(I3-I2)\nVc = (100/s)*(I1-I3)\n\nsymdisp('V_a(s) =', Va.simplify(), 'Vs')\nsymdisp('V_b(s) =', Vb.simplify(), 'Vs')\nsymdisp('V_c(s) =', Vc.simplify(), 'Vs')\n```\n\n\n$\\displaystyle V_a(s) =\\frac{400 \\left(s + 2\\right)}{s^{2} \\left(s + 6\\right)}\\;Vs$\n\n\n\n$\\displaystyle V_b(s) =\\frac{400 \\left(2 - s\\right)}{s^{2} \\left(s + 6\\right)}\\;Vs$\n\n\n\n$\\displaystyle V_c(s) =\\frac{400 \\left(s + 2\\right)}{s^{2} \\left(s + 6\\right)}\\;Vs$\n\n\n\n```python\nsymdisp('V_a(s) =', Va.apart(), 'Vs')\n```\n\n\n$\\displaystyle V_a(s) =- \\frac{400}{9 \\left(s + 6\\right)} + \\frac{400}{9 s} + \\frac{400}{3 s^{2}}\\;Vs$\n\n\n\n```python\nsymdisp('V_b(s) =', Vb.apart(), 'Vs')\n```\n\n\n$\\displaystyle V_b(s) =\\frac{800}{9 \\left(s + 6\\right)} - \\frac{800}{9 s} + \\frac{400}{3 s^{2}}\\;Vs$\n\n\n\n```python\nsymdisp('V_c(s) =', Vc.apart(), 'Vs')\n```\n\n\n$\\displaystyle V_c(s) =- \\frac{400}{9 \\left(s + 6\\right)} + \\frac{400}{9 s} + \\frac{400}{3 s^{2}}\\;Vs$\n\n\nd. 
Finding $v_a(t)$, $v_b(t)$ and $v_c(t)$.\n\n\n```python\nva = ((-400/9)*sp.exp(-6*t) + (400/9) + (400/3)*t)*sp.Heaviside(t)\n\nsymdisp('v_a(t) =', round_expr(va,2), 'V')\n```\n\n\n$\\displaystyle v_a(t) =\\left(133.33 t + 44.44 - 44.44 e^{- 6 t}\\right) \\theta\\left(t\\right)\\;V$\n\n\n\n```python\nvb = ((800/9)*sp.exp(-6*t) - (800/9) + (400/3)*t)*sp.Heaviside(t)\n\nsymdisp('v_b(t) =', round_expr(vb,2), 'V')\n```\n\n\n$\\displaystyle v_b(t) =\\left(133.33 t - 88.89 + 88.89 e^{- 6 t}\\right) \\theta\\left(t\\right)\\;V$\n\n\n\n```python\nvc = va\n\nsymdisp('v_c(t) =', round_expr(vc,2), 'V')\n```\n\n\n$\\displaystyle v_c(t) =\\left(133.33 t + 44.44 - 44.44 e^{- 6 t}\\right) \\theta\\left(t\\right)\\;V$\n\n\n\n```python\n# plot the time-domain waveforms\nintervalo = np.arange(-4, 10, 0.1)\nsymplot(t, [va, vb, vc], intervalo, ['va(t)','vb(t)','vc(t)'])\n```\n\nQuestion: do these solutions make sense for the circuit under analysis?\n\n### Problem 2\n\nNo energy is stored in the circuit of the figure below at the moment the voltage source is connected.\n\n\n\na. Find $V_0(s)$.\\\nb. 
Find $v_0(t)$.\n\n\n\n\n```python\nIa, Ib, s = sp.symbols('Ia, Ib, s')\n\n# define the system of equations\neq1 = sp.Eq(10*Ia + (s + 250/s)*Ib, 35/s) \neq2 = sp.Eq(Ia - (1 + 0.4*s)*Ib, 0)\n\n# solve the system\nsoluc = sp.solve([eq1, eq2],[Ia, Ib], dict=True)\nsoluc\n\nIa = [sol[Ia] for sol in soluc]\nIb = [sol[Ib] for sol in soluc]\n\nIa = Ia[0]\nIb = Ib[0]\n```\n\n\n```python\nsymdisp('I_a(s) =', Ia.simplify(), 'As')\n```\n\n\n$\\displaystyle I_a(s) =\\frac{14.0 s + 35.0}{5.0 s^{2} + 10.0 s + 250.0}\\;As$\n\n\n\n```python\nsymdisp('I_b(s) =', Ib.simplify(), 'As')\n```\n\n\n$\\displaystyle I_b(s) =\\frac{7.0}{s^{2} + 2.0 s + 50.0}\\;As$\n\n\n\n```python\nV0 = 35/s - 2*Ia\n\nsymdisp('V_0(s) =', V0.simplify(), 'Vs')\n```\n\n\n$\\displaystyle V_0(s) =\\frac{147.0 s^{2} + 280.0 s + 8750.0}{s \\left(5.0 s^{2} + 10.0 s + 250.0\\right)}\\;Vs$\n\n\n\n```python\nsymdisp('V_0(s) =', partFrac(V0, 2), 'Vs')\n```\n\n\n$\\displaystyle V_0(s) =\\frac{-2.8 - 0.59 i}{s + 1.0 + 7.0 i} + \\frac{-2.8 + 0.59 i}{s + 1.0 - 7.0 i} + \\frac{35.0}{s}\\;Vs$\n\n\n\n```python\nraizes = np.roots([1, 2, 50, 0])\nraizes\n```\n\n\n\n\n array([-1.+7.j, -1.-7.j, 0.+0.j])\n\n\n\n\n```python\nK = sp.symbols('K')\n\u03c3, \u03c9 = sp.symbols('\u03c3, \u03c9', real=True)\n\nj = sp.I\n\nF = K/(s + \u03c3 + j*\u03c9) + sp.conjugate(K)/(s + \u03c3 - j*\u03c9)\n\nsymdisp('F(s) =', F)\n```\n\n\n$\\displaystyle F(s) =\\frac{K}{s + \u03c3 + i \u03c9} + \\frac{\\overline{K}}{s + \u03c3 - i \u03c9}\\; $\n\n\n\n```python\nsymdisp('f(t) =', invL(F,s,t))\n```\n\n\n$\\displaystyle f(t) =\\left(2 \\sin{\\left(t \u03c9 \\right)} \\operatorname{im}{\\left(K\\right)} + 2 \\cos{\\left(t \u03c9 \\right)} \\operatorname{re}{\\left(K\\right)}\\right) e^{- t \u03c3} \\theta\\left(t\\right)\\; $\n\n\n\n```python\nv0 = (35 + sp.exp(-t)*(-5.6*sp.cos(7*t)-1.2*sp.sin(7*t)))*sp.Heaviside(t)\n\nsymdisp('v_0(t) =', v0)\n```\n\n\n$\\displaystyle v_0(t) =\\left(\\left(- 1.2 \\sin{\\left(7 t \\right)} - 5.6 
\\cos{\\left(7 t \\right)}\\right) e^{- t} + 35\\right) \\theta\\left(t\\right)\\; $\n\n\n\n```python\n# plot the time-domain waveform\nintervalo = np.arange(-4, 10, 0.05)\nsymplot(t, v0, intervalo, 'v0(t)')\n```\n
NO", "lm_q1_score": 0.4921881357207956, "lm_q2_score": 0.28776782797747225, "lm_q1q2_score": 0.14163591077265467}} {"text": "```python\nimport holoviews as hv\nhv.extension('bokeh')\nhv.opts.defaults(hv.opts.Curve(width=500), \n hv.opts.Points(width=500), \n hv.opts.Image(width=500, colorbar=True, cmap='Viridis'))\n```\n\n\n```python\nimport numpy as np\nimport scipy.linalg\n```\n\n# Estimador lineal \u00f3ptimo\n\nUn **estimador** es un sistema dise\u00f1ado para **extraer informaci\u00f3n** a partir de una **se\u00f1al**\n\n- La se\u00f1al contiene **informaci\u00f3n y ruido** \n- La se\u00f1al es representada como una secuencia de **datos**\n\nTipos de estimador\n\n- **Filtro:** Estimo el valor actual de mi se\u00f1al acentuando o eliminando una o m\u00e1s caracter\u00edsticas\n- **Predictor:** Estimo el valor futuro de mi se\u00f1al\n\nEn esta lecci\u00f3n estudiaremos estimadores lineales y \u00f3ptimos\n\n- Lineal: La cantidad estimada es una funci\u00f3n lineal de la entrada\n- \u00d3ptimo: El estimador es la mejor soluci\u00f3n posible de acuerdo a un criterio\n\nPara entender los fundamentos de los estimadores \u00f3ptimos es necesario introducir el concepto de proceso aleatorio. Luego estudiaremos uno de los estimadores \u00f3ptimos m\u00e1s importantes: El filtro de Wiener\n\n## Proceso aleatorio o proceso estoc\u00e1stico\n\nUn proceso estoc\u00e1stico es una **colecci\u00f3n de variables aleatorias** indexadas tal que forman una secuencia. Se denotan matem\u00e1ticamente como un conjunto $\\{U_k\\}$, con $k=0, 1, 2, \\ldots, N$. El \u00edndice $k$ puede representar tiempo, espacio u otra variable independiente.\n\nLa siguiente figura muestra tres realizaciones u observaciones de un proceso estoc\u00e1stico con cuatro elementos\n\n\n\nExisten muchos fen\u00f3menos cuya evoluci\u00f3n se modela utilizando procesos aleatorios. 
For example\n\n- stock market indices\n- the behavior of a gas inside a container\n- the vibrations of an electric motor\n- the area of a cell during organogenesis\n\nNext we review some of the properties of random processes\n\n**Moments of a stochastic process**\n\nA random process $U_n = (u_n, u_{n-1}, u_{n-2}, \\ldots, u_{n-L})$ is described through its statistical moments. For a second-order characterization we need to define\n\n- Central moment or mean: describes the central value of the process\n\n$$\n\\mu(n) = \\mathbb{E}[U_n]\n$$\n\n- Second moment or correlation: describes the dispersion of the process \n\n$$\nr_{uu}(n, n-k) = \\mathbb{E}[U_n U_{n-k}]\n$$\n\n- Centered second moment or covariance\n\n$$\n\\begin{align}\nc_{uu}(n, n-k) &= \\mathbb{E}[(U_n-\\mu_n) (U_{n-k}- \\mu_{n-k})] \\nonumber \\\\\n&= r_{uu}(n,n-k) - \\mu_n \\mu_{n-k} \\nonumber\n\\end{align}\n$$\n\n- Cross-correlation between two processes \n\n$$\nr_{ud}(n, n-k) = \\mathbb{E}[U_n D_{n-k}]\n$$\n\n\n\n\n**Stationary and ergodic processes**\n\nIn this lesson we focus on the simplified case where the **process is stationary**; mathematically, this property means that\n\n$$\n\\mu(n) = \\mu, \\forall n\n$$\n\nand\n\n$$\nr_{uu}(n, n-k) = r_{uu}(k), \\forall n\n$$\n\nthat is, the statistical moments remain constant over time (they do not depend on $n$).\n\nA further simplification we will use is that the process is **ergodic**, \n\n$$\n\\mathbb{E}[U_n] = \\frac{1}{N} \\sum_{n=1}^N u_n\n$$\n\nmeaning that we may replace the expected value by the sample mean over time\n\n\n**Power spectral density**\n\nThe power spectral density (PSD) measures how the power of the stochastic process is distributed over frequency. 
Its mathematical definition is\n\n$$\n\\begin{align}\nS_{uu}(f) &= \\sum_{k=-\\infty}^{\\infty} r_{uu}(k) e^{-j 2\\pi f k} \\nonumber \\\\\n&= \\lim_{N\\to\\infty} \\frac{1}{2N+1} \\mathbb{E} \\left [\\left|\\sum_{n=-N}^{N} u_n e^{-j 2\\pi f n} \\right|^2 \\right]\n\\end{align}\n$$\n\nwhich corresponds to the Fourier transform of the correlation (stationary case)\n\nThe PSD and the correlation form a Fourier pair, that is, each one is the Fourier transform of the other.\n\n## The Wiener filter\n\nThe Wiener filter was published by Norbert Wiener in 1949 and is perhaps the most famous example of an optimal linear estimator.\n\n:::{important}\n\nTo design an optimal estimator we need a **criterion** and **conditions** (assumptions). The estimator is then **optimal according to that criterion and under the assumptions made**. For example, we might assume a scenario where the noise is white or where the process is stationary.\n\n:::\n\nBelow we describe this filter in detail and explain how it is optimized. Afterwards we look at example applications.\n\n\n\n\n\n### Notation and architecture of the Wiener filter\n\nThe Wiener filter is a discrete-time system with an FIR structure and $L+1$ coefficients. A schematic of the Wiener filter is shown below\n\n\n\n\n\nFrom the schematic we can identify the most important elements of this filter\n\n- The filter coefficients: $h_0, h_1, h_2, \\ldots, h_{L}$\n- The input signal of the filter: $u_0, u_1, u_2, \\ldots$\n- The output signal of the filter: $y_0, y_1, y_2, \\ldots$\n- The \"desired\" or target response: $d_0, d_1, d_2, \\ldots$\n- The error signal: $e_0, e_1, e_2, \\ldots$\n\nBeing an FIR filter, its output is defined as\n\n$$\ny_n = \\sum_{k=0}^{L} h_k u_{n-k},\n$$\n\nthat is, the convolution between the input and the coefficients. 
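The FIR output defined above is an ordinary convolution, so it can be sketched in a couple of lines of numpy (the coefficients and input below are made-up numbers for illustration):

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])       # coefficients h_0, ..., h_L with L = 2
u = np.array([1.0, 2.0, 3.0, 4.0])  # input samples u_0, u_1, ...

# y_n = sum_k h_k * u_{n-k}; keep the first len(u) output samples
y = np.convolve(u, h)[: len(u)]
print(y)  # [0.5, 1.3, 2.3, 3.3]
```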
The error signal is then

$$
e_n = d_n - y_n = d_n - \sum_{k=0}^{L} h_k u_{n-k} 
$$

which corresponds to the difference between the target signal and the output signal.

Next we will see how the filter coefficients are adjusted based on the optimality criterion.


### Fitting the Wiener filter

The most common criterion for learning or adapting the Wiener filter is the **mean square error** (MSE) between the desired response and the filter output.

Assuming that $u$ and $d$ are sequences of real values, we can write the MSE as

$$
\begin{align}
\text{MSE} &= \mathbb{E}\left [e_n^2 \right] \nonumber \\
&= \mathbb{E}\left [(d_n - y_n)^2 \right] \nonumber \\
&= \mathbb{E}\left [d_n^2 \right] - 2\mathbb{E}\left [ d_n y_n \right] + \mathbb{E}\left [ y_n^2 \right] \nonumber 
\end{align}
$$

where $\sigma_d^2 = \mathbb{E}\left [d_n^2 \right]$ is the variance of the desired signal (assuming zero mean) and $\sigma_y^2 = \mathbb{E}\left [ y_n^2 \right]$ is the variance of our estimator.
 
:::{note}
 
Minimizing the MSE means bringing the filter output closer to the desired response.
 
:::

Setting the derivative of the MSE with respect to each coefficient to zero, we have

$$
\begin{align}
\frac{d}{d h_j} \text{MSE} &= -2\mathbb{E}\left[ d_n \frac{d y_n}{d h_j} \right] + 2 \mathbb{E}\left[ y_n \frac{d y_n}{d h_j} \right] \nonumber \\
&= -2\mathbb{E}\left[ d_n u_{n-j} \right] + 2 \mathbb{E}\left[ y_n u_{n-j} \right] \nonumber \\
&= -2\mathbb{E}\left[ d_n u_{n-j} \right] + 2 \mathbb{E}\left[ \sum_{k=0}^{L} h_k u_{n-k} u_{n-j} \right] \nonumber \\
&= -2\mathbb{E}\left[ d_n u_{n-j} \right] + 2 \sum_{k=0}^{L} h_k \mathbb{E}\left[ u_{n-k} u_{n-j} \right] = 0 \nonumber \end{align}
$$

Solving for the coefficients and repeating for $j=0, \ldots, L$ we obtain the following system of equations

$$
\begin{align}
\begin{pmatrix}
r_{uu}(0) & r_{uu}(1) & 
r_{uu}(2) & \ldots & r_{uu}(L) \\
r_{uu}(1) & r_{uu}(0) & r_{uu}(1) & \ldots & r_{uu}(L-1) \\
r_{uu}(2) & r_{uu}(1) & r_{uu}(0) & \ldots & r_{uu}(L-2) \\
\vdots & \vdots & \vdots & \ddots &\vdots \\
r_{uu}(L) & r_{uu}(L-1) & r_{uu}(L-2) & \ldots & r_{uu}(0) \\
\end{pmatrix}
\begin{pmatrix}
h_0 \\
h_1 \\
h_2 \\
\vdots \\
h_L \\
\end{pmatrix} &= 
\begin{pmatrix}
r_{ud}(0) \\
r_{ud}(1) \\
r_{ud}(2) \\
\vdots \\
r_{ud}(L) \\
\end{pmatrix} \nonumber \\
R_{uu} \textbf{h} &= R_{ud},
\end{align}
$$

known as the **Wiener-Hopf equations**. The matrix $R_{uu}$ is called the autocorrelation matrix.

Assuming that $R_{uu}$ is non-singular, i.e. that its inverse exists, the **optimal solution in the minimum-MSE sense** is

$$
\textbf{h}^{*} = R_{uu} ^{-1} R_{ud}
$$


Note that, by construction, the matrix $R_{uu}$ is symmetric and Toeplitz. The system can therefore be solved efficiently with $\mathcal{O}(L^2)$ operations using the [Levinson-Durbin recursion](https://en.wikipedia.org/wiki/Levinson_recursion).


:::{warning}

To arrive at this solution we imposed two conditions on the desired output and the input: (1) they have zero mean and (2) they are wide-sense stationary (i.e. the correlation depends only on the lag $k$).

:::

- If the first condition does not hold, the mean can be subtracted before training the filter
- If the second condition does not hold, it is better to use another method, such as those we will see in the following lessons

## Applications of the Wiener filter

### Regression or system identification

In regression we look for the coefficients $h$ from tuples $(X, Y)$ such that

$$
Y = h^T X + \epsilon,
$$

where $X \in \mathbb{R}^{N\times D}$ are the independent variables (input), $Y \in \mathbb{R}^N$ is 
the dependent variable (output) and $\epsilon$ is noise.

To train the filter:

1. We assume we have observed N samples of $X$ and $Y$ 
1. From $u=X$ we build $R_{uu}$
1. From $d=Y$ we build $R_{ud}$
1. Finally we recover $\textbf{h}$ using $R_{uu} ^{-1} R_{ud}$
1. With this we can interpolate $Y$ 



**Example** Consider a polynomial regression where we want to find $h_k$ such that

$$
\begin{align}
d_i &= f_i + \epsilon \nonumber \\
&= \sum_{k=1}^L h_k u_i^k + \epsilon \nonumber
\end{align}
$$


```python
np.random.seed(12345)
u = np.linspace(-2, 2, num=30)
f = 0.25*u**5 - 2*u**3 + 5*u  # The true coefficients are [0, 5, 0, -2, 0, 1/4, 0, 0, 0, ...]
d = f + np.random.randn(len(u))
```


```python
hv.Points((u, d), kdims=['u', 'd'])
```

Let us implement the filter as a class with two public methods, `fit` and `predict`. The filter has one argument: the number of coefficients $L$


```python
import numpy as np
import scipy.linalg

class Wiener_polynomial_regression:
    
    def __init__(self, L: int):
        self.L = L
        self.h = np.zeros(shape=(L,))  # one coefficient per polynomial basis column
    
    def _polynomial_basis(self, u: np.ndarray) -> np.ndarray:
        # Basis with L columns: powers u^0, u^1, ..., u^(L-1)
        U = np.ones(shape=(len(u), self.L))
        for i in range(1, self.L):
            U[:, i] = u**i
        return U
    
    def fit(self, u: np.ndarray, d: np.ndarray):
        U = self._polynomial_basis(u)
        Ruu = np.dot(U.T, U)
        Rud = np.dot(U.T, d[:, np.newaxis])
        self.h = scipy.linalg.solve(Ruu, Rud, assume_a='pos')[:, 0]
    
    def predict(self, u: np.ndarray):
        U = self._polynomial_basis(u)
        return np.dot(U, self.h)
```

:::{note}

The function `scipy.linalg.solve(A, B)` returns the solution of the linear system of equations `Ax = B`. 
The `assume_a` argument can be used to indicate that `A` is symmetric, Hermitian or positive definite.

:::

The solution of a system with `10` coefficients is:


```python
regressor = Wiener_polynomial_regression(10)
regressor.fit(u, d)
print(regressor.h)
```

How does the result change with L?


```python
uhat = np.linspace(np.amin(u), np.amax(u), num=100)
yhat = {}
for L in [2, 5, 15]:
    regressor = Wiener_polynomial_regression(L)
    regressor.fit(u, d)
    yhat[L] = regressor.predict(uhat)
```


```python
p = [hv.Points((u, d), kdims=['u', 'd'], label='data').opts(size=4, color='k')]
for L, prediction in yhat.items():
    p.append(hv.Curve((uhat, prediction), label=f'L={L}'))
hv.Overlay(p).opts(legend_position='top') 
```

:::{note}
 
If $L$ is too small the filter is too simple. If $L$ is too large the filter can overfit the noise.
 
:::

### Forecasting

In this case we assume that the desired signal is the input in the future

$$
d_n = \{u_{n+1}, u_{n+2}, \ldots, u_{n+m}\}
$$ 

where $m$ is the prediction horizon. The particular case $m=1$ is called *one-step prediction*.

The filter length $L$ defines how many past samples we use to predict. For example, a one-step predictor with $L+1 = 3$ coefficients:

$$
h_0 u_n + h_1 u_{n-1} + h_2 u_{n-2}= y_n = \hat u_{n+1} \approx u_{n+1}
$$

To train the filter:

1. We assume the signal has been observed and that $N$ samples are available for training
1. We form a matrix whose rows are $[u_n, u_{n-1}, \ldots, u_{n-L}]$ for $n=L,L+1,\ldots, N-1$
1. We form a vector $[u_N, u_{N-1}, \ldots, u_{L+1}]^T$ (case $m=1$)
1. With these we can form the correlation matrices and obtain $\textbf{h}$
1. Finally we use $\textbf{h}$ to predict the unobserved future of $u$

We again implement the filter as a class. 
This time numpy's `as_strided` function is used to form the vectors of "past instants"


```python
import numpy as np
import scipy.linalg
from numpy.lib.stride_tricks import as_strided

class Wiener_predictor:
    
    def __init__(self, L: int):
        self.L = L
        self.h = np.zeros(shape=(L,))  # L coefficients: predict from the L most recent samples
    
    def fit(self, u: np.ndarray):
        # Each row of U is a sliding window of length L+1 over u: the first L
        # entries are the inputs and the last entry is the one-step target
        U = as_strided(u, [len(u)-self.L, self.L+1], 
                       strides=[u.strides[0], u.strides[0]])
        Ruu = np.dot(U[:, :self.L].T, U[:, :self.L])
        Rud = np.dot(U[:, :self.L].T, U[:, self.L][:, np.newaxis])
        self.h = scipy.linalg.solve(Ruu, Rud, assume_a='pos')[:, 0]
    
    def predict(self, u: np.ndarray, m: int=1):
        u_pred = np.zeros(shape=(m+self.L, ))
        u_pred[:self.L] = u
        for k in range(self.L, m+self.L):
            u_pred[k] = np.sum(self.h*u_pred[k-self.L:k])
        return u_pred[self.L:]
```

For the following sinusoidal signal, how does $L$ affect the quality of the linear predictor?

We use the first 100 instants to fit the predictor and the next 100 to test it


```python
np.random.seed(12345)
t = np.linspace(0, 10, num=200)
u = np.sin(2.0*np.pi*0.5*t) + 0.25*np.random.randn(len(t))
N_fit = 100

yhat = {}
for L in [10, 20, 30]:
    predictor = Wiener_predictor(L)
    predictor.fit(u[:N_fit])
    yhat[L] = predictor.predict(u[N_fit-L:N_fit], m=100)
    
```


```python
p = [hv.Points((t, u), ['time', 'u'], label='Data').opts(color='k')]
for L, prediction in yhat.items():
    p.append(hv.Curve((t[N_fit:], prediction), label=f'L={L}'))
hv.Overlay(p).opts(legend_position='top') 
```

:::{note} 

If $L$ is too small the filter is too simple. 
If $L$ is too large the filter can overfit the noise.

:::

### Removing additive white noise

In this case we assume that the input signal corresponds to a desired signal (information) that has been contaminated with additive noise

$$
u_n = d_n + \nu_n,
$$

and additionally we assume that
- the noise is wide-sense stationary with zero mean $\mathbb{E}[\nu_n] = 0$
- the noise is white, i.e. it has no correlation with itself (at nonzero lags) or with the desired signal

$$
r_{\nu d}(k) = 0, \forall k
$$

- the noise has a certain variance $\mathbb{E}[\nu_n^2] = \sigma_\nu^2, \forall n$

Note that in this case $R_{uu} = R_{dd} + R_{\nu\nu}$ and $R_{ud} = R_{dd}$. The recovered signal is $\hat d_n = \sum_{k=0}^{L} h^{*}_k u_{n-k}$ and the filter is

$$
\textbf{h}^{*} = \left(R_{dd} + R_{\nu\nu}\right)^{-1} R_{dd}
$$

and its frequency response is

$$
H(f) = \frac{S_{dd}(f)}{S_{dd}(f) + S_{\nu\nu}(f)}
$$

which means that 
- at frequencies where $S_{dd}(f) \gg S_{\nu\nu}(f)$, we have $H(f) \approx 1$
- at frequencies where $S_{dd}(f) \ll S_{\nu\nu}(f)$, we have $H(f) \approx 0$


```python

```
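The noise-removal setup can be sketched end-to-end (a self-contained illustration, not from the original lesson: the sinusoid, the noise level and `L` are arbitrary choices, and the correlations are estimated empirically, using the known clean signal to form the cross-correlation):

```python
import numpy as np
import scipy.linalg

rng = np.random.default_rng(0)
n = np.arange(2000)
d = np.sin(2*np.pi*0.01*n)               # desired (information) signal
u = d + 0.5*rng.standard_normal(len(n))  # observed signal = d + white noise

L = 20  # the filter has L+1 coefficients
# Empirical autocorrelation of u and cross-correlation r_ud(k) = E[d_n u_{n-k}]
r_uu = np.array([np.mean(u[k:]*u[:len(u)-k]) for k in range(L+1)])
r_ud = np.array([np.mean(d[k:]*u[:len(u)-k]) for k in range(L+1)])

# Wiener-Hopf system: R_uu is symmetric Toeplitz, so only its first column is
# needed; solve_toeplitz uses the Levinson recursion internally
h = scipy.linalg.solve_toeplitz(r_uu, r_ud)

# Causal FIR filtering: d_hat_n = sum_k h_k u_{n-k}
d_hat = np.convolve(u, h)[:len(u)]

mse_noisy = np.mean((u[L:] - d[L:])**2)         # error before filtering
mse_filtered = np.mean((d_hat[L:] - d[L:])**2)  # error after filtering
print(mse_noisy, mse_filtered)  # the Wiener filter reduces the MSE
```

Note the design choice: `scipy.linalg.solve_toeplitz` exploits exactly the symmetric Toeplitz structure of $R_{uu}$ discussed above, instead of forming and inverting the full matrix.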


## Data-driven Design and Analyses of Structures and Materials (3dasm)

## Lecture 5

### Miguel A. Bessa | M.A.Bessa@tudelft.nl | Associate Professor

**What:** A lecture of the "3dasm" course

**Where:** This notebook comes from this [repository](https://github.com/bessagroup/3dasm_course)

**Reference for entire course:** Murphy, Kevin P. *Probabilistic machine learning: an introduction*. MIT press, 2022. Available online [here](https://probml.github.io/pml-book/book1.html)

**How:** We try to follow Murphy's book closely, but the sequence of Chapters and Sections is different. The intention is to use notebooks as an introduction to the topic and Murphy's book as a resource.
* If working offline: Go through this notebook and read the book.
* If attending class in person: listen to me (!) but also go through the notebook in your laptop at the same time. Read the book.
* If attending lectures remotely: listen to me (!) via Zoom and (ideally) use two screens where you have the notebook open in 1 screen and you see the lectures on the other. Read the book.

**Optional reference (the "bible" by the "bishop"... pun intended 😆) :** Bishop, Christopher M. *Pattern recognition and machine learning*. 
Springer Verlag, 2006.\n\n**References/resources to create this notebook:**\n* [Car figure](https://korkortonline.se/en/theory/reaction-braking-stopping/)\n\nApologies in advance if I missed some reference used in this notebook. Please contact me if that is the case, and I will gladly include it here.\n\n## **OPTION 1**. Run this notebook **locally in your computer**:\n1. Confirm that you have the 3dasm conda environment (see Lecture 1).\n\n2. Go to the 3dasm_course folder in your computer and pull the last updates of the [repository](https://github.com/bessagroup/3dasm_course):\n```\ngit pull\n```\n3. Open command window and load jupyter notebook (it will open in your internet browser):\n```\nconda activate 3dasm\njupyter notebook\n```\n4. Open notebook of this Lecture.\n\n## **OPTION 2**. Use **Google's Colab** (no installation required, but times out if idle):\n\n1. go to https://colab.research.google.com\n2. login\n3. File > Open notebook\n4. click on Github (no need to login or authorize anything)\n5. paste the git link: https://github.com/bessagroup/3dasm_course\n6. 
click search and then click on the notebook for this Lecture.


```python
# Basic plotting tools needed in Python.

import matplotlib.pyplot as plt # import plotting tools to create figures
import numpy as np # import numpy to handle a lot of things!
from IPython.display import display, Math # to print with Latex math

%config InlineBackend.figure_format = "retina" # render higher resolution images in the notebook
plt.style.use("seaborn") # style for plotting that comes from seaborn
plt.rcParams["figure.figsize"] = (8,4) # rescale figure size appropriately for slides
```

## Outline for today

* Bayesian inference for one hidden rv
  - Prior
  - Likelihood
  - Marginal likelihood
  - Posterior
  - Gaussian pdf's product

**Reading material**: This notebook + Chapter 3

### Recall the "slightly more complicated" car stopping distance problem (with two rv's)

We defined the governing model with two rv's $z_1$ and $z_2$ as:

$\require{color}{\color{red}y} = {\color{blue}z_1}\cdot x + {\color{magenta}z_2}\cdot x^2$

- ${\color{red}y}$ is the **output**: the car stopping distance (in meters)
- ${\color{blue}z_1}$ is an rv representing the driver's reaction time (in seconds)
- ${\color{magenta}z_2}$ is another rv that depends on the coefficient of friction, the inclination of the road, the weather, etc. (in m$^{-1}$s$^{2}$).
- $x$ is the **input**: constant car velocity (in m/s).

where we knew the "true" distributions of the rv's: $z_1 \sim \mathcal{N}(\mu_{z_1}=1.5,\sigma_{z_1}^2=0.5^2)$, and $z_2 \sim \mathcal{N}(\mu_{z_2}=0.1,\sigma_{z_2}^2=0.01^2)$.


```python
# This cell is hidden during presentation. It's just to define a function to plot the governing model of
# the car stopping distance problem. 
Defining a function that creates a plot allows to repeatedly run\n# this function on cells used in this notebook.\ndef car_fig_2rvs(ax):\n x = np.linspace(3, 83, 1000)\n mu_z1 = 1.5; sigma_z1 = 0.5; # parameters of the \"true\" p(z_1)\n mu_z2 = 0.1; sigma_z2 = 0.01; # parameters of the \"true\" p(z_2)\n mu_y = mu_z1*x + mu_z2*x**2 # From Homework of Lecture 4\n sigma_y = np.sqrt( (x*sigma_z1)**2 + (x**2*sigma_z2)**2 ) # From Homework of Lecture 4\n ax.set_xlabel(\"x (m/s)\", fontsize=20) # create x-axis label with font size 20\n ax.set_ylabel(\"y (m)\", fontsize=20) # create y-axis label with font size 20\n ax.set_title(\"Car stopping distance problem with two rv's\", fontsize=20); # create title with font size 20\n ax.plot(x, mu_y, 'k:', label=\"Governing model $\\mu_y$\")\n ax.fill_between(x, mu_y - 1.9600 * sigma_y,\n mu_y + 1.9600 * sigma_y,\n color='k', alpha=0.2,\n label='95% confidence interval ($\\mu_y \\pm 1.96\\sigma_y$)') # plot 95% credence interval\n ax.legend(fontsize=15)\n```\n\n\n```python\n# This cell is also hidden during presentation.\nfrom scipy.stats import norm # import the normal dist, as we learned before!\ndef samples_y_with_2rvs(N_samples,x): # observations/measurements/samples for car stop. dist. prob. 
with 2 rv's\n mu_z1 = 1.5; sigma_z1 = 0.5;\n mu_z2 = 0.1; sigma_z2 = 0.01;\n samples_z1 = norm.rvs(mu_z1, sigma_z1, size=N_samples) # randomly draw samples from the normal dist.\n samples_z2 = norm.rvs(mu_z2, sigma_z2, size=N_samples) # randomly draw samples from the normal dist.\n samples_y = samples_z1*x + samples_z2*x**2 # compute the stopping distance for samples of z_1 and z_2\n return samples_y # return samples of y\n```\n\n\n```python\n# vvvvvvvvvvv this is just a trick so that we can run this cell multiple times vvvvvvvvvvv\nfig_car_new, ax_car_new = plt.subplots(1,2); plt.close() # create figure and close it\nif fig_car_new.get_axes():\n del ax_car_new; del fig_car_new # delete figure and axes if they exist\n fig_car_new, ax_car_new = plt.subplots(1,2) # create them again\n# ^^^^^^^^^^^ end of the trick ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nN_samples = 3 # CHANGE THIS NUMBER AND RE-RUN THE CELL\nx = 75; empirical_y = samples_y_with_2rvs(N_samples, x); # Empirical measurements of N_samples at x=75\nempirical_mu_y = np.mean(empirical_y); empirical_sigma_y = np.std(empirical_y); # empirical mean and std\ncar_fig_2rvs(ax_car_new[0]) # a function I created to include the background plot of the governing model\nfor i in range(2): # create two plots (one is zooming in on the error bar)\n ax_car_new[i].errorbar(x , empirical_mu_y,yerr=1.96*empirical_sigma_y, fmt='m*', markersize=15);\n ax_car_new[i].scatter(x*np.ones_like(empirical_y),empirical_y, s=40,\n facecolors='none', edgecolors='k', linewidths=2.0)\nprint(\"Empirical mean[y] is\",empirical_mu_y, \"(real mean[y]=675)\")\nprint(\"Empirical std[y] is\",empirical_sigma_y,\"(real std[y]=67.6)\")\nfig_car_new.set_size_inches(25, 5) # scale figure to be wider (since there are 2 subplots)\n```\n\n### Note: comparison of car stopping distance problem of Lecture 3 & Lecture 4\n\n* The car stopping distance problem in Lecture 3 only had one rv.\n* While this car stopping distance problem 
(introduced in Lecture 4) **has 2 rv's**.\n - Observing the governing model when knowing the \"true\" distributions of $z_1$ and $z_2$ we see that:\n * The expected value (mean) of $y$ is the same for both problems because $\\mu_{z_2}=0.1$.\n * The variance of $y$ is higher for this problem because of the additional randomness introduced by $z_2$.\n - For example, for $x=75$ m/s the $\\text{std}[y] \\approx 67.6$ m while it was 37.5 when only $z_1$ was an rv.\n\n### Car stopping distance problem with 2 rv's but only 1 rv being unknown\n\n\n\nToday we will finally do some predictions!\n\nRecall the Homework of Lecture 4, and consider the car stopping distance problem for constant velocity $x=75$ m/s and for which **it is known** that $z_2 \\sim \\mathcal{N}(z_2|\\mu_{z_2}=0.1,\\sigma_{z_2}^2=0.01^2)$.\n\nThe only information that we do not know is the driver's reaction time $z$ (here we call it $z$, instead of $z_1$ as in Lecture 4, because this is the only hidden variable so we can **simplify the notation**).\n\n* Can we predict $p(y)$ without knowing $p(z)$?\n\nYes!! 
If we use Bayes' rule!\n\n### Recall the Homework of Lecture 4\n\n\n\nFrom last lecture's Homework, you demonstrated that the conditional pdf of the stopping distance given the reaction time $z$ (for convenience we write here $z$ instead of $z_1$) is\n\n$$\np(y|z) = \\mathcal{N}\\left(y | \\mu_{y|z}=w z+b, \\sigma_{y|z}^2=s^2\\right)\n$$\n\nwhere $w$, $b$ and $s$ are all constants that you determined to be:\n\n$w=x=75$\n\n$b=x^2\\mu_{z_2}=75^2\\cdot0.1=562.5$\n\n$s^2=(x^2 \\sigma_{z_2})^2=(75^2\\cdot0.01)^2=56.25^2$\n\nbecause we are considering that the car is going at constant velocity $x=75$ m/s and that we know $z_2= \\mathcal{N}(z_2|\\mu_{z_2}=0.1,\\sigma_{z_2}^2=0.01^2)$.\n\n### Solution to Homework of Lecture 4\n\n\n\nWhat we know:\n\n$\\require{color}{\\color{red}y} = {\\color{blue}z_1}\\cdot 75 + {\\color{magenta}z_2}\\cdot 75^2 = 75 {\\color{blue}z_1} + 5625 {\\color{magenta}z_2}$\n\nwhere $z_1 \\sim \\mathcal{N}(\\mu_{z_1}=1.5,\\sigma_{z_1}^2=0.5^2)$, and $z_2 \\sim \\mathcal{N}(\\mu_{z_2}=0.1,\\sigma_{z_2}^2=0.01^2)$.\n\n1. To calculate the conditional pdf $p(y|z_1)$, i.e. 
the observation distribution, we note that when given $z_1$ we just have $y$ as a function of $z_2$:

$$
y \equiv f(z_2) = x z_1 + x^2 z_2 \Rightarrow z_2 = \frac{y}{x^2}-\frac{z_1}{x} \equiv g(y)
$$

From the change of variables formula (Lecture 3),

$$\begin{align}
p_{y|z_1}(y) &= p_{z_2}\left( g(y) \right) \left| \frac{d}{dy}g(y)\right| \\
&= \mathcal{N}\left( \frac{y}{x^2}-\frac{z_1}{x}\left| \mu_{z_2}, \sigma_{z_2}^2\right.\right) \left|\frac{1}{x^2}\right| \\
&= \frac{1}{\sqrt{2\pi \sigma_{z_2}^2}} \exp\left[ -\frac{1}{2\sigma_{z_2}^2}\left( \frac{y}{x^2}-\frac{z_1}{x}-\mu_{z_2} \right)^2 \right] \left|\frac{1}{x^2}\right|\\
&= \frac{1}{\sqrt{2\pi \left(x^2\sigma_{z_2}\right)^2}} \exp\left[ -\frac{1}{2\left(x^2\sigma_{z_2}\right)^2}\left( y-x z_1-x^2\mu_{z_2} \right)^2 \right] 
\end{align}
$$

So, the conditional pdf $p(y|z_1)$ is also a Gaussian:

$$
p(y|z_1) = \mathcal{N}\left( y| \mu_{y|z_1}=x^2\mu_{z_2}+x z_1, \sigma_{y|z_1}^2=\left( x^2 \sigma_{z_2}\right)^2 \right)
$$

(Alternative way to answer Question 1 without using the change of variables formula)

There is a different way to derive the same result without even using the change of variables formula. We can obtain the same result as above by calculating the joint distribution $p(y,z_1,z_2)$:
$$\begin{align}
p(y,z_1,z_2) &= p(y|z_1, z_2) p(z_1, z_2) \\
&= p(y|z_1,z_2) p(z_1) p(z_2) \\
&= \delta(y-x z_1 - x^2 z_2) p(z_1) p(z_2) \\
&= \frac{1}{|x^2|} \delta\left(\frac{y-x z_1}{x^2} - z_2\right) p(z_1) p(z_2) \\
&= \frac{1}{|x^2|} p_{z_2}\left(\frac{y-x z_1}{x^2}\right) p(z_1) \\
&= \mathcal{N}\left( y| x^2\mu_{z_2}+x z_1, \left( x^2 \sigma_{z_2}\right)^2 \right) p(z_1)
\end{align}
$$


2. 
The joint distribution is simply $p(y, z_1)$:

$$
p(y, z_1) = p(y|z_1)p(z_1) = \mathcal{N}\left( y| \mu_{y|z_1}=x^2\mu_{z_2}+x z_1, \sigma_{y|z_1}^2=\left( x^2 \sigma_{z_2}\right)^2 \right) \mathcal{N}\left( z_1| \mu_{z_1}=1.5, \sigma_{z_1}^2=0.5^2 \right)
$$

which we will learn how to calculate in this lecture (spoiler alert: it's another gaussian 😆)

3. The covariance matrix is calculated as:

$
\mathbb{E}[z_1] = \mu_{z_1} \, , \quad \mathbb{V}[z_1] = \sigma_{z_1}^2
$

$
\mathbb{E}[y] = \mathbb{E}[z_1 x + x^2 z_2]=\mathbb{E}[z_1]x + x^2 \mathbb{E}[z_2] = x\mu_{z_1}+x^2\mu_{z_2}
$

$\begin{align}
\mathbb{E}[y^2] &= \mathbb{E}\left[ (z_1 x + x^2 z_2)(z_1 x + x^2 z_2) \right]\\
&= \mathbb{E}\left[ z_1^2 x^2+2x^3z_1 z_2 + x^4 z_2^2 \right] \\
&= x^2 \mathbb{E}[z_1^2] + 2x^3 \mathbb{E}[z_1 z_2] + x^4 \mathbb{E}[z_2^2] \\
&= x^2\left( \sigma_{z_1}^2 + \mu_{z_1}^2\right) + 2x^3 \mu_{z_1}\mu_{z_2} + x^4\left( \sigma_{z_2}^2 + \mu_{z_2}^2\right)
\end{align}
$

$\begin{align}
\mathbb{V}[y] &= \mathbb{E}[y^2]-\mathbb{E}[y]^2\\
&= x^2\left( \sigma_{z_1}^2 + \mu_{z_1}^2\right) + 2x^3 \mu_{z_1}\mu_{z_2} + x^4\left( \sigma_{z_2}^2 + \mu_{z_2}^2\right) - \left( x\mu_{z_1}+x^2\mu_{z_2}\right)^2 \\
&= \left( x\sigma_{z_1}\right)^2+\left(x^2\sigma_{z_2}\right)^2 \\
\end{align}
$

$\begin{align}
\text{Cov}[y, z_1] &= \mathbb{E}[y z_1] - \mathbb{E}[y] \mathbb{E}[z_1] \\
&= \mathbb{E}[z_1^2 x + x^2 z_2 z_1] - \left(x\mu_{z_1}+x^2\mu_{z_2}\right) \mu_{z_1} \\
&= x\mathbb{E}[z_1^2]+x^2\mathbb{E}[z_1 z_2] - \left(x\mu_{z_1}+x^2\mu_{z_2}\right) \mu_{z_1} \\
&= x\left( \sigma_{z_1}^2 + \mu_{z_1}^2\right)+x^2\mu_{z_1}\mu_{z_2} - x\mu_{z_1}^2-x^2\mu_{z_1}\mu_{z_2} \\
&= x\sigma_{z_1}^2
\end{align}
$

From where we can finally calculate the Covariance matrix:

$$
\begin{align}
\boldsymbol{\Sigma} &= 
\\text{Cov}\\begin{bmatrix}y\\\\ z_1\\end{bmatrix}\n = \\begin{bmatrix}\n\\mathbb{V}[y] & \\text{Cov}[y,z_1] \\\\\n\\text{Cov}[z_1,y] & \\mathbb{V}[z_1]\n\\end{bmatrix}\\\\\n&= \\begin{bmatrix}\n\\left( x\\sigma_{z_1}\\right)^2+\\left(x^2\\sigma_{z_2}\\right)^2 & x\\sigma_{z_1}^2 \\\\\nx\\sigma_{z_1}^2 & \\sigma_{z_1}^2\n\\end{bmatrix}\n\\end{align}\n$$\n\n### Understanding the Bayes' rule\n\n$\\require{color}$\n$$\n{\\color{green}p(z|y)} = \\frac{ {\\color{blue}p(y|z)}{\\color{red}p(z)} } {p(y)}\n$$\n\n* ${\\color{red}p(z)}$ is the **prior distribution**\n* ${\\color{blue}p(y|z)}$ is the **observation distribution** (conditional pdf)\n* $p(y)$ is the **marginal distribution**\n* ${\\color{green}p(z|y)}$ is the **posterior distribution**\n\n### A note about the term \"distribution\"\n\nThe term distribution can mean two things:\n1. For **continuous** rv's, the term *distribution* means *probability density function* (pdf).\n\n2. For **discrete** rv's the term *distribution* means *probability mass function* (pmf), as we will see later in the course.\n\nWe won't talk about categorical distributions or pmf's for a while. So, for now, when you see the term *distribution* it is the same as saying pdf.\n\n### Understanding the Bayes' rule\n\nLet's start by understanding the usefulness of Bayes' rule by calculating the posterior $p(z|y)$ for the car stopping distance problem (Homework of Lecture 4).\n\nAs we mentioned, for our problem we know the **observation distribution**:\n\n$p(y|z) = \\mathcal{N}\\left(y | \\mu_{y|z}=w z+b, \\sigma_{y|z}^2\\right)$\n\nwhere $\\sigma_{y|z} = \\text{const}$, as well as $w$ and $b$. \n\nbut we **don't know** the prior $p(z)$.\n\n### Prior: our beliefs about the problem\n\nIf we have absolutely no clue about what the distribution of the hidden rv $z$ is, then we can use a **Uniform distribution** (a.k.a. 
uninformative prior).\n\nThis distribution assigns equal probability to any value of $z$ within an interval $z \\in (z_{min}, z_{max})$.\n\n$$\np(z) = \\frac{1}{C_z}\n$$\n\nwhere $C_z = z_{max}-z_{min}$ is the **normalization constant** of the Uniform pdf, i.e. the value that guarantees that $p(z)$ integrates to one.\n\nFor the time being, we will not assume any particular values for $z_{max}$ and $z_{min}$. So, we will consider the completely uninformative prior: $z_{max}\\rightarrow \\infty$ and $z_{min}\\rightarrow -\\infty$. If we had some information, we could consider some values for these bounds (e.g. $z_{min} = 0$ seconds would be the limit of the fastest reaction time that is humanly possible, and $z_{max} = 3$ seconds would be the slowest reaction time of a human being).\n\n### Summary of our Model\n\n1. The **observation distribution**:\n\n$$\\begin{align}\np(y|z) &= \\mathcal{N}\\left(y | \\mu_{y|z}=w z+b, \\sigma_{y|z}^2\\right) \\\\\n&= \\frac{1}{C_{y|z}} \\exp\\left[ -\\frac{1}{2\\sigma_{y|z}^2}(y-\\mu_{y|z})^2\\right]\n\\end{align}\n$$\n\nwhere $C_{y|z} = \\sqrt{2\\pi \\sigma_{y|z}^2}$ is the **normalization constant** of the Gaussian pdf, and where $\\mu_{y|z}=w z+b$, with $w$, $b$ and $\\sigma_{y|z}^2$ being constants, as previously mentioned.\n\n2. and the **prior distribution**: $p(z) = \\frac{1}{C_z}$\n\nwhere $C_z = z_{max}-z_{min}$ is the **normalization constant** of the Uniform pdf, i.e. the value that guarantees that $p(z)$ integrates to one.\n\n### Posterior from Bayes' rule\n\nSince we have defined the **observation distribution** and the **prior distribution**, we can now compute the posterior distribution from Bayes' rule.\n\nBut this requires a bit of algebra... Let's do it!\n\nFirst, in order to apply Bayes' rule $p(z|y) = \\frac{ p(y|z)p(z)}{p(y)}$ we need to calculate $p(y)$.\n\n$p(y)$ is obtained by marginalizing the joint distribution wrt $z$:\n\n$\np(y) = \\int p(y|z)p(z) dz\n$\n\nwhich implies an integration over $z$. 
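Before doing the algebra, the marginalization can be cross-checked numerically for this problem (a sketch, not part of the lecture: the finite prior bounds $z_{min}=-20$, $z_{max}=20$ are an illustrative assumption, chosen wide enough to contain essentially all of the Gaussian mass in $z$):

```python
import numpy as np
from scipy.stats import norm, uniform

w, b, s = 75.0, 562.5, 56.25  # constants of the car stopping distance problem
z_min, z_max = -20.0, 20.0    # illustrative finite prior bounds (an assumption)
C_z = z_max - z_min

y = 675.0                     # evaluate the marginal at this stopping distance
z = np.linspace(z_min, z_max, 40001)
# Integrand of p(y) = ∫ p(y|z) p(z) dz with a Gaussian observation
# distribution and a uniform prior
integrand = norm.pdf(y, loc=w*z + b, scale=s) * uniform.pdf(z, loc=z_min, scale=C_z)
p_y = np.sum(integrand) * (z[1] - z[0])  # simple Riemann sum

print(p_y, 1/(abs(w)*C_z))  # both are approximately 1/3000
```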
So, let's rewrite $p(y|z)$ so that the integration becomes easier.\n\n$$\\begin{align}\np(y|z) &= \\mathcal{N}\\left(y | \\mu_{y|z}=w z+b, \\sigma_{y|z}^2\\right) \\\\\n&= \\frac{1}{C_{y|z}} \\exp\\left[ -\\frac{1}{2\\sigma_{y|z}^2}(y-(w z+b))^2\\right] \\\\\n&= \\frac{1}{C_{y|z}} \\exp\\left\\{ -\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\left[z-\\left(\\frac{y-b}{w}\\right)\\right]^2\\right\\} \\\\\n&= \\frac{1}{|w|}\\frac{1}{\\sqrt{2\\pi \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}} \\exp\\left\\{ -\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\left[z-\\left(\\frac{y-b}{w}\\right)\\right]^2\\right\\}\n\\end{align}\n$$\n\nNote: This Gaussian pdf $\\mathcal{N}\\left(z | \\frac{y-b}{w}, \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2\\right)$ is unnormalized when written wrt $z$ (due to $\\frac{1}{|w|}$).\n\nWe can now calculate the marginal distribution $p(y)$:\n\n$$\n\\begin{align}\np(y) &= \\int p(y|z)p(z) dz \\\\\n&= \\int \\frac{1}{|w|}\\frac{1}{\\sqrt{2\\pi \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}} \\exp\\left\\{ -\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\left[z-\\left(\\frac{y-b}{w}\\right)\\right]^2\\right\\} \\frac{1}{C_z} dz\n\\end{align}\n$$\n\nWe can rewrite this expression as,\n\n$$\\require{color}\n\\begin{align}\np(y) &= \\frac{1}{|w|\\cdot C_z} {\\color{blue}\\int \\frac{1}{\\sqrt{2\\pi \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}} \\exp\\left\\{ -\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\left[z-\\left(\\frac{y-b}{w}\\right)\\right]^2\\right\\} dz} \\\\\n\\end{align}\n$$\n\nWhat is the result for the blue term?\n\nFrom where we conclude that the marginal distribution is:\n\n$$\\require{color}\np(y) = \\frac{1}{|w| C_z }\n$$\n\nSo, now we can determine the posterior:\n\n$$\\require{color}\n\\begin{align}\np(z|y) &= \\frac{ p(y|z)p(z)}{p(y)} \\\\\n&= |w| C_z \\cdot \\frac{1}{|w|}\\frac{1}{\\sqrt{2\\pi \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}} \\exp\\left\\{ 
-\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\left[z-\\left(\\frac{y-b}{w}\\right)\\right]^2\\right\\} \\cdot \\frac{1}{C_z}\\\\\n&= \\frac{1}{\\sqrt{2\\pi \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}}\\exp\\left\\{ -\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\left[z-\\left(\\frac{y-b}{w}\\right)\\right]^2\\right\\}\n\\end{align}\n$$\n\nwhich is a **normalized** Gaussian pdf in $z$: $\\mathcal{N}\\left(z | \\frac{y-b}{w}, \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2\\right)$\n\n* **This is what the Bayes' rule does!** Computes the posterior $p(z|y)$ from $p(y|z)$ and $p(z)$.\n\n## Why should we care about the Bayes' rule?\n\nThere are a few reasons:\n\n1. As we will see, models are usually (always?) wrong.\n\n\n2. But our beliefs may be a bit closer to reality! Bayes' rule enables us to get better models if our beliefs are reasonable!\n\n\n3. We don't observe distributions. We observe **DATA**. Bayes' rule is a very powerful way to predict the distribution of our quantity of interest (here: $y$) from data!\n\n## Bayes' rule applied to observed data\n\nPreviously, we already introduced Bayes' rule when applied to observed data $\\mathcal{D}_y$.\n\n$\\require{color}$\n$$\n{\\color{green}p(z|y=\\mathcal{D}_y)} = \\frac{ {\\color{blue}p(y=\\mathcal{D}_y|z)}{\\color{red}p(z)} } {p(y=\\mathcal{D}_y)} = \\frac{ {\\color{magenta}p(y=\\mathcal{D}_y, z)} } {p(y=\\mathcal{D}_y)}\n$$\n\n* ${\\color{red}p(z)}$ is the **prior** distribution\n* ${\\color{blue}p(y=\\mathcal{D}_y|z)}$ is the **likelihood** function\n* ${\\color{magenta}p(y=\\mathcal{D}_y, z)}$ is the **joint likelihood** (product of likelihood function with prior distribution)\n* $p(y=\\mathcal{D}_y)$ is the **marginal likelihood**\n* ${\\color{green}p(z|y=\\mathcal{D}_y)}$ is the **posterior**\n\nWe can write Bayes' rule as posterior $\\propto$ likelihood $\\times$ prior , where we are ignoring the denominator $p(y=\\mathcal{D}_y)$ because it is just a **constant** independent of the hidden 
variable $z$.\n\n## Bayes' rule applied to observed data\n\nBut remember that Bayes' rule is just a way to calculate the posterior:\n\n$$\np(z|y=\\mathcal{D}_y) = \\frac{ p(y=\\mathcal{D}_y|z)p(z) } {p(y=\\mathcal{D}_y)}\n$$\n\nUsually, what we really want is to be able to predict the distribution of the quantity of interest (here: $y$) after observing some data $\\mathcal{D}_y$:\n\n$$\\require{color}\n{\\color{orange}p(y|y=\\mathcal{D}_y)} = \\int p(y|z) p(z|y=\\mathcal{D}_y) dz\n$$\n\nwhich is often written in simpler notation: $p(y|\\mathcal{D}_y) = \\int p(y|z) p(z|\\mathcal{D}_y) dz$\n\n### Bayesian inference for car stopping distance problem\n\nNow we will solve the first Bayesian ML problem from some given data $y=\\mathcal{D}_y$:\n\n| $y_i$ (m) |\n| ---- |\n| 601.5 |\n| 705.9 |\n| 693.8 |\n| ... |\n| 711.3 |\n\nwhere the data $\\mathcal{D}_y$ could be a Pandas dataframe with $N$ data points ($N$ rows).\n\n* **Very Important Question (VIQ)**: Can we calculate the **likelihood** function from this data?\n\n### Likelihood for car stopping distance problem\n\nOf course! As we saw a few cells ago, the **likelihood** is obtained by evaluating the **observation distribution** at the data $\\mathcal{D}_y$.\n\nNoting that each observation in $\\mathcal{D}_y$ is independent of each other, then:\n\n$$\np(y=\\mathcal{D}_y | z) = \\prod_{i=1}^{N} p(y=y_i|z) = p(y=y_1|z)p(y=y_2|z) \\cdots p(y=y_N|z)\n$$\n\nwhich gives the **probability density** of observing that data if using our observation distribution (part of our model!).\n\n#### Calculating the likelihood\n\nLet's calculate it:\n\n$$\n\\begin{align}\np(y=\\mathcal{D}_y | z) &= \\prod_{i=1}^{N} p(y=y_i|z) \\\\\n&= \\prod_{i=1}^{N} \\frac{1}{C_{y|z}} \\exp\\left\\{ -\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\left[z-\\left(\\frac{y_i-b}{w}\\right)\\right]^2\\right\\}\n\\end{align}\n$$\n\nThis seems a bit daunting... I know. 
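Before simplifying this product analytically, it can be sanity-checked numerically. The sketch below uses made-up values for $w$, $b$, $\sigma_{y|z}$ and the data (none of these come from the lecture), with the observation model $y = wz + b$ plus Gaussian noise:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical observation model y = w*z + b + Gaussian noise;
# all numbers below are made up for illustration, not course data
w, b, sigma_yz = 2.0, 1.0, 0.5
y_data = np.array([1.8, 2.2, 2.0, 2.5])   # a small fake dataset D_y
z = 0.6                                   # one candidate value of the hidden variable

# Likelihood as a product of independent observation densities p(y_i | z)
lik_product = np.prod(norm.pdf(y_data, loc=w * z + b, scale=sigma_yz))

# Equivalent, numerically safer form: exponentiated sum of log-densities
log_lik = np.sum(norm.logpdf(y_data, loc=w * z + b, scale=sigma_yz))

assert np.isclose(lik_product, np.exp(log_lik))
```

The product of densities and the exponentiated sum of log-densities agree; in practice the log form is preferred to avoid numerical underflow for large $N$.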
Do not despair yet!

##### Product of Gaussian pdf's of the same rv $z$

It can be shown that the product of $N$ univariate Gaussian pdf's of the same rv $z$ is:

$$
\prod_{i=1}^{N} \mathcal{N}(z|\mu_i, \sigma_i^2) = C \cdot \mathcal{N}(z|\mu, \sigma^2)
$$

with mean: $\mu = \sigma^2 \left( \sum_{i=1}^{N} \frac{\mu_i}{\sigma_i^2}\right)$

variance: $\sigma^2= \frac{1}{\sum_{i=1}^{N} \frac{1}{\sigma_i^2}}$

and normalization constant: $C = \frac{1}{\left(2\pi\right)^{(N-1)/2}}\sqrt{\frac{\sigma^2}{\prod_{i=1}^N \sigma_i^2}} \exp\left[-\frac{1}{2}\left(\sum_{i=1}^{N} \frac{\mu_i^2}{\sigma_i^2} - \frac{\mu^2}{\sigma^2}\right)\right]$

Curiosity: the normalization constant $C$ is itself a Gaussian! You can see it more clearly if you consider $N=2$.

Note that the normalization constant shown in the previous cell can also be written as:

$$
C = \frac{1}{\left(2\pi\right)^{(N-1)/2}}\sqrt{\frac{\sigma^2}{\prod_{i=1}^N \sigma_i^2}} \exp\left[-\frac{1}{2}\left(\sum_{i=1}^{N-1}\sum_{j=i+1}^{N} \frac{(\mu_i-\mu_j)^2}{\sigma_i^2 \sigma_j^2}\sigma^2\right)\right]
$$

# HOMEWORK

Show that the product of two Gaussian pdf's for the same rv $z$ is:

$\mathcal{N}(z|\mu_1, \sigma_1^2)\cdot \mathcal{N}(z|\mu_2, \sigma_2^2)= C \cdot \mathcal{N}(z | \mu, \sigma^2)$

$$
\begin{align}
\sigma^2&=\frac{1}{\frac{1}{\sigma_1^2}+\frac{1}{\sigma_2^2}}\\
\mu&=\sigma^2\left(\frac{\mu_1}{\sigma_1^2} + \frac{\mu_2}{\sigma_2^2}\right)\\
C &= \frac{1}{\sqrt{2\pi(\sigma_1^2+\sigma_2^2)}} \exp\left[-\frac{1}{2(\sigma_1^2+\sigma_2^2)}(\mu_1-\mu_2)^2\right]
\end{align}
$$

#### Side note

It's interesting to note that the product of MVN's for the same rv's $\mathbf{z}$ is also a Gaussian!

To keep things simple, here's the result for the product of 2 Gaussian pdf's:

$\mathcal{N}(\mathbf{z}|\boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1)\cdot
\\mathcal{N}(\\mathbf{z}|\\boldsymbol{\\mu}_2, \\boldsymbol{\\Sigma}_2)= C \\cdot \\mathcal{N}(\\mathbf{z} | \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})$\n\nwhere\n\n$\\boldsymbol{\\mu} = \\boldsymbol{\\Sigma}\\left(\\boldsymbol{\\Sigma}_1^{-1}\\boldsymbol{\\mu}_1 + \\boldsymbol{\\Sigma}_2^{-1}\\boldsymbol{\\mu}_2 \\right)$\n\n$\\boldsymbol{\\Sigma} = \\left( \\boldsymbol{\\Sigma}_1^{-1}+\\boldsymbol{\\Sigma}_2^{-1}\\right)^{-1}$\n\n$\n\\begin{align}\nC &= \\mathcal{N}_{\\boldsymbol{\\mu}_1}\\left(\\boldsymbol{\\mu}_2, \\left( \\boldsymbol{\\Sigma}_1+\\boldsymbol{\\Sigma}_2\\right)\\right)\\\\\n&= \\frac{1}{\\sqrt{\\det[2\\pi \\left( \\boldsymbol{\\Sigma}_1+\\boldsymbol{\\Sigma}_2 \\right)]}} \\exp\\left[-\\frac{1}{2} \\left( \\boldsymbol{\\mu}_1-\\boldsymbol{\\mu}_2\\right)^T\\cdot\\left( \\boldsymbol{\\Sigma}_1+\\boldsymbol{\\Sigma}_2 \\right)^{-1}\\left( \\boldsymbol{\\mu}_1-\\boldsymbol{\\mu}_2 \\right) \\right]\\\\\n\\end{align}\n$\n\n#### Back to calculating the likelihood\n\n$$\n\\begin{align}\np(y=\\mathcal{D}_y | z) &= \\prod_{i=1}^{N} p(y=y_i|z) \\\\\n&= \\prod_{i=1}^{N} \\frac{1}{|w|} \\frac{1}{\\sqrt{2\\pi \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}} \\exp\\left\\{ -\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\left[z-\\left(\\frac{y_i-b}{w}\\right)\\right]^2\\right\\} \\\\\n&= \\frac{1}{|w|^N} \\prod_{i=1}^{N} \\frac{1}{\\sqrt{2\\pi \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}} \\exp\\left\\{ -\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\left[z-\\left(\\frac{y_i-b}{w}\\right)\\right]^2\\right\\}\n\\end{align}\n$$\n\nSo, using the result of a product of $N$ Gaussian pdf's to calculate the likelihood, and noting that $\\sigma_i = \\frac{\\sigma_{y|z}}{w}$ and $\\mu_i = \\frac{y_i - b}{w}$ we get:\n\n$$\np(y=\\mathcal{D}_y | z) = \\frac{1}{|w|^N} \\cdot C \\cdot \\frac{1}{\\sqrt{2\\pi \\sigma^2}} \\exp\\left[ -\\frac{1}{2\\sigma^2}(z-\\mu)^2\\right]\n$$\n\nwhere\n\n$\\mu = \\frac{\\sigma^2}{\\sigma_i^2} \\sum_{i=1}^N \\mu_i = 
\\frac{w^2\\sigma^2}{\\sigma_{y|z}^2} \\sum_{i=1}^N \\mu_i$\n\n$\\sigma^2 = \\frac{1}{ \\sum_{i=1}^N \\frac{1}{\\sigma_i^2} } = \\frac{1}{ \\sum_{i=1}^N \\frac{w^2}{\\sigma_{y|z}^2} } = \\frac{\\sigma_{y|z}^2}{w^2 N}$\n\n$\nC = \\frac{1}{\\left(2\\pi\\right)^{(N-1)/2}} \\sqrt{\\frac{\\sigma^2}{\\left( \\frac{\\sigma_{y|z}^2}{w^2}\\right)^N}} \\exp\\left[-\\frac{1}{2}\\left(\\frac{w^2}{\\sigma_{y|z}^2}\\sum_{i=1}^N \\mu_i - \\frac{\\mu^2}{\\sigma^2}\\right) \\right] = \\frac{1}{\\left(2\\pi\\right)^{(N-1)/2}} \\sqrt{\\frac{\\sigma^2}{\\left( \\frac{\\sigma_{y|z}^2}{w^2}\\right)^N}} \n$\n\n#### Calculating the marginal likelihood\n\n$$\\begin{align}\np(y=\\mathcal{D}_y) &= \\int p(y=\\mathcal{D}_y | z) p(z) dz \\\\\n&= \\int \\frac{1}{|w|^N} C \\cdot \\mathcal{N}(z|\\mu, \\sigma^2)\\cdot \\frac{1}{C_z} dz\\\\\n&= \\frac{C}{|w|^N C_z} \\int \\mathcal{N}(z|\\mu, \\sigma^2)dz = \\frac{C}{|w|^N C_z} \\\\\n\\end{align}\n$$\n\nWe can now calculate the posterior:\n\n$$\\begin{align}\np(z|y=\\mathcal{D}_y) &= \\frac{ p(y=\\mathcal{D}_y|z)p(z) } {p(y=\\mathcal{D}_y)} \\\\\n&= \\frac{1}{p(y=\\mathcal{D}_y)} \\cdot \\frac{1}{|w|^N} C \\cdot \\mathcal{N}(z|\\mu,\\sigma^2) \\cdot \\frac{1}{C_z} \\\\\n&= \\mathcal{N}(z|\\mu, \\sigma^2)\n\\end{align}\n$$\n\n#### Calculating the Posterior Predictive Distribution (PPD)\n\nHaving found the posterior, we can determine the PPD:\n\n$$\np(y|\\mathcal{D}_y) = \\int p(y| z) p(z|\\mathcal{D}_y) dz\n$$\n\nTo calculate this, we will have to use the identity for a product of two Gaussians.\n\n$$\n\\begin{align}\np(y|\\mathcal{D}_y) &= \\int \\frac{1}{|w|} \\mathcal{N}\\left(z|\\frac{y-b}{w}, \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2\\right) \\mathcal{N}(z|\\mu, \\sigma^2) dz \\\\\n&= \\int \\frac{1}{|w|} C^* \\mathcal{N}\\left(z|\\mu^*, \\left(\\sigma^*\\right)^2\\right) dz \\\\\n\\end{align}\n$$\n\n#### Calculating the Posterior Predictive Distribution (PPD)\n\nWe can find these parameters from the identity for a product of two 
Gaussians.

$$
p(y|\mathcal{D}_y) = \int \frac{1}{|w|} C^* \mathcal{N}\left(z|\mu^*, \left(\sigma^*\right)^2\right) dz
$$

where

$\mu^* = \left(\sigma^* \right)^2 \left( \frac{\mu}{\sigma^2} + \frac{(y-b)/w}{\left(\frac{\sigma_{y|z}}{w}\right)^2} \right) = \left(\sigma^* \right)^2 \left( \frac{\mu}{\sigma^2} + \frac{(y-b)\cdot w}{\sigma_{y|z}^2} \right)$

$\left( \sigma^* \right)^2 = \frac{1}{\frac{1}{\sigma^2}+\frac{1}{\left( \frac{\sigma_{y|z}}{w}\right)^2}}= \frac{1}{\frac{1}{\sigma^2}+\frac{w^2}{\sigma_{y|z}^2}}$

$C^* = \frac{1}{\sqrt{2\pi \left( \sigma^2 + \frac{\sigma_{y|z}^2}{w^2} \right)}}\exp\left[ - \frac{\left(\mu - \frac{y-b}{w}\right)^2}{2\left( \sigma^2+\frac{\sigma_{y|z}^2}{w^2}\right)}\right]$

## Next class

In the next class we will finish this example, by solving this integral to determine the PPD $p(y|\mathcal{D}_y)$.

### See you next class

Have fun!

---

```python
%run ../../common/import_all.py

from common.setup_notebook import set_css_style, setup_matplotlib, config_ipython
config_ipython()
setup_matplotlib()
set_css_style()
```

# The $\chi^2$ test

## What is

The $\chi^2$ test is a statistical hypothesis test in which the distribution of the test statistic calculated on the data is a [$\chi^2$ distribution](../distributions-measures/famous-distributions.ipynb#Chi-squared,-$\chi^2$) under null hypothesis. The assumption is that data is normally distributed and independent so the $\chi^2$ test can also be used to reject the hypothesis that data are independent.

It is used with categorical data to see if the number of individuals in each category is consistent with the expected values. In practice, the test is used to determine if there is a significant difference between the expected frequencies and the observed frequencies of the outcomes of an experiment in one or more categories, that is, if the observed differences are due to chance. The idea is: is the number of individuals falling into each category significantly different from the number you would expect under the null hypothesis?
Is this difference between expected and observed data due to sampling or is it real?\n\nThe $\\chi^2$ is defined as \n\n$$\n\\chi^2 = \\sum_i \\frac{(o_i - h_i)^2}{h_i} \\ ,\n$$\n\nwhere $o_i$ is the observed value and $h_i$ the null hypothesis value. \n\nThe computed $\\chi^2$ has to be compared to [table values](http://sites.stat.psu.edu/~mga/401/tables/Chi-square-table.pdf) for the $\\chi^2$ distribution at the chosen level of significance and for given degrees of freedom one has in order to decide if the null hypothesis can be rejected or not, using the [$p$-value](../concepts/p-value-confidence-level.ipynb).\n\n## An example\n\nThis simple example has been adapted from [[here]](#example-dice).\n\nLet's say that we have a (6-faces) dice and we want to know if it is fair, that is, if each of the faces is equiprobable or if there is any bias towards a face. We throw the dice 60 times: in the case of a fair dice we would have each face appearing 10 times (60/6 where 6 is the number of possible results). This will be the null hypothesis.\n\nWe build a table containing the actual counts we get for each face, and said null hypothesis: \n\n| | | | | | | |\n| :--- |:---:|:---:|:---:|:---:|:---:|:---:|\n| | **Face 1** | **Face 2** | **Face 3** | **Face 4** | **Face 5** | **Face 6** | \n| **Observations** | 5 | 8 | 9 | 8 | 10 | 20 |\n| **Null Hypothesis** | 10 | 10 | 10 | 10 | 10 | 10 |\n\n\nThe $\\chi^2$ gets calculated as\n\n\\begin{align}\n\\chi^2 = \\frac{(5-10)^2}{10} + \\frac{(8-10)^2}{10} + \\frac{(9-10)^2}{10} &+& \\\\ \n\\frac{(8-10)^2}{10} + \\frac{(10-10)^2}{10} + \\frac{(20-10)^2}{10} = 13.4\n\\end{align}\n\nThe number of degrees of freedom is the number of terms minus 1 , so $6-1=5$. Looking up for the values of the $\\chi^2$ distribution at this number of degrees of freedom and for a confidence level of $95\\%$ we get a value of $11.070$. 
Because our calculated $\chi^2$ exceeds the table value, the $p$-value associated with it is smaller than $0.05$, so we can discard the null hypothesis at that significance level.

Nevertheless, note that if we choose a confidence level of $99\%$ instead, in order to be safer, we cannot discard the null hypothesis: the table value for the $\chi^2$ at that level is $15.086$, bigger than our calculated one, hence the $p$-value does not fall below the required threshold of $0.01$.

## A typical use case: the goodness of a distribution fit

The $\chi^2$ test is widely used to determine how good a fit is, that is, how well a statistical model describes (fits) the observational points. The number of degrees of freedom to be used to retrieve the comparison with table values is the total number of observations minus the number of fit parameters.

### Fitting a uniform distribution

Let's say we have $n$ data points and we bin them into $b$ bins. Given that the distribution is uniform, the expected occurrence frequency of each bin (the number expected per bin) would be

$$
h_i = \frac{n}{b} \ \ \forall i \ ,
$$

$i$ being the index of the bins.

Our $\chi^2$ test statistic is

$$
\chi^2 = \sum_{i=1}^b \frac{(o_i - h_i)^2}{h_i}
$$

where $o_i$ is the observed number of data points in the bin.

### Fitting a non-uniform distribution

In that case the hypothesis values have to be computed from the hypothesis distribution.

## References

1. 
[The original source for the dice example](http://ccnmtl.columbia.edu/projects/qmss/the_chisquare_test/about_the_chisquare_test.html)

---

```python
from google.colab import drive
drive.mount('/content/drive')
```

    Mounted at /content/drive

# Neuromatch Academy: Week 3, Day 2, Tutorial 1
# Neuronal Network Dynamics: Neural Rate Models

__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva

__Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom

---
# Tutorial Objectives

The brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is indeed a network of highly specialized neuronal networks.

The activity of a neural network constantly evolves in time. For this reason, neurons can be modeled as dynamical systems. The dynamical system approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of view include information processing, statistical models, etc.).

How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question.
However, signatures of altered brain dynamics present in many brain diseases (e.g., in epilepsy or Parkinson's disease) tell us that it is crucial to study network activity dynamics if we want to understand the brain.\n\nIn this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time.\n\nIn this tutorial, we will learn how to build a firing rate model of a single population of excitatory neurons. \n\n**Steps:**\n- Write the equation for the firing rate dynamics of a 1D excitatory population.\n- Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve.\n- Numerically simulate the dynamics of the excitatory population and find the fixed points of the system. \n- Investigate the stability of the fixed points by linearizing the dynamics around them.\n \n\n\n---\n# Setup\n\n\n```python\n# Imports\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.optimize as opt # root-finding algorithm\n```\n\n\n```python\n# @title Figure Settings\nimport ipywidgets as widgets # interactive display\n%config InlineBackend.figure_format = 'retina'\nplt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle\")\n```\n\n\n```python\n# @title Helper functions\n\n\ndef plot_fI(x, f):\n plt.figure(figsize=(6, 4)) # plot the figure\n plt.plot(x, f, 'k')\n plt.xlabel('x (a.u.)', fontsize=14)\n plt.ylabel('F(x)', fontsize=14)\n plt.show()\n\n\ndef plot_dr_r(r, drdt, x_fps=None):\n plt.figure()\n plt.plot(r, drdt, 'k')\n plt.plot(r, 0. 
* r, 'k--')\n if x_fps is not None:\n plt.plot(x_fps, np.zeros_like(x_fps), \"ko\", ms=12)\n plt.xlabel(r'$r$')\n plt.ylabel(r'$\\frac{dr}{dt}$', fontsize=20)\n plt.ylim(-0.1, 0.1)\n\n\ndef plot_dFdt(x, dFdt):\n plt.figure()\n plt.plot(x, dFdt, 'r')\n plt.xlabel('x (a.u.)', fontsize=14)\n plt.ylabel('dF(x)', fontsize=14)\n plt.show()\n```\n\n---\n# Section 1: Neuronal network dynamics\n\n\n```python\n# @title Video 1: Dynamic networks\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id=\"p848349hPyw\", width=854, height=480, fs=1)\nprint(\"Video available at https://youtube.com/watch?v=\" + video.id)\nvideo\n```\n\n Video available at https://youtube.com/watch?v=p848349hPyw\n\n\n\n\n\n\n\n\n\n\n\n## Section 1.1: Dynamics of a single excitatory population\n\nIndividual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of time and network parameters. 
Mathematically, we can describe the firing rate dynamic as:\n\n\\begin{align}\n\\tau \\frac{dr}{dt} &= -r + F(w\\cdot r + I_{\\text{ext}}) \\quad\\qquad (1)\n\\end{align}\n\n$r(t)$ represents the average firing rate of the excitatory population at time $t$, $\\tau$ controls the timescale of the evolution of the average firing rate, $w$ denotes the strength (synaptic weight) of the recurrent input to the population, $I_{\\text{ext}}$ represents the external input, and the transfer function $F(\\cdot)$ (which can be related to f-I curve of individual neurons described in the next sections) represents the population activation function in response to all received inputs.\n\nTo start building the model, please execute the cell below to initialize the simulation parameters.\n\n\n```python\n# @markdown *Execute this cell to set default parameters for a single excitatory population model*\n\n\ndef default_pars_single(**kwargs):\n pars = {}\n\n # Excitatory parameters\n pars['tau'] = 1. # Timescale of the E population [ms]\n pars['a'] = 1.2 # Gain of the E population\n pars['theta'] = 2.8 # Threshold of the E population\n\n # Connection strength\n pars['w'] = 0. # E to E, we first set it to 0\n\n # External input\n pars['I_ext'] = 0.\n\n # simulation parameters\n pars['T'] = 20. # Total duration of simulation [ms]\n pars['dt'] = .1 # Simulation time step [ms]\n pars['r_init'] = 0.2 # Initial value of E\n\n # External parameters if any\n pars.update(kwargs)\n\n # Vector of discretized time points [ms]\n pars['range_t'] = np.arange(0, pars['T'], pars['dt'])\n\n return pars\n\n```\n\nYou can now use:\n- `pars = default_pars_single()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. 
\n- `pars = default_pars_single(T=T_sim, dt=time_step)` to set new simulation time and time step\n- To update an existing parameter dictionary, use `pars['New_para'] = value`\n\nBecause `pars` is a dictionary, it can be passed to a function that requires individual parameters as arguments using `my_func(**pars)` syntax.\n\n## Section 1.2: F-I curves\nIn electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the output spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial.\n\nThe transfer function $F(\\cdot)$ in Equation $1$ represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values. \n\nA sigmoidal $F(\\cdot)$ is parameterized by its gain $a$ and threshold $\\theta$.\n\n$$ F(x;a,\\theta) = \\frac{1}{1+\\text{e}^{-a(x-\\theta)}} - \\frac{1}{1+\\text{e}^{a\\theta}} \\quad(2)$$\n\nThe argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\\theta)=0$.\n\nMany other transfer functions (generally monotonic) can be also used. Examples are the rectified linear function $ReLU(x)$ or the hyperbolic tangent $tanh(x)$.\n\n### Exercise 1: Implement F-I curve \n\nLet's first investigate the activation functions before simulating the dynamics of the entire population. 
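Before implementing the sigmoid in the exercise below, the properties claimed above can be checked with a quick self-contained sketch: the second term of Equation (2) enforces $F(0; a, \theta) = 0$, the curve is monotonic, and it saturates for large input. The gain and threshold values reuse the notebook defaults, and `F_relu` is shown only as one of the alternative monotonic transfer functions mentioned:

```python
import numpy as np

def F_sigmoid(x, a, theta):
    # Shifted sigmoid of Equation (2); the second term enforces F(0; a, theta) = 0
    return 1 / (1 + np.exp(-a * (x - theta))) - 1 / (1 + np.exp(a * theta))

def F_relu(x):
    # Rectified linear alternative mentioned above
    return np.maximum(x, 0.)

# Evaluate the sigmoid on the same input range used in the notebook,
# with the default gain a = 1.2 and threshold theta = 2.8
x = np.linspace(0, 10, 101)
f = F_sigmoid(x, a=1.2, theta=2.8)

assert np.isclose(F_sigmoid(0., 1.2, 2.8), 0.)  # F(0) = 0 by construction
assert np.all(np.diff(f) > 0)                   # monotonically increasing
assert f[-1] < 1.0                              # saturates below 1 for large input
```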
\n\nIn this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\\theta$ as parameters.\n\n\n```python\ndef F(x, a, theta):\n \"\"\"\n Population activation function.\n\n Args:\n x (float): the population input\n a (float): the gain of the function\n theta (float): the threshold of the function\n\n Returns:\n float: the population activation response F(x) for input x\n \"\"\"\n #################################################\n ## TODO for students: compute f = F(x) ##\n # Fill out function and remove\n #raise NotImplementedError(\"Student excercise: implement the f-I function\")\n #################################################\n\n # Define the sigmoidal transfer function f = F(x)\n f = (1+np.exp(-a*(x-theta)))**-1 - (1+np.exp(a*theta))**-1\n\n return f\n\n\npars = default_pars_single() # get default parameters\nx = np.arange(0, 10, .1) # set the range of input\n\n# Uncomment below to test your function\nf = F(x, pars['a'], pars['theta'])\nplot_fI(x, f)\n```\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_DynamicNetworks/solutions/W3D2_Tutorial1_Solution_45ddc05f.py)\n\n*Example output:*\n\n\n\n\n\n### Interactive Demo: Parameter exploration of F-I curve\nHere's an interactive demo that shows how the F-I curve changes for different values of the gain and threshold parameters. 
How do the gain and threshold parameters affect the F-I curve?\n\n\n```python\n# @title\n\n# @markdown Make sure you execute this cell to enable the widget!\n\n\ndef interactive_plot_FI(a, theta):\n \"\"\"\n Population activation function.\n\n Expecxts:\n a : the gain of the function\n theta : the threshold of the function\n\n Returns:\n plot the F-I curve with give parameters\n \"\"\"\n\n # set the range of input\n x = np.arange(0, 10, .1)\n plt.figure()\n plt.plot(x, F(x, a, theta), 'k')\n plt.xlabel('x (a.u.)', fontsize=14)\n plt.ylabel('F(x)', fontsize=14)\n plt.show()\n\n\n_ = widgets.interact(interactive_plot_FI, a=(0.3, 3, 0.3), theta=(2, 4, 0.2))\n```\n\n\n interactive(children=(FloatSlider(value=1.5, description='a', max=3.0, min=0.3, step=0.3), FloatSlider(value=3\u2026\n\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_DynamicNetworks/solutions/W3D2_Tutorial1_Solution_89df88ac.py)\n\n\n\n## Section 1.3: Simulation scheme of E dynamics\n\nBecause $F(\\cdot)$ is a nonlinear function, the exact solution of Equation $1$ can not be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation $1$ can be approximated using the Euler method on a time-grid of stepsize $\\Delta t$:\n\n\\begin{align}\n&\\frac{dr}{dt} \\approx \\frac{r[k+1]-r[k]}{\\Delta t} \n\\end{align}\nwhere $r[k] = r(k\\Delta t)$. 
\n\nThus,\n\n$$\\Delta r[k] = \\frac{\\Delta t}{\\tau}[-r[k] + F(w\\cdot r[k] + I_{\\text{ext}}(k;a,\\theta))]$$\n\n\nHence, Equation (1) is updated at each time step by:\n\n$$r[k+1] = r[k] + \\Delta r[k]$$\n\n\n\n```python\n# @markdown *Execute this cell to enable the single population rate model simulator: `simulate_single`*\n\n\ndef simulate_single(pars):\n \"\"\"\n Simulate an excitatory population of neurons\n\n Args:\n pars : Parameter dictionary\n\n Returns:\n rE : Activity of excitatory population (array)\n\n Example:\n pars = default_pars_single()\n r = simulate_single(pars)\n \"\"\"\n\n # Set parameters\n tau, a, theta = pars['tau'], pars['a'], pars['theta']\n w = pars['w']\n I_ext = pars['I_ext']\n r_init = pars['r_init']\n dt, range_t = pars['dt'], pars['range_t']\n Lt = range_t.size\n\n # Initialize activity\n r = np.zeros(Lt)\n r[0] = r_init\n I_ext = I_ext * np.ones(Lt)\n\n # Update the E activity\n for k in range(Lt - 1):\n dr = dt / tau * (-r[k] + F(w * r[k] + I_ext[k], a, theta))\n r[k+1] = r[k] + dr\n\n return r\n\nhelp(simulate_single)\n```\n\n Help on function simulate_single in module __main__:\n \n simulate_single(pars)\n Simulate an excitatory population of neurons\n \n Args:\n pars : Parameter dictionary\n \n Returns:\n rE : Activity of excitatory population (array)\n \n Example:\n pars = default_pars_single()\n r = simulate_single(pars)\n \n\n\n### Interactive Demo: Parameter Exploration of single population dynamics\n\nNote that $w=0$, as in the default setting, means no recurrent input to the neuron population in Equation (1). Hence, the dynamics are entirely determined by the external input $I_{\\text{ext}}$. Explore these dynamics in this interactive demo.\n\nHow does $r_{\\text{sim}}(t)$ change with different $I_{\\text{ext}}$ values? How does it change with different $\\tau$ values? Investigate the relationship between $F(I_{\\text{ext}}; a, \\theta)$ and the steady value of $r(t)$. 
\n\nNote that, $r_{\\rm ana}(t)$ denotes the analytical solution - you will learn how this is computed in the next section.\n\n\n```python\n# @title\n\n# @markdown Make sure you execute this cell to enable the widget!\n\n# get default parameters\npars = default_pars_single(T=20.)\n\n\ndef Myplot_E_diffI_difftau(I_ext, tau):\n # set external input and time constant\n pars['I_ext'] = I_ext\n pars['tau'] = tau\n\n # simulation\n r = simulate_single(pars)\n\n # Analytical Solution\n r_ana = (pars['r_init']\n + (F(I_ext, pars['a'], pars['theta'])\n - pars['r_init']) * (1. - np.exp(-pars['range_t'] / pars['tau'])))\n\n # plot\n plt.figure()\n plt.plot(pars['range_t'], r, 'b', label=r'$r_{\\mathrm{sim}}$(t)', alpha=0.5,\n zorder=1)\n plt.plot(pars['range_t'], r_ana, 'b--', lw=5, dashes=(2, 2),\n label=r'$r_{\\mathrm{ana}}$(t)', zorder=2)\n plt.plot(pars['range_t'],\n F(I_ext, pars['a'], pars['theta']) * np.ones(pars['range_t'].size),\n 'k--', label=r'$F(I_{\\mathrm{ext}})$')\n plt.xlabel('t (ms)', fontsize=16.)\n plt.ylabel('Activity r(t)', fontsize=16.)\n plt.legend(loc='best', fontsize=14.)\n plt.show()\n\n\n_ = widgets.interact(Myplot_E_diffI_difftau, I_ext=(0.0, 10., 1.),\n tau=(1., 5., 0.2))\n```\n\n\n interactive(children=(FloatSlider(value=5.0, description='I_ext', max=10.0, step=1.0), FloatSlider(value=3.0, \u2026\n\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_DynamicNetworks/solutions/W3D2_Tutorial1_Solution_bfdcc723.py)\n\n\n\n## Think!\nAbove, we have numerically solved a system driven by a positive input and that, if $w_{EE} \\neq 0$, receives an excitatory recurrent input (**extra challenge: try changing the value of $w_{EE}$ to a positive number and plotting the results of simulate_single**). Yet, $r_E(t)$ either decays to zero or reaches a fixed non-zero value.\n- Why doesn't the solution of the system \"explode\" in a finite time? In other words, what guarantees that $r_E$(t) stays finite? 
\n- Which parameter would you change in order to increase the maximum value of the response? \n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_DynamicNetworks/solutions/W3D2_Tutorial1_Solution_d28fd68a.py)\n\n\n\n---\n# Section 2: Fixed points of the single population system\n\n\n\n```python\n# @title Video 2: Fixed point\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id=\"Ox3ELd1UFyo\", width=854, height=480, fs=1)\nprint(\"Video available at https://youtube.com/watch?v=\" + video.id)\nvideo\n```\n\n Video available at https://youtube.com/watch?v=Ox3ELd1UFyo\n\n\n\n\n\n\n\n\n\n\n\nAs you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output quickly changes, with time, it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, in the steady states the derivative with respect to time of the activity ($r$) is zero, i.e. $\\displaystyle \\frac{dr}{dt}=0$. \n\nWe can find that the steady state of the Equation. (1) by setting $\\displaystyle{\\frac{dr}{dt}=0}$ and solve for $r$:\n\n$$-r_{\\text{steady}} + F(w\\cdot r_{\\text{steady}} + I_{\\text{ext}};a,\\theta) = 0, \\qquad (3)$$\n\nWhen it exists, the solution of Equation. (3) defines a **fixed point** of the dynamical system in Equation (1). Note that if $F(x)$ is nonlinear, it is not always possible to find an analytical solution, but the solution can be found via numerical simulations, as we will do later.\n\nFrom the Interactive Demo, one could also notice that the value of $\\tau$ influences how quickly the activity will converge to the steady state from its initial value. 
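The claim that, for $w=0$, the activity converges to $F(I_{\text{ext}};a,\theta)$ can be verified with a few lines of Euler integration. This sketch is self-contained and reuses the notebook's default gain and threshold:

```python
import numpy as np

def F(x, a=1.2, theta=2.8):
    # Sigmoid transfer function of Equation (2), notebook default parameters
    return 1 / (1 + np.exp(-a * (x - theta))) - 1 / (1 + np.exp(a * theta))

# Euler-integrate tau * dr/dt = -r + F(I_ext), i.e. Equation (1) with w = 0
tau, dt, I_ext = 1.0, 0.1, 5.0
r = 0.2                        # initial condition r(t=0)
for _ in range(1000):          # 100 ms of simulated time, much longer than tau
    r += dt / tau * (-r + F(I_ext))

# The trajectory has settled at the fixed point F(I_ext)
assert np.isclose(r, F(I_ext), atol=1e-6)
```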
\n\nIn the specific case of $w=0$, we can also analytically compute the solution of Equation (1) (i.e., the thick blue dashed line) and deduce the role of $\\tau$ in determining the convergence to the fixed point: \n\n$$\\displaystyle{r(t) = \\big{[}F(I_{\\text{ext}};a,\\theta) -r(t=0)\\big{]} (1-\\text{e}^{-\\frac{t}{\\tau}})} + r(t=0)$$ \\\\\n\nWe can now numerically calculate the fixed point with a root finding algorithm.\n\n## Exercise 2: Visualization of the fixed points\n\nWhen it is not possible to find the solution for Equation (3) analytically, a graphical approach can be taken. To that end, it is useful to plot $\\displaystyle{\\frac{dr}{dt}}$ as a function of $r$. The values of $r$ for which the plotted function crosses zero on the y axis correspond to fixed points. \n\nHere, let us, for example, set $w=5.0$ and $I^{\\text{ext}}=0.5$. From Equation (1), you can obtain\n\n$$\\frac{dr}{dt} = [-r + F(w\\cdot r + I^{\\text{ext}})]\\,/\\,\\tau $$\n\nThen, plot the $dr/dt$ as a function of $r$, and check for the presence of fixed points. 
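As a standalone preview of the numerical route, the sketch below brackets the sign changes of $dr/dt$ on a grid over $[0, 1]$ and refines each bracket by bisection. The plain-sigmoid `F` and the parameter values (`w = 5`, `I_ext = 0.5`, tutorial-default `a` and `theta`) are assumptions, so the exact fixed-point locations differ from those obtained with the tutorial's `F` (which subtracts a constant so that $F(0)=0$).

```python
import numpy as np

def F(x, a=1.2, theta=2.8):
    # Assumed plain sigmoid; the tutorial's F subtracts a constant so F(0) = 0
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def drdt(r, w=5.0, I_ext=0.5, tau=1.0):
    return (-r + F(w * r + I_ext)) / tau

def find_fixed_points(n_grid=1000, n_bisect=60):
    """Bracket sign changes of dr/dt on [0, 1], then bisect each bracket."""
    r = np.linspace(0.0, 1.0, n_grid)
    f = drdt(r)
    sign_change = f[:-1] * f[1:] < 0
    roots = []
    for lo, hi in zip(r[:-1][sign_change], r[1:][sign_change]):
        for _ in range(n_bisect):  # plain bisection on each bracket
            mid = 0.5 * (lo + hi)
            if drdt(lo) * drdt(mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

print(find_fixed_points())  # three fixed points for these assumed parameters
```

Bisection is slower than the `scipy.optimize` root finder used later in the tutorial, but it never jumps outside its bracket, which makes it a robust cross-check.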
\n\n\n```python\ndef compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):\n \"\"\"Given parameters, compute dr/dt as a function of r.\n\n Args:\n r (1D array) : Average firing rate of the excitatory population\n I_ext, w, a, theta, tau (numbers): Simulation parameters to use\n other_pars : Other simulation parameters are unused by this function\n\n Returns:\n drdt : dr/dt evaluated at each value of r\n \"\"\"\n #########################################################################\n # TODO compute drdt and disable the error\n #raise NotImplementedError(\"Finish the compute_drdt function\")\n #########################################################################\n\n # Calculate drdt\n x = w * r + I_ext\n drdt = (-r + F(x, a, theta)) / tau\n\n return drdt\n\n\n# Define a vector of r values and the simulation parameters\nr = np.linspace(0, 1, 1000)\npars = default_pars_single(I_ext=0.5, w=5)\n\n# Uncomment to test your function\ndrdt = compute_drdt(r, **pars)\nplot_dr_r(r, drdt)\n```\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_DynamicNetworks/solutions/W3D2_Tutorial1_Solution_c5280901.py)\n\n*Example output:*\n\n\n\n\n\n## Exercise 3: Fixed point calculation\n\nWe will now find the fixed points numerically. To do so, we need to specify initial values ($r_{\text{guess}}$) for the root-finding algorithm to start from. 
From the line $\\displaystyle{\\frac{dr}{dt}}$ plotted above in Exercise 2, initial values can be chosen as a set of values close to where the line crosses zero on the y axis (real fixed point).\n\nThe next cell defines three helper functions that we will use:\n\n- `my_fp_single(r_guess, **pars)` uses a root-finding algorithm to locate a fixed point near a given initial value\n- `check_fp_single(x_fp, **pars)`, verifies that the values of $r_{\\rm fp}$ for which $\\displaystyle{\\frac{dr}{dt}} = 0$ are the true fixed points\n- `my_fp_finder(r_guess_vector, **pars)` accepts an array of initial values and finds the same number of fixed points, using the above two functions\n\n\n```python\n# @markdown *Execute this cell to enable the fixed point functions*\n\ndef my_fp_single(r_guess, a, theta, w, I_ext, **other_pars):\n \"\"\"\n Calculate the fixed point through drE/dt=0\n\n Args:\n r_guess : Initial value used for scipy.optimize function\n a, theta, w, I_ext : simulation parameters\n\n Returns:\n x_fp : value of fixed point\n \"\"\"\n # define the right hand of E dynamics\n def my_WCr(x):\n r = x\n drdt = (-r + F(w * r + I_ext, a, theta))\n y = np.array(drdt)\n\n return y\n\n x0 = np.array(r_guess)\n x_fp = opt.root(my_WCr, x0).x.item()\n\n return x_fp\n\n\ndef check_fp_single(x_fp, a, theta, w, I_ext, mytol=1e-4, **other_pars):\n \"\"\"\n Verify |dr/dt| < mytol\n\n Args:\n fp : value of fixed point\n a, theta, w, I_ext: simulation parameters\n mytol : tolerance, default as 10^{-4}\n\n Returns :\n Whether it is a correct fixed point: True/False\n \"\"\"\n # calculate Equation(3)\n y = x_fp - F(w * x_fp + I_ext, a, theta)\n\n # Here we set tolerance as 10^{-4}\n return np.abs(y) < mytol\n\n\ndef my_fp_finder(pars, r_guess_vector, mytol=1e-4):\n \"\"\"\n Calculate the fixed point(s) through drE/dt=0\n\n Args:\n pars : Parameter dictionary\n r_guess_vector : Initial values used for scipy.optimize function\n mytol : tolerance for checking fixed point, default as 
10^{-4}\n\n Returns:\n x_fps : values of fixed points\n\n \"\"\"\n x_fps = []\n correct_fps = []\n for r_guess in r_guess_vector:\n x_fp = my_fp_single(r_guess, **pars)\n if check_fp_single(x_fp, **pars, mytol=mytol):\n x_fps.append(x_fp)\n\n return x_fps\n\nhelp(my_fp_finder)\n```\n\n Help on function my_fp_finder in module __main__:\n \n my_fp_finder(pars, r_guess_vector, mytol=0.0001)\n Calculate the fixed point(s) through drE/dt=0\n \n Args:\n pars : Parameter dictionary\n r_guess_vector : Initial values used for scipy.optimize function\n mytol : tolerance for checking fixed point, default as 10^{-4}\n \n Returns:\n x_fps : values of fixed points\n \n\n\n\n```python\nr = np.linspace(0, 1, 1000)\npars = default_pars_single(I_ext=0.5, w=5)\ndrdt = compute_drdt(r, **pars)\n\n#############################################################################\n# TODO for students:\n# Define initial values close to the intersections of drdt and y=0\n# (How many initial values? Hint: How many times do the two lines intersect?)\n# Calculate the fixed point with these initial values and plot them\n#############################################################################\nr_guess_vector = [.1,.4,.95]\n\n# Uncomment to test your values\nx_fps = my_fp_finder(pars, r_guess_vector)\nplot_dr_r(r, drdt, x_fps)\n```\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_DynamicNetworks/solutions/W3D2_Tutorial1_Solution_0637b6bf.py)\n\n*Example output:*\n\n\n\n\n\n## Interactive Demo: fixed points as a function of recurrent and external inputs.\n\nYou can now explore how the previous plot changes when the recurrent coupling $w$ and the external input $I_{\\text{ext}}$ take different values. 
How does the number of fixed points change?\n\n\n```python\n# @title\n\n# @markdown Make sure you execute this cell to enable the widget!\n\n\ndef plot_intersection_single(w, I_ext):\n # set your parameters\n pars = default_pars_single(w=w, I_ext=I_ext)\n\n # find fixed points\n r_init_vector = [0, .4, .9]\n x_fps = my_fp_finder(pars, r_init_vector)\n\n # plot\n r = np.linspace(0, 1., 1000)\n drdt = (-r + F(w * r + I_ext, pars['a'], pars['theta'])) / pars['tau']\n\n plot_dr_r(r, drdt, x_fps)\n\n_ = widgets.interact(plot_intersection_single, w=(1, 7, 0.2),\n I_ext=(0, 3, 0.1))\n```\n\n\n interactive(children=(FloatSlider(value=4.0, description='w', max=7.0, min=1.0, step=0.2), FloatSlider(value=1\u2026\n\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_DynamicNetworks/solutions/W3D2_Tutorial1_Solution_efa03467.py)\n\n\n\n---\n# Summary\n\nIn this tutorial, we have investigated the dynamics of a rate-based single population of neurons.\n\nWe learned about:\n- The effect of the input parameters and the time constant of the network on the dynamics of the population.\n- How to find the fixed point(s) of the system.\n\nNext, we have two Bonus, but important concepts in dynamical system analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. 
You will learn:\n\n- How to determine the stability of a fixed point by linearizing the system.\n- How to add realistic inputs to our model.\n\n---\n# Bonus 1: Stability of a fixed point\n\n\n```python\n# @title Video 3: Stability of fixed points\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id=\"KKMlWWU83Jg\", width=854, height=480, fs=1)\nprint(\"Video available at https://youtube.com/watch?v=\" + video.id)\nvideo\n```\n\n Video available at https://youtube.com/watch?v=KKMlWWU83Jg\n\n\n\n\n\n\n\n\n\n\n\n#### Initial values and trajectories\n\nHere, let us first set $w=5.0$ and $I_{\\text{ext}}=0.5$, and investigate the dynamics of $r(t)$ starting with different initial values $r(0) \\equiv r_{\\text{init}}$. We will plot the trajectories of $r(t)$ with $r_{\\text{init}} = 0.0, 0.1, 0.2,..., 0.9$.\n\n\n```python\n# @markdown Execute this cell to see the trajectories!\n\npars = default_pars_single()\npars['w'] = 5.0\npars['I_ext'] = 0.5\n\nplt.figure(figsize=(8, 5))\nfor ie in range(10):\n pars['r_init'] = 0.1 * ie # set the initial value\n r = simulate_single(pars) # run the simulation\n\n # plot the activity with given initial\n plt.plot(pars['range_t'], r, 'b', alpha=0.1 + 0.1 * ie,\n label=r'r$_{\\mathrm{init}}$=%.1f' % (0.1 * ie))\n\nplt.xlabel('t (ms)')\nplt.title('Two steady states?')\nplt.ylabel(r'$r$(t)')\nplt.legend(loc=[1.01, -0.06], fontsize=14)\nplt.show()\n```\n\n## Interactive Demo: dynamics as a function of the initial value\n\nLet's now set $r_{\\rm init}$ to a value of your choice in this demo. How does the solution change? 
What do you observe?\n\n\n```python\n# @title\n\n# @markdown Make sure you execute this cell to enable the widget!\n\npars = default_pars_single(w=5.0, I_ext=0.5)\n\ndef plot_single_diffEinit(r_init):\n pars['r_init'] = r_init\n r = simulate_single(pars)\n\n plt.figure()\n plt.plot(pars['range_t'], r, 'b', zorder=1)\n plt.plot(0, r[0], 'bo', alpha=0.7, zorder=2)\n plt.xlabel('t (ms)', fontsize=16)\n plt.ylabel(r'$r(t)$', fontsize=16)\n plt.ylim(0, 1.0)\n plt.show()\n\n\n_ = widgets.interact(plot_single_diffEinit, r_init=(0, 1, 0.02))\n```\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_DynamicNetworks/solutions/W3D2_Tutorial1_Solution_71581b5a.py)\n\n\n\n### Stability analysis via linearization of the dynamics\n\nJust like Equation $1$ in the case ($w=0$) discussed above, a generic linear system \n$$\\frac{dx}{dt} = \\lambda (x - b),$$ \nhas a fixed point for $x=b$. The analytical solution of such a system can be found to be:\n$$x(t) = b + \\big{(} x(0) - b \\big{)} \\text{e}^{\\lambda t}.$$ \nNow consider a small perturbation of the activity around the fixed point: $x(0) = b+ \\epsilon$, where $|\\epsilon| \\ll 1$. Will the perturbation $\\epsilon(t)$ grow with time or will it decay to the fixed point? 
The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as:\n $$\\epsilon (t) = x(t) - b = \\epsilon \\text{e}^{\\lambda t}$$\n\n- if $\\lambda < 0$, $\\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$ and the fixed point is \"**stable**\".\n\n- if $\\lambda > 0$, $\\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially, and the fixed point is, therefore, \"**unstable**\" .\n\n### Compute the stability of Equation $1$\n\nSimilar to what we did in the linear system above, in order to determine the stability of a fixed point $r^{*}$ of the excitatory population dynamics, we perturb Equation (1) around $r^{*}$ by $\\epsilon$, i.e. $r = r^{*} + \\epsilon$. We can plug in Equation (1) and obtain the equation determining the time evolution of the perturbation $\\epsilon(t)$:\n\n\\begin{align}\n\\tau \\frac{d\\epsilon}{dt} \\approx -\\epsilon + w F'(w\\cdot r^{*} + I_{\\text{ext}};a,\\theta) \\epsilon \n\\end{align}\n\nwhere $F'(\\cdot)$ is the derivative of the transfer function $F(\\cdot)$. We can rewrite the above equation as:\n\n\\begin{align}\n\\frac{d\\epsilon}{dt} \\approx \\frac{\\epsilon}{\\tau }[-1 + w F'(w\\cdot r^* + I_{\\text{ext}};a,\\theta)] \n\\end{align}\n\nThat is, as in the linear system above, the value of\n\n$$\\lambda = [-1+ wF'(w\\cdot r^* + I_{\\text{ext}};a,\\theta)]/\\tau \\qquad (4)$$\n\ndetermines whether the perturbation will grow or decay to zero, i.e., $\\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system.\n\n## Exercise 4: Compute $dF$\n\nThe derivative of the sigmoid transfer function is:\n\\begin{align} \n\\frac{dF}{dx} & = \\frac{d}{dx} (1+\\exp\\{-a(x-\\theta)\\})^{-1} \\\\\n& = a\\exp\\{-a(x-\\theta)\\} (1+\\exp\\{-a(x-\\theta)\\})^{-2}. 
\qquad (5)\n\end{align}\n\nLet's now implement the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it.\n\n\n```python\ndef dF(x, a, theta):\n \"\"\"\n Derivative of the population activation function.\n\n Args:\n x : the population input\n a : the gain of the function\n theta : the threshold of the function\n\n Returns:\n dFdx : the derivative of the population activation function F(x) for input x\n \"\"\"\n\n ###########################################################################\n # TODO for students: compute dFdx ##\n raise NotImplementedError(\"Student exercise: compute the derivative of F\")\n ###########################################################################\n\n # Calculate the derivative of the population activation function\n dFdx = ...\n\n return dFdx\n\n\npars = default_pars_single() # get default parameters\nx = np.arange(0, 10, .1) # set the range of input\n\n# Uncomment below to test your function\n# df = dF(x, pars['a'], pars['theta'])\n# plot_dFdt(x, df)\n```\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_DynamicNetworks/solutions/W3D2_Tutorial1_Solution_ce2e3bc5.py)\n\n*Example output:*\n\n\n\n\n\n## Exercise 5: Compute eigenvalues\n\nAs discussed above, for the case with $w=5.0$ and $I_{\text{ext}}=0.5$, the system displays **three** fixed points. However, when we simulated the dynamics and varied the initial conditions $r_{\rm init}$, we could only obtain **two** steady states. In this exercise, we will now check the stability of each of the three fixed points by calculating the corresponding eigenvalues with the function `eig_single`. Check the sign of each eigenvalue (i.e., the stability of each fixed point). 
How many of the fixed points are stable?\n\nNote that the expression for the eigenvalue at a fixed point $r^*$ is\n$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau$$\n\n\n```python\ndef eig_single(fp, tau, a, theta, w, I_ext, **other_pars):\n \"\"\"\n Args:\n fp : fixed point r_fp\n tau, a, theta, w, I_ext : Simulation parameters\n\n Returns:\n eig : eigenvalue of the linearized system\n \"\"\"\n #####################################################################\n ## TODO for students: compute eigenvalue and disable the error\n raise NotImplementedError(\"Student exercise: compute the eigenvalue\")\n ######################################################################\n # Compute the eigenvalue\n eig = ...\n\n return eig\n\n\n# Find the eigenvalues for all fixed points of Exercise 3\npars = default_pars_single(w=5, I_ext=.5)\nr_guess_vector = [0, .4, .9]\nx_fp = my_fp_finder(pars, r_guess_vector)\n\n# Uncomment below lines after completing the eig_single function.\n\n# for i, fp in enumerate(x_fp):\n# eig_fp = eig_single(fp, **pars)\n# print(f'Fixed point{i + 1} at {fp:.3f} with Eigenvalue={eig_fp:.3f}')\n```\n\n**SAMPLE OUTPUT**\n\n```\nFixed point1 at 0.042 with Eigenvalue=-0.583\nFixed point2 at 0.447 with Eigenvalue=0.498\nFixed point3 at 0.900 with Eigenvalue=-0.626\n```\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_DynamicNetworks/solutions/W3D2_Tutorial1_Solution_e285f60d.py)\n\n\n\n## Think! \nThroughout the tutorial, we have assumed $w > 0$, i.e., we considered a single population of **excitatory** neurons. What do you think will be the behavior of a population of inhibitory neurons, i.e., where $w > 0$ is replaced by $w < 0$? 
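Pulling Exercises 4 and 5 together, here is a hedged standalone sketch: it checks the analytic derivative of Equation (5) against a central finite difference, then evaluates the eigenvalue formula (4) at $r^* = 0.9$ (approximately a fixed point according to the sample output above). The plain-sigmoid `F` and the parameter values are assumptions for illustration; the tutorial's `F` subtracts a constant so that $F(0)=0$, which does not change the derivative.

```python
import numpy as np

a, theta, w, I_ext, tau = 1.2, 2.8, 5.0, 0.5, 1.0  # assumed parameters

def F(x):
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def dF(x):
    # Equation (5): a e^{-a(x-theta)} (1 + e^{-a(x-theta)})^{-2}
    e = np.exp(-a * (x - theta))
    return a * e / (1.0 + e) ** 2

# Check the analytic derivative against a central finite difference
x = np.linspace(0.0, 10.0, 101)
h = 1e-6
fd = (F(x + h) - F(x - h)) / (2 * h)
print(np.max(np.abs(fd - dF(x))))  # should be tiny

def eig_single(r_fp):
    """Eigenvalue of the linearized dynamics at a fixed point, Equation (4)."""
    return (-1.0 + w * dF(w * r_fp + I_ext)) / tau

print(eig_single(0.9))  # negative, i.e. a stable fixed point
```

A finite-difference check like this is a cheap way to catch sign or chain-rule mistakes before using the derivative in the stability analysis.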
\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_DynamicNetworks/solutions/W3D2_Tutorial1_Solution_7b1fce0e.py)\n\n\n\n---\n# Bonus 2: Noisy input drives the transition between two stable states\n\n\n\n## Ornstein-Uhlenbeck (OU) process\n\nAs discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows: \n\n$$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$\n\nExecute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process.\n\n\n```python\n# @title OU process `my_OU(pars, sig, myseed=False)`\n\n# @markdown Make sure you execute this cell to visualize the noise!\n\n\ndef my_OU(pars, sig, myseed=False):\n \"\"\"\n A function that generates an Ornstein-Uhlenbeck process\n\n Args:\n pars : parameter dictionary\n sig : noise amplitude\n myseed : random seed. int or boolean\n\n Returns:\n I_ou : Ornstein-Uhlenbeck input current\n \"\"\"\n\n # Retrieve simulation parameters\n dt, range_t = pars['dt'], pars['range_t']\n Lt = range_t.size\n tau_ou = pars['tau_ou'] # [ms]\n\n # set random seed\n if myseed:\n np.random.seed(seed=myseed)\n else:\n np.random.seed()\n\n # Initialize\n noise = np.random.randn(Lt)\n I_ou = np.zeros(Lt)\n I_ou[0] = noise[0] * sig\n\n # generate OU\n for it in range(Lt - 1):\n I_ou[it + 1] = (I_ou[it]\n + dt / tau_ou * (0. - I_ou[it])\n + np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1])\n\n return I_ou\n\n\npars = default_pars_single(T=100)\npars['tau_ou'] = 1. # [ms]\nsig_ou = 0.1\nI_ou = my_OU(pars, sig=sig_ou, myseed=2020)\nplt.figure(figsize=(10, 4))\nplt.plot(pars['range_t'], I_ou, 'r')\nplt.xlabel('t (ms)')\nplt.ylabel(r'$I_{\mathrm{OU}}$')\nplt.show()\n```\n\n## Example: Up-Down transition\n\nIn the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! 
Here, we stimulate an E population for 1,000 ms applying OU inputs.\n\n\n```python\n# @title Simulation of an E population with OU inputs\n\n# @markdown Make sure you execute this cell to spot the Up-Down states!\n\npars = default_pars_single(T=1000)\npars['w'] = 5.0\nsig_ou = 0.7\npars['tau_ou'] = 1. # [ms]\npars['I_ext'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020)\n\nr = simulate_single(pars)\n\nplt.figure(figsize=(10, 4))\nplt.plot(pars['range_t'], r, 'b', alpha=0.8)\nplt.xlabel('t (ms)')\nplt.ylabel(r'$r(t)$')\nplt.show()\n```\n\n# Plotting with Matplotlib\n\n## Prepare for action\n\n\n```\nimport numpy as np\nimport scipy as sp\nimport sympy\n\n# Pylab combines the pyplot functionality (for plotting) with the numpy\n# functionality (for mathematics and for working with arrays) in a single namespace.\n# It aims to provide a closer MATLAB feel (the easy way). Note that this approach\n# should only be used when doing some interactive quick and dirty data inspection.\n# DO NOT USE THIS FOR SCRIPTS\n#from pylab import *\n\n# the convenient Matplotlib plotting interface pyplot (the tidy/right way)\n# use this for building scripts. The examples here will all use pyplot.\nimport matplotlib.pyplot as plt\n\n# for using the matplotlib API directly (the hard and verbose way)\n# use this when building applications, and/or backends\nimport matplotlib as mpl\n```\n\nHow would you like the IPython notebook to show your plots? In order to use the\nmatplotlib IPython magic your IPython notebook should be launched as\n\n ipython notebook --matplotlib=inline\n\nMake plots appear as a pop up window, choose the backend: 'gtk', 'inline', 'osx', 'qt', 'qt4', 'tk', 'wx'\n \n %matplotlib qt\n \nor inline the notebook (no panning, zooming through the plot). Not working in IPython 0.x\n \n %matplotlib inline\n 
Hint: there is search box available!\n\n* http://matplotlib.org/contents.html\n\nThe Matplotlib API docs:\n\n* http://matplotlib.org/api/index.html\n\nPyplot, object oriented plotting:\n\n* http://matplotlib.org/api/pyplot_api.html\n* http://matplotlib.org/api/pyplot_summary.html\n\nExtensive gallery with examples:\n\n* http://matplotlib.org/gallery.html\n\n### Tutorials for those who want to start playing\n\nIf reading manuals is too much for you, there is a very good tutorial available here:\n\n* http://nbviewer.ipython.org/github/jrjohansson/scientific-python-lectures/blob/master/Lecture-4-Matplotlib.ipynb\n\nNote that this tutorial uses\n\n from pylab import *\n\nwhich is usually not adviced in more advanced script environments. When using\n \n import matplotlib.pyplot as plt\n\nyou need to preceed all plotting commands as used in the above tutorial with\n \n plt.\n\n\nGive me more!\n\n[EuroScipy 2012 Matlotlib tutorial](http://www.loria.fr/~rougier/teaching/matplotlib/). Note that here the author uses ```from pylab import * ```. When using ```import matplotliblib.pyplot as plt``` the plotting commands need to be proceeded with ```plt.```\n\n\n## Plotting template starting point\n\n\n```\n# some sample data\nx = np.arange(-10,10,0.1)\n```\n\nTo change the default plot configuration values.\n\n\n```\npage_width_cm = 13\ndpi = 200\ninch = 2.54 # inch in cm\n# setting global plot configuration using the RC configuration style\nplt.rc('font', family='serif')\nplt.rc('xtick', labelsize=12) # tick labels\nplt.rc('ytick', labelsize=20) # tick labels\nplt.rc('axes', labelsize=20) # axes labels\n# If you don\u2019t need LaTeX, don\u2019t use it. It is slower to plot, and text\n# looks just fine without. If you need it, e.g. 
for symbols, then use it.\n#plt.rc('text', usetex=True) #<- P-E: Doesn't work on my Mac\n```\n\n\n```\n# create a figure instance, note that figure size is given in inches!\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8,6))\n# set the big title (note aligment relative to figure)\nfig.suptitle(\"suptitle 16, figure alignment\", fontsize=16)\n\n# actual plotting\nax.plot(x, x**2, label=\"label 12\")\n\n\n# set axes title (note aligment relative to axes)\nax.set_title(\"title 14, axes alignment\", fontsize=14)\n\n# axes labels\nax.set_xlabel('xlabel 12')\nax.set_ylabel(r'$y_{\\alpha}$ 12', fontsize=8)\n\n# legend\nax.legend(fontsize=12, loc=\"best\")\n\n# saving the figure in different formats\nfig.savefig('figure-%03i.png' % dpi, dpi=dpi)\nfig.savefig('figure.svg')\nfig.savefig('figure.eps')\n```\n\n\n```\n# following steps are only relevant when using figures as pop up windows (with %matplotlib qt)\n# to update a figure with has been modified\nfig.canvas.draw()\n# show a figure\nfig.show()\n```\n\n## Exercise\n\nThe current section is about you trying to figure out how to do several plotting features. You should use the previously mentioned resources to find how to do that. In many cases, google is your friend!\n\n* add a grid to the plot\n\n\n\n```\nplt.plot(x,x**2)\nplt.grid(True)\n#Write code to show grid in plot here\nplt.show()\n```\n\n* change the location of the legend to different places\n\n\n\n```\nplt.plot(x,x**2, label=\"label 12\")\nplt.legend(fontsize=12, loc=\"upper center\")\nplt.show()\n```\n\n* find a way to control the line type and color, marker type and color, control the frequency of the marks (`markevery`). 
See plot options at: http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot \n\n\n\n```\nplt.plot(x,x**2 ,ls='-',color=\"green\", lw=2, marker='*', markersize=15,markevery=5)\nplt.show()\n```\n\n* add different sub-plots\n\n\n\n```\nfig, ax = plt.subplots(nrows=3, ncols=1)\nax[0].plot(x,x**2)\nax[1].plot(x, x**2, x, x**3)\nax[1].axis('tight')\nax[1].set_title(\"tight axes\")\nax[2].plot(x, 5*np.sqrt(x), x, 10*x**3)\nax[2].axis('tight')\nax[2].set_title(\"tight axes\")\nplt.show()\n```\n\n* size the figure such that when included on an A4 page the fonts are given in their true size\n\n\n\n```\nfig, ax = plt.subplots(nrows=3, ncols=1)\nax[0].plot(x,x**2)\nax[1].plot(x, x**2, x, x**3)\nax[1].axis('tight')\nax[1].set_title(\"tight axes\")\nax[2].plot(x, 5*np.sqrt(x), x, 10*x**3)\nax[2].axis('tight')\nax[2].set_title(\"tight axes\")\nfig.set_size_inches(11.69,8.27)\nplt.show()\n```\n\n* make a contour plot\n\n\n\n```\nimport matplotlib.mlab as mlab\nX, Y = np.meshgrid(x,x)\nZ1 = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)\nZ2 = mlab.bivariate_normal(X, Y, 1.5, 0.5, 1, 1)\n# difference of Gaussians\nZ = 10.0 * (Z2 - Z1)\nplt.figure()\nCS = plt.contour(X, Y, Z)\nplt.title('Simplest default with labels')\nplt.show()\n```\n\n* use twinx() to create a second axis on the right for the second plot\n\n\n\n```\nfig, ax1=plt.subplots()\nax1.plot(x,x**2)\nax2=ax1.twinx()\nax2.plot(x,x**4, 'r')\nplt.show()\n```\n\n* add horizontal and vertical lines using axvline(), axhline()\n\n\n\n```\nplt.plot(x,x**2)\nplt.axvline(x=0, ymin=0, ymax=1)\nplt.axhline(y=10, xmin=-1, xmax=1)\nplt.show()\n```\n\n* autoformat dates for nice printing on the x-axis using fig.autofmt_xdate()\n\n\n```\nimport datetime\ndates = np.array([datetime.datetime.now() + datetime.timedelta(days=i) for i in range(24)])\nfig, ax = plt.subplots(nrows=1, ncols=1)\nplt.plot(dates, list(range(24)))\nfig.autofmt_xdate(bottom=0.2,rotation=90,ha='right')\nplt.show()\n```\n\n## Advanced exercises\n\nWe are going to 
play a bit with regression\n\n* Create a vector x of equally spaced numbers between $x \in [0, 5\pi]$ of 1000 points (keyword: linspace)\n\n\n```\nx=np.linspace(0,5*np.pi,1000)\n```\n\n* create a vector y, so that y=sin(x) with some random noise\n\n\n```\n#print([np.random.random()-0.5 for r in range(1000)])\nnoise= [np.random.random()-0.5 for r in range(1000)]\ny=np.sin(x)+noise\n\n```\n\n* plot it like this: \n\n\n```\nplt.plot(x,y,ls='',color=\"blue\", lw=2, marker='o', markersize=5, label=\"y=sin(x)\")\nplt.plot(x,np.sin(x),ls='--',color=\"black\")\nplt.legend(fontsize=12, loc=\"upper right\")\nplt.show()\n```\n\nTry to do a polynomial fit on y(x) with different polynomial degrees (use numpy.polyfit to obtain the coefficients)\n\nPlot it like this (use np.poly1d(coef)(x) to plot polynomials) \n\n\n\n```\nfig, ax = plt.subplots()\nax.plot(x,y,ls='',color=\"blue\", lw=2, marker='o', markersize=5)\nax.plot(x,np.sin(x),ls='--',color=\"black\",label=\"y=sin(x)\")\n\nfor i in range(10):\n p=np.poly1d(np.polyfit(x,y,i))\n leg=\"deg=\" + str(i)\n ax.plot(x,p(x), lw=1,label=leg)\n \n# Shrink current axis by 20%\nbox = ax.get_position()\nax.set_position([box.x0, box.y0, box.width * 0.8, box.height])\nax.legend(loc='center left', bbox_to_anchor=(1, 0.5))\nplt.show()\n```\n\n\n```\n\n```\n\n###### Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License \u00a9 2017 L.A. Barba, N.C. Clementi\n\n\n```python\n# Execute this cell to load the notebook's style sheet, then ignore it\nfrom IPython.core.display import HTML\ncss_file = '../style/custom.css'\nHTML(open(css_file, \"r\").read())\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n# Linear regression with real data\n\n## Earth temperature over time\n\nIn this lesson, we will apply all that we've learned (and more) to analyze real data of Earth temperature over time.\n\nIs global temperature rising? How much? This is a question of burning importance in today's world!\n\nData about global temperatures are available from several sources: NASA, the National Climatic Data Center (NCDC) and the University of East Anglia in the UK. 
Check out the [University Corporation for Atmospheric Research](https://www2.ucar.edu/climate/faq/how-much-has-global-temperature-risen-last-100-years) (UCAR) for an in-depth discussion.\n\nThe [NASA Goddard Space Flight Center](http://svs.gsfc.nasa.gov/goto?3901) is one of our sources of global climate data. They produced the video below showing a color map of the changing global surface **temperature anomalies** from 1880 to 2015.\n\nThe term [global temperature anomaly](https://www.ncdc.noaa.gov/monitoring-references/faq/anomalies.php) means the difference in temperature with respect to a reference value or a long-term average. It is a very useful way of looking at the problem and in many ways better than absolute temperature. For example, a winter month may be colder than average in Washington DC, and also in Miami, but the absolute temperatures will be different in both places.\n\n\n```python\nfrom IPython.display import YouTubeVideo\nYouTubeVideo('gGOzHVUQCw0')\n```\n\n\n\n\n\n\n\n\n\n\nHow would we go about understanding the _trends_ from the data on global temperature?\n\nThe first step in analyzing unknown data is to generate some simple plots using **Matplotlib**. We are going to look at the temperature-anomaly history, contained in a file, and make our first plot to explore this data. \n\nWe are going to smooth the data and then we'll fit a line to it to find a trend, plotting along the way to see how it all looks.\n\nLet's get started!\n\n## Step 1: Read a data file\n\nWe took the data from the [NOAA](https://www.ncdc.noaa.gov/cag/) (National Oceanic and Atmospheric Administration) webpage. Feel free to play around with the webpage and analyze data on your own, but for now, let's make sure we're working with the same dataset.\n\n\nWe have a file named `land_global_temperature_anomaly-1880-2016.csv` in our `data` folder. 
This file contains the year in the first column and the average land temperature anomaly in the second column, for the years 1880 to 2016. We will load the file, then make an initial plot to see what it looks like.\n\n\n##### Note:\n\nIf you downloaded this notebook alone, rather than the full collection for this course, you may not have the data file at the location we assume below. In that case, you can download the data if you add a code cell, and execute the following code in it:\n\n```python\nfrom urllib.request import urlretrieve\nURL = 'http://go.gwu.edu/engcomp1data5?accessType=DOWNLOAD'\nurlretrieve(URL, 'land_global_temperature_anomaly-1880-2016.csv')\n```\nThe data file will be downloaded to your working directory, and you will then need to remove the path information, i.e., the string `'data/'`, from the definition of the variable `fname` below.\n\nLet's start by importing NumPy.\n\n\n```python\nimport numpy\n```\n\nTo load our data from the file, we'll use the function [`numpy.loadtxt()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html), which lets us immediately save the data into NumPy arrays. (We encourage you to read the documentation for details on how the function works.) Here, we'll save the data into the arrays `year` and `temp_anomaly`. \n\n\n```python\nfname = 'data/land_global_temperature_anomaly-1880-2016.csv'\n\nyear, temp_anomaly = numpy.loadtxt(fname, delimiter=',', skiprows=5, unpack=True)\n```\n\n##### Exercise\n\nInspect the data by printing `year` and `temp_anomaly`.\n\n## Step 2: Plot the data\n\nLet's first load the **Matplotlib** module called `pyplot`, for making 2D plots. Remember that to get the plots inside the notebook, we use a special \"magic\" command, `%matplotlib inline`:\n\n\n```python\nfrom matplotlib import pyplot\n%matplotlib inline\n```\n\nThe `plot()` function of the `pyplot` module makes simple line plots. 
We avoid the clutter that appeared on top of the figure, that `Out[x]: [< ...>]` ugliness, by adding a semicolon at the end of the plotting command.\n\n\n```python\npyplot.plot(year, temp_anomaly);\n```\n\nNow we have a line plot, but if you saw this plot without any context you would not be able to figure out what kind of data it shows! We need labels on the axes, a title, and, while we're at it, a better color, font, and tick size. \n**Publication quality** plots should always be your standard for plotting. \nHow you present your data will allow others (and probably you in the future) to better understand your work. \n\nWe can customize the style of our plots using **Matplotlib**'s [`rcParams`](https://matplotlib.org/api/matplotlib_configuration_api.html#matplotlib.rcParams). It lets us set some style options that apply to all the plots we create in the current session.\nHere, we'll make the font of a specific size and type. You can also customize other parameters like line width, color, and so on (check out the documentation).\n\n\n```python\nfrom matplotlib import rcParams\nrcParams['font.family'] = 'serif'\nrcParams['font.size'] = 16\n```\n\nWe'll redo the same plot, but now we'll add a few things to make it prettier and **publication quality**. We'll add a title, label the axes, and show a background grid. Study the commands below and look at the result!\n\n\n```python\n#You can set the size of the figure by doing:\npyplot.figure(figsize=(10,5))\n\n#Plotting\npyplot.plot(year, temp_anomaly, color='#2929a3', linestyle='-', linewidth=1) \npyplot.title('Land global temperature anomalies. \\n')\npyplot.xlabel('Year')\npyplot.ylabel('Land temperature anomaly [\u00b0C]')\npyplot.grid();\n```\n\nBetter, no? Feel free to play around with the parameters and see how the plot changes. There's nothing like trial and error to get the hang of it. 
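To experiment with these style settings even without the course data file, here is a self-contained sketch on synthetic stand-in data (the fake trend, the file name `styled_plot.png`, and all style values are arbitrary choices of ours, not part of the lesson):

```python
import numpy
import matplotlib
matplotlib.use('Agg')   # render off-screen, e.g. when running outside a notebook
from matplotlib import pyplot, rcParams

rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16

# Synthetic stand-in for the anomaly data: a noisy warming trend
year = numpy.arange(1880, 2017)
fake_anomaly = 0.01 * (year - 1948) + 0.1 * numpy.random.randn(year.size)

fig = pyplot.figure(figsize=(10, 5))
pyplot.plot(year, fake_anomaly, color='#2929a3', linestyle='-', linewidth=1)
pyplot.title('Synthetic anomaly data\n')
pyplot.xlabel('Year')
pyplot.ylabel('Temperature anomaly [°C]')
pyplot.grid()
fig.savefig('styled_plot.png')  # a saved bitmap is handy for publications
```

Swapping in the real `year` and `temp_anomaly` arrays reproduces the lesson's figure.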
\n\n## Step 3: Least-squares linear regression \n\nIn order to have an idea of the general behavior of our data, we can find a smooth curve that (approximately) fits the points. We generally look for a curve that's simple (e.g., a polynomial), and does not reproduce the noise that's always present in experimental data. \n\nLet $f(x)$ be the function that we'll fit to the $n+1$ data points: $(x_i, y_i)$, $i = 0, 1, ... ,n$:\n\n$$ \n f(x) = f(x; a_0, a_1, ... , a_m) \n$$\n\nThe notation above means that $f$ is a function of $x$, with $m+1$ variable parameters $a_0, a_1, ... , a_m$, where $m < n$. We need to choose the form of $f(x)$ a priori, by inspecting the experimental data and knowing something about the phenomenon we've measured. Thus, curve fitting consists of two steps: \n\n1. Choosing the form of $f(x)$.\n2. Computing the parameters that will give us the \"best fit\" to the data. \n\n\n### What is the \"best\" fit?\n\nWhen the noise in the data is limited to the $y$-coordinate, it's common to use a **least-squares fit**, which minimizes the function\n\n$$\n\\begin{equation} \n S(a_0, a_1, ... , a_m) = \\sum_{i=0}^{n} [y_i - f(x_i)]^2\n\\end{equation} \n$$\n\nwith respect to each $a_j$. We find the values of the parameters for the best fit by solving the following equations:\n\n$$\n\\begin{equation}\n \\frac{\\partial{S}}{\\partial{a_k}} = 0, \\quad k = 0, 1, ... , m.\n\\end{equation}\n$$\n\nHere, the terms $r_i = y_i - f(x_i)$ are called residuals: they tell us the discrepancy between the data and the fitting function at $x_i$. \n\nTake a look at the function $S$: what we want to minimize is the sum of the squares of the residuals. The equations (2) are generally nonlinear in $a_j$ and might be difficult to solve. Therefore, the fitting function is commonly chosen as a linear combination of specified functions $f_j(x)$, \n\n$$\n\\begin{equation*}\n f(x) = a_0f_0(x) + a_1f_1(x) + ... 
+ a_mf_m(x)\n\\end{equation*}\n$$\n\nwhich results in equations (2) being linear. In the case that the fitting function is polynomial, we have $f_0(x) = 1, \\; f_1(x) = x, \\; f_2(x) = x^2$, and so on. \n\n### Linear regression \n\nWhen we talk about linear regression we mean \"fitting a straight line to the data.\" Thus,\n\n$$\n\\begin{equation}\n f(x) = a_0 + a_1x\n\\end{equation}\n$$\n\nIn this case, the function that we'll minimize is:\n\n$$\n\\begin{equation}\n S(a_0, a_1) = \\sum_{i=0}^{n} [y_i - f(x_i)]^2 = \\sum_{i=0}^{n} (y_i - a_0 - a_1x_i)^2 \n\\end{equation} \n$$\n\nEquations (2) become:\n\n$$\n\\begin{equation}\n \\frac{\\partial{S}}{\\partial{a_0}} = \\sum_{i=0}^{n} -2(y_i - a_0 - a_1x_i) = 2 \\left[ a_0(n+1) + a_1\\sum_{i=0}^{n} x_i - \\sum_{i=0}^{n} y_i \\right] = 0\n\\end{equation} \n$$\n\n$$\n\\begin{equation}\n \\frac{\\partial{S}}{\\partial{a_1}} = \\sum_{i=0}^{n} -2(y_i - a_0 - a_1x_i)x_i = 2 \\left[ a_0\\sum_{i=0}^{n} x_i + a_1\\sum_{i=0}^{n} x_{i}^2 - \\sum_{i=0}^{n} x_iy_i \\right] = 0\n\\end{equation} \n$$\n\nLet's divide both equations by $2(n+1)$ and rearrange terms.\n\nRearranging (6):\n\n$$\n\\begin{align}\n 2 \\left[ a_0(n+1) + a_1\\sum_{i=0}^{n} x_i - \\sum_{i=0}^{n} y_i \\right] &= 0 \\nonumber \\\\ \n \\frac{a_0(n+1)}{n+1} + a_1 \\frac{\\sum_{i=0}^{n} x_i}{n+1} - \\frac{\\sum_{i=0}^{n} y_i}{n+1} &= 0 \\\\\n\\end{align}\n$$\n\n$$\n\\begin{align}\n a_0 = \\bar{y} - a_1\\bar{x}\n\\end{align}\n$$\n\nwhere $\\bar{x} = \\frac{\\sum_{i=0}^{n} x_i}{n+1}$ and $\\bar{y} = \\frac{\\sum_{i=0}^{n} y_i}{n+1}$.\n\nRearranging (7):\n\n$$\n\\begin{align}\n 2 \\left[ a_0\\sum_{i=0}^{n} x_i + a_1\\sum_{i=0}^{n} x_{i}^2 - \\sum_{i=0}^{n} x_iy_i \\right] &= 0 \\\\\n a_0\\sum_{i=0}^{n} x_i + a_1\\sum_{i=0}^{n} x_{i}^2 - \\sum_{i=0}^{n} x_iy_i &=0 \\\\\n\\end{align}\n$$\n\nNow, if we replace $a_0$ from equation (8) into (9) and rearrange terms:\n\n$$\n\\begin{align*}\n (\\bar{y} - a_1\\bar{x})\\sum_{i=0}^{n} x_i + a_1\\sum_{i=0}^{n} x_{i}^2 - 
\\sum_{i=0}^{n} x_iy_i &= 0 \\\\ \n\\end{align*}\n$$\n\nReplacing the definitions of the mean values into the equation, \n\n$$\n\\begin{align*}\n \\left[\\frac{1}{n+1}\\sum_{i=0}^{n} y_i - \\frac{a_1}{n+1}\\sum_{i=0}^{n} x_i \\right]\\sum_{i=0}^{n} x_i + a_1\\sum_{i=0}^{n} x_{i}^2 - \\sum_{i=0}^{n} x_iy_i &= 0 \\\\ \n \\frac{1}{n+1}\\sum_{i=0}^{n} y_i \\sum_{i=0}^{n} x_i - \\frac{a_1}{n+1}\\sum_{i=0}^{n} x_i \\sum_{i=0}^{n} x_i + a_1\\sum_{i=0}^{n} x_{i}^2 - \\sum_{i=0}^{n} x_iy_i &= 0 \\\\ \n\\end{align*}\n$$\n\nLeaving everything in terms of $\\bar{x}$, \n\n$$\n\\begin{align*}\n \\sum_{i=0}^{n} y_i \\bar{x} - a_1\\sum_{i=0}^{n} x_i \\bar{x} + a_1\\sum_{i=0}^{n} x_{i}^2 - \\sum_{i=0}^{n} x_iy_i = 0 \n\\end{align*}\n$$\n\nGrouping the terms that have $a_1$ on the left-hand side and the rest on the right-hand side:\n\n$$\n\\begin{align*}\n a_1\\left[ \\sum_{i=0}^{n} x_{i}^2 - \\sum_{i=0}^{n} x_i \\bar{x}\\right] &= \\sum_{i=0}^{n} x_iy_i - \\sum_{i=0}^{n} y_i \\bar{x} \\\\\n a_1 \\sum_{i=0}^{n} (x_{i}^2 - x_i \\bar{x}) &= \\sum_{i=0}^{n} (x_iy_i - y_i \\bar{x}) \\\\\n a_1 \\sum_{i=0}^{n} x_{i}(x_{i} -\\bar{x}) &= \\sum_{i=0}^{n} y_i(x_i - \\bar{x}) \n\\end{align*}\n$$\n\nFinally, we get that:\n\n$$\n\\begin{align}\n a_1 = \\frac{ \\sum_{i=0}^{n} y_{i} (x_i - \\bar{x})}{\\sum_{i=0}^{n} x_i (x_i - \\bar{x})}\n\\end{align}\n$$\n\nThen our coefficients are:\n\n$$\n\\begin{align}\n a_1 = \\frac{ \\sum_{i=0}^{n} y_{i} (x_i - \\bar{x})}{\\sum_{i=0}^{n} x_i (x_i - \\bar{x})} \\quad , \\quad a_0 = \\bar{y} - a_1\\bar{x}\n\\end{align}\n$$\n\n### Let's fit!\n\nLet's now fit a straight line through the temperature-anomaly data, to see the trend over time. We'll use least-squares linear regression to find the slope and intercept of a line \n\n$$y = a_1x+a_0$$\n\nthat fits our data.\n\nIn our case, the `x`-data corresponds to `year`, and the `y`-data is `temp_anomaly`. To calculate our coefficients with the formula above, we need the mean values of our data. 
Since we'll need to compute the mean for both `x` and `y`, it could be useful to write a custom Python _function_ that computes the mean for any array, and we can then reuse it.\n\nIt is good coding practice to *avoid repeating* ourselves: we want to write code that is reusable, not only because it leads to less typing but also because it reduces errors. If you find yourself doing the same calculation multiple times, it's better to encapsulate it into a *function*. \n\nRemember the _key concept_ from [Lesson 1](http://go.gwu.edu/engcomp1lesson1): A function is a compact collection of code that executes some action on its arguments. \n\nOnce *defined*, you can *call* a function as many times as you want. When we *call* a function, we execute all the code inside the function. The result of the execution depends on the *definition* of the function and on the values that are *passed* into it as *arguments*. Functions might or might not *return* values in their last operation. \n\nThe syntax for defining custom Python functions is:\n\n```python\ndef function_name(arg_1, arg_2, ...):\n    '''\n    docstring: description of the function\n    '''\n    \n```\n\nThe **docstring** of a function is a message from the programmer documenting what he or she built. Docstrings should be descriptive and concise. They are important because they explain (or remind) the intended use of the function to the users. You can later access the docstring of a function using the function `help()` and passing the name of the function. If you are in a notebook, you can also prepend a question mark `'?'` before the name of the function and run the cell to display the function's documentation. 
\n\nTry it!\n\n\n```python\n?print\n```\n\nUsing the `help` function instead:\n\n\n```python\nhelp(print)\n```\n\n Help on built-in function print in module builtins:\n \n print(...)\n print(value, ..., sep=' ', end='\\n', file=sys.stdout, flush=False)\n \n Prints the values to a stream, or to sys.stdout by default.\n Optional keyword arguments:\n file: a file-like object (stream); defaults to the current sys.stdout.\n sep: string inserted between values, default a space.\n end: string appended after the last value, default a newline.\n flush: whether to forcibly flush the stream.\n \n\n\nLet's define a custom function that calculates the mean value of any array. Study the code below carefully. \n\n\n```python\ndef mean_value(array):\n \"\"\" Calculate the mean value of an array \n \n Arguments\n ---------\n array: Numpy array \n \n Returns\n ------- \n mean: mean value of the array\n \"\"\"\n sum_elem = 0\n for element in array:\n sum_elem += element # this is the same as sum_elem = sum_elem + element\n \n mean = sum_elem / len(array)\n \n return mean\n \n```\n\nOnce you execute the cell above, the function`mean_value()` becomes available to use on any argument of the correct type. This function works on arrays of any length. We can try it now with our data.\n\n\n```python\nyear_mean = mean_value(year)\nprint(year_mean)\n```\n\n 1948.0\n\n\n\n```python\ntemp_anomaly_mean = mean_value(temp_anomaly)\nprint(temp_anomaly_mean)\n```\n\n 0.0526277372263\n\n\nNeat! You learned how to write a Python function, and we wrote one for computing the mean value of an array of numbers. 
We didn't have to, though, because NumPy has a built-in function to do just what we needed: [`numpy.mean()`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.mean.html).\n\n\n##### Exercise \n\nCalculate the mean of the `year` and `temp_anomaly` arrays using the NumPy built-in function, and compare the results with the ones obtained using our custom `mean_value` function.\n\n\n```python\n\n```\n\nNow that we have mean values, we can compute our coefficients by following equations (12). We first calculate $a_1$ and then use that value to calculate $a_0$.\n\nOur coefficients are:\n\n$$\n a_1 = \\frac{ \\sum_{i=0}^{n} y_{i} (x_i - \\bar{x})}{\\sum_{i=0}^{n} x_i (x_i - \\bar{x})} \\quad , \\quad a_0 = \\bar{y} - a_1\\bar{x}\n$$ \n\n\nWe already calculated the mean values of the data arrays, but the formula requires two sums over new derived arrays. Guess what, NumPy has a built-in function for that: [`numpy.sum()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.sum.html). Study the code below.\n\n\n```python\na_1 = numpy.sum(temp_anomaly*(year - year_mean)) / numpy.sum(year*(year - year_mean)) \n```\n\n\n```python\nprint(a_1)\n```\n\n 0.0103702839435\n\n\n\n```python\na_0 = temp_anomaly_mean - a_1*year_mean\n```\n\n\n```python\nprint(a_0)\n```\n\n -20.1486853847\n\n\n##### Exercise\n\nWrite a function that computes the coefficients, call the function to compute them and compare the result with the values we obtained before. As a hint, we give you the structure that you should follow:\n\n```python\ndef coefficients(x, y, x_mean, y_mean):\n \"\"\"\n Write docstrings here\n \"\"\"\n\n a_1 = \n a_0 = \n \n return a_1, a_0\n```\n\nWe now have the coefficients of a linear function that best fits our data. With them, we can compute the predicted values of temperature anomaly, according to our fit. Check again the equations above: the values we are going to compute are $f(x_i)$. 
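One possible completion of the exercise skeleton above (a sketch — the function body is ours, built from the NumPy sums just demonstrated, and the check data is synthetic):

```python
import numpy

def coefficients(x, y, x_mean, y_mean):
    """
    Return the least-squares coefficients (a_1, a_0) of the line
    y = a_0 + a_1*x, following equations (12).
    """
    a_1 = numpy.sum(y * (x - x_mean)) / numpy.sum(x * (x - x_mean))
    a_0 = y_mean - a_1 * x_mean
    return a_1, a_0

# Sanity check on synthetic data that lies exactly on y = 2x + 1
x = numpy.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
a_1, a_0 = coefficients(x, y, numpy.mean(x), numpy.mean(y))
print(a_1, a_0)  # → 2.0 1.0
```

Calling `coefficients(year, temp_anomaly, year_mean, temp_anomaly_mean)` should reproduce the values of `a_1` and `a_0` computed earlier.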
\n\nLet's call `reg` the array obtained from evaluating $f(x_i)$ for all years.\n\n\n```python\nreg = a_0 + a_1 * year\n```\n\nWith the values from our linear regression, we can plot the fitted line on top of the original data to see how they look together. Study the code below. \n\n\n```python\npyplot.figure(figsize=(10, 5))\n\npyplot.plot(year, temp_anomaly, color='#2929a3', linestyle='-', linewidth=1, alpha=0.5) \npyplot.plot(year, reg, 'k--', linewidth=2, label='Linear regression')\npyplot.xlabel('Year')\npyplot.ylabel('Land temperature anomaly [\u00b0C]')\npyplot.legend(loc='best', fontsize=15)\npyplot.grid();\n```\n\n## Step 4: Apply regression using NumPy\n\nAbove, we coded linear regression from scratch. But, guess what: we didn't have to because NumPy has built-in functions that do what we need!\n\nYes! Python and NumPy are here to help! With [`polyfit()`](https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.polyfit.html), we get the slope and $y$-intercept of the line that best fits the data. 
With [`poly1d()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.poly1d.html), we can build the linear function from its slope and $y$-intercept.\n\nCheck it out:\n\n\n```python\n# First fit with NumPy, then name the coefficients obtained a_1n, a_0n:\na_1n, a_0n = numpy.polyfit(year, temp_anomaly, 1)\n\nf_linear = numpy.poly1d((a_1n, a_0n)) \n```\n\n\n```python\nprint(a_1n)\n```\n\n    0.0103702839435\n\n\n\n```python\nprint(a_0n)\n```\n\n    -20.1486853847\n\n\n\n```python\nprint(f_linear)\n```\n\n     \n    0.01037 x - 20.15\n\n\n\n```python\npyplot.figure(figsize=(10, 5))\n\npyplot.plot(year, temp_anomaly, color='#2929a3', linestyle='-', linewidth=1, alpha=0.5) \npyplot.plot(year, f_linear(year), 'k--', linewidth=2, label='Linear regression')\npyplot.xlabel('Year')\npyplot.ylabel('Land temperature anomaly [\u00b0C]')\npyplot.legend(loc='best', fontsize=15)\npyplot.grid();\n```\n\n## \"Split regression\"\n\nIf you look at the plot above, you might notice that around 1970 the temperature starts increasing faster than the previous trend. So maybe one single straight line does not give us a good-enough fit.\n\nWhat if we break the data in two (before and after 1970) and do a linear regression in each segment? \n\nTo do that, we first need to find the position in our `year` array where the year 1970 is located. Thankfully, NumPy has a function called [`numpy.where()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html) that can help us. We pass a condition and `numpy.where()` tells us where in the array the condition is `True`. \n\n\n\n```python\nnumpy.where(year==1970)\n```\n\n\n\n\n    (array([90]),)\n\n\n\nTo split the data, we use the powerful instrument of _slicing_ with the colon notation. Remember that a colon between two indices indicates a range of values from a `start` to an `end`. The rule is that `[start:end]` includes the element at index `start` but excludes the one at index `end`. 
For example, to grab the first 3 years in our `year` array, we do:\n\n\n```python\nyear[0:3]\n```\n\n\n\n\n    array([ 1880., 1881., 1882.])\n\n\n\nNow we know how to split our data in two sets, to get two regression lines. We need two slices of the arrays `year` and `temp_anomaly`, which we'll save in new variable names below. After that, we complete two linear fits using the helpful NumPy functions we learned above.\n\n\n```python\nyear_1 , temp_anomaly_1 = year[0:90], temp_anomaly[0:90]\nyear_2 , temp_anomaly_2 = year[90:], temp_anomaly[90:]\n\nm1, b1 = numpy.polyfit(year_1, temp_anomaly_1, 1)\nm2, b2 = numpy.polyfit(year_2, temp_anomaly_2, 1)\n\nf_linear_1 = numpy.poly1d((m1, b1))\nf_linear_2 = numpy.poly1d((m2, b2))\n```\n\n\n```python\npyplot.figure(figsize=(10, 5))\n\npyplot.plot(year, temp_anomaly, color='#2929a3', linestyle='-', linewidth=1, alpha=0.5) \npyplot.plot(year_1, f_linear_1(year_1), 'g--', linewidth=2, label='1880-1969')\npyplot.plot(year_2, f_linear_2(year_2), 'r--', linewidth=2, label='1970-2016')\n\npyplot.xlabel('Year')\npyplot.ylabel('Land temperature anomaly [\u00b0C]')\npyplot.legend(loc='best', fontsize=15)\npyplot.grid();\n```\n\nWe have two different curves for two different parts of our data set. A little problem with this is that the end point of our first regression doesn't match the starting point of the second regression. We did this for the purpose of learning, but it is not rigorously correct. We'll fix it in the next course module when we learn more about different types of regression. \n\n## We learned:\n\n* Making our plots more beautiful\n* Defining and calling custom Python functions\n* Applying linear regression to data\n* NumPy built-ins for linear regression\n* The Earth is warming up!!!\n\n## References\n\n1. [_Essential skills for reproducible research computing_](https://barbagroup.github.io/essential_skills_RRC/) (2017). Lorena A. Barba, Natalia C. Clementi, Gilbert Forsyth. \n2. 
_Numerical Methods in Engineering with Python 3_ (2013). Jaan Kiusalaas. Cambridge University Press.\n3. _Effective Computation in Physics: Field Guide to Research with Python_ (2015). Anthony Scopatz & Kathryn D. Huff. O'Reilly Media, Inc.
```python\nfrom IPython.display import display, Math, Latex, HTML\nimport ipywidgets as widgets\n```\n\n```python\ntry:\n    from geogebra.ggb import *\nexcept ImportError:\n    !pip install --upgrade --force-reinstall --user git+git://github.com/callysto/nbplus.git#egg=geogebra\\&subdirectory=geogebra\n    from importlib import reload\n    import site\n    reload(site)\n    from geogebra.ggb import *\nggb = GGB().setDefaultOptions(enableShiftDragZoom=False)\n```\n\n# Inductive Reasoning and Deductive Reasoning\n\nThere are two forms of reasoning that are useful when investigating a piece of mathematics.\n\n* `Inductive reasoning` involves looking for **patterns** in evidence in order to come up with conjectures (i.e. things that are likely to be true). This sort of reasoning will **not** tell you whether or not something actually *is* true but it is still very useful for making connections and figuring out what to investigate next.\n\n* `Deductive reasoning` involves starting with what you **know** and logically figuring out if some conjecture **must** also be true (and why). While deductive reasoning is stronger than inductive reasoning, it can also be more difficult to use.\n\nIn practice, one will often use `inductive reasoning` to make conjectures and `deductive reasoning` to verify them. In some cases producing a conjecture will require a mix of inductive and deductive reasoning.\n\nIn this notebook we will go over some example problems to help illustrate how one would go about using `inductive` and `deductive` reasoning in problem solving while avoiding pitfalls. Being able to apply these skills will make you a more effective problem solver. Being able to distinguish between the two will help you maintain a clear understanding of what you're doing, why you're doing it, and avoid mistakes in the process.\n\n## A Flawed Application of Inductive Reasoning\n\nConsider a circle. Suppose we were to add a certain number of dots to the edge of the circle and then draw chords connecting every pair of dots. The chords would partition the circle into a certain number of regions. 
*Can we find a relationship between the number of dots and the number of regions?* For instance, a circle with a two dots along the edge has one chord connecting them and is partitioned into two regions but a circle with three dots along the edge has three chords and is partitioned into four regions.\n\nHere is a simple applet to help visualize the problem. The slider at the bottom controls the number of dots along the edge (from $1$ to $6$). I've labeled each region with a number to make them easier to count.\n\n\n```python\n%%html\n\n```\n\n\n\n\n\n\nIf we look at the number of regions in the first five examples\n\n| dots | regions |\n| ---- | ------- |\n| 1 | 1 |\n| 2 | 2 |\n| 3 | 4 |\n| 4 | 8 |\n| 5 | 16 |\n\nand apply inductive reasoning then we may convince ourselves that the circle with $n$ dots will have $2^{n-1}$ regions.\n\n**Is that true?** *Note: If our inductive reasoning is correct then the sixth case will have $2^{6-1}=32$ regions.*\n\nUnfortunately, as it turns out the sixth circle will break down into $31$ regions, not $32$ regions. This is an example where inductive reasoning can lead you astray. Fortunately for us we managed to find a counterexample right away but there are conjectures where the first counterexample took decades to find and required numbers so large that it's virtually impossible for people to find them by hand. So we should always be very skeptical about the things inductive thinking may lead us to believe.\n\n## Some Flawed Applications of Deductive Reasoning\n\nDeductive reasoning can also fail us if we are not careful. It is possible to get caught up manipulating equations and not realize there's an underlying logical problem.\n\nHere are two flawed proofs. Try and find the problem in each one!\n\n### A Classic Flawed Proof\n\nThere are a few variations of this proof (with the same flaw) floating around. 
Every so often a student rediscovers it and thinks they've broken math.\n\nLet $a=b$.\n\nThen it follows that $b^2=ab$.\n\n\n$$\n\\begin{align*}\n a^2 - b^2 &= a^2 - b^2\\\\\n a^2 - b^2 &= a^2 - ab \\tag{Since $b^2=ab$}\\\\\n (a+b)(a-b) &= (a)(a-b) \\tag{Factoring}\\\\\n a+b &= a \\tag{Divide both sides by $a-b$}\\\\\n 2a &= a \\tag{Since $a=b$}\\\\\n 2 &= 1 \\tag{Divide both sides by $a$}\n\\end{align*}\n$$\n\n\nHint: The problem involves division.\n\nThe problem is introduced when both sides are divided by $a-b$ because $a-b=0$ and division by zero is not allowed (for reasons like this).\n\n### A Flawed Proof Involving Radicals\n\nThis one is somewhat less common but still interesting.\n\n\n$$\n\\begin{align*}\n -1 \n &= i^2 \\\\\n &= (i)(i) \\\\\n &= \\sqrt{-1}\\sqrt{-1} \\\\\n &= \\sqrt{(-1)(-1)} \\\\\n &= \\sqrt{1} \\\\\n &= 1\n\\end{align*}\n$$\n\n\nSo $-1=1$.\n\nHint: The problem involves distributing roots.\n\nThe problem occurs because $\\sqrt{ab}=\\sqrt{a}\\sqrt{b}$ only holds when $a$ and $b$ are both greater than or equal to $0$ (neither $a$ nor $b$ are allowed to be negative).\n\n## Some Applications of Inductive Reasoning\n\n### Sum of the first n odd integers\n\nSuppose you need to compute the sum of the first $100$ odd integers. You could do this directly but that likely wouldn't be very fun or interesting. Let's instead apply inductive reasoning to come up with a better way to do it.\n\nBefore we can start looking for patterns we'll first need to generate some examples (so that we can use them as evidence later). 
Let's do that in Python:\n\n\n```python\n# Create a list of odd integers from 1 to 20 (incrementing by 2 each time).\noddIntegers = range(1,20,2)\n\n# Print a nice heading.\nprint('| n | Odd | S(n)|')\nprint('------------------')\n\n# For each odd integer print the step, the integer, and the sum of all odd integers so far.\nstep = 0\noddSum = 0\nfor odd in oddIntegers:\n step = step + 1\n oddSum = oddSum + odd\n print('|{:3d} | {:3d} | {:3d} |'.format(step, odd, oddSum))\n```\n\n | n | Odd | S(n)|\n ------------------\n | 1 | 1 | 1 |\n | 2 | 3 | 4 |\n | 3 | 5 | 9 |\n | 4 | 7 | 16 |\n | 5 | 9 | 25 |\n | 6 | 11 | 36 |\n | 7 | 13 | 49 |\n | 8 | 15 | 64 |\n | 9 | 17 | 81 |\n | 10 | 19 | 100 |\n\n\nFor brevity we'll use $S(n)$ to refer to the __sum of the first $n$ odd integers__.\n\nThe code above gives us a list of the first $10$ odd integers as well as $S(n)$ for each one (eg. for $n=3$, the $3$rd odd is $5$ and $S(3) = 1 + 3 + 5 = 9$).\n\nNow look closely at the data and try to see if there is a pattern there. Maybe consider changing the 20 in `range(1,20,2)` to a larger value to obtain more examples.\n\nHint: $1+3+5=3^2$.\n\nA good conjecture might be that\n\n$$S(n)=n^2.$$\n\nHere is a slider that tests our conjecture against a larger range of values:\n\n\n```python\nnum = widgets.IntSlider(description='n:', min=1)\ndef oddCompare(num):\n oddIntegers = range(1, num*2, 2)\n oddSum = sum(list(oddIntegers))\n print('S(n): {}'.format(oddSum))\n print('n^2: {}'.format((num*num)))\n\nout = widgets.interactive_output(oddCompare, {'num': num})\nwidgets.VBox([num, out])\n```\n\n\n VBox(children=(IntSlider(value=1, description='n:', min=1), Output()))\n\n\nNow that we have a conjecture it is typically very helpful if we're able to take it further and come up with some guesses about __why__ the conjecture holds. 
In this case the trick is to realize that we can compute the sum of the first $n$ odd integers by taking the sum of the first $n-1$ odd integers and adding the $n$'th odd. In other words:\n\n$$S(n+1)=S(n) + (n+1)^{\\text{th}} \\text{ odd integer}.$$\n\nFor instance, $S(5) = 1 + 3 + 5 + 7 + 9 = S(4) + 9$.\n\nThen combining this insight with the fact that we can represent square integers as squares yields this visualization:\n\n\n
https://www.quora.com/What-is-the-sum-of-1+3+5+7+9+11+13+15+17+19+21+23+25+27+29+31+-+95+97+99
\n\nUnfortunately as convincing as this visual representation may be, it isn't strong enough to prove that $S(n)=n^2$ holds for all integers $n$. In order to prove that it holds for all $n$ we require a more advanced proof technique that we don't currently have access to. So we must grit our teeth and accept the fact that _as far as we know_ there could exist some integer out there for which this fails. Despite this some people will accept the visual argument as a proof because it provides the key intuition necessary to develop a proof.\n\n### Triangular numbers\n\nThere is famous story about the mathematician Carl Friedrich Gauss who as a child in primary school was tasked with computing the sum of the first 100 integers as a way to keep him busy. As the story goes, Gauss quickly realized a pattern and wrote down the answer of 5050 within a few seconds.\n\nFor brevity we'll use $T(n)$ to refer to the __sum of the first $n$ integers__.\n\nThe trick to seeing the pattern in this problem isn't as straightforward as the last one. As before we'll need to generate some examples to analyze first.\n\n\n```python\n# Create a list of the first 10 integers.\nintegers = range(1,11)\n\n# Print a nice heading.\nprint('| n | T(n)|')\nprint('-------------')\n\n# For each integer print the integer and the sum of all integer so far.\ntSum = 0\nfor num in integers:\n tSum = tSum + num\n print('| {:3d} | {:3d} |'.format(num, tSum))\n```\n\n | n | T(n)|\n -------------\n | 1 | 1 |\n | 2 | 3 |\n | 3 | 6 |\n | 4 | 10 |\n | 5 | 15 |\n | 6 | 21 |\n | 7 | 28 |\n | 8 | 36 |\n | 9 | 45 |\n | 10 | 55 |\n\n\nUnfortunately this didn't turn out to be very insightful.\n\nAnother approach we can take is to try to represent the sum differently. Taking a cue from the previous section we'll draw the sum visually:\n\n\n\n
https://www.chegg.com/homework-help/use-ideas-behind-drawings-b-find-solution-gauss-s-problem-ex-chapter-1.1-problem-2a-solution-9780321987297-exc
\n\n\nIt is because of this representation that the sum of the first $n$ integers is often referred to as the $n$'th `triangle number`. The value of our sum is represented by the 'area' of its triangular representation. Now, while it may not be easy to compute the area of such a triangle, it is easy to compute the area of a rectangle, and we can produce one by setting two triangles face to face:\n\n\n
*(Image: two copies of the triangle placed face to face to form an $n\times(n+1)$ rectangle of dots; source: https://www.chegg.com/homework-help/use-ideas-behind-drawings-b-find-solution-gauss-s-problem-ex-chapter-1.1-problem-2a-solution-9780321987297-exc)*
\n\nThis representation suggests a good conjecture for computing the $n$th triangle number:\n$$T(n)=\frac{n(n+1)}{2}.$$\n\nUnfortunately we once again lack the advanced proof technique we need to prove (using deductive thinking) that this is true for all integers $n$. So, like before, we've managed to obtain a really good conjecture through inductive thinking but are not able to confirm with certainty whether or not it's true. Like the previous example, some people may accept this visual argument as a proof.\n\n### One Weird Trick\n\nFrom time to time neat computational tricks like this will go viral on social media. Unfortunately the people presenting them will typically only show a few flashy examples and leave the readers feeling completely mystified about __why__ the trick works (or worse, feeling betrayed when it fails).\n\n\n
*(Image: a viral mental-multiplication trick for numbers near 100; source: https://brightside.me/article/nine-simple-math-tricks-youll-wish-you-had-always-known-92805/)*
\n\nBefore we start, let's first rephrase what the picture is saying:\n\nTo compute $(97)(96)$:\n1. For each of our values, compute their difference from $100$:\n - $3=100-97$\n - $4=100-96$\n2. Multiply the differences to compute the last two digits of the result:\n - $12=(3)(4)$\n3. Add the differences and subtract the result from $100$ to compute the remaining digits of the result:\n - $93=100-(3+4)$\n4. Glue the two results together to get the final result:\n - $(97)(96)=9312$\n\nIt looks like step 3 could be simplified a bit to $93 = 97 - 4$ or $93 = 96 - 3$.\n\nIn general, it looks like the algorithm may be something like this:\n\nTo compute $(a)(b)$:\n1. For each of our values, compute their difference from $100$:\n - $a'=100-a$\n - $b'=100-b$\n2. Multiply the differences to compute the last two digits of the result:\n - $D=(a')(b')$\n3. Subtract $b$'s difference from $a$ (equivalently, add the differences and subtract the result from $100$) to compute the remaining digits of the result:\n - $C=a-b'$\n4. Glue the two results together to get the final result:\n - $(a)(b)=C\text{ appended with }D$\n\nNext, let's have the computer generate some more examples for us so that we can get a better sense of the problem through `inductive reasoning`. 
The two sliders below let us choose some inputs and present the result created by the algorithm as well as the actual result, with a message saying `Success!` if the algorithm gave the correct answer and `Fail!` if it gave the incorrect answer.\n\n\n```python\nimport ipywidgets as widgets  # needed for the sliders below\n\na = widgets.IntSlider(description='a:', min=85, max=115, value=100)\nb = widgets.IntSlider(description='b:', min=85, max=115, value=100)\n\ndef multiply(a, b):\n    aDiff = 100 - a\n    bDiff = 100 - b\n\n    firstTwo = aDiff*bDiff\n    lastTwo = a - bDiff\n\n    result = str(lastTwo).lstrip('0') + str(firstTwo).zfill(2)\n    print('Result: {}'.format(result))\n    print('Actual product: {}'.format((a*b)))\n    if (result == str(a*b)):\n        print('Success!')\n    else:\n        print('Fail!')\n\nout = widgets.interactive_output(multiply, {'a': a, 'b': b})\nwidgets.VBox([a, b, out])\n```\n\n\n VBox(children=(IntSlider(value=100, description='a:', max=115, min=85), IntSlider(value=100, description='b:',\u2026\n\n\nPlaying around with the sliders, it seems that the algorithm fails in two cases:\n1. Where the product of the differences is $100$ or more, so it no longer fits in two digits.\n * For instance, for $(110)(110)$ it gives `120100` instead of `12100`\n2. Where the product of the differences is negative.\n * For instance, for $(101)(99)$ it gives `100-1` instead of `9999`\n\nCan you see a pattern in the way the numbers fail, maybe a way to fix it?\n\nIt seems like both instances can be fixed by carrying values. Perhaps, instead of gluing values together like strings, we're actually supposed to be multiplying the first digits by $100$ and adding the last digits! For instance, instead of saying $$9312 = 93\text{ appended with }12$$ we would say $$9312=(93)(100)+12.$$\n\nLet's update the algorithm with this change:\n\nTo compute $(a)(b)$:\n1. For each of our values, compute their difference from $100$:\n - $a'=100-a$\n - $b'=100-b$\n2. Multiply the differences to compute the last two digits of the result:\n - $D=(a')(b')$\n3. 
Add the differences and subtract the result from $100$ to compute the remaining digits of the result:\n - $C=a - b'$\n4. Combine the two results together to get the final result:\n - $(a)(b)=(C)(100) + D$\n\nIn other words: $$ab = [a-(100-b)](100) + (100-b)(100-a).$$\n\nNext let's create a new version of the sliders:\n\n\n```python\na = widgets.IntSlider(description='a:', min=85, max=115, value=100)\nb = widgets.IntSlider(description='b:', min=85, max=115, value=100)\n\ndef multiply(a,b):\n aDiff = 100-a\n bDiff = 100-b\n \n firstTwo = aDiff*bDiff\n lastTwo = 100 - (aDiff + bDiff)\n \n result = lastTwo*100 + firstTwo\n print('Result: {}'.format(result))\n print('Actual product: {}'.format((a*b)))\n if (result == a*b):\n print('Success!')\n else:\n print('Fail!')\n\nout = widgets.interactive_output(multiply, {'a': a, 'b':b})\nwidgets.VBox([a,b, out])\n```\n\n\n VBox(children=(IntSlider(value=100, description='a:', max=115, min=85), IntSlider(value=100, description='b:',\u2026\n\n\nFor the two failing examples mentioned above we now get:\n* For $(101)(99)$ we get `9999` which is correct!\n* For $(110)(110)$ we get `12100` which is correct!\n\nNow that we have a conjecture let's getting a better sense of why it works. One thing we can do is to take our equation from above:\n\n$$\n\\begin{align}\nab &= [a-(100-b)](100) + (100-b)(100-a) \\\\\n&= (a)(100) - (100-b)(100) + (100-b)(100-a)\n\\end{align}\n$$\n\nWe can visualize this:\n\n\n```python\n%%html\n\n```\n\n\n\n\n\n\nNote: This visualization assumes that $a$ and $b$ are between $0$ and $100$ (though in our conjecture we also allow them be greater than $100$).\n\nIn general these sorts of techniques where one performs a computation by manipulating the digits of a value is called an 'algorism' (not to be confused with algorithm). 
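Since the interactive sliders only work in a live notebook session, here is a quick non-interactive check (a minimal sketch, separate from the widget code above; the helper name `trick` is ours) that the identity $ab = [a-(100-b)](100) + (100-b)(100-a)$ holds for every pair in the slider range:

```python
# Brute-force check of the corrected trick over the slider range 85..115.
def trick(a, b):
    a_diff = 100 - a                 # difference of a from 100
    b_diff = 100 - b                 # difference of b from 100
    first = 100 - (a_diff + b_diff)  # the part that gets multiplied by 100
    last = a_diff * b_diff           # the product of the differences
    return first * 100 + last

failures = [(a, b)
            for a in range(85, 116)
            for b in range(85, 116)
            if trick(a, b) != a * b]

print(len(failures))  # prints 0: no failures anywhere in the range
```

In fact, expanding the product algebraically shows the identity holds for all integers, not just those near $100$.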
They're not really used very much these days (except for fast mental math gimmicks).\n\nThis particular algorism has a lot of generalizations for dealing with larger numbers, but the reasoning behind them gets quite convoluted. In the end, the most important part isn't proving that an algorism works for all numbers but that it works for all the numbers for which a mental computation is fast. In this case we can be satisfied by saying that this algorism works for numbers between $91$ and $109$, since those values are the easiest to use in practice.\n\n## Some Applications of Deductive Reasoning\n\n### Fractions\n\nFractions can be difficult to manipulate. Perhaps we can use deductive reasoning to come up with some easier ways to manipulate them.\n\nFirst some observations:\n1. Any integer, like $3$, can be written in fraction form: \n\n$$ \n3=\frac{3}{1}. \n$$\n\n2. Any integer except $0$, like $3$, can be put into a fraction to get $1$: \n\n$$ \n1=\frac{3}{3}. \n$$\n\n3. In order to multiply two fractions, such as $\frac{2}{3}$ and $\frac{5}{7}$, just multiply the numerators and denominators: \n\n$$ \n\left( \frac{2}{3}\right) \left( \frac{5}{7} \right) = \frac{(2)(5)}{(3)(7)} = \frac{10}{21}. \n$$\n\nThe first thing to note is that observation (3) gives us a way to factor any fraction: \n\n$$ \n\frac{2}{3} = \left( \frac{2}{1}\right) \left( \frac{1}{3} \right). 
\n$$\n\nReducing a fraction, such as $\\frac{10}{15}$ to $\\frac{2}{3}$, can be achieved by applying observations (2) and (3) in reverse: \n\n$$\n\\frac{10}{15}=\\frac{(2)(5)}{(3)(5)}=\\left(\\frac{2}{3}\\right)\\left(\\frac{5}{5}\\right)=\\left(\\frac{2}{3}\\right)(1)=\\frac{2}{3}\n$$\n\nThe usual process of cancelling a denominator, like $(3) \\left(\\frac{2}{3}\\right)=2$ follows from these observations as well: \n\n$$ \n(3) \\left(\\frac{2}{3}\\right) = \\left(\\frac{3}{1}\\right) \\left(\\frac{2}{1}\\right)\\left(\\frac{1}{3}\\right) = \\left(\\frac{3}{3}\\right) \\left(\\frac{2}{1}\\right) = (1)(2) = 2\n$$\n\nLet's use these observations to manipulate some more complicated fractions.\n\n$$\n\\begin{align*}\n \\frac{2}{\\frac{1}{5}}\n & = \\left( \\frac{2}{\\frac{1}{5}}\\right) (1) \\tag{Multiply by $1$}\\\\\n & = \\left( \\frac{2}{\\frac{1}{5}}\\right) \\left(\\frac{5}{5} \\right) \\tag{By observation 2}\\\\\n & = \\frac{(2)(5)}{\\left( \\frac{1}{5}\\right) (5)} \\tag{By observation 3}\\\\\n & = \\frac{(2)(5)}{1} \\tag{By cancelling}\\\\\n & = (2)(5) \\tag{By observation 1}\\\\\n & = 10\n\\end{align*}\n$$\n\nAnother more complicated example:\n\n$$\n\\begin{align*}\n \\frac{\\frac{2}{3}}{\\frac{7}{5}}\n & = \\left( \\frac{\\frac{2}{3}}{\\frac{7}{5}} \\right) (1)(1) \\tag{Multiply by $1$}\\\\\n & = \\left( \\frac{\\frac{2}{3}}{\\frac{7}{5}} \\right) \\left(\\frac{3}{3} \\right)\\left(\\frac{5}{5} \\right) \\tag{By observation 2}\\\\\n & = \\frac{\\left( \\frac{2}{3} \\right) (3)(5)}{\\left(\\frac{7}{5} \\right) (3)(5)} \\tag{By observation 3}\\\\\n & = \\frac{(2)(5)}{(7)(3)} \\tag{By cancelling}\\\\\n & = \\frac{10}{21}\n\\end{align*}\n$$\n\nWe can manipulate even the most complicated fractions by __cleverly multiplying by 1__ in this way.\n\n### Distributive Property\n\nThe distributive property is extremely useful in simplifying expressions and performing computations. 
In fact, every multiplication algorithm you encounter will at some level boil down to some clever application of the distributive property. Simply put, the distributive property tells us how addition and multiplication interact: \n\n$$\n(a+b)c = ac + bc.\n$$\n\nSince multiplication is commutative, this statement is equivalent: \n\n$$\na(c+d) = ac + ad.\n$$\n\nThe FOIL mnemonic is just a special case of the distributive property:\n\n$$ \n(a+b)(c+d) = (a+b)c + (a+b)d = ac + bc + ad + bd. \n$$\n\nIt is important to remember that the distributive property can be read two ways. In one sense it tells us how to distribute multiplication across addition, but in another sense it tells us how to undo that distribution.\n\nFor example, suppose you have something like \n\n$$\n6x + 10xy.\n$$\n\nIf we notice that both $6x$ and $10xy$ have $2x$ as a factor, since $6x=2x(3)$ and $10xy=2x(5y)$, then we can rewrite that as \n\n$$\n6x + 10xy = 2x(3 + 5y).\n$$\n\nThis technique is an extremely useful application of deductive reasoning. *Do not underestimate it.*\n\n### Mentally Computing Simple Percentages\n\nThere are many occasions where one might be asked to compute a percentage of some value on the spot (e.g. tipping at a restaurant). Fortunately there's a trick to doing it quickly.\n\nFirst notice that computing $10\%$ is as easy as moving the decimal point one digit to the left (e.g. $10\%$ of $25.3$ is $2.53$). Similarly, $1\%$ can be computed by moving the decimal point two digits to the left.\n\nFrom there on, it's just a matter of adding, subtracting, and/or multiplying these percentages to get to the desired percentage. For instance, $18\%$ can be computed by first computing $10\%$, doubling the value to get $20\%$, moving the decimal over one more time to get $2\%$ and then subtracting the $2\%$ value from the $20\%$ value. 
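The $18\%$ recipe just described can be written out in code (a minimal sketch; the function name `percent_18` and the sample amount are ours, with $25.3$ reused from the example above):

```python
# Compute 18% of an amount using only decimal shifts, doubling, and subtraction.
def percent_18(amount):
    p10 = amount / 10   # move the decimal one place to the left: 10%
    p20 = 2 * p10       # double it: 20%
    p2 = p20 / 10       # move the decimal one more time: 2%
    return p20 - p2     # 18% = 20% - 2%

print(percent_18(25.3))  # same value as 0.18 * 25.3, up to float rounding
```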
It's easier than it sounds.\n\nThis is an application of deductive reasoning because we reached all of the assertions here logically, not by looking at any patterns and conjecturing.\n\n[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)\n

Numerical Methods

\n

Chapter 4: Numerical Interpolation

\n

2021/02

\n

MEDELL\u00cdN - COLOMBIA

\n\n\n \n
\n Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Carlos Alberto Alvarez Henao
\n\n*** \n\n***Instructor:*** Carlos Alberto \u00c1lvarez Henao, I.C. D.Sc.\n\n***e-mail:*** carlosalvarezh@gmail.com\n\n***skype:*** carlos.alberto.alvarez.henao\n\n***Linkedin:*** https://www.linkedin.com/in/carlosalvarez5/\n\n***github:*** https://github.com/carlosalvarezh/Metodos_Numericos\n\n***Tool:*** [Jupyter](http://jupyter.org/)\n\n***Kernel:*** Python 3.8\n\n\n***\n\n\n\n

Table of Contents

\n
\n\n## Introduction\n\nThe information (data) resulting from measuring an event, whether natural or social, comes in discrete or tabular form; that is, it is expressed as a set of ordered pairs $(x_i,y_i)$. For example, the data obtained from the population censuses carried out in Colombia since 1985, according to [DANE](https://www.dane.gov.co/), are:\n\n|Year|Population*|\n|:----:|:----:|\n|1985|30802|\n|1990|34130|\n|1995|37472|\n|2000|40296|\n|2005|42889|\n|2010|45510|\n|2015|48203|\n\n(\* in thousands of inhabitants)\n\n***Note:*** Example taken from the lecture notes of the Computational Simulation course at Universidad EAFIT, authored by professor Nicol\u00e1s Guar\u00edn.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.interpolate\nfrom scipy.optimize import curve_fit\n\n```\n\n\n```python\nstart = 1985.\nstop = 2015.\nnum = int((stop - start) / 5) + 1  # number of census years\n\nx = np.linspace(start, stop, num)\ny = [30802, 34130, 37472, 40296, 42889, 45510, 48203]\n\nplt.plot(x, y, 'o')\nplt.grid(True)\n```\n\nSuppose we wanted to answer the question: what was the population of Colombia in 2012? \n\n\n```python\nfig, ax = plt.subplots()\n\nax.plot(x, y, 'o')\nax.vlines(x=2012, ymin=30000.0, ymax=48203.0, color='r')\nax.hlines(y=46000.0, xmin=1985.0, xmax=2012.0, color='y')\nax.hlines(y=47000.0, xmin=1985.0, xmax=2012.0, color='b')\nax.hlines(y=48000.0, xmin=1985.0, xmax=2012.0, color='g')\nplt.grid(True)\nplt.show()\n```\n\nWhich of the values shown above is the most correct? 
We can propose several ideas to determine that value:\n\n- Considering that the function is constant between the known values\n\n\n```python\nfig, ax = plt.subplots()\n\nax.plot(x, y, 'o')\n\nax.hlines(y=y[0], xmin=x[0], xmax=x[0] + 2, color='b')\nax.vlines(x=x[0] + 2, ymin=y[0], ymax=y[1], color='b')\n\nfor i in range(1, len(x)-1):\n    ax.hlines(y=y[i], xmin=x[i]-3, xmax= x[i] + 2, color='b')\n    ax.vlines(x=x[i]+2, ymin=y[i], ymax=y[i+1], color='b')\n\nax.hlines(y=y[-1], xmin=x[-1]-3, xmax=x[-1]+2, color='b')\n\nax.vlines(x = 2012, ymin = 30000.0, ymax = y[-1], color = 'r')\nax.hlines(y = y[-1], xmin = 1985.0, xmax = 2012.0, color = 'r')\n\nplt.grid(True)\nplt.show()\n```\n\n- Assuming that the function is linear between values \n\n\n```python\nfig, ax = plt.subplots()\n\nax.plot(x, y, 'o--')\nax.vlines(x = 2012, ymin = 30000.0, ymax = 46700, color = 'r')\nax.hlines(y = 46700, xmin = 1985.0, xmax = 2012.0, color = 'r')\n\nplt.grid(True)\nplt.show()\n```\n\n- Determining a polynomial that passes through every one of the points.\n\n\n```python\nfig, ax = plt.subplots()\n\nax.plot(x, y, 'o')\n\nt = np.linspace(0, 1, len(x)) # parameter t to parametrize x and y\npxLagrange = scipy.interpolate.lagrange(t, x) # X(T)\npyLagrange = scipy.interpolate.lagrange(t, y) # Y(T)\nn = 100\nts = np.linspace(t[0],t[-1],n)\nxLagrange = pxLagrange(ts) # lagrange x coordinates\nyLagrange = pyLagrange(ts) # lagrange y coordinates\nax.plot(xLagrange, yLagrange,'b-')\n\nax.vlines(x=2012, ymin=30000.0, ymax=46700, color='r')\nax.hlines(y=46700, xmin=1985.0, xmax=2012.0, color='r')\n\nax.grid(True)\nplt.show()\n```\n\n- Fitting the curve that best approximates the data as a whole. 
In this example we will do a linear fit, although it is not the only way to do it.\n\n\n```python\n# define the true objective function\ndef objective(x, a, b):\n    return a * x + b\n \n# curve fit\npopt, _ = curve_fit(objective, x, y)\n\n# summarize the parameter values\na, b = popt\nprint('y = %.5f * x + %.5f' % (a, b))\n\nfig, ax = plt.subplots()\n\n# plot input vs output\nax.scatter(x, y)\n\n# define a sequence of inputs between the smallest and largest known inputs\nx_line = np.arange(min(x), max(x), 1)\n\n# calculate the output for the range\ny_line = objective(x_line, a, b)\n\n# create a line plot for the mapping function\nax.plot(x_line, y_line, '--', color='b')\n\nyfit = a * 2012 + b\n\nax.vlines(x=2012, ymin=30000.0, ymax=yfit, color='r')\nax.hlines(y=yfit, xmin=1985.0, xmax=2012.0, color='r')\n\nplt.grid(True)\nplt.show()\n```\n\nIn this course we will focus on interpolation schemes, so curve-fitting schemes will not be covered.\n\n[Back to the Table of Contents](#TOC)\n\n### Purposes of interpolation\n\nInterpolation problems arise from many different sources and can serve many different purposes. 
Some of these include:\n\n- Drawing a smooth curve through discrete data points\n\n\n- Quick and easy evaluation of a mathematical function\n\n\n- Replacing a difficult function with an easy one\n\n\n- Differentiating or integrating tabular data\n\n[Back to the Table of Contents](#TOC)\n\n### Differences between Interpolation, Approximation, and Curve Fitting\n\nThe techniques for solving the problem of determining an intermediate value between two known values can be framed as:\n\n- [Interpolation](https://en.wikipedia.org/wiki/Interpolation)\n\n\n- [Approximation](https://en.wikipedia.org/wiki/Approximation_theory)\n\n\n- [Curve fitting](https://en.wikipedia.org/wiki/Curve_fitting)\n\nBelow we briefly describe the differences between them.\n\n[Back to the Table of Contents](#TOC)\n\n### Interpolation vs Approximation\n\nIn interpolation, all the data points are fitted exactly, while approximation, as its name suggests, only approximates them.\n\nWhen it comes to suitability, interpolation is not appropriate when the data points are subject to experimental error or other significant sources of noise; smoothing such noisy data is better handled by approximation. Having a very large set of data points can also overwhelm interpolation. On the other hand, approximation is mainly appropriate for designing library routines to compute special functions. This is due to the nature of those functions: exact values are considered non-essential and, to some extent, wasteful when approximate values do the job.\n\n[Back to the Table of Contents](#TOC)\n\n### Interpolation vs Curve fitting\n\nIn curve fitting, we do not fit all of our data points. That is why we have the concept of residuals. 
In interpolation, the function is forced to fit all of the data points. See [Anscombe's quartet](https://es.wikipedia.org/wiki/Cuarteto_de_Anscombe) as a counterexample illustrating the drawbacks of curve fitting.\n\nNow that we know which category we are talking about, let's narrow down the families of functions used for interpolation.\n\n[Back to the Table of Contents](#TOC)\n\n### Choosing the interpolating function\n\nIt is important to realize that there is some arbitrariness in most interpolation problems. There are arbitrarily many functions that interpolate a given set of data. Merely requiring that some mathematical function fit the data points exactly leaves open such questions as:\n\n- What form should the function have? There may be relevant mathematical or physical considerations suggesting a particular form of interpolant.\n\n\n- How should the function behave between data points?\n\n\n- Should the function inherit properties of the data, such as monotonicity, convexity, or periodicity?\n\n\n- If the function and the data are plotted, should the results be visually pleasing?\n\n\n- Are we mainly interested in the values of the parameters that define the interpolating function, or simply in evaluating the function at various points for plotting or other purposes?\n\n\nThe choice of interpolating function depends on the answers to these questions, as well as on the data to be fit, and is generally based on:\n\n- How easy the function is to work with (determining its parameters from the data, evaluating the function at a given point, differentiating or integrating the function, etc.)\n\n\n- How well the properties of the function match the properties of the data to be fit (smoothness, monotonicity, convexity, periodicity, etc.)\n\n\nSome families of functions commonly used for interpolation include:\n\n\n- [Polynomials](https://en.wikipedia.org/wiki/Polynomial_interpolation)\n\n\n- [Piecewise (spline) interpolation](https://en.wikipedia.org/wiki/Spline_interpolation)\n\n\n- Trigonometric functions\n\n\n- Exponentials\n\n\n- Rational functions\n\n\nIn this chapter we will focus on polynomial interpolation and piecewise interpolation.\n\n\n[Back to the Table of Contents](#TOC)\n\n## Polynomial Interpolation\n\n### Introduction\n\nPolynomial interpolation is the simplest and most common type of interpolation. One of its features is that there is always a unique polynomial of degree at most $n-1$ that passes through $n$ data points.\nThere are many ways to compute or represent a polynomial, but they reduce to the same mathematical function. Some of the methods are the monomial basis, the [Lagrange](https://en.wikipedia.org/wiki/Lagrange_polynomial) basis, and the [Newton](https://en.wikipedia.org/wiki/Newton_polynomial) basis. 
As you can see, they are named after their basis.\n\n***Drawbacks:***\n\n- ***High-degree polynomials:*** a suitable choice of basis functions and interpolation points can mitigate some of the difficulties associated with a high-degree polynomial.\n\n\n- ***[Overfitting](https://en.wikipedia.org/wiki/Overfitting):*** fitting a single polynomial to a large number of data points, which would likely produce unsatisfactory oscillatory behavior in the interpolant.\n\nThe general formula of an $n$th-order polynomial is:\n\n\begin{equation*}\nf_n(x) = a_0 + a_1x + a_2x^2 +\u2026+ a_nx^n\n\label{eq:Ec4_1} \tag{4.1}\n\end{equation*}\n\nThe interpolation problem given by equation $\eqref{eq:Ec4_1}$ consists of determining the unique $n$th-order polynomial that fits the $n+1$ given points. This polynomial then provides a formula for computing intermediate values.\n\n[Back to the Table of Contents](#TOC)\n\n### Linear Interpolation\n\nThe simplest interpolation method is to connect two points with a straight line.\n\n

\n \n

\n\n\n\nFrom the figure we have:\n\n\begin{equation*}\n\frac{f_1(x)-f(x_0)}{x-x_0}=\frac{f(x_1)-f(x_0)}{x_1-x_0}\n\label{eq:Ec4_2} \tag{4.2}\n\end{equation*}\n\nand rearranging,\n\n\begin{equation*}\nf_1(x)=f(x_0)+\frac{f(x_1)-f(x_0)}{x_1-x_0}(x-x_0)\n\label{eq:Ec4_3} \tag{4.3}\n\end{equation*}\n\n\n[Back to the Table of Contents](#TOC)\n\n#### Linear interpolation example\n\n- Estimate the value of $Ln(2)$ using linear interpolation between $x_0=1$ and $x_1=6$.\n\nEvaluating the logarithm at each of the two points gives $Ln(1)=0$ and $Ln(6)=1.791759$, so\n\n$$f_1(2)=0+\frac{1.791759-0}{6-1}(2-1)=0.3583519$$\n\nThe exact value is $Ln(2)=0.693147$, which represents a relative percentage error of\n\n$$Er(\%)=\frac{|0.693147-0.3583519|}{0.693147}\times100\%=48.3\%$$\n\nIf the interval to be evaluated is reduced, for example to $x_1=4$, we arrive at \n\n$$f_1(2)=0+\frac{1.386294-0}{4-1}(2-1)=0.462098$$\n\nobtaining a relative percentage error of $33.3\%$.\n\n\n[Back to the Table of Contents](#TOC)\n\n#### Computational visualization\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nn = 20\n\nfig, ax = plt.subplots()\n\nx = np.linspace(1, 6, n)\ny = np.log(x)\n\nx1 = [x[0], x[-1]]\ny1 = [np.log(x[0]), np.log(x[-1])]\n\nx2 = [x[0], 4]\ny2 = [np.log(x[0]), np.log(4)]\n\nax.plot(x, y, '-', x1, y1, 'o-', x2, y2, 'o-')\nax.vlines(x = 2, ymin = 0.0, ymax = np.log(2), color = 'r', linestyles='dashed')\n\nplt.grid(True)\n\n```\n\nThe error in linear interpolation results from approximating a curve with a straight line.\n\n\n[Back to the Table of Contents](#TOC)\n\n#### Improvements to the linear interpolation scheme\n\n- Decrease the size of the interval.\n\n\n- Introduce some curvature into the line connecting the points.\n\n[Back to the Table of Contents](#TOC)\n\n\n### Quadratic interpolation\n\nIf three (3) data points are available, this can be done with a second-degree polynomial (a parabola).\n\n

\n \n

\n\n\n\n\nThe general form of a quadratic polynomial can be expressed as follows:\n\n\begin{equation*}\nf_2(x) = b_0 + b_1(x-x_0) + b_2(x-x_0)(x-x_1)\n\label{eq:Ec4_4} \tag{4.4}\n\end{equation*}\n\nWe must determine the values of the coefficients $b_i$.\n\n- For $b_0$, in equation $\eqref{eq:Ec4_4}$, with $x = x_0$:\n\n\begin{equation*}\nb_0 = f(x_0)\n\label{eq:Ec4_5} \tag{4.5}\n\end{equation*}\n\n- For $b_1$, substituting equation $\eqref{eq:Ec4_5}$ into equation $\eqref{eq:Ec4_4}$ and evaluating at $x = x_1$:\n\n\begin{equation*}\nb_1 = \frac{f(x_1)-f(x_0)}{(x_1 - x_0)}\n\label{eq:Ec4_6} \tag{4.6}\n\end{equation*}\n\n- For $b_2$, equations $\eqref{eq:Ec4_5}$ and $\eqref{eq:Ec4_6}$ can be substituted into equation $\eqref{eq:Ec4_4}$, evaluated at $x_2$:\n\n\begin{equation*}\nb_2=\frac{\frac{f(x_2)-f(x_1)}{(x_2-x_1)}-\frac{f(x_1)-f(x_0)}{(x_1-x_0)}}{(x_2 - x_0)}\n\label{eq:Ec4_7} \tag{4.7}\n\end{equation*}\n\n\n[Back to the Table of Contents](#TOC)\n\n#### Quadratic interpolation example\n\nContinuing with the previous example, we will consider the following points:\n\n$$x_0=1 \hspace{1cm} f(x_0)=0.000000$$\n$$x_1=4 \hspace{1cm} f(x_1)=1.386294$$\n$$x_2=6 \hspace{1cm} f(x_2)=1.791759$$\n\nFrom the equations above, \n\n$$b_0=0$$\n\n$$b_1=\frac{1.386294-0}{4-1}=0.4620981$$\n\n$$b_2=\frac{\frac{1.791759-1.386294}{6-4}-0.4620981}{6-1}=-0.0518731$$\n\nSubstituting these values into the initial quadratic polynomial, we arrive at:\n\n$$f_2(x)=0+0.4620981(x-1)-0.0518731(x-1)(x-4)$$\n\nand evaluating at $x=2$ gives\n\n$$f_2(2)=0.565844$$\n\nwhich represents a relative percentage error of $18.4\%$.\n\n[Back to the Table of Contents](#TOC)\n\n#### Computational implementation\n\n\n```python\ndef difdiv2o(x, y, xm):\n    b0 = y[0]\n    b1 = (y[1] - b0) / (x[1] - x[0])\n    b2 = ((y[2] - y[1]) / (x[2] - x[1]) - b1) / (x[2] - x[0])\n\n    return b0 + b1 * (xm - x[0]) + b2 * (xm - x[0]) * (xm - x[1])\n```\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nn = 20\n\nfig, ax = plt.subplots()\n\nx = np.linspace(1, 6, n)\ny = np.log(x)\n\nx1 = [x[0], 4, x[-1]]\ny1 = [np.log(x[0]), np.log(x1[1]), np.log(x[-1])]\nym = difdiv2o(x1, y1, 2)\n\nx1.insert(1, 2)\ny1.insert(1, ym)\n\nax.plot(x, y, '-', x1, y1, 'o-')\nax.vlines(x = 2, ymin = 0.0, ymax = np.log(2), color = 'r', linestyles='dashed')\n\nplt.grid(True)\n```\n\n### Newton's divided-difference polynomial\n\nThe above can be generalized to fit an $n$th-order polynomial to $n+1$ data points:\n\n

\n \n

\n\n
Source: medium.com
\n\n\\begin{equation*}\nf_n(x) = b_0+b_1(x\u2013x_0)+\\ldots+b_n(x\u2013x_0)(x\u2013x_1)\\ldots(x \u2013 x_{n-1})\n\\label{eq:Ec4_8} \\tag{4.8}\n\\end{equation*}\n\nDe igual manera que para las interpolaciones lineal y cuadr\u00e1tica, se llega a:\n\t\n\\begin{equation*}\nf_n(x)=f(x_0)+(x\u2013x_0)f[x_1,x_0]+(x\u2013x_0)(x\u2013x_1)f[x_2,x-,x_0]+\\ldots+(x\u2013x_0)(x\u2013x_1)\\ldots(x\u2013x_{n-1})f[x_n, x_{n-1},\\ldots,x_2,x_1,x_0]\n\\label{eq:Ec4_9} \\tag{4.9}\n\\end{equation*}\n\nConocido como *Polinomio de interpolaci\u00f3n por [diferencias divididas de Newton](https://en.wikipedia.org/wiki/Divided_differences)*. Las evaluaciones de las funciones puestas entre par\u00e9ntesis son diferencias divididas finitas.\n \n- ***Primera diferencia dividida:***\n\n\\begin{equation*}\nf[x_i, x_j]=\\frac{f(x_i)-f(x_j)}{(x_i-x_j)}\n\\label{eq:Ec4_10} \\tag{4.10}\n\\end{equation*}\n\n- ***Segunda diferencia dividida:*** representa la diferencia de las dos primeras diferencias divididas\n\n\\begin{equation*}\nf[x_i, x_j,x_k]=\\frac{f[x_i,x_j]-f[x_j,x_k]}{(x_i-x_k)}\n\\label{eq:Ec4_11} \\tag{4.11}\n\\end{equation*}\n\n$$\\vdots$$\n\n- ***$n$-\u00e9sima diferencia dividida:*** representa la diferencia de las dos primeras diferencias divididas\n\\begin{equation*}\nf[x_n, x_{n-1},\\ldots, x_1,x_0]=\\frac{f[x_n,x_{n-1},\\ldots,x_1]-f[x_{n-1},x_{n-2},\\ldots,x_0]}{(x_n-x_0)}\n\\label{eq:Ec4_12} \\tag{4.12}\n\\end{equation*}\n\nEste proceso recursivo lo podemos visualizar de la siguiente manera:\n\n

\n \n

\n\n
Source: Wikimedia.org
\n\n\n[Volver a la Tabla de Contenido](#TOC)\n\n#### Implementaci\u00f3n computacional\n\nEn la p\u00e1gina 513 del libro de Chapra y Canale, Figura 18.7, se tiene un algoritmo en Fortran para la implementaci\u00f3n del c\u00f3digo de Diferencias Divididas tipo Newton. Se invita al estudiante a que lo estudie y codifique en el lenguaje de preferencia.\n\n\n```python\n# Escriba aqu\u00ed su c\u00f3digo\n```\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### An\u00e1lisis de Error para la interpolaci\u00f3n polinomial tipo Newton\n\nLa ecuaci\u00f3n $\\eqref{eq:Ec4_9}$ es similar a la *serie de expansi\u00f3n de Taylor*. Se agregan t\u00e9rminos en forma secuencial para capturar el comportamiento de alto orden de la funci\u00f3n a analizar. Estos t\u00e9rminos son diferencias divididas finitas y, as\u00ed, representan aproximaciones de derivadas de orden mayor.\n\nEl error de truncamiento se expresa entonces como:\n\n\\begin{equation*}\nR_n=\\frac{f^{(n+1)}(\\xi)}{(n+1)!} \\left ( x_{i+1}-x_i\\right )^{n+1}\n\\label{eq:Ec4_13} \\tag{4.13}\n\\end{equation*}\n\nPara una interpolaci\u00f3n de n-\u00e9simo orden, una relaci\u00f3n an\u00e1loga para el error es\n\n\n\\begin{equation*}\nR_n=\\frac{f^{(n+1)}(\\xi)}{(n+1)!}(x-x_0)(x-x_1) \\ldots (x-x_n)\n\\label{eq:Ec4_14} \\tag{4.14}\n\\end{equation*}\n\nObserve que en la ecuaci\u00f3n [(4.14)](#Ec4_14), la funci\u00f3n debe conocerse. 
To resolve this, an alternative formulation uses a divided difference to approximate the $(n+1)$-th derivative, requiring no prior knowledge of the function:

\begin{equation*}
R_n=f[x, x_n,x_{n-1},\ldots,x_2,x_1,x_0](x-x_0)(x-x_1) \ldots (x-x_n)
\label{eq:Ec4_15} \tag{4.15}
\end{equation*}

Because equation [(4.15)](#Ec4_15) contains the unknown $x$ inside the divided difference, it cannot be solved directly to estimate the error; however, if an additional data point $x_{n+1}$ is available, it can be used as:

\begin{equation*}
R_n=f[x_{n+1}, x_n,x_{n-1},x_{n-2},\ldots,x_2,x_1,x_0](x-x_0)(x-x_1) \ldots (x-x_n)
\label{eq:Ec4_16} \tag{4.16}
\end{equation*}


[Back to Table of Contents](#TOC)

### Lagrange interpolation polynomials

#### Introduction

The [Lagrange interpolation polynomial](https://en.wikipedia.org/wiki/Lagrange_polynomial) avoids computing the divided differences of Newton's scheme.
In general form, it is written as the [linear combination](https://en.wikipedia.org/wiki/Linear_combination):

\begin{equation*}
f_n(x)=\sum \limits_{i=0}^n L_i(x)f(x_i)
\label{eq:Ec4_17} \tag{4.17}
\end{equation*}

where the $L_i$ are the *[Lagrange](https://es.wikipedia.org/wiki/Joseph-Louis_Lagrange)* polynomial bases, given by:

\begin{equation*}
L_i(x)=\prod_{\substack{j=0\\ j \ne i}}^n \frac{x-x_j}{x_i-x_j}
\label{eq:Ec4_18} \tag{4.18}
\end{equation*}

From $\eqref{eq:Ec4_18}$ it follows that all the functions $L_i$ are polynomials of degree $n$ with the property

\begin{equation*}
L_i(x_j)=\delta_{ij}, \quad \delta_{ij} = \left \{
\begin{aligned}
1, \quad i=j,\\
0, \quad i \ne j
\end{aligned}
\right.
\label{eq:Ec4_19} \tag{4.19}
\end{equation*}

where $\delta_{ij}$ is the [Kronecker delta](https://en.wikipedia.org/wiki/Kronecker_delta).


[Back to Table of Contents](#TOC)

#### First-degree Lagrange interpolation polynomial

Taking $n=1$ (linear):

\begin{equation*}
f_1(x)=\frac{(x-x_1)}{(x_0-x_1)}f(x_0)+\frac{(x-x_0)}{(x_1-x_0)}f(x_1)
\label{eq:Ec4_19a} \tag{4.19a}
\end{equation*}

[Back to Table of Contents](#TOC)

#### Second-degree Lagrange interpolation polynomial

Taking $n=2$ (quadratic):

\begin{equation*}
f_2(x)=\frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)}f(x_0) + \frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}f(x_1) + \frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}f(x_2)
\label{eq:Ec4_20} \tag{4.20}
\end{equation*}

[Back to Table of Contents](#TOC)

#### Lagrange interpolation example

Returning to the example used throughout this chapter,

$$x_0=1 \hspace{1cm} f(x_0)=0.000000$$
$$x_1=4 \hspace{1cm} f(x_1)=1.386294$$
$$x_2=6 \hspace{1cm} f(x_2)=1.791759$$

- ***First-degree polynomial:***

$$f_1(2)=\frac{2-4}{1-4}(0)+\frac{2-1}{4-1}(1.386294) = 0.462098$$

- ***Second-degree polynomial:***

$$f_2(2)=\frac{(2-4)(2-6)}{(1-4)(1-6)}(0)+\frac{(2-1)(2-6)}{(4-1)(4-6)}(1.386294)+\frac{(2-1)(2-4)}{(6-1)(6-4)}(1.791759) = 0.565844$$

- ***Note:*** Compare these results with those obtained by the corresponding linear and quadratic schemes of [Newton's divided differences](#DDN).

#### Computational implementation


```python
def lagrange(x, i, xm):
    """i-th Lagrange basis polynomial L_i(x) over the nodes xm."""
    n = len(xm) - 1
    y = 1.0
    for j in range(n + 1):
        if i != j:
            y *= (x - xm[j]) / (xm[i] - xm[j])
    return y
```


```python
def interpolation(x, xm, ym):
    """Evaluate the Lagrange interpolant (4.17) at x."""
    n = len(xm) - 1
    lagrpoly = np.array([lagrange(x, i, xm) for i in range(n + 1)])
    y = np.dot(ym, lagrpoly)
    return y
```


```python
import numpy as np
import matplotlib.pyplot as plt

xm = np.array([1, 4, 6])
ym = np.log(xm)
#xm = np.array([1, 2, 3, 4, 5, 6])
#ym = np.array([-3, 0, -1, 2, 1, 4])
#ym = np.sin(xm)

xplot = np.linspace(-1., 6.0, 100)
yplot = interpolation(xplot, xm, ym)
plt.plot(xm, ym, 'o', xplot, yplot, '-')  # nodes as points, interpolant as a line
plt.grid(True)
```

[Back to Table of Contents](#TOC)

#### Basis functions

To better understand how the interpolation works between the points, recall that the Lagrange interpolating polynomials must satisfy the Kronecker-delta property. To visualize this, we follow the presentation in the notebook [Interpolación de Lagrange 1D](https://github.com/AppliedMechanics-EAFIT/modelacion_computacional/blob/master/notebooks/02a_interpolacion.ipynb) by professors *Juan David Gómez Cataño* and *Nicolás Guarín Zapata* for the Computational Modeling course of the Civil Engineering program at Universidad EAFIT.
All credit goes to them.


```python
# import the numerical, plotting, and symbolic libraries
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate
import sympy as sym
sym.init_printing()
```


```python
def lagrange_poly(x, order, i, xi=None):
    """i-th Lagrange basis polynomial of the given order; if no nodes xi
    are supplied, symbolic nodes x0, x1, ... are created."""
    if xi is None:
        xi = sym.symbols('x:%d' % (order + 1))
    index = list(range(order + 1))
    index.pop(i)
    return sym.prod([(x - xi[j]) / (xi[i] - xi[j]) for j in index])
```


```python
fun = lambda x: x**3 + 4.0*x**2 - 10.0
```


```python
npts = 200
x_pts = np.linspace(-1, 1, npts)
```


```python
pts = np.array([-1, 1, 0])
fd = fun(pts)
```


```python
plt.figure()
y_pts = fun(x_pts)
plt.plot(x_pts, y_pts)
plt.plot(pts, fd, 'ko')
```


```python
x = sym.symbols('x')
pol = []
pol.append(sym.simplify(lagrange_poly(x, 2, 0, [-1, 1, 0])))
pol.append(sym.simplify(lagrange_poly(x, 2, 1, [-1, 1, 0])))
pol.append(sym.simplify(lagrange_poly(x, 2, 2, [-1, 1, 0])))
pol
```


```python
plt.figure()
yy = np.zeros(npts)  # buffer for the sampled basis polynomial
for k in range(3):
    for i in range(npts):
        yy[i] = pol[k].subs([(x, x_pts[i])])
    plt.plot(x_pts, yy)
```

[Back to Table of Contents](#TOC)

#### Difficulties with Lagrange polynomials

Lagrange interpolation polynomials run into difficulties at very high polynomial orders, aggravated when the points are equidistant or the solution has jumps (discontinuities).

This situation is known as *[Runge's phenomenon](https://en.wikipedia.org/wiki/Runge%27s_phenomenon)*.
Let us look at the following example:

- Given the function

$$f(x)=\frac{1}{1+25x^2}$$

interpolated at equidistant nodes $x_i \in [-1, 1]$ such that

$$x_i=-1+(i-1)\frac{2}{n} \quad i \in \{1, 2, 3, \ldots, n, n+1\}$$


```python
import numpy as np
import matplotlib.pyplot as plt
```


```python
n = 200
x = [-1 + 2 * (i - 1) / n for i in range(1, n + 2)]
y = [1 / (1 + 25 * i**2) for i in x]

plt.plot(x, y)
plt.grid(True)
```

Now, as an example, we use a series of interpolating polynomials, from order 1 (2 points) up to order 9 (10 points), to evaluate their behavior.


```python
# Interpolating polynomials of order 1-9 (2-10 points)

data = [2, 3, 4, 6, 10]

plt.plot(x, y, '-', label='Exacta')
plt.title('Fenómeno de Runge')
xplot = np.linspace(-1., 1.0, 100)

for i in data:
    xRi = np.linspace(-1, 1, i)
    yRi = [1 / (1 + 25 * j**2) for j in xRi]

    yploti = interpolation(xplot, xRi, yRi)
    string = "P_orden " + str(i - 1)
    plt.plot(xplot, yploti, '-', label=string)

plt.legend()
plt.grid(True)
```

We observe that as the polynomial order increases, attempting to obtain a better fit, oscillations appear near the end points.

***As a complementary activity, the student is invited to reproduce the Runge phenomenon shown in the graph of Figure 18.14 (p. 526) of the book by Chapra and Canale, 5th Ed.***

[Back to Table of Contents](#TOC)

## Interpolation with splines

### Introduction

In the previous sections, polynomials of $n$-th degree were used to interpolate between $n+1$ data points; for example, $10$ points determine an exact polynomial of ninth degree. That single curve matches the data and all derivatives up to, and including, the ninth. However, there are cases, like the one just seen, where these functions lead to erroneous results because of round-off errors and the influence of distant points (*Runge's phenomenon*).

As an alternative to mitigate this situation, lower-degree polynomials can be applied to subsets of the data. Such connecting polynomials are called [splines](https://en.wikipedia.org/wiki/Spline_(mathematics)) (in Spanish, *trazadores*).

Suppose we use third-degree curves to join consecutive data points; each of these functions can be constructed so that the connections between adjacent cubics are visually smooth. It might seem that a third-degree spline approximation would be inferior to the ninth-degree expression, so why is a spline preferable?

The concept of a spline originated in the drafting technique that uses a thin, flexible strip (called a *spline*) to draw smooth curves through a set of points. Paper is placed on a table and pins are set into the paper at the data locations. A smooth cubic curve results from weaving the strip between the pins. Hence the name *cubic spline* was adopted for polynomials of this type.
\n\n\n\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Trazadores lineales\n\nLa uni\u00f3n m\u00e1s simple entre dos puntos es una l\u00ednea recta. Los trazadores de primer grado para un grupo de datos ordenados pueden definirse como un conjunto de funciones lineales:\n\n\\begin{equation*}\n\\begin{split}\nf(x) & = f(x_0) + m_0(x - x_0), \\quad x_0 \\le x \\le x_1 \\\\\nf(x) & = f(x_1) + m_1(x - x_1), \\quad x_1 \\le x \\le x_2 \\\\\nf(x) & = f(x_2) + m_2(x - x_2), \\quad x_2 \\le x \\le x_3 \\\\\n&\\vdots \\\\\nf(x) & = f(x_{n-1}) + m_{n-1}(x - x_{n-1}), \\quad x_{n-1} \\le x \\le x_n\n\\end{split}\n\\label{eq:Ec4_21} \\tag{4.21}\n\\end{equation*}\n\ndonde $m_i=\\frac{f(x_{i+1})-f(x_i)}{(x_{i+1}-x_i)}$ es la pendiente de la l\u00ednea recta que une los puntos. La principal desventaja de los trazadores de primer grado es que no son suaves. En los puntos donde se encuentran dos trazadores, la pendiente cambia de forma abrupta. La primer derivada de la funci\u00f3n es discontinua en esos puntos.\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Trazadores cuadr\u00e1ticos\n\nPara asegurar que las derivadas $m$-\u00e9simas sean continuas en los nodos, se debe emplear un trazador de un grado de, al menos, $m+1$. El objetivo de los trazadores cuadr\u00e1ticos es obtener un polinomio de segundo grado para cada intervalo entre los datos. De manera general, el polinomio en cada intervalo se representa como:\n\n\n\\begin{equation*}\n\\begin{split}\nf(x_i) = a_ix^2+b_ix+c_i\n\\end{split}\n\\label{eq:Ec4_22} \\tag{4.22}\n\\end{equation*}\n\nPara $n+1$ datos ($i=0, 1, 2,\\ldots, n$) existen $n$ intervalos y, en consecuencia, $3n$ constantes desconocidas ($a$, $b$ y $c$) por evaluar. Por lo tanto, se requieren $3n$ ecuaciones o condiciones para evaluar las inc\u00f3gnitas. \u00c9stas son:\n\n1. Los valores de la funci\u00f3n de polinomios adyacentes deben ser iguales en los nodos interiores. 
This condition is written as:

\begin{equation*}
\begin{split}
a_{i-1}x_{i-1}^2+b_{i-1}x_{i-1}+c_{i-1}&=f(x_{i-1}) \\
a_i x_{i-1}^2+b_i x_{i-1}+c_i&=f(x_{i-1})
\end{split}
\label{eq:Ec4_23} \tag{4.23}
\end{equation*}

   for $i=2$ to $n$. Since only interior nodes are used, each of these equations provides $n-1$ conditions; in total, $2n-2$ conditions.

2. The first and last functions must pass through the end points. This adds two more equations:

\begin{equation*}
\begin{split}
a_{1}x_{0}^2+b_{1}x_{0}+c_{1}&=f(x_{0}) \\
a_n x_{n}^2+b_n x_{n}+c_n&=f(x_{n})
\end{split}
\label{eq:Ec4_24} \tag{4.24}
\end{equation*}

   In total there are now $2n-2+2=2n$ conditions.

3. The first derivatives at the interior nodes must be equal. The first derivative of $f_i(x)=a_ix^2+b_i x+c_i$ is $f'_i(x)=2a_ix+b_i$. In general, this condition is written as:

\begin{equation*}
\begin{split}
2a_{i-1} x_{i-1}+b_{i-1}=2a_i x_{i-1}+b_i
\end{split}
\label{eq:Ec4_25} \tag{4.25}
\end{equation*}

   for $i = 2$ to $n$. This provides another $n-1$ conditions, for a total of $2n+n-1=3n-1$. Since there are $3n$ unknowns, one more condition is needed.

4. Suppose that at the first point the second derivative is zero. Since $f''_i(x)=2a_i$, this is expressed mathematically as:

\begin{equation*}
\begin{split}
a_1=0
\end{split}
\label{eq:Ec4_26} \tag{4.26}
\end{equation*}
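To see conditions (4.23)-(4.26) in action, they can be assembled into a linear system and solved numerically. The sketch below uses the data of the worked example at the end of this chapter; the variable names are illustrative:

```python
import numpy as np

# Quadratic-spline conditions as a linear system A z = r for the data
# (3, 2.5), (4.5, 1.0), (7, 2.5), (9, 0.5); a_1 = 0 is imposed, so the
# unknowns are z = [b1, c1, a2, b2, c2, a3, b3, c3].
A = np.array([
    [4.5, 1, 0,     0,   0, 0,   0,  0],  # f_1(4.5) = 1.0
    [0,   0, 20.25, 4.5, 1, 0,   0,  0],  # f_2(4.5) = 1.0
    [0,   0, 49,    7,   1, 0,   0,  0],  # f_2(7.0) = 2.5
    [0,   0, 0,     0,   0, 49,  7,  1],  # f_3(7.0) = 2.5
    [3,   1, 0,     0,   0, 0,   0,  0],  # f_1(3.0) = 2.5
    [0,   0, 0,     0,   0, 81,  9,  1],  # f_3(9.0) = 0.5
    [1,   0, -9,   -1,   0, 0,   0,  0],  # f_1'(4.5) = f_2'(4.5)
    [0,   0, 14,    1,   0, -14, -1, 0],  # f_2'(7.0) = f_3'(7.0)
])
r = np.array([1.0, 1.0, 2.5, 2.5, 2.5, 0.5, 0.0, 0.0])
b1, c1, a2, b2, c2, a3, b3, c3 = np.linalg.solve(A, r)
print(a2, b2, c2)   # 0.64, -6.76, 18.46
```

The middle piece then gives $f_2(5) = 0.64(25) - 6.76(5) + 18.46 = 0.66$.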
[Back to Table of Contents](#TOC)

### Cubic splines

The goal is to obtain a third-degree polynomial for each interval between the nodes:

\begin{equation*}
\begin{split}
f_i(x)=a_i x^3+b_i x^2+c_i x+d_i
\end{split}
\label{eq:Ec4_27} \tag{4.27}
\end{equation*}

Thus, for the $n+1$ data points ($i=0,1,2,\ldots,n$) there are $n$ intervals and therefore $4n$ conditions are required to evaluate the unknowns. These are:

1. The function values must be equal at the interior nodes: $2n-2$ conditions.

2. The first and last functions must pass through the end points: $2$ conditions.

3. The first derivatives at the interior nodes must be equal: $n-1$ conditions.

4. The second derivatives at the interior nodes must be equal: $n-1$ conditions.

5. The second derivatives at the end nodes are zero: $2$ conditions.

Suppose we have $n+1$ points, $P_k(x_k,y_k)$, with $y_k=f(x_k)$, $k=0,1,2,\ldots,n$, at which a function $f$ is to be interpolated. The abscissas $x_k$ need not be equidistant, but they must be ordered, that is, $x_0 < x_1 < \cdots < x_n$. On each subinterval the interpolant $s(x)$ is given by a cubic polynomial $q_k(x)$ satisfying

\begin{equation*}
\begin{split}
q_k(x_k)&=y_k \\
q_k(x_{k+1})&=y_{k+1} \quad k=0,1,2 \ldots, n-1
\end{split}
\label{eq:Ec4_28} \tag{4.28}
\end{equation*}

Equation [(4.28)](#Ec4_28) yields $2n$ conditions. In addition, the polynomials $q_k(x)$ of the cubic interpolant $s(x)$ must have the same slope and concavity at the interior nodes, that is,

\begin{equation*}
\begin{split}
q'_{k-1}(x_k)&=q'_k(x_k) \\
q''_{k-1}(x_{k})&=q''_{k}(x_k) \quad k=1,2 \ldots, n-1
\end{split}
\label{eq:Ec4_29} \tag{4.29}
\end{equation*}

Equation [(4.29)](#Ec4_29) yields another $2(n-1)$ conditions to be satisfied.
Equations [(4.28)](#Ec4_28) and [(4.29)](#Ec4_29) are continuity conditions through the first and second derivatives.

If $s(x)$ is piecewise cubic on the interval $[x_0, x_n]$, its second derivative $s''(x)$ is piecewise linear on the same interval. Therefore, on $[x_k, x_{k+1}]$, $q''_k(x)$ is a polynomial of degree one that interpolates the points $(x_k, s''(x_k))$ and $(x_{k+1}, s''(x_{k+1}))$:

\begin{equation*}
\begin{split}
q''_k(x)=s''(x_k) \frac{x-x_{k+1}}{x_k-x_{k+1}}+s''(x_{k+1}) \frac{x-x_{k}}{x_{k+1}-x_{k}}, \quad k=0,1,2,\ldots,n-1
\end{split}
\end{equation*}

Let

\begin{equation*}
\begin{split}
h_k&=x_{k+1}-x_k, \quad k=0,1,2,\ldots,n-1 \\
\sigma_k&=s''(x_k), \quad k=0,1,2,\ldots,n
\end{split}
\end{equation*}

Substituting,

\begin{equation*}
\begin{split}
q''_k(x)=\frac{\sigma_k}{h_k}(x_{k+1}-x)+\frac{\sigma_{k+1}}{h_k}(x-x_k), \quad k=0,1,2,\ldots,n-1
\end{split}
\label{eq:Ec4_30} \tag{4.30}
\end{equation*}

where $h_k$ and $\sigma_k$ are constants, with the $\sigma_k$ still to be determined. Integrating twice gives

\begin{equation*}
\begin{split}
q_k(x)=\frac{\sigma_k}{h_k}\frac{(x_{k+1}-x)^3}{6}+\frac{\sigma_{k+1}}{h_k}\frac{(x-x_k)^3}{6}+C_k +D_kx
\end{split}
\label{eq:Ec4_31} \tag{4.31}
\end{equation*}

The linear term $C_k+D_kx$ can be rewritten as:

\begin{equation*}
\begin{split}
C_k+D_kx=A_k(x-x_k)+B_k(x_{k+1}-x)
\end{split}
\end{equation*}

where $A_k$ and $B_k$ are arbitrary constants.
Equation [(4.31)](#Ec4_31) becomes

\begin{equation*}
\begin{split}
q_k(x)=\frac{\sigma_k}{h_k}\frac{(x_{k+1}-x)^3}{6}+\frac{\sigma_{k+1}}{h_k}\frac{(x-x_k)^3}{6}+A_k(x-x_k)+B_k(x_{k+1}-x)
\end{split}
\label{eq:Ec4_32} \tag{4.32}
\end{equation*}

Applying the conditions given in equation [(4.28)](#Ec4_28) to this equation,

\begin{equation*}
\begin{split}
y_k&=\frac{\sigma_k}{h_k}\frac{h_k^3}{6}+\frac{\sigma_{k+1}}{h_k}\times 0+A_k \times 0 + B_kh_k=\frac{\sigma_k}{6}h_k^2+B_kh_k \\
y_{k+1}&=\frac{\sigma_{k+1}}{h_k}\frac{h_k^3}{6}+A_kh_k=\frac{\sigma_{k+1}}{6}h_k^2+A_kh_k
\end{split}
\label{eq:Ec4_33} \tag{4.33}
\end{equation*}

From these two equations in two unknowns, $A_k$ and $B_k$ are solved for and substituted into equation [(4.32)](#Ec4_32), yielding:

\begin{equation*}
\begin{split}
q_k(x)&=\frac{\sigma_k}{6} \left[ \frac{(x_{k+1}-x)^3}{h_k}-h_k(x_{k+1}-x) \right] \\
&+\frac{\sigma_{k+1}}{6} \left[ \frac{(x-x_k)^3}{h_k}-h_k(x-x_k) \right] \\
&+y_k \left[ \frac{(x_{k+1}-x)}{h_k} \right] +y_{k+1} \left[ \frac{(x-x_k)}{h_k} \right], \quad k=0,1,2,\ldots,n-1
\end{split}
\label{eq:Ec4_34} \tag{4.34}
\end{equation*}

This is the equation for the spline piece $q_k(x)$. The values $\sigma_k$, $k=0,1,2,\ldots, n$, remain to be found, providing another $n+1$ unknowns.
To find them, we use the conditions in equation [(4.29)](#Ec4_29); differentiating equation [(4.34)](#Ec4_34) gives:

\begin{equation*}
\begin{split}
q'_k(x)&=\frac{\sigma_k}{6} \left[ \frac{-3(x_{k+1}-x)^2}{h_k}+h_k \right]+\frac{\sigma_{k+1}}{6} \left[ \frac{3(x-x_k)^2}{h_k}-h_k \right]+\frac{y_{k+1}-y_k}{h_k}
\end{split}
\label{eq:Ec4_35} \tag{4.35}
\end{equation*}

so that

\begin{equation*}
\begin{split}
q'_k(x_k)&=\frac{\sigma_k}{6}(-2h_k)+\frac{\sigma_{k+1}}{6}(-h_k)+\frac{y_{k+1}-y_k}{h_k}
\end{split}
\label{eq:Ec4_36} \tag{4.36}
\end{equation*}

and

\begin{equation*}
\begin{split}
q'_k(x_{k+1})&=\frac{\sigma_k}{6}(h_k)+\frac{\sigma_{k+1}}{6}(2h_k)+\frac{y_{k+1}-y_k}{h_k}
\end{split}
\label{eq:Ec4_37} \tag{4.37}
\end{equation*}

Replacing $k$ by $k-1$ in equation [(4.37)](#Ec4_37) to obtain $q'_{k-1}(x_k)$ and equating it with equation [(4.36)](#Ec4_36) leads to:

\begin{equation*}
\begin{split}
h_{k-1}\sigma_{k-1}+2(h_{k-1}+h_k)\sigma_k+h_k\sigma_{k+1}=6 \left( \frac{y_{k+1}-y_k}{h_k} - \frac{y_k-y_{k-1}}{h_{k-1}} \right), \quad k=1,2,3,\ldots, n-1
\end{split}
\label{eq:Ec4_38} \tag{4.38}
\end{equation*}

Note that the terms in parentheses can be written as differences $(\Delta y_k)$, or even as the Newton divided differences $f[x_k,x_{k+1}]$ seen at the beginning of the chapter.

Bear in mind that the index $k$ runs from $1$ to $n-1$, producing $n-1$ linear equations in $n+1$ unknowns. This is an underdetermined system with infinitely many solutions. There are several ways to fix $\sigma_0$ and $\sigma_n$; eliminating them from the first and $(n-1)$-th equations leads to a tridiagonal system of order $n-1$ in the variables $\sigma_k$, $k=1, 2, 3, \ldots, n-1$.
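The tridiagonal system (4.38) is easy to assemble and solve numerically. Below is a minimal sketch (function names are illustrative) that uses the simplest closure, $\sigma_0 = \sigma_n = 0$, applied to the data of the worked example at the end of this chapter:

```python
import numpy as np

def natural_cubic_sigmas(x, y):
    """Solve system (4.38) for sigma_1..sigma_{n-1}, closing it with the
    natural end conditions sigma_0 = sigma_n = 0."""
    n = len(x) - 1                      # number of intervals
    h = np.diff(x)                      # h_k = x_{k+1} - x_k
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for k in range(1, n):               # one equation per interior node
        i = k - 1
        A[i, i] = 2 * (h[k - 1] + h[k])
        if i > 0:
            A[i, i - 1] = h[k - 1]
        if i < n - 2:
            A[i, i + 1] = h[k]
        rhs[i] = 6 * ((y[k + 1] - y[k]) / h[k] - (y[k] - y[k - 1]) / h[k - 1])
    sigma = np.zeros(n + 1)             # sigma_0 and sigma_n stay zero
    sigma[1:n] = np.linalg.solve(A, rhs)
    return sigma

x = np.array([3.0, 4.5, 7.0, 9.0])
y = np.array([2.5, 1.0, 2.5, 0.5])
print(natural_cubic_sigmas(x, y))   # sigma_1 ≈ 1.6791, sigma_2 ≈ -1.5331
```

With the $\sigma_k$ in hand, each piece follows directly from equation (4.34).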
[Back to Table of Contents](#TOC)

#### Alternative I:

Specify the value of the second derivative $s''(x)$ at the end points: $\sigma_0=s''(x_0)$ and $\sigma_n=s''(x_n)$. Taking $\sigma_0=0$ and $\sigma_n=0$ gives the so-called [natural cubic spline](https://towardsdatascience.com/numerical-interpolation-natural-cubic-spline-52c1157b98ac).

[Back to Table of Contents](#TOC)

#### Alternative II:

Assume that $s''(x)$ is constant near the end points, that is, $\sigma_0=\sigma_1$ and $\sigma_n=\sigma_{n-1}$.

[Back to Table of Contents](#TOC)

#### Alternative III:

Assume that $s''(x)$ is linear near the end points, that is,

$$\sigma_0=\frac{1}{h_1}((h_0+h_1)\sigma_1 - h_0\sigma_2)$$

and

$$\sigma_n=\frac{1}{h_{n-2}}((h_{n-2}-h_{n-1})\sigma_{n-2}+ (h_{n-2}+h_{n-1})\sigma_{n-1})$$


[Back to Table of Contents](#TOC)

#### Alternative IV:

Specify the value of $s'(x)$ at the end points:

$$\sigma_0=\frac{3}{h_0}[\Delta y_0-s'(x_0)]-\frac{1}{2}\sigma_1$$

and

$$\sigma_n=\frac{3}{h_{n-1}}[s'(x_n)-\Delta y_{n-1}]-\frac{1}{2}\sigma_{n-1}$$

[Back to Table of Contents](#TOC)

### Worked example

Given the following data set:

|x|f(x)|
|:--:|:--:|
|3.0|2.5|
|4.5|1.0|
|7.0|2.5|
|9.0|0.5|

evaluate the value at $x=5.0$ using quadratic and cubic splines.

[Back to Table of Contents](#TOC)

#### Quadratic splines

The table gives four points and $n=3$ intervals, so $3n=3\times3=9$ unknowns must be determined.

1. Equations [(4.23)](#Ec4_23) provide $2\times3-2=4$ conditions:

\begin{equation*}
\begin{split}
4.5^{2}a_{1}+4.5b_{1}+c_{1}&=1.0 \\
4.5^{2}a_{2}+4.5b_{2}+c_{2}&=1.0 \\
7.0^{2}a_{2}+7.0b_{2}+c_{2}&=2.5 \\
7.0^{2}a_{3}+7.0b_{3}+c_{3}&=2.5
\end{split}
\end{equation*}

2.
The first and last functions pass through the end points, adding 2 more equations, equation [(4.24)](#Ec4_24):

\begin{equation*}
\begin{split}
3.0^2a_{1}+3.0b_{1}+c_{1}&=2.5 \\
9.0^2a_{3}+9.0b_{3}+c_{3}&=0.5
\end{split}
\end{equation*}

3. Continuity of the first derivatives at the interior nodes creates an additional $3-1=2$ conditions, equation [(4.25)](#Ec4_25):

\begin{equation*}
\begin{split}
9.0a_{1}+b_{1}=9.0a_{2}+b_{2} \\
14.0a_{2}+b_{2}=14.0a_{3}+b_{3}
\end{split}
\end{equation*}

4. Equation [(4.26)](#Ec4_26) supplies the missing condition, namely,

\begin{equation*}
\begin{split}
a_{1}=0
\end{split}
\end{equation*}

This last equation fixes one of the $9$ required unknowns exactly, so the problem reduces to solving the remaining system of $8$ equations. In matrix form:

\begin{align*}
\left[\begin{array}{cccccccc}
 4.5 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 20.25 & 4.5 & 1 & 0 & 0 & 0 \\
 0 & 0 & 49 & 7 & 1 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 49 & 7 & 1 \\
 3 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 81 & 9 & 1 \\
 1 & 0 & -9 & -1 & 0 & 0 & 0 & 0 \\
 0 & 0 & 14 & 1 & 0 & -14 & -1 & 0 \\
\end{array}\right]
\begin{Bmatrix}
 b_{1} \\
 c_{1} \\
 a_{2} \\
 b_{2} \\
 c_{2} \\
 a_{3} \\
 b_{3} \\
 c_{3} \\
\end{Bmatrix}
= \begin{Bmatrix}
 1.0 \\
 1.0 \\
 2.5 \\
 2.5 \\
 2.5 \\
 0.5 \\
 0.0 \\
 0.0
\end{Bmatrix}
\end{align*}

Using one of the techniques for solving systems of linear equations seen in the previous chapter, we arrive at:

\begin{array}{lll}
a_1=0.0, & b_1=-1, & c_1=5.5 \\
a_2=0.64, & b_2=-6.76, & c_2=18.46 \\
a_3=-1.6, & b_3=24.6, & c_3=-91.3
\end{array}

Substituting these values into the original quadratic equations,

\begin{array}{ll}
f_1(x)=-x+5.5, & 3.0\le x \le 4.5 \\
f_2(x)=0.64x^2-6.76x+18.46, & 4.5 \le x \le 7.0 \\
f_3(x)=-1.6x^2+24.6x-91.3, & 7.0\le x \le 9.0 \\
\end{array}

Finally, $f_2$ is used to predict the value at $x=5$, which is $f_2(5)=0.66$.


[Back to Table of Contents](#TOC)

#### Cubic splines



[Back to Table of Contents](#TOC)


```python
from IPython.core.display import HTML
def css_styling():
    styles = open('./nb_style.css', 'r').read()
    return HTML(styles)
css_styling()
```
"converted": true, "num_tokens": 19134, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4532618480153861, "lm_q2_score": 0.3073580232098525, "lm_q1q2_score": 0.13931366560245367}} {"text": "```python\nfrom IPython.core.display import HTML\ncss_file = '../style.css'\nHTML(open(css_file, 'r').read())\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n# Introduction to symbolic python\n\n## Preamble\n\nThis first notebook introduces `python` and `sympy`, the computer algebra library. As with most of the notebooks, we begin by importing the `sympy` language into `python`. We also initialize the notebook to be able to rpint our results using Latex.\n\n\n```python\nfrom sympy import init_printing\ninit_printing()\n```\n\n## Creating mathematical variables\n\nThe equal symbol, `=`, is an assignment operator in Python. It assigns what is on the right-hand side (RHS) to what is on the left hand side (LHS). The LHS is is a name called a computer variable and we are free to choose this name, within certain limits. The name is basically a reference to a part of computer memory that stores what is on the RHS. The RHS is then an object, which can be of various types\n\nThe most common convention in Python is to use a descriptive name that may consist of more than one word, i.e. a phrase. The first letter of each word is in lowercase and the words are concatenated by an underscore. This convention is referred to as _snake case_. For instance, if I want to create a computer variable to hold a number, I might call it `my_number`. The name makes sense in as much as if I viewed my code months down the line or if I gve it to someone else, we should all be able to figure out what is was meant to hold.\n\nPython contains many reserved words that make up the syntax of the language. It is not advised to use these as you computer variable names. 
Also steer clear of symbols, and place numbers at the end of names.

You can use your favorite search engine and look up naming conventions. Find the one that suits you.

Let's then create a computer variable named `my_number` and assign the value $4$ to it.


```python
my_number = 4  # Creating a computer variable and assigning a value to it.
```

A small part of the computer memory is now _called_ `my_number` and it contains the value $4$. We can access this value by simply calling the name of the computer variable.


```python
my_number
```

The value $4$ is an instance of an object and, as an object, is of a certain type. We can check the object type with the `type()` function (a keyword from the Python syntax that actually performs a, well, _function_). All functions in Python have a set of parentheses at the end. Inside of these parentheses we pass the information that the function requires to do its job. These pieces of information are called _arguments_. We will pass `my_number` as argument.


```python
type(my_number)
```




    int



We note that it is of type `int`, which is short for _integer_, a whole number. We can reassign the computer variable to hold another value, which may be an instance of another type of object.


```python
my_number = 4.0  # The .0 indicates that this is a decimal value (we can also just type 4.)
type(my_number)
```




    float



Our computer variable is now an instance of a floating point (`float`) object. We use the term _instance_ because in each case we create a single example of an object.

Computer variables and the assignment operator do not behave in the same way as their mathematical namesakes. Look at the code below.


```python
my_number = my_number + 1
my_number
```

Initially, this makes no sense.
Just to be sure that we are all on the same page, let's create another computer variable called `x` and assign the integer value $7$ to it.


```python
x = 7  # Creating a computer variable and assigning the integer value 7 to it
x  # Calling the computer variable to access the value that it holds
```

Now we'll repeat the ` + 1` we did above.


```python
x = x + 1
x
```

Algebraically this makes no sense, until we remember that the `=` symbol is not an equal sign, but an assignment operator, assigning what is on the RHS to what is on the LHS. On the RHS we have `x + 1`. At the moment `x` holds the value `7`. We then add $1$ to it to make it $7+1=8$. This new value, $8$, is then assigned to the LHS, which is the computer variable `x`. The value $8$ then overwrites the previous value held in `x`.

In mathematics, then, we are used to mathematical variables, such as $x$ and $y$, and not computer variables, `x` and `y`. We have to use a package such as `sympy` to help Python deal with this. More specifically, we import just the `symbols()` function from the `sympy` package. Note the syntax for doing so.


```python
from sympy import symbols
```

We now have the `symbols()` function added to the many built-in Python functions. We will use it to reassign our computer variable `x` and turn it into a mathematical variable.


```python
x = symbols('x')  # Setting x as a mathematical symbol
type(x)  # Looking at the type of x
```




    sympy.core.symbol.Symbol



Now `x` is a mathematical symbol (in Python a `sympy.core.symbol.Symbol` object) and we can write an expression such as $x+1$.


```python
x + 1
```

You might wonder what type `x+1` is. Certainly, `1` is a Python `int`.


```python
type(1)  # Checking the type of the number 1
```




    int




```python
type(x + 1)  # Checking the type of x + 1
```




    sympy.core.add.Add



We note that it is a `sympy.core.add.Add` type.
This type allows Python to treat the addition as actual mathematical (symbolic) addition.\n\nLet's add `y` as a mathematical symbol and try to create $\\frac{x}{y}$. Here we are not using an assignment operator, we simply write the expression $\\frac{x}{y}$.\n\n\n```python\ny = symbols('y')\n```\n\nJust so that you know, the forward slash, `/`, symbol is used for division.\n\n\n```python\nx / y # Stating a calculation without an assignment operator is an expression\n```\n\nThe `sympy` package is great at creating mathematical typesetting. Let's create an expression with a power. In Python, two asterisks, `**`, are used to indicate powers. Below we create the expression $x^2 - x$.\n\n\n```python\nx**2 - x\n```\n\n## Transformation of expressions\n\nNow we will get right into things and examine the power of `sympy`. One of the common tasks in algebra is factorization, as in $x^2 - x = x \\left( x - 1 \\right)$.\n\nPython functions that act on objects (or expressions) that we create are termed _methods_. The `.factor()` method is a function that will factor our expression.\n\n\n```python\n(x**2 - x).factor()\n```\n\nWe can also expand an expression using the `expand()` method. 
By the way, we use the single asterisk, `*`, symbol for multiplication.\n\n\n```python\n(x * (x - 1)).expand()\n```\n\nJust to be clear, we can import the actual functions, `factor()` and `expand()`, from `sympy`.\n\n\n```python\nfrom sympy import factor, expand\n```\n\nNow we can use them as functions (instead of the method syntax we used above).\n\n\n```python\nfactor(x**2 - x)\n```\n\n\n```python\nexpand(x * (x - 1))\n```\n\nLastly, we are still using Python, so we can assign any `sympy` expression to a computer variable and use all of the functions and methods we have learned about.\n\n\n```python\nmy_expression = x * (x - 1) # Creating a computer variable\nmy_expression\n```\n\n\n```python\nexpand(my_expression)\n```\n\n## Common mathematical functions\n\nThe `sympy` package is really great for symbolic (mathematical) computation. In this section we highlight the difference between numerical (Python) and symbolic (`sympy`) computation. \n\nAs an example of numerical computation, let's calculate $\\frac{5}{3}$.\n\n\n```python\n5 / 3\n```\n\nWe get an approximation, i.e. the repeating $6$ is terminated by rounding to a $7$. The real solution is obviously $1.\\dot{6}$. Better yet, the exact solution is $\\frac{5}{3}$.\n\nThere is a package called `math` that expands Python's ability to do numerical computations. Let's import it and calculate the square root of eight.\n\n\n```python\nimport math\n```\n\nBecause we did not import any specific `math` functions, we have to refer to them by dot notation, i.e. `math.sqrt()` for the square root function in the `math` package.\n\n\n```python\nmath.sqrt(8) # An approximation of the square root of 8\n```\n\nThe `math` package contains numerical approximations of constants such as $\\pi$.\n\n\n```python\nmath.pi\n```\n\nThe `exp()` function can give us an approximation of Euler's number. 
We can get this approximation by passing the argument `1`, as in $e^1 = e$.\n\n\n```python\nmath.exp(1)\n```\n\nSince we are dealing with approximations when doing numerical calculations, we have to deal with a bit of rounding. Here is $\\sin \\left( \\frac{\\pi}{6} \\right) = 0.5 $.\n\n\n```python\nmath.sin(math.pi/6)\n```\n\nThe `log()` function in the `math` package calculates the natural logarithm. The natural logarithm of Euler's number is $1$.\n\n\n```python\nmath.log(math.exp(1))\n```\n\nNow, let's change to symbolic computation and import some useful functions from `sympy`.\n\n\n```python\nfrom sympy import Rational, sqrt, log, exp, sin, pi, I\n```\n\nThe `Rational()` function gives an exact representation of a fraction. Here is $\\frac{5}{3}$ again.\n\n\n```python\n# Because we imported the function directly, we don't have to use the dot notation\nRational(5, 3)\n```\n\nThe `evalf()` method still allows us to get a numerical approximation.\n\n\n```python\n(Rational(5, 3)).evalf()\n```\n\nWe can even specify the number of significant digits we require by passing it as an argument.\n\n\n```python\n(Rational(5, 3)).evalf(3) # Significant digits\n```\n\nNow for the square root of $8$.\n\n\n```python\nsqrt(8)\n```\n\nThat's beautiful! For even more beauty, here's $\\pi$ and then a numerical approximation with $40$ significant digits.\n\n\n```python\npi\n```\n\n\n```python\npi.evalf(40)\n```\n\nEuler's number is just as spectacular.\n\n\n```python\nexp(1)\n```\n\n\n```python\nexp(1).evalf(40) # Forty significant digits\n```\n\nThe trigonometric and logarithmic expressions from above will also now give us an exact solution.\n\n\n```python\nsin(pi / 6)\n```\n\n\n```python\nlog(exp(1))\n```\n\n## Substitutions\n\nWhile we create symbolic expressions, `sympy` does not prevent us from substituting actual numerical values into our mathematical variables. The `.subs()` method does the job. 
Below we create an expression and assign it to the computer variable `expr`.\n\n\n```python\nexpr = x + 4 # x is a mathematical symbol\nexpr\n```\n\nWe have to specify the mathematical variable we want substituted and then the value we want to substitute with.\n\n\n```python\nexpr.subs(x, 3)\n```\n\nIf we have more than one mathematical symbol, we can substitute all of them using the syntax below.\n\n\n```python\nexpr = x + y # Overwriting the expr computer variable\nexpr.subs([(x, 2), (y, 5)])\n```\n\nThe square bracket, `[]`, notation creates a list object. An alternative syntax uses dictionary objects with curly braces, `{}`. We will learn more about these at a later stage.\n\n\n```python\nexpr.evalf(subs = {x:2, y:5})\n```\n\n## Equality\n\nSubstitution provides a great segue into Boolean logic. The double equal symbol, `==`, evaluates both sides of an equation to see if they are indeed equal.\n\n\n```python\nexpr.subs(x, 3) == 7\n```\n\n\n\n\n    False\n\n\n\nRemember that `expr` was overwritten and now holds $x + y$, so `expr.subs(x, 3)` gives $3 + y$, which is not equal to $7$, hence the `False`.\n\nThe `sympy` package adheres to the principle of exact structural equality. Let's look at the expression $\\left( x+1 \\right)^2$ and its expansion $x^2 + 2x +1$.\n\n\n```python\n(x + 1)**2\n```\n\n\n```python\n((x + 1)**2).expand()\n```\n\nWe know that these two expressions are equal to each other. Let's test this assumption.\n\n\n```python\n(x + 1)**2 == x**2 + 2 * x + 1\n```\n\n\n\n\n    False\n\n\n\nWe see a `False`. This is what we mean by adherence to the principle of exact structural equality. If we expand the LHS or factor the RHS, we will get equality.\n\n\n```python\n((x + 1)**2).expand() == x**2 + 2 * x + 1\n```\n\n\n\n\n    True\n\n\n\n\n```python\n((x + 1)**2) == (x**2 + 2 * x + 1).factor()\n```\n\n\n\n\n    True\n\n\n\n## Roundtripping to numerical packages\n\nWhile `sympy` is great at symbolic computations, we often need numerical evaluations. We did this using the `.subs()` and `.evalf()` methods. They do single substitutions, but what if we want computations on many values? 
To do this, we roundtrip to some of the Python packages that are designed for numerical computations, such as `numpy` (numerical Python) and `scipy` (scientific Python).\n\nFortunately, `sympy` provides the `lambdify()` function. Imagine then that we want to calculate $\\sin \\left( x \\right)$ for the integer values in the closed domain $\\left[ -3, 3 \\right]$. The `arange()` function in the `numpy` package will let us create this list of values. We will use three arguments. The first is the start value, the second is the end value (`numpy` excludes this from the final list, so we will use `4`), and the third is the step size, which is $1$.\n\n\n```python\nfrom numpy import arange\n```\n\n\n```python\nmy_domain = arange(-3, 4, 1) # Creating the list of integers in our chosen domain\nmy_domain\n```\n\n\n\n\n    array([-3, -2, -1,  0,  1,  2,  3])\n\n\n\nThe `arange()` function creates a `numpy` object called an `array`. We note the seven integers in our array object, `my_domain`. Now we overwrite the `expr` computer variable to hold the sine function.\n\n\n```python\nexpr = sin(x)\nexpr\n```\n\n\n```python\nfrom sympy import lambdify # Importing the required function\n```\n\nThe `lambdify()` function as used below takes three arguments. First is the mathematical variable of interest. Then follows the actual expression, and finally the package we want to use for the calculation. We assign the result to the computer variable `f`.\n\n\n```python\nf = lambdify(x, expr, 'numpy')\n```\n\nIt is now a simple matter of passing the seven-element array to `f`. All seven values in the array will be passed to the expression.\n\n\n```python\nf(my_domain)\n```\n\n\n\n\n    array([-0.14112001, -0.90929743, -0.84147098,  0. 
, 0.84147098,\n        0.90929743,  0.14112001])
\n```python\n# This mounts your Google Drive to the Colab VM.\nfrom google.colab import drive\ndrive.mount('/content/drive')\n\n# TODO: Enter the foldername in your Drive where you have saved the unzipped\n# assignment folder, e.g. 'cs231n/assignments/assignment1/'\nFOLDERNAME = 'cs231n/assignment2'\nassert FOLDERNAME is not None, \"[!] Enter the foldername.\"\n\n# Now that we've mounted your Drive, this ensures that\n# the Python interpreter of the Colab VM can load\n# python files from within it.\nimport sys\nsys.path.append('/content/drive/My Drive/{}'.format(FOLDERNAME))\n\n# This downloads the CIFAR-10 dataset to your Drive\n# if it doesn't already exist.\n%cd /content/drive/My\\ Drive/$FOLDERNAME/cs231n/datasets/\n!bash get_datasets.sh\n%cd /content/drive/My\\ Drive/$FOLDERNAME\n```\n\n    Mounted at /content/drive\n    /content/drive/My Drive/cs231n/assignment2/cs231n/datasets\n    /content/drive/My Drive/cs231n/assignment2\n\n\n# Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization, proposed by [1] in 2015.\n\nTo understand the goal of batch normalization, it is important to first recognize that machine learning methods tend to perform better with input data consisting of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features. This will ensure that the first layer of the network sees data that follows a nice distribution. 
However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance, since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\n\nThe authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, they propose to insert into the network layers that normalize batches. At training time, such a layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\n\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. 
To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n\n[1] [Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.](https://arxiv.org/abs/1502.03167)\n\n\n```python\n# Setup cell.\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams[\"figure.figsize\"] = (10.0, 8.0) # Set default size of plots.\nplt.rcParams[\"image.interpolation\"] = \"nearest\"\nplt.rcParams[\"image.cmap\"] = \"gray\"\n\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\"Returns relative error.\"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\ndef print_mean_std(x,axis=0):\n print(f\" means: {x.mean(axis=axis)}\")\n print(f\" stds: {x.std(axis=axis)}\\n\")\n```\n\n =========== You can safely ignore the message below if you are NOT working on ConvolutionalNetworks.ipynb ===========\n \tYou will need to compile a Cython extension for a portion of this assignment.\n \tThe instructions to do this will be given in a section of the notebook below.\n\n\n\n```python\n# Load the (preprocessed) CIFAR-10 data.\ndata = get_CIFAR10_data()\nfor k, v in list(data.items()):\n print(f\"{k}: {v.shape}\")\n```\n\n X_train: (49000, 3, 32, 32)\n y_train: (49000,)\n X_val: (1000, 3, 32, 32)\n y_val: (1000,)\n X_test: (1000, 3, 32, 32)\n y_test: (1000,)\n\n\n# Batch Normalization: Forward Pass\nIn the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. 
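The train/test behaviour described above can be sketched in a few lines of plain NumPy. This is only an illustration of the idea (the function name, the `eps` and `momentum` defaults, and the dict-based `bn_param` interface are assumptions here), not the `batchnorm_forward` you are asked to write:

```python
import numpy as np

def batchnorm_forward_sketch(x, gamma, beta, bn_param, eps=1e-5, momentum=0.9):
    """Illustrative batch normalization forward pass (not the assignment solution).

    x: (N, D) minibatch; gamma, beta: (D,) learnable scale and shift.
    bn_param carries 'mode' ('train' or 'test') and the running statistics.
    """
    mode = bn_param['mode']
    running_mean = bn_param.get('running_mean', np.zeros(x.shape[1]))
    running_var = bn_param.get('running_var', np.zeros(x.shape[1]))

    if mode == 'train':
        mu = x.mean(axis=0)                    # per-feature minibatch mean
        var = x.var(axis=0)                    # per-feature minibatch variance
        x_hat = (x - mu) / np.sqrt(var + eps)  # center and normalize
        # Keep exponential running averages for use at test time.
        bn_param['running_mean'] = momentum * running_mean + (1 - momentum) * mu
        bn_param['running_var'] = momentum * running_var + (1 - momentum) * var
    else:
        # Test time: normalize with the running averages, not the batch statistics.
        x_hat = (x - running_mean) / np.sqrt(running_var + eps)

    return gamma * x_hat + beta  # learnable scale and shift restore expressiveness
```

Note how the learnable `gamma` and `beta` at the end are what let a layer undo the normalization if a non-zero-mean or non-unit-variance representation happens to be optimal.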
Once you have done so, run the following to test your implementation.\n\nReferencing the paper linked to above in [1] may be helpful!\n\n\n```python\n# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization \n\n# Simulate the forward pass for a two-layer network.\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before batch normalization:')\nprint_mean_std(a,axis=0)\n\ngamma = np.ones((D3,))\nbeta = np.zeros((D3,))\n\n# Means should be close to zero and stds close to one.\nprint('After batch normalization (gamma=1, beta=0)')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\n\n# Now means should be close to beta and stds close to gamma.\nprint('After batch normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n```\n\n Before batch normalization:\n means: [ -2.3814598 -13.18038246 1.91780462]\n stds: [27.18502186 34.21455511 37.68611762]\n \n After batch normalization (gamma=1, beta=0)\n means: [5.99520433e-17 6.93889390e-17 8.32667268e-19]\n stds: [0.99999999 1. 1. ]\n \n After batch normalization (gamma= [1. 2. 3.] , beta= [11. 12. 13.] )\n means: [11. 12. 
13.]\n stds: [0.99999999 1.99999999 2.99999999]\n \n\n\n\n```python\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n\nfor t in range(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\n\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After batch normalization (test-time):')\nprint_mean_std(a_norm,axis=0)\n```\n\n After batch normalization (test-time):\n means: [-0.03927354 -0.04349152 -0.10452688]\n stds: [1.01531428 1.01238373 0.97819988]\n \n\n\n# Batch Normalization: Backward Pass\nNow implement the backward pass for batch normalization in the function `batchnorm_backward`.\n\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. 
Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\n\nOnce you have finished, run the following to numerically check your backward pass.\n\n\n```python\n# Gradient check batchnorm backward pass.\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\n\n# You should expect to see relative errors between 1e-13 and 1e-8.\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.6674604875341426e-09\n dgamma error: 7.417225040694815e-13\n dbeta error: 2.379446949959628e-12\n\n\n# Batch Normalization: Alternative Backward Pass\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper.\n\nSurprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too! 
\n\nIn the forward pass, given a set of inputs $X=\\begin{bmatrix}x_1\\\\x_2\\\\...\\\\x_N\\end{bmatrix}$, \n\nwe first calculate the mean $\\mu$ and variance $v$.\nWith $\\mu$ and $v$ calculated, we can calculate the standard deviation $\\sigma$ and normalized data $Y$.\nThe equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$).\n\n\\begin{align}\n& \\mu=\\frac{1}{N}\\sum_{k=1}^N x_k & v=\\frac{1}{N}\\sum_{k=1}^N (x_k-\\mu)^2 \\\\\n& \\sigma=\\sqrt{v+\\epsilon} & y_i=\\frac{x_i-\\mu}{\\sigma}\n\\end{align}\n\n\n\nThe meat of our problem during backpropagation is to compute $\\frac{\\partial L}{\\partial X}$, given the upstream gradient we receive, $\\frac{\\partial L}{\\partial Y}.$ To do this, recall the chain rule in calculus gives us $\\frac{\\partial L}{\\partial X} = \\frac{\\partial L}{\\partial Y} \\cdot \\frac{\\partial Y}{\\partial X}$.\n\nThe unknown/hard part is $\\frac{\\partial Y}{\\partial X}$. We can find this by first deriving step-by-step our local gradients at \n$\\frac{\\partial v}{\\partial X}$, $\\frac{\\partial \\mu}{\\partial X}$,\n$\\frac{\\partial \\sigma}{\\partial v}$, \n$\\frac{\\partial Y}{\\partial \\sigma}$, and $\\frac{\\partial Y}{\\partial \\mu}$,\nand then use the chain rule to compose these gradients (which appear in the form of vectors!) appropriately to compute $\\frac{\\partial Y}{\\partial X}$.\n\nIf it's challenging to directly reason about the gradients over $X$ and $Y$ which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\\frac{\\partial L}{\\partial x_i}$, by relying on the Chain Rule to first calculate the intermediate $\\frac{\\partial \\mu}{\\partial x_i}, \\frac{\\partial v}{\\partial x_i}, \\frac{\\partial \\sigma}{\\partial x_i},$ then assemble these pieces to calculate $\\frac{\\partial y_i}{\\partial x_i}$. 
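For checking your own work: carrying the chain rule through the intermediates above and simplifying, one compact form of the result (in the notation of the graph, where $y_i$ does not include the learnable scale and shift) is

\begin{align}
\frac{\partial L}{\partial x_i} = \frac{1}{\sigma}\left( \frac{\partial L}{\partial y_i} - \frac{1}{N}\sum_{k=1}^N \frac{\partial L}{\partial y_k} - \frac{y_i}{N}\sum_{k=1}^N y_k \frac{\partial L}{\partial y_k} \right)
\end{align}

When the layer also applies the learnable scale $\gamma$, the upstream gradient you receive is with respect to $\gamma y_i + \beta$, so each $\frac{\partial L}{\partial y_k}$ above picks up a per-feature factor of $\gamma$. You should verify this form against your own derivation before relying on it.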
\n\nYou should make sure each of the intermediary gradient derivations are all as simplified as possible, for ease of implementation. \n\nAfter doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\n\n\n```python\nnp.random.seed(231)\nN, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint('dx difference: ', rel_error(dx1, dx2))\nprint('dgamma difference: ', rel_error(dgamma1, dgamma2))\nprint('dbeta difference: ', rel_error(dbeta1, dbeta2))\nprint('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))\n```\n\n dx difference: 9.890497291190823e-13\n dgamma difference: 0.0\n dbeta difference: 0.0\n speedup: 1.62x\n\n\n# Fully Connected Networks with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization.\n\nConcretely, when the `normalization` flag is set to `\"batchnorm\"` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. 
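The per-layer ordering this asks for (affine transform, then batch normalization, then ReLU) can be sketched as a single training-mode forward step in plain NumPy. The function name and the batch-statistics-only normalization here are illustrative assumptions, not the helper you may choose to add:

```python
import numpy as np

def affine_bn_relu_step(x, W, b, gamma, beta, eps=1e-5):
    """One hidden-layer forward step: affine -> batchnorm (train mode) -> ReLU."""
    a = x @ W + b                                                 # affine transform
    a_hat = (a - a.mean(axis=0)) / np.sqrt(a.var(axis=0) + eps)   # normalize per feature
    return np.maximum(0, gamma * a_hat + beta)                    # scale/shift, then ReLU
```

The last layer of the network would apply only the affine transform, since its scores feed the loss directly and should not be normalized.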
Once you are done, run the following to gradient-check your implementation.\n\n**Hint:** You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`.\n\n\n```python\nnp.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\n# You should expect losses between 1e-4~1e-10 for W, \n# losses between 1e-08~1e-10 for b,\n# and losses between 1e-08~1e-09 for beta and gammas.\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n normalization='batchnorm')\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n if reg == 0: print()\n```\n\n Running check with reg = 0\n Initial loss: 2.2611955101340957\n W1 relative error: 1.10e-04\n W2 relative error: 5.65e-06\n W3 relative error: 4.14e-10\n b1 relative error: 2.78e-09\n b2 relative error: 2.22e-08\n b3 relative error: 1.02e-10\n beta1 relative error: 6.94e-09\n beta2 relative error: 1.89e-09\n gamma1 relative error: 7.47e-09\n gamma2 relative error: 3.35e-09\n \n Running check with reg = 3.14\n Initial loss: 6.996533220108303\n W1 relative error: 1.98e-06\n W2 relative error: 2.28e-06\n W3 relative error: 1.11e-08\n b1 relative error: 5.55e-09\n b2 relative error: 2.22e-08\n b3 relative error: 2.10e-10\n beta1 relative error: 6.32e-09\n beta2 relative error: 3.39e-09\n gamma1 relative error: 6.27e-09\n gamma2 relative error: 4.14e-09\n\n\n# Batch Normalization for Deep Networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.\n\n\n```python\nnp.random.seed(231)\n\n# 
Try training a very deep net with batchnorm.\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\nprint('Solver with batch norm:')\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True,print_every=20)\nbn_solver.train()\n\nprint('\\nSolver without batch norm:')\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=20)\nsolver.train()\n```\n\n Solver with batch norm:\n (Iteration 1 / 200) loss: 2.340974\n (Epoch 0 / 10) train acc: 0.107000; val_acc: 0.115000\n (Epoch 1 / 10) train acc: 0.314000; val_acc: 0.266000\n (Iteration 21 / 200) loss: 2.039365\n (Epoch 2 / 10) train acc: 0.385000; val_acc: 0.279000\n (Iteration 41 / 200) loss: 2.041102\n (Epoch 3 / 10) train acc: 0.494000; val_acc: 0.308000\n (Iteration 61 / 200) loss: 1.753902\n (Epoch 4 / 10) train acc: 0.531000; val_acc: 0.307000\n (Iteration 81 / 200) loss: 1.246584\n (Epoch 5 / 10) train acc: 0.574000; val_acc: 0.313000\n (Iteration 101 / 200) loss: 1.320589\n (Epoch 6 / 10) train acc: 0.634000; val_acc: 0.338000\n (Iteration 121 / 200) loss: 1.157328\n (Epoch 7 / 10) train acc: 0.683000; val_acc: 0.323000\n (Iteration 141 / 200) loss: 1.135180\n (Epoch 8 / 10) train acc: 0.770000; val_acc: 0.328000\n (Iteration 161 / 200) loss: 0.677145\n (Epoch 9 / 10) train acc: 0.783000; val_acc: 0.343000\n (Iteration 181 / 200) loss: 0.935965\n (Epoch 10 / 10) train acc: 0.803000; val_acc: 0.339000\n \n Solver without batch norm:\n 
(Iteration 1 / 200) loss: 2.302332\n (Epoch 0 / 10) train acc: 0.129000; val_acc: 0.131000\n (Epoch 1 / 10) train acc: 0.283000; val_acc: 0.250000\n (Iteration 21 / 200) loss: 2.041970\n (Epoch 2 / 10) train acc: 0.316000; val_acc: 0.277000\n (Iteration 41 / 200) loss: 1.900473\n (Epoch 3 / 10) train acc: 0.373000; val_acc: 0.282000\n (Iteration 61 / 200) loss: 1.713156\n (Epoch 4 / 10) train acc: 0.390000; val_acc: 0.310000\n (Iteration 81 / 200) loss: 1.662209\n (Epoch 5 / 10) train acc: 0.434000; val_acc: 0.300000\n (Iteration 101 / 200) loss: 1.696059\n (Epoch 6 / 10) train acc: 0.535000; val_acc: 0.345000\n (Iteration 121 / 200) loss: 1.557987\n (Epoch 7 / 10) train acc: 0.530000; val_acc: 0.304000\n (Iteration 141 / 200) loss: 1.432189\n (Epoch 8 / 10) train acc: 0.628000; val_acc: 0.339000\n (Iteration 161 / 200) loss: 1.033931\n (Epoch 9 / 10) train acc: 0.661000; val_acc: 0.340000\n (Iteration 181 / 200) loss: 0.901034\n (Epoch 10 / 10) train acc: 0.726000; val_acc: 0.318000\n\n\nRun the following to visualize the results from two networks trained above. 
You should find that using batch normalization helps the network to converge much faster.\n\n\n```python\ndef plot_training_history(title, label, baseline, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None):\n \"\"\"utility function for plotting training history\"\"\"\n plt.title(title)\n plt.xlabel(label)\n bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers]\n bl_plot = plot_fn(baseline)\n num_bn = len(bn_plots)\n for i in range(num_bn):\n label='with_norm'\n if labels is not None:\n label += str(labels[i])\n plt.plot(bn_plots[i], bn_marker, label=label)\n label='baseline'\n if labels is not None:\n label += str(labels[0])\n plt.plot(bl_plot, bl_marker, label=label)\n plt.legend(loc='lower center', ncol=num_bn+1) \n\n \nplt.subplot(3, 1, 1)\nplot_training_history('Training loss','Iteration', solver, [bn_solver], \\\n lambda x: x.loss_history, bl_marker='o', bn_marker='o')\nplt.subplot(3, 1, 2)\nplot_training_history('Training accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.train_acc_history, bl_marker='-o', bn_marker='-o')\nplt.subplot(3, 1, 3)\nplot_training_history('Validation accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o')\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n# Batch Normalization and Initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\n\nThe first cell will train eight-layer networks both with and without batch normalization using different scales for weight initialization. 
The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.\n\n\n```python\nnp.random.seed(231)\n\n# Try training a very deep net with batchnorm.\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\nnum_train = 1000\nsmall_data = {\n    'X_train': data['X_train'][:num_train],\n    'y_train': data['y_train'][:num_train],\n    'X_val': data['X_val'],\n    'y_val': data['y_val'],\n}\n\nbn_solvers_ws = {}\nsolvers_ws = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n    print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))\n    bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\n    model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\n    bn_solver = Solver(bn_model, small_data,\n                num_epochs=10, batch_size=50,\n                update_rule='adam',\n                optim_config={\n                  'learning_rate': 1e-3,\n                },\n                verbose=False, print_every=200)\n    bn_solver.train()\n    bn_solvers_ws[weight_scale] = bn_solver\n\n    solver = Solver(model, small_data,\n                num_epochs=10, batch_size=50,\n                update_rule='adam',\n                optim_config={\n                  'learning_rate': 1e-3,\n                },\n                verbose=False, print_every=200)\n    solver.train()\n    solvers_ws[weight_scale] = solver\n```\n\n    Running weight scale 1 / 20\n    Running weight scale 2 / 20\n    Running weight scale 3 / 20\n    Running weight scale 4 / 20\n    Running weight scale 5 / 20\n    Running weight scale 6 / 20\n    Running weight scale 7 / 20\n    Running weight scale 8 / 20\n    Running weight scale 9 / 20\n    Running weight scale 10 / 20\n    Running weight scale 11 / 20\n    Running weight scale 12 / 20\n    Running weight scale 13 / 20\n    Running weight scale 14 / 20\n    Running weight scale 15 / 20\n    Running weight scale 16 / 20\n\n\n    /content/drive/My Drive/cs231n/assignment2/cs231n/layers.py:149: RuntimeWarning: divide by zero encountered in log\n      loss = np.sum(-np.log(correct_score/score_sum))\n\n\n    Running weight scale 17 / 
20\n Running weight scale 18 / 20\n Running weight scale 19 / 20\n Running weight scale 20 / 20\n\n\n\n```python\n# Plot results of weight scale experiment.\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers_ws[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers_ws[ws].train_acc_history))\n \n best_val_accs.append(max(solvers_ws[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers_ws[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers_ws[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers_ws[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs. weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs. weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs. weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\nplt.gca().set_ylim(1.0, 3.5)\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n## Inline Question 1:\nDescribe the results of this experiment. 
How does the weight initialization scale affect models with/without batch normalization differently, and why?\n\n## Answer:\nThe weight initialization scale has a large effect on the models without normalization, as seen above. With batch normalization, however, the models are far less sensitive to it: normalizing the activations of each layer keeps the gradients well scaled regardless of the initial weight scale, which improves gradient flow and speeds up gradient descent. \n\n\n# Batch Normalization and Batch Size\nWe will now run a small experiment to study the interaction of batch normalization and batch size.\n\nThe first cell will train 6-layer networks both with and without batch normalization using different batch sizes. The second cell will plot training accuracy and validation set accuracy over time.\n\n\n```python\ndef run_batchsize_experiments(normalization_mode):\n np.random.seed(231)\n \n # Try training a very deep net with batchnorm.\n hidden_dims = [100, 100, 100, 100, 100]\n num_train = 1000\n small_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n }\n n_epochs=10\n weight_scale = 2e-2\n batch_sizes = [5,10,50]\n lr = 10**(-3.5)\n solver_bsize = batch_sizes[0]\n\n print('No normalization: batch size = ',solver_bsize)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n solver = Solver(model, small_data,\n num_epochs=n_epochs, batch_size=solver_bsize,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n solver.train()\n \n bn_solvers = []\n for i in range(len(batch_sizes)):\n b_size=batch_sizes[i]\n print('Normalization: batch size = ',b_size)\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=normalization_mode)\n bn_solver = Solver(bn_model, small_data,\n num_epochs=n_epochs, batch_size=b_size,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n bn_solver.train()\n bn_solvers.append(bn_solver)\n \n return bn_solvers, solver, 
batch_sizes\n\nbatch_sizes = [5,10,50]\nbn_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('batchnorm')\n```\n\n No normalization: batch size = 5\n Normalization: batch size = 5\n Normalization: batch size = 10\n Normalization: batch size = 50\n\n\n\n```python\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 2:\nDescribe the results of this experiment. What does this imply about the relationship between batch normalization and batch size? Why is this relationship observed?\n\n## Answer:\nDuring training, accuracy increases with batch size because the normalization statistics are computed from the current mini-batch, and with a small batch those statistics are noisy estimates that do not generalize well. At test time, however, the accuracy is computed with the running statistics (mean and variance) accumulated during training, so it is much less affected by batch size.\n\n\n# Layer Normalization\nBatch normalization has proved to be effective in making networks easier to train, but the dependency on batch size makes it less useful in complex networks which have a cap on the input batch size due to hardware limitations. \n\nSeveral alternatives to batch normalization have been proposed to mitigate this problem; one such technique is Layer Normalization [2]. Instead of normalizing over the batch, we normalize over the features. 
In other words, when using Layer Normalization, each feature vector corresponding to a single datapoint is normalized based on the sum of all terms within that feature vector.\n\n[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. \"Layer Normalization.\" stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)\n\n## Inline Question 3:\nWhich of these data preprocessing steps is analogous to batch normalization, and which is analogous to layer normalization?\n\n1. Scaling each image in the dataset, so that the RGB channels for each row of pixels within an image sums up to 1.\n2. Scaling each image in the dataset, so that the RGB channels for all pixels within an image sums up to 1. \n3. Subtracting the mean image of the dataset from each image in the dataset.\n4. Setting all RGB values to either 0 or 1 depending on a given threshold.\n\n## Answer:\n2 is analogous to layer normalization: each datapoint (image) is rescaled using a statistic computed over all of its own features. 3 is analogous to batch normalization: the statistic (the mean image) is computed across the whole dataset, just as batch normalization computes statistics across the batch. 1 and 4 are analogous to neither.\n\n\n# Layer Normalization: Implementation\n\nNow you'll implement layer normalization. This step should be relatively straightforward, as conceptually the implementation is almost identical to that of batch normalization. One significant difference though is that for layer normalization, we do not keep track of the moving moments, and the testing phase is identical to the training phase, where the mean and variance are directly calculated per datapoint.\n\nHere's what you need to do:\n\n* In `cs231n/layers.py`, implement the forward pass for layer normalization in the function `layernorm_forward`. \n\nRun the cell below to check your results.\n* In `cs231n/layers.py`, implement the backward pass for layer normalization in the function `layernorm_backward`. \n\nRun the second cell below to check your results.\n* Modify `cs231n/classifiers/fc_net.py` to add layer normalization to the `FullyConnectedNet`. When the `normalization` flag is set to `\"layernorm\"` in the constructor, you should insert a layer normalization layer before each ReLU nonlinearity. 
\n\nRun the third cell below to run the batch size experiment on layer normalization.\n\n\n```python\n# Check the training-time forward pass by checking means and variances\n# of features both before and after layer normalization.\n\n# Simulate the forward pass for a two-layer network.\nnp.random.seed(231)\nN, D1, D2, D3 =4, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before layer normalization:')\nprint_mean_std(a,axis=1)\n\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n\n# Means should be close to zero and stds close to one.\nprint('After layer normalization (gamma=1, beta=0)')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n\ngamma = np.asarray([3.0,3.0,3.0])\nbeta = np.asarray([5.0,5.0,5.0])\n\n# Now means should be close to beta and stds close to gamma.\nprint('After layer normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n```\n\n Before layer normalization:\n means: [-59.06673243 -47.60782686 -43.31137368 -26.40991744]\n stds: [10.07429373 28.39478981 35.28360729 4.01831507]\n \n After layer normalization (gamma=1, beta=0)\n means: [-2.22044605e-16 -7.40148683e-17 -7.40148683e-17 -5.92118946e-16]\n stds: [0.99999995 0.99999999 1. 0.99999969]\n \n After layer normalization (gamma= [3. 3. 3.] , beta= [5. 5. 5.] )\n means: [5. 5. 5. 
5.]\n stds: [2.99999985 2.99999998 2.99999999 2.99999907]\n \n\n\n\n```python\n# Gradient check layernorm backward pass.\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nln_param = {}\nfx = lambda x: layernorm_forward(x, gamma, beta, ln_param)[0]\nfg = lambda a: layernorm_forward(x, a, beta, ln_param)[0]\nfb = lambda b: layernorm_forward(x, gamma, b, ln_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = layernorm_forward(x, gamma, beta, ln_param)\ndx, dgamma, dbeta = layernorm_backward(dout, cache)\n\n# You should expect to see relative errors between 1e-12 and 1e-8.\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.2514082056813097e-09\n dgamma error: 1.980045566295477e-12\n dbeta error: 2.5842537629899423e-12\n\n\n# Layer Normalization and Batch Size\n\nWe will now run the previous batch size experiment with layer normalization instead of batch normalization. 
Compared to the previous experiment, you should see a markedly smaller influence of batch size on the training history!\n\n\n```python\nln_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('layernorm')\n\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 4:\nWhen is layer normalization likely to not work well, and why?\n\n1. Using it in a very deep network\n2. Having a very small dimension of features\n3. Having a high regularization term\n\n\n## Answer:\n2 will cause layer normalization to work poorly, mainly because layer normalization computes its statistics over all the features of each individual datapoint. 
If the dimension of the neuron is too small, the statistic difference may be too large and the performance of the model fluctuates too much.\n", "meta": {"hexsha": "7df758d65419d2705f255015c87807bef02fa4f5", "size": 446205, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "assignment2/BatchNormalization.ipynb", "max_stars_repo_name": "shambhu1998/cs231n", "max_stars_repo_head_hexsha": "cf169f6fea090187787a585c51c624ccd4d9b721", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assignment2/BatchNormalization.ipynb", "max_issues_repo_name": "shambhu1998/cs231n", "max_issues_repo_head_hexsha": "cf169f6fea090187787a585c51c624ccd4d9b721", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assignment2/BatchNormalization.ipynb", "max_forks_repo_name": "shambhu1998/cs231n", "max_forks_repo_head_hexsha": "cf169f6fea090187787a585c51c624ccd4d9b721", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 446205.0, "max_line_length": 446205, "alphanum_fraction": 0.9365314149, "converted": true, "num_tokens": 9403, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.44552952031526044, "lm_q2_score": 0.3106943895971202, "lm_q1q2_score": 0.1384235223618476}} {"text": "```python\nfrom IPython.core.display import HTML\nHTML(\"\")\n```\n\n\n\n\n\n\n\n\n# Lecture 7, direct methods for constrained optimization\n\n## Structure of optimization methods\n\nTypically\n\n* Constraint handling **converts** the problem to (a series of) unconstrained problems\n* In unconstrained optimization a **search direction** is determined at each iteration\n* The best solution in the search direction is found with **line search**\n\n\n\n## Classification of the constrained optimization methods\n\n* **Indirect methods:** the constrained problem is converted into a sequence of unconstrained problems whose solutions will approach the solution of the constrained problem; the intermediate solutions need not be feasible \n\n* **Direct methods:** the constraints are taken into account explicitly, and intermediate solutions are feasible\n\n# Direct methods for constrained optimization\n\nDirect methods for constrained optimization are also known as *methods of feasible directions*\n\n### Idea\n\n* in a point $x_k$, generate a feasible search direction where objective function value can be improved\n* use line search to get $x_{k+1}$ \n\n### Methods differ in\n\n* how to choose a feasible direction and\n* what is assumed from the constraints (linear/nonlinear, equality/inequality)\n\n## Feasible descent directions\n\nLet $S\subset \mathbb R^n$ ($S\neq \emptyset$) and $x^*\in S$. 
\n\n**Definition:** The set\n$$ D = \{d\in \mathbb R^n: d\neq0,x^*+\alpha d\in S \text{ for all } \alpha\in (0,\delta) \text{ for some } \delta>0\}$$\n\nis called the cone of feasible directions of $S$ in $x^*$.\n\n**Definition:** The set \n$$ F = \{d\in \mathbb R^n: f(x^*+\alpha d)<f(x^*) \text{ for all } \alpha\in (0,\delta) \text{ for some } \delta>0\}$$\nis called the cone of descent directions.\n\n**Definition:** The set $F\cap D$ is called the cone of feasible descent directions.\n\n\n\n**(Obvious) Theorem:** Consider an optimization problem \n$$\n\begin{align}\n\min &\ f(x)\\\n\text{s.t. }&\ x\in S\n\end{align}\n$$\nand let $x^*\in S$. Now if $x^*$ is a local minimizer **then** the set of feasible descent directions $F\cap D$ is empty.\n\n## Idea for the methods of feasible descent directions\n\n1. Find a feasible solution $x_0$ as the starting point ($k=0$).\n2. Find a feasible descent direction $d_k\in D\cap F$.\n3. Determine the step length ($\alpha_k$) to the direction $d_k$ (Use line search to find an optimal step length).\n4. Update $x$ accordingly ($x_{k+1} = x_k + \alpha_k d_k$).\n5. Check convergence. If not converged, set $k = k+1$ and go to 2.\n\n# Rosen's projected gradient method\n\nAssume a problem with linear equality constraints\n\n$$\n\min f(x)\\\n\text{s.t. } H(x)=Ax-b=0,\n$$\n\nwhere $A$ is an $l\times n$ matrix ($l\leq n$) and $b$ is a vector.\n\nLet $\mathbf{x}$ be a feasible solution to the above problem.\n\nIt holds that:\n\n$$\n\mathbf{Ax}=b \\\n\rightarrow \mathbf{A}(\mathbf{x} + \alpha \mathbf{d}) = b \\\n\rightarrow \mathbf{Ax} + \alpha \mathbf{Ad} = b \\\n\rightarrow b + \alpha \mathbf{Ad} = b\n$$\n\nThen, $\mathbf{d}$ is a feasible direction *if and only if* $\mathbf{Ad}=0$.\n\nThus, the negative gradient $-\nabla f(x)$ is a feasible descent direction if \n\n$$ A\nabla f(x)=0.$$\n\nThis may or may not be true (i.e.
 the gradient may or may not be a feasible descent direction).\n\nHowever, we can project the gradient to the set of feasible directions\n$$ \{d\in \mathbb R^n: Ad=0\},$$\nwhich now is a linear subspace.\n\n\n\n### Projection\n\nLet $a\in \mathbb R^n$ be a vector and let $L$ be a linear subspace of $\mathbb R^n$. Now, the following are equivalent\n* $a^P$ is the projection of $a$ on $L$,\n* $\{a^P\} = \operatorname{argmin}_{l\in L}\|a-l\|$, and\n* $a-a^P$ is orthogonal to $L$, i.e., $(a-a^P)^Tl = 0$ for all $l\in L$.\n\n\n\n\n## Projected gradient\n\nThe projection of the gradient $\nabla f(x)$ on the set $\{d\in \mathbb R^n: Ad=0\}$ is denoted by $\nabla f(x)^P$ and called the *projected gradient*. \n\nNow, given some conditions, the projected gradient gives us a feasible descent direction.\n\n\n\n## How to compute the projected gradient?\n\nThere are different ways, but in this course we can use optimization. Basically, the optimization problem that we have to solve is\n$$\n\min \|\nabla f(x)-d\|\\\n\text{s.t. }Ad=0.\n$$\n\nSince it is equivalent to minimize the square of the objective function $\sum_{i=1}^n\left(\nabla_i f(x)^2+d_i^2-2\nabla_i f(x)d_i\right)$, we can see that the problem is a quadratic problem with equality constraints,\n$$\n\min \frac12 d^TId-\nabla f(x)^Td\\\n\text{s.t. 
}Ad=0\n$$\nwhich means that we just need to solve the system of equations (see e.g., https://en.wikipedia.org/wiki/Quadratic_programming#Equality_constraints)\n\n$$\n\\left[\n\\begin{array}{cc}\nI&A^T\\\\\nA&0\n\\end{array}\n\\right] \n\\left[\\begin{align}d\\\\\\lambda\\end{align}\\right]\n= \\left[ \n\\begin{array}{c}\n\\nabla f(x)\\\\\n0\n\\end{array}\n\\right],\n$$\n\nwhere $I$ is the identity matrix, and $\\lambda$ are the KKT multipliers.\n\n### Code in Python\n\n#### A function for projecting a vector to a linear space defined by $Ax=0$.\n\n\n```python\nimport numpy as np\n#help(np.linalg.solve)\n```\n\n\n```python\nimport numpy as np\ndef project_vector(A,vector):\n #convert A into a matrix\n A_matrix = np.matrix(A)\n #construct the \"first row\" of the matrix [[I,A^T],[A,0]]\n left_matrix_first_row = np.concatenate((np.identity(len(vector)),A_matrix.transpose()), axis=1)\n #construct the \"second row\" of the matrix\n left_matrix_second_row = np.concatenate((A_matrix,np.matrix(np.zeros([len(A),len(A)]))), axis=1)\n #combine the whole matrix by combining the rows\n left_matrix = np.concatenate((left_matrix_first_row,left_matrix_second_row),axis = 0)\n #Solve the system of linear equalities from the previous page\n return np.linalg.solve(left_matrix, \\\n np.concatenate((np.matrix(vector).transpose(),\\\n np.zeros([len(A),1])),axis=0))[:len(vector)] \n```\n\n\n```python\n# Example: Project gradient such that A*proj_gradient = 0\nA = [[1,0,0],[0,1,0]]\ngradient = [1,1,1]\nproject_vector(A,gradient)\n```\n\n\n\n\n matrix([[0.],\n [0.],\n [1.]])\n\n\n\n# Example\n\nLet us study optimization problem\n$$\n\\begin{align}\n\\min \\qquad& x_1^2+x_2^2+x_3^2\\\\\n\\text{s.t.}\\qquad &x_1+x_2=3\\\\\n &x_1+x_3=4.\n\\end{align}\n$$\n\nLet us project a negative gradient from a feasible point $x=(1,2,3)$\n\nNow, the matrix\n$$\nA = \\left[\n\\begin{array}{ccc}\n1& 1 & 0\\\\\n1& 0 & 1\n\\end{array}\n\\right]\n$$.\n\n\n```python\nimport ad\nA = [[1,1,0],[1,0,1]]\ngradient = 
ad.gh(lambda x:x[0]**2+x[1]**2+x[2]**2)[0]([1,2,3])\nprint(gradient)\nd = project_vector(A,[-i for i in gradient])\nprint(d)\n```\n\n [2.0, 4.0, 6.0]\n [[ 2.66666667]\n [-2.66666667]\n [-2.66666667]]\n\n\n### d is a feasible direction\n\n\n```python\nnp.matrix(A)*d\n```\n\n\n\n\n matrix([[0.0000000e+00],\n [4.4408921e-16]])\n\n\n\n### d is a descent direction\n\n\n```python\ndef f(x):\n return x[0]**2+x[1]**2+x[2]**2\nalpha = 0.001\nprint(\"Value of f at [1,2,3] is \"+str(f([1,2,3])))\nx_mod= np.array([1,2,3])+alpha*np.array(d).transpose()[0]\nprint(x_mod)\nprint(\"Value of f at [1,2,3] +alpha*d is \"+str(f(x_mod)))\nprint(\"Gradient dot product direction (i.e., directional derivative) is \" \\\n + str(np.matrix(ad.gh(f)[0]([1,2,3])).dot(np.array(d))))\n```\n\n Value of f at [1,2,3] is 14\n [1.00266667 1.99733333 2.99733333]\n Value of f at [1,2,3] +alpha*d is 13.978687999999998\n Gradient dot product direction (i.e., directional derivative) is [[-21.33333333]]\n\n\n## Finally, the algorithm of the projected gradient\n\n\n```python\nimport numpy as np\nimport ad\ndef projected_gradient_method(f,A,start,step,precision):\n f_old = float('Inf')\n x = np.array(start)\n steps = []\n f_new = f(x)\n iters = 0\n while abs(f_old-f_new)>precision:\n # store the current function value\n f_old = f_new\n # compute gradient\n gradient = ad.gh(f)[0](x)\n # project negative gradient\n d = project_vector(A,[-i for i in gradient])\n # take transpose\n d = d.reshape(1,-1)\n # take step\n x = np.array(x + step*d)[0]\n # compute f in new point+ \n f_new = f(x)\n # record new step\n steps.append(x)\n # update iterations counter\n iters = iters + 1\n return x,f_new,steps,iters\n```\n\n\n```python\nf = lambda x:x[0]**2+x[1]**2+x[2]**2\nA = [[1,1,0],[1,0,1]]\nstart = [1,2,3]\n(x,f_val,steps,iters) = projected_gradient_method(f,A,start,0.6,0.000001)\n```\n\n\n```python\nprint(x)\nprint(f(x))\nprint(f([1,2,3]))\nprint(np.matrix(A)*np.matrix(x).transpose())\nprint(iters)\n```\n\n [2.333248 
 0.666752 1.666752]\n 8.666666688512\n 14\n [[3.]\n [4.]]\n 6\n\n\n## Note\nIf there are both linear equality and inequality constraints, the projection matrix does not remain the same \n* At each iteration, it includes only the equality and active inequality constraints\n\n# Active set method\nConsider a problem\n$$\n\min f(x)\\\n\text{s.t. }Ax\leq b,\n$$\nwhere $A$ is an $l\times n$ matrix ($l\leq n$) and $b$ is a vector.\n\n## Idea\n* In $x_k$, the set of constraints is divided into active ($i \in I$) and inactive constraints\n* Inactive constraints are not taken into account when the search direction $d_k$ is determined\n* Inactive constraints have an effect only when computing the optimal step length $\alpha_k$\n\n\n## Feasible directions\n* For $i\in I$ , $(a_i)^Tx_k = b_i$\n* If $d_k$ is feasible in $x_k$, then $x_k + \alpha d_k \in S$ for some $\alpha > 0$\n* $(a_i)^T(x_k+\alpha d_k) = (a_i)^Tx_k + \alpha(a_i)^Td_k\leq b_i$\n* $(a_i)^Td_k\leq 0$ for feasible $d_k$ and the constraint remains active if $(a_i)^Td_k=0$\n\n## On active constraints\n* An optimization problem with inequality constraints is more difficult than a problem with equality constraints since the active set in a local minimizer is not known\n* If it were known, then it would be enough to solve a corresponding equality constrained problem\n* In that case, if the other constraints were satisfied in the solution and all the Lagrange multipliers were non-negative, then the solution would also be a solution to the original problem\n\n## Using active set\n* At each iteration, a working set is considered which consists of the active constraints in $x_k$\n* The direction $d_k$ is determined so that it is a descent direction in the working set\n * E.g.
Rosen\u2019s projected gradient method can be used\n\n## Active set algorithm\n1. Choose a starting point $x_1$ and determine an initial active set $I_1$ and set $k=1$\n2. Compute a feasible descent direction $d_k$ in the subspace defined by the active constraints (e.g., by using projected gradient)\n3. If $||d_k||=0$, go to step 6, otherwise, find optimal step length $\alpha$ by staying in the feasible set and set $x_{k+1} = x_k + \alpha d_k$\n4. If no new constraint becomes active go to step 7 (active set does not change)\n5. Addition to active set: a new constraint $j$ becomes active, update $I_{k+1} = I_k \cup \{j\}$ and go to step 7\n6. Removal from active set: approximate Lagrangian multipliers $\mu_i$, $i\in I_k$. If $\mu_i\geq 0$ for all $i$, stop (active set is correct). Otherwise, remove a constraint $j$ with negative multiplier from the active set: $I_{k+1}=I_k\setminus \{j\}$\n7. Set $k=k+1$ and go to step 2\n\nImplementation of the active set method is left as a voluntary exercise \n\n### Note: \n\n* Projected gradient method can also be extended for non-linear constraints.\n* But, this needs some extra steps\n\n\n", "meta": {"hexsha": "3a2c989469c9e7d6200ae209d976570bcaa047eb", "size": 21646, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture 7, Direct methods for constrained optimization.ipynb", "max_stars_repo_name": "bshavazipour/TIES483-2022", "max_stars_repo_head_hexsha": "93dfabbfe1e953e5c5f83c44412963505ecf575a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lecture 7, Direct methods for constrained optimization.ipynb", "max_issues_repo_name": "bshavazipour/TIES483-2022", "max_issues_repo_head_hexsha": "93dfabbfe1e953e5c5f83c44412963505ecf575a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, 
"max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture 7, Direct methods for constrained optimization.ipynb", "max_forks_repo_name": "bshavazipour/TIES483-2022", "max_forks_repo_head_hexsha": "93dfabbfe1e953e5c5f83c44412963505ecf575a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-02-03T09:40:02.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-03T09:40:02.000Z", "avg_line_length": 25.8614097969, "max_line_length": 269, "alphanum_fraction": 0.5240691121, "converted": true, "num_tokens": 3422, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3923368301671084, "lm_q2_score": 0.35220176844875106, "lm_q1q2_score": 0.1381817254124329}} {"text": "# Announcements\n\n* Today we will continue with our radioactivity lecture\n* Friday we will go over exam results and CP1 (which will be posted Friday). \n* The homework will be over radioactive decay and include some review material on common issues from the exam \n\n\n```python\n%matplotlib widget\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport mplcursors\n```\n\n\n```python\n# Scale plot to fit page \nplt.rcParams[\"figure.figsize\"] = (8, 6)\nplt.rcParams[\"font.size\"] = 16\n```\n\n## Review\n\n### Radioactive Decay\n\nEarlier last week we derived the decay equation:\n\n\n\\begin{align}\n \\frac{dN}{dt} &= -\\lambda N \\\\\n \\Rightarrow N_i(t) &= N_i(0)e^{-\\lambda t}\\\\\n\\end{align}\n\nwhere\n\n\\begin{align}\n N_i(t) &= \\mbox{number of isotopes i adjusted for decay}\\\\\n N_i(0)&= \\mbox{initial condition}\\\\\n \\end{align}\n \nWe also defined our decay constant $\\lambda$ and how it relates to the half-life of an isotope\n\n\\begin{align}\n A = N\\lambda \\\\\n \\lambda = \\frac{\\ln(2)}{\\tau_{1/2}} \\\\\n \\end{align}\n \n### Radioactive decay with a stable daughter product\n\nIf the decay of an element goes into a stable daughter, then the 
buildup of the daughter looks like: \n\ngiven the parent decay\n\\begin{align}\nN_1(t) = N_1(0)e^{-\\lambda t}\n\\end{align}\n\nthe daughter buildup is the equivalent of the nuclides lost from the parent:\n\\begin{align}\nN_2(t) = N_1(0)\\left[1-e^{-\\lambda t}\\right]\n\\end{align}\n\nif you know $N_2(t)$ and $N_1(t)$, you can calculate how long a sample has been decaying\n\\begin{align}\nt = \\frac{1}{\\lambda_1}\\ln\\left(1+\\frac{N_2(t)}{N_1(t)}\\right)\n\\end{align}\n \n### And radioactive decay with production\n\n\\begin{align}\n\\frac{dN(t)}{dt} &= -\\mbox{rate of decay} + \\mbox{rate of production}\\\\\n\\implies N(t) &= N_0 e^{-\\lambda t} + \\int_0^t dt'Q(t')e^{-\\lambda (t-t')}\\\\\n\\end{align}\n\nIf the production rate is constant $(Q(t)=Q_0)$, this simplifies:\n\n\\begin{align}\nN(t) &= N_0 e^{-\\lambda t} + \\frac{Q_0}{\\lambda}\\left[1-e^{-\\lambda t}\\right]\\\\\n\\end{align}\n\n## Radioactivity part 2, decay dynamics\n\nIn radioactive decay we can have long chains of unstable nuclei. This is common in the heavy nuclei. \n\n
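The review relations above are easy to sanity-check numerically. The sketch below is illustrative only — the half-life of 10 time units and initial population of 100 atoms are made-up values, not from the lecture. It computes $\lambda$ from a half-life, propagates the parent and its stable daughter, and inverts the buildup relation to recover the elapsed decay time.

```python
import math

def decay_constant(half_life):
    """Decay constant: lambda = ln(2) / t_half."""
    return math.log(2) / half_life

def parent(n0, lam, t):
    """Parent nuclides remaining after time t: N1(t) = N1(0) e^{-lam t}."""
    return n0 * math.exp(-lam * t)

def stable_daughter(n0, lam, t):
    """Stable-daughter buildup: N2(t) = N1(0) [1 - e^{-lam t}]."""
    return n0 * (1.0 - math.exp(-lam * t))

def sample_age(lam, n1, n2):
    """Invert the buildup relation to date a sample from N1(t) and N2(t)."""
    return math.log(1.0 + n2 / n1) / lam

# Illustrative check: 100 atoms, half-life of 10 (arbitrary time units)
lam = decay_constant(10.0)
t = 25.0
n1 = parent(100.0, lam, t)
n2 = stable_daughter(100.0, lam, t)
print(n1 + n2)                   # conservation: parent + daughter stays at 100
print(sample_age(lam, n1, n2))   # recovers the elapsed time t = 25
```

Because the daughter is stable, the two populations always sum to the initial population, and the dating formula returns the elapsed time exactly (up to floating-point rounding).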


*Decay chain of thorium. By [BatesIsBack](https://commons.wikimedia.org/wiki/User:BatesIsBack) - [Decay_Chain_of_Thorium.svg](https://commons.wikimedia.org/wiki/File:Decay_Chain_of_Thorium.svg), CC BY-SA 3.0.*

\n\n## Series Decay Calculations\n\nIn these series, calculating activities of many products gets quite complex. Let's formulate some equations for a three decay series: \n\nEach series begins with $N_1$, which is governed by a familiar equation:\n\\begin{align}\n \\frac{dN_1}{dt} &= -\\lambda_1 N_1 \\\\\n\\end{align}\n\nThe second nuclide in the series is produced by a rate of the parents decay, and is removed by its own decay\n\n\\begin{align}\n \\frac{dN_2}{dt} &= \\mbox{+decay of parent - decay of itself}\\\\\n &= \\lambda_1 N_1 -\\lambda_2 N_2 \\\\\n\\end{align}\n\nThe next isotope in the series is similar:\n\n\n\\begin{align}\n \\frac{dN_3}{dt} &= \\mbox{+decay of parent - decay of itself}\\\\\n &= \\lambda_2 N_2 -\\lambda_3 N_3 \\\\\n\\end{align}\n\nThe $i$th nuclide in the series is:\n\n\\begin{align}\n \\frac{dN_i}{dt} &= \\mbox{+decay of parent - decay of itself}\\\\\n &= \\lambda_{i-1} N_{i-1} -\\lambda_i N_i \\\\\n\\end{align}\n\n### Solutions to a multi-component decay\n\n#### Nuclide 1\n\nThe solution for the first nuclide in the series is something we've already solved\n\n\\begin{align}\n N_1(t) &= N_1(0)e^{-\\lambda_1 t}\n\\end{align}\n\n#### Nuclide 2 \n\nWe can use this solution in our general formulation for Nuclide 2 \n\n\\begin{align}\n \\frac{dN_2}{dt} &= \\lambda_1 N_1 -\\lambda_2 N_2 \\\\\n &= \\lambda_1 N_1(0)e^{-\\lambda_1 t} - \\lambda_2 N_2 \\\\\n \\frac{dN_2}{dt} + \\lambda_2 N_2 &= \\lambda_1 N_1(0)e^{-\\lambda_1 t}\n\\end{align}\n\nWe can use the integrating factor $e^{\\lambda_2 t}$ to solve this\n\n\\begin{align}\ne^{\\lambda_2 t}\\frac{dN_2}{dt} + e^{\\lambda_2 t}\\lambda_2 N_2 &= \\lambda_1 N_1(0)e^{(\\lambda_2-\\lambda_1) t} \\\\ \n\\frac{d}{dt}\\left(N_2 e^{\\lambda_2 t}\\right) &= \\lambda_1 N_1(0)e^{(\\lambda_2-\\lambda_1) t}\n\\end{align}\n\nWe can integrate this, using the initial condition $N_2(0)=0$ to fix the constant\n\n\\begin{align}\nN_2e^{\\lambda_2 t} &= \\frac{\\lambda_1}{\\lambda_2-\\lambda_1}N_1(0)e^{(\\lambda_2-\\lambda_1) t} + C \\\\\nC &= -\\frac{\\lambda_1}{\\lambda_2-\\lambda_1}N_1(0)\n\\end{align}\n\nSo $N_2$ as a function of time is: \n\\begin{align}\nN_2(t) = \\frac{\\lambda_1}{\\lambda_2-\\lambda_1}N_1(0)\\left(e^{-\\lambda_1t}-e^{-\\lambda_2t}\\right)\n\\end{align}\n\n#### Nuclide 3 \n\n\\begin{align}\n \\frac{dN_3}{dt} &= \\lambda_2 N_2 -\\lambda_3 N_3 \\\\\n\\end{align}\n\nWe use a similar process as our solution to nuclide 2. The solution is:\n\n\\begin{align}\n N_3(t) &= \\lambda_1 \\lambda_2 N_1(0)\\left[\\frac{e^{-\\lambda_1t}}{(\\lambda_2-\\lambda_1)(\\lambda_3-\\lambda_1)}+\\frac{e^{-\\lambda_2t}}{(\\lambda_1-\\lambda_2)(\\lambda_3-\\lambda_2)}+ \\frac{e^{-\\lambda_3t}}{(\\lambda_1-\\lambda_3)(\\lambda_2-\\lambda_3)}\\right] \\\\\n\\end{align}\n\nIf Nuclide 3 is *stable*, this looks like:\n\n\\begin{align}\nN_3(t) &= N_1(0)\\left(1 - \\frac{\\lambda_2}{\\lambda_2-\\lambda_1}e^{-\\lambda_1t} - \\frac{\\lambda_1}{\\lambda_1-\\lambda_2}e^{-\\lambda_2t}\\right)\n\\end{align}\n\n\n#### The Bateman equations\n\nIf you continue the series farther past three daughters, the solutions have a generalized form called the **Bateman equations**. These are used throughout nuclear engineering, and especially in fuel cycles. 
\n\n### Transient Equilibrium $t_{p}>t_d$\n\n### Secular Equilibrium $t_{p}>>t_d$\n\n### Daughter Decays Slower than parent $t_p < t_d$\n\n### Daughter is Stable $\lambda_d = 0$\n\n\n```python\nimport math\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n\ndef n_decay(t, n_initial=100, lam=0.4):\n \"\"\"This function describes the decay of an isotope\"\"\"\n return n_initial*math.exp(-lam*t)\n\n\n# This code plots the decay of an isotope\nx = np.arange(26.0)\ny = np.array([n_decay(t) for t in x])\n \n# creates a figure and axes with matplotlib\nfig, ax = plt.subplots()\nscatter = ax.scatter(x, y, color='blue', s=y*20, alpha=0.4) \nax.plot(x, y, color='red') \n\n# adds labels to the plot\nax.set_ylabel('N_i(t)')\nax.set_xlabel('Time')\nax.set_title('N_i')\n\n# adds tooltips\nimport mpld3\nlabels = ['{0}% remaining'.format(i) for i in y]\n\ntooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels)\nmpld3.plugins.connect(fig, tooltip)\n\nmpld3.display()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "e694dfc7f2a83edf0ed4beeab6710e4a5adbc3e5", "size": 12595, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "10.05-radioactivity/01-radioactivity.ipynb", "max_stars_repo_name": "munkm/npre247", "max_stars_repo_head_hexsha": "5683fa3176e946622a31e3b207484e7ec74f8421", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2022-01-31T17:44:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-15T19:54:50.000Z", "max_issues_repo_path": "10.05-radioactivity/01-radioactivity.ipynb", "max_issues_repo_name": "munkm/npre247", "max_issues_repo_head_hexsha": "5683fa3176e946622a31e3b207484e7ec74f8421", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 17, "max_issues_repo_issues_event_min_datetime": "2022-01-28T20:32:38.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-31T17:43:54.000Z", "max_forks_repo_path": "10.05-radioactivity/01-radioactivity.ipynb", 
"max_forks_repo_name": "munkm/npre247", "max_forks_repo_head_hexsha": "5683fa3176e946622a31e3b207484e7ec74f8421", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2022-01-24T16:47:56.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-21T04:09:25.000Z", "avg_line_length": 38.9938080495, "max_line_length": 850, "alphanum_fraction": 0.553394204, "converted": true, "num_tokens": 2259, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.49609382947091946, "lm_q2_score": 0.2782567937024021, "lm_q1q2_score": 0.13804147836412428}} {"text": "# Homework 4: Word-level entailment with neural networks\n\n\n```python\n__author__ = \"Christopher Potts\"\n__version__ = \"CS224u, Stanford, Spring 2019\"\n```\n\n## Contents\n\n1. [Overview](#Overview)\n1. [Set-up](#Set-up)\n1. [Data](#Data)\n 1. [Edge disjoint](#Edge-disjoint)\n 1. [Word disjoint](#Word-disjoint)\n1. [Baseline](#Baseline)\n 1. [Representing words: vector_func](#Representing-words:-vector_func)\n 1. [Combining words into inputs: vector_combo_func](#Combining-words-into-inputs:-vector_combo_func)\n 1. [Classifier model](#Classifier-model)\n 1. [Baseline results](#Baseline-results)\n1. [Homework questions](#Homework-questions)\n 1. [Hypothesis-only baseline [2 points]](#Hypothesis-only-baseline-[2-points])\n 1. [Alternatives to concatenation [1 point]](#Alternatives-to-concatenation-[1-point])\n 1. [A deeper network [2 points]](#A-deeper-network-[2-points])\n 1. [Your original system [4 points]](#Your-original-system-[4-points])\n1. 
[Bake-off [1 point]](#Bake-off-[1-point])\n\n## Overview\n\nThe general problem is word-level natural language inference.\n\nTraining examples are pairs of words $(w_{L}, w_{R}), y$ with $y = 1$ if $w_{L}$ entails $w_{R}$, otherwise $0$.\n\nThe homework questions below ask you to define baseline models for this and develop your own system for entry in the bake-off, which will take place on a held-out test-set distributed at the start of the bake-off. (Thus, all the data you have available for development is available for training your final system before the bake-off begins.)\n\n\n\n## Set-up\n\nSee [the first notebook in this unit](nli_01_task_and_data.ipynb) for set-up instructions.\n\n\n```python\nfrom collections import defaultdict\nimport json\nimport numpy as np\nimport os\nimport pandas as pd\nfrom torch_shallow_neural_classifier import TorchShallowNeuralClassifier\nimport nli\nimport utils\n```\n\n\n```python\nDATA_HOME = 'data'\n\nNLIDATA_HOME = os.path.join(DATA_HOME, 'nlidata')\n\nwordentail_filename = os.path.join(\n NLIDATA_HOME, 'nli_wordentail_bakeoff_data.json')\n\nGLOVE_HOME = os.path.join(DATA_HOME, 'glove.6B')\n```\n\n## Data\n\nI've processed the data into two different train/test splits, in an effort to put some pressure on our models to actually learn these semantic relations, as opposed to exploiting regularities in the sample.\n\n* `edge_disjoint`: The `train` and `dev` __edge__ sets are disjoint, but many __words__ appear in both `train` and `dev`.\n* `word_disjoint`: The `train` and `dev` __vocabularies are disjoint__, and thus the edges are disjoint as well.\n\nThese are very different problems. 
For `word_disjoint`, there is real pressure on the model to learn abstract relationships, as opposed to memorizing properties of individual words.\n\n\n```python\nwith open(wordentail_filename) as f:\n wordentail_data = json.load(f)\n```\n\nThe outer keys are the splits plus a list giving the vocabulary for the entire dataset:\n\n\n```python\nwordentail_data.keys()\n```\n\n\n\n\n dict_keys(['edge_disjoint', 'vocab', 'word_disjoint'])\n\n\n\n### Edge disjoint\n\n\n```python\nwordentail_data['edge_disjoint'].keys()\n```\n\n\n\n\n dict_keys(['dev', 'train'])\n\n\n\nThis is what the split looks like; all three have this same format:\n\n\n```python\nwordentail_data['edge_disjoint']['dev'][: 5]\n```\n\n\n\n\n [[['sweater', 'stroke'], 0],\n [['constipation', 'hypovolemia'], 0],\n [['disease', 'inflammation'], 0],\n [['herring', 'animal'], 1],\n [['cauliflower', 'outlook'], 0]]\n\n\n\nLet's test to make sure no edges are shared between `train` and `dev`:\n\n\n```python\nnli.get_edge_overlap_size(wordentail_data, 'edge_disjoint')\n```\n\n\n\n\n 0\n\n\n\nAs we expect, a *lot* of vocabulary items are shared between `train` and `dev`:\n\n\n```python\nnli.get_vocab_overlap_size(wordentail_data, 'edge_disjoint')\n```\n\n\n\n\n 2916\n\n\n\nThis is a large percentage of the entire vocab:\n\n\n```python\nlen(wordentail_data['vocab'])\n```\n\n\n\n\n 8470\n\n\n\nHere's the distribution of labels in the `train` set. It's highly imbalanced, which will pose a challenge for learning. 
(I'll go ahead and reveal that the `dev` set is similarly distributed.)\n\n\n```python\ndef label_distribution(split):\n return pd.DataFrame(wordentail_data[split]['train'])[1].value_counts()\n```\n\n\n```python\nlabel_distribution('edge_disjoint')\n```\n\n\n\n\n 0 14650\n 1 2745\n Name: 1, dtype: int64\n\n\n\n### Word disjoint\n\n\n```python\nwordentail_data['word_disjoint'].keys()\n```\n\n\n\n\n dict_keys(['dev', 'train'])\n\n\n\nIn the `word_disjoint` split, no __words__ are shared between `train` and `dev`:\n\n\n```python\nnli.get_vocab_overlap_size(wordentail_data, 'word_disjoint')\n```\n\n\n\n\n 0\n\n\n\nBecause no words are shared between `train` and `dev`, no edges are either:\n\n\n```python\nnli.get_edge_overlap_size(wordentail_data, 'word_disjoint')\n```\n\n\n\n\n 0\n\n\n\nThe label distribution is similar to that of `edge_disjoint`, though the overall number of examples is a bit smaller:\n\n\n```python\nlabel_distribution('word_disjoint')\n```\n\n\n\n\n 0 7199\n 1 1349\n Name: 1, dtype: int64\n\n\n\n## Baseline\n\nEven in deep learning, __feature representation is vital and requires care!__ For our task, feature representation has two parts: representing the individual words and combining those representations into a single network input.\n\n### Representing words: vector_func\n\nLet's consider two baseline word representations methods:\n\n1. Random vectors (as returned by `utils.randvec`).\n1. 50-dimensional GloVe representations.\n\n\n```python\ndef randvec(w, n=50, lower=-1.0, upper=1.0):\n \"\"\"Returns a random vector of length `n`. 
`w` is ignored.\"\"\"\n return utils.randvec(n=n, lower=lower, upper=upper)\n```\n\n\n```python\n# Any of the files in glove.6B will work here:\n\nglove_dim = 50\n\nglove_src = os.path.join(GLOVE_HOME, 'glove.6B.{}d.txt'.format(glove_dim))\n\n# Creates a dict mapping strings (words) to GloVe vectors:\nGLOVE = utils.glove2dict(glove_src)\n\ndef glove_vec(w): \n \"\"\"Return `w`'s GloVe representation if available, else return \n a random vector.\"\"\"\n return GLOVE.get(w, randvec(w, n=glove_dim))\n```\n\n### Combining words into inputs: vector_combo_func\n\nHere we decide how to combine the two word vectors into a single representation. In more detail, where `u` is a vector representation of the left word and `v` is a vector representation of the right word, we need a function `vector_combo_func` such that `vector_combo_func(u, v)` returns a new input vector `z` of dimension `m`. A simple example is concatenation:\n\n\n```python\ndef vec_concatenate(u, v):\n \"\"\"Concatenate np.array instances `u` and `v` into a new np.array\"\"\"\n return np.concatenate((u, v))\n```\n\n`vector_combo_func` could instead be vector average, vector difference, etc. 
(even combinations of those) \u2013 there's lots of space for experimentation here; [homework question 2](#Alternatives-to-concatenation-[1-point]) below pushes you to do some exploration.\n\n### Classifier model\n\nFor a baseline model, I chose `TorchShallowNeuralClassifier`:\n\n\n```python\nnet = TorchShallowNeuralClassifier(hidden_dim=50, max_iter=100)\n```\n\n### Baseline results\n\nThe following puts the above pieces together, using `vector_func=glove_vec`, since `vector_func=randvec` seems so hopelessly misguided for `word_disjoint`!\n\n\n```python\nword_disjoint_experiment = nli.wordentail_experiment(\n train_data=wordentail_data['word_disjoint']['train'],\n assess_data=wordentail_data['word_disjoint']['dev'], \n model=net, \n vector_func=glove_vec,\n vector_combo_func=vec_concatenate)\n```\n\n Finished epoch 100 of 100; error is 0.022624022560194135\n\n precision recall f1-score support\n \n 0 0.92 0.92 0.92 1910\n 1 0.36 0.35 0.36 239\n \n micro avg 0.86 0.86 0.86 2149\n macro avg 0.64 0.64 0.64 2149\n weighted avg 0.86 0.86 0.86 2149\n \n\n\n## Homework questions\n\nPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.)\n\n### Hypothesis-only baseline [2 points]\n\nDuring our discussion of SNLI and MultiNLI, we noted that a number of research teams have shown that hypothesis-only baselines for inference tasks can be remarkably robust. This question asks you to explore briefly how this baseline affects the 'edge_disjoint' and 'word_disjoint' versions of our task.\n\nFor this problem, submit code for the following:\n\n1. A `vector_combo_func` function called `hypothesis_only` that simply throws away the premise, using the unmodified hypothesis (second) vector as its representation of the example.\n\n1. 
Code for looping over the two conditions 'word_disjoint' and 'edge_disjoint' and the two `vector_combo_func` values `vec_concatenate` and `hypothesis_only`, calling `nli.wordentail_experiment` to train on the condition's 'train' portion and assess on its 'dev' portion, with `glove_vec` as the `vector_func`. So that the results are consistent, use an `sklearn.linear_model.LogisticRegression` with default parameters as the model.\n\n1. Print out the percentage-wise increase in macro-F1 that `vec_concatenate` delivers over `hypothesis_only` for each of the two conditions. For example, if `hypothesis_only` returns 0.5 for condition `C` and `vec_concatenate` delivers 0.75 for `C`, then you'd report a 50% increase for `C`. The values you need are stored in the dictionary returned by `nli.wordentail_experiment`, with key 'macro-F1'. Please use two digits of precision for the increases.\n\n\n```python\nimport sklearn\n```\n\n\n```python\n#1. hypothesis_only\ndef hypothesis_only(u,v):\n return v\n```\n\n\n```python\n# 2. looping over two conditions and two vector_combo_func\nconditions = ['word_disjoint', 'edge_disjoint']\ncombo_funcs = [vec_concatenate, hypothesis_only]\n# create a dictionary for the macro F1 score\nmacroF1_dict = {}\nfor i in range(len(conditions)):\n for j in range(len(combo_funcs)):\n print(\"conditions:\", conditions[i])\n print(\"combo functions:\", j)\n word_disjoint_experiment = nli.wordentail_experiment(\n train_data=wordentail_data[conditions[i]]['train'],\n assess_data=wordentail_data[conditions[i]]['dev'], \n model=sklearn.linear_model.LogisticRegression(), \n vector_func=glove_vec,\n vector_combo_func=combo_funcs[j])\n macroF1_dict[(conditions[i], combo_funcs[j])] = word_disjoint_experiment['macro-F1']\n```\n\n conditions: word_disjoint\n combo functions: 0\n\n\n /home/teng/anaconda3/envs/nlu/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. 
Specify a solver to silence this warning.\n FutureWarning)\n\n\n precision recall f1-score support\n \n 0 0.90 0.98 0.94 1910\n 1 0.48 0.15 0.22 239\n \n micro avg 0.89 0.89 0.89 2149\n macro avg 0.69 0.56 0.58 2149\n weighted avg 0.85 0.89 0.86 2149\n \n conditions: word_disjoint\n combo functions: 1\n\n\n /home/teng/anaconda3/envs/nlu/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.\n FutureWarning)\n\n\n precision recall f1-score support\n \n 0 0.89 0.99 0.94 1910\n 1 0.25 0.03 0.06 239\n \n micro avg 0.88 0.88 0.88 2149\n macro avg 0.57 0.51 0.50 2149\n weighted avg 0.82 0.88 0.84 2149\n \n conditions: edge_disjoint\n combo functions: 0\n\n\n /home/teng/anaconda3/envs/nlu/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.\n FutureWarning)\n\n\n precision recall f1-score support\n \n 0 0.88 0.97 0.92 7376\n 1 0.58 0.23 0.33 1321\n \n micro avg 0.86 0.86 0.86 8697\n macro avg 0.73 0.60 0.62 8697\n weighted avg 0.83 0.86 0.83 8697\n \n conditions: edge_disjoint\n combo functions: 1\n\n\n /home/teng/anaconda3/envs/nlu/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.\n FutureWarning)\n\n\n precision recall f1-score support\n \n 0 0.87 0.98 0.92 7376\n 1 0.59 0.20 0.30 1321\n \n micro avg 0.86 0.86 0.86 8697\n macro avg 0.73 0.59 0.61 8697\n weighted avg 0.83 0.86 0.83 8697\n \n\n\n\n```python\n# Print out the percentage-wise increase \n#in macro-F1 over the hypothesis_only delivers over vec_concatenate for each of the two conditions. 
\nmacroF1_dict\n```\n\n\n\n\n {('word_disjoint',\n <function vec_concatenate at 0x...>): 0.5818232403154631,\n ('word_disjoint',\n <function hypothesis_only at 0x...>): 0.4978590088855942,\n ('edge_disjoint',\n <function vec_concatenate at 0x...>): 0.623656138011501,\n ('edge_disjoint',\n <function hypothesis_only at 0x...>): 0.6079517065797235}\n\n\n\n\n```python\nprint(\"percentage-wise increase for word_disjoint is:\", \n str( round((0.5803144224196856 - 0.5166301461305162) / 0.5166301461305162 * 100, 2) ) + '%')\nprint(\"percentage-wise increase for edge_disjoint is:\",\n str( round((0.6270230030253002 - 0.6089775423004441) / 0.6089775423004441 * 100, 2) ) + '%')\n\n```\n\n percentage-wise increase for word_disjoint is: 12.33%\n percentage-wise increase for edge_disjoint is: 2.96%\n\n\n### Alternatives to concatenation [1 point]\n\nWe've so far just used vector concatenation to represent the premise and hypothesis words. This question asks you to explore a simple alternative. \n\nFor this problem, submit code for the following:\n\n1. A new potential value for `vector_combo_func` that does something different from concatenation. Options include, but are not limited to, element-wise addition, difference, and multiplication. These can be combined with concatenation if you like.\n1. 
Include a use of `nli.wordentail_experiment` in the same configuration as the one in [Baseline results](#Baseline-results) above, but with your new value of `vector_combo_func`.\n\n\n```python\ndef alter_vec_concatenate(u, v):\n \"\"\"Concatenate `u`, `v`, and their element-wise product into a new np.array\"\"\"\n concatenate_v = np.concatenate((u, v))\n product_v = np.multiply(u, v)\n return np.concatenate((concatenate_v, product_v))\n```\n\n\n```python\nalternative_experiment = nli.wordentail_experiment(\n train_data=wordentail_data['word_disjoint']['train'],\n assess_data=wordentail_data['word_disjoint']['dev'], \n model=net, \n vector_func=glove_vec,\n vector_combo_func=alter_vec_concatenate)\n```\n\n Finished epoch 100 of 100; error is 0.012377092614769936\n\n precision recall f1-score support\n \n 0 0.93 0.94 0.94 1910\n 1 0.48 0.44 0.46 239\n \n micro avg 0.88 0.88 0.88 2149\n macro avg 0.70 0.69 0.70 2149\n weighted avg 0.88 0.88 0.88 2149\n \n\n\n### A deeper network [2 points]\n\nIt is very easy to subclass `TorchShallowNeuralClassifier` if all you want to do is change the network graph: all you have to do is write a new `define_graph`. If your graph has new arguments that the user might want to set, then you should also redefine `__init__` so that these values are accepted and set as attributes.\n\nFor this question, please subclass `TorchShallowNeuralClassifier` so that it defines the following graph:\n\n$$\\begin{align}\nh_{1} &= xW_{1} + b_{1} \\\\\nr_{1} &= \\textbf{Bernoulli}(1 - \\textbf{dropout_prob}, n) \\\\\nd_{1} &= r_1 * h_{1} \\\\\nh_{2} &= f(d_{1}) \\\\\nh_{3} &= h_{2}W_{2} + b_{2}\n\\end{align}$$\n\nHere, $r_{1}$ and $d_{1}$ define a dropout layer: $r_{1}$ is a random binary vector of dimension $n$, where the probability of a value being $1$ is given by $1 - \\textbf{dropout_prob}$. $r_{1}$ is multiplied element-wise by our first hidden representation, thereby zeroing out some of the values. 
The result is fed to the user's activation function $f$, and the result of that is fed through another linear layer to produce $h_{3}$. (Inside `TorchShallowNeuralClassifier`, $h_{3}$ is the basis for a softmax classifier, so no activation function is applied to it.)\n\nFor comparison, using this notation, `TorchShallowNeuralClassifier` defines the following graph:\n\n$$\\begin{align}\nh_{1} &= xW_{1} + b_{1} \\\\\nh_{2} &= f(h_{1}) \\\\\nh_{3} &= h_{2}W_{2} + b_{2}\n\\end{align}$$\n\nThe following code starts this sub-class for you, so that you can concentrate on `define_graph`. Be sure to make use of `self.dropout_prob`.\n\nFor this problem, submit just your completed `TorchDeepNeuralClassifier`. You needn't evaluate it, though we assume you will be keen to do that!\n\n\n```python\nimport torch.nn as nn\n\nclass TorchDeepNeuralClassifier(TorchShallowNeuralClassifier):\n def __init__(self, dropout_prob=0.7, **kwargs):\n self.dropout_prob = dropout_prob\n super().__init__(**kwargs)\n \n def define_graph(self):\n \"\"\"Complete this method!\n \n Returns\n -------\n an `nn.Module` instance, which can be a free-standing class you \n write yourself, as in `torch_rnn_classifier`, or the output of \n `nn.Sequential`, as in `torch_shallow_neural_classifier`.\n \n \"\"\"\n \n return nn.Sequential(\n nn.Linear(self.input_dim, self.hidden_dim),\n nn.Dropout(p=self.dropout_prob),\n self.hidden_activation,\n nn.Linear(self.hidden_dim, self.n_classes_))\n \n```\n\n\n```python\nfrom sklearn.datasets import load_digits\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import classification_report, accuracy_score\n\ndigits = load_digits()\nX = digits.data\ny = digits.target\n\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.33, random_state=42)\n\nmod = TorchDeepNeuralClassifier()\n\nprint(mod)\n\nmod.fit(X_train, y_train)\npredictions = mod.predict(X_test)\n\nprint(\"\nClassification 
report:\")\n\nprint(classification_report(y_test, predictions))\n\nprint(accuracy_score(y_test, predictions))\n```\n\n Finished epoch 1 of 100; error is 4.541479587554932\n\n TorchDeepNeuralClassifier(\n \thidden_dim=50,\n \thidden_activation=Tanh(),\n \tbatch_size=1028,\n \tmax_iter=100,\n \teta=0.01,\n \toptimizer=,\n \tl2_strength=0)\n\n\n Finished epoch 100 of 100; error is 1.0376653373241425\n\n \n Classification report:\n precision recall f1-score support\n \n 0 1.00 0.96 0.98 55\n 1 0.93 0.91 0.92 55\n 2 0.91 1.00 0.95 52\n 3 0.93 0.89 0.91 56\n 4 0.94 0.97 0.95 64\n 5 0.93 0.95 0.94 73\n 6 0.93 0.98 0.96 57\n 7 1.00 0.98 0.99 62\n 8 0.91 0.92 0.91 52\n 9 0.95 0.87 0.91 68\n \n micro avg 0.94 0.94 0.94 594\n macro avg 0.94 0.94 0.94 594\n weighted avg 0.94 0.94 0.94 594\n \n 0.9427609427609428\n\n\n### Your original system [4 points]\n\nThis is a simple dataset, but our focus on the 'word_disjoint' condition ensures that it's a challenging one, and there are lots of modeling strategies one might adopt. \n\nYou are free to do whatever you like. We require only that your system differ in some way from those defined in the preceding questions. They don't have to be completely different, though. For example, you might want to stick with the model but represent examples differently, or the reverse.\n\nKeep in mind that, for the bake-off evaluation, the 'edge_disjoint' portions of the data are off limits. You can, though, train on the combination of the 'word_disjoint' 'train' and 'dev' portions. You are free to use different pretrained word vectors and the like. 
Please do not introduce additional entailment datasets into your training data, though.\n\nPlease embed your code in this notebook so that we can rerun it.\n\n\n```python\n# explore different models\nfrom sklearn import model_selection\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.svm import SVC\n\n# prepare models\nmodels = []\nmodels.append(('LR', LogisticRegression()))\nmodels.append(('LDA', LinearDiscriminantAnalysis()))\nmodels.append(('KNN', KNeighborsClassifier()))\nmodels.append(('CART', DecisionTreeClassifier()))\nmodels.append(('NB', GaussianNB()))\nmodels.append(('SVM', SVC()))\n```\n\n\n```python\n# using alter_vec_concatenate function\nmacroF1_dict = {}\nfor name, model in models:\n word_disjoint_experiment = nli.wordentail_experiment(\n train_data=wordentail_data['word_disjoint']['train'],\n assess_data=wordentail_data['word_disjoint']['dev'], \n model=model, \n vector_func=glove_vec,\n vector_combo_func=alter_vec_concatenate)\n macroF1_dict[name] = word_disjoint_experiment['macro-F1']\n \n\n```\n\n /home/teng/anaconda3/envs/nlu/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. 
Specify a solver to silence this warning.\n FutureWarning)\n\n\n precision recall f1-score support\n \n 0 0.92 0.97 0.94 1910\n 1 0.53 0.30 0.38 239\n \n micro avg 0.89 0.89 0.89 2149\n macro avg 0.72 0.63 0.66 2149\n weighted avg 0.87 0.89 0.88 2149\n \n precision recall f1-score support\n \n 0 0.91 0.97 0.94 1910\n 1 0.54 0.28 0.37 239\n \n micro avg 0.89 0.89 0.89 2149\n macro avg 0.73 0.62 0.65 2149\n weighted avg 0.87 0.89 0.88 2149\n \n precision recall f1-score support\n \n 0 0.91 0.93 0.92 1910\n 1 0.31 0.25 0.28 239\n \n micro avg 0.86 0.86 0.86 2149\n macro avg 0.61 0.59 0.60 2149\n weighted avg 0.84 0.86 0.85 2149\n \n precision recall f1-score support\n \n 0 0.90 0.84 0.87 1910\n 1 0.18 0.29 0.23 239\n \n micro avg 0.78 0.78 0.78 2149\n macro avg 0.54 0.57 0.55 2149\n weighted avg 0.82 0.78 0.80 2149\n \n precision recall f1-score support\n \n 0 0.93 0.87 0.90 1910\n 1 0.32 0.51 0.40 239\n \n micro avg 0.83 0.83 0.83 2149\n macro avg 0.63 0.69 0.65 2149\n weighted avg 0.87 0.83 0.84 2149\n \n\n\n /home/teng/anaconda3/envs/nlu/lib/python3.7/site-packages/sklearn/svm/base.py:196: FutureWarning: The default value of gamma will change from 'auto' to 'scale' in version 0.22 to account better for unscaled features. 
Set gamma explicitly to 'auto' or 'scale' to avoid this warning.\n \"avoid this warning.\", FutureWarning)\n\n\n precision recall f1-score support\n \n 0 0.90 1.00 0.95 1910\n 1 0.85 0.14 0.24 239\n \n micro avg 0.90 0.90 0.90 2149\n macro avg 0.88 0.57 0.60 2149\n weighted avg 0.90 0.90 0.87 2149\n \n\n\n\n```python\nmacroF1_dict\n```\n\n\n\n\n {'LR': 0.6609217738768122,\n 'LDA': 0.6537424265984265,\n 'KNN': 0.5973378736366588,\n 'CART': 0.5483229562403265,\n 'NB': 0.6480074347308773,\n 'SVM': 0.5956134882605116}\n\n\n\n\n```python\n# using original vec_concatenate function\nmacroF1_dict_original = {}\nfor name, model in models:\n word_disjoint_experiment = nli.wordentail_experiment(\n train_data=wordentail_data['word_disjoint']['train'],\n assess_data=wordentail_data['word_disjoint']['dev'], \n model=model, \n vector_func=glove_vec,\n vector_combo_func=vec_concatenate)\n macroF1_dict_original[name] = word_disjoint_experiment['macro-F1']\n \n\n```\n\n /home/teng/anaconda3/envs/nlu/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. 
Specify a solver to silence this warning.\n FutureWarning)\n\n\n precision recall f1-score support\n \n 0 0.90 0.98 0.94 1910\n 1 0.49 0.16 0.24 239\n \n micro avg 0.89 0.89 0.89 2149\n macro avg 0.70 0.57 0.59 2149\n weighted avg 0.86 0.89 0.86 2149\n \n precision recall f1-score support\n \n 0 0.90 0.98 0.94 1910\n 1 0.51 0.18 0.26 239\n \n micro avg 0.89 0.89 0.89 2149\n macro avg 0.71 0.58 0.60 2149\n weighted avg 0.86 0.89 0.86 2149\n \n precision recall f1-score support\n \n 0 0.91 0.94 0.92 1910\n 1 0.36 0.29 0.32 239\n \n micro avg 0.86 0.86 0.86 2149\n macro avg 0.64 0.61 0.62 2149\n weighted avg 0.85 0.86 0.86 2149\n \n precision recall f1-score support\n \n 0 0.90 0.86 0.88 1910\n 1 0.19 0.27 0.22 239\n \n micro avg 0.79 0.79 0.79 2149\n macro avg 0.55 0.56 0.55 2149\n weighted avg 0.82 0.79 0.81 2149\n \n precision recall f1-score support\n \n 0 0.92 0.87 0.89 1910\n 1 0.29 0.44 0.35 239\n \n micro avg 0.82 0.82 0.82 2149\n macro avg 0.61 0.65 0.62 2149\n weighted avg 0.85 0.82 0.83 2149\n \n\n\n /home/teng/anaconda3/envs/nlu/lib/python3.7/site-packages/sklearn/svm/base.py:196: FutureWarning: The default value of gamma will change from 'auto' to 'scale' in version 0.22 to account better for unscaled features. 
Set gamma explicitly to 'auto' or 'scale' to avoid this warning.\n \"avoid this warning.\", FutureWarning)\n\n\n precision recall f1-score support\n \n 0 0.90 1.00 0.94 1910\n 1 0.81 0.07 0.13 239\n \n micro avg 0.89 0.89 0.89 2149\n macro avg 0.85 0.53 0.54 2149\n weighted avg 0.89 0.89 0.85 2149\n \n\n\n\n```python\nmacroF1_dict_original\n```\n\n\n\n\n {'LR': 0.5896050403454263,\n 'LDA': 0.6005052051439069,\n 'KNN': 0.6227195113152642,\n 'CART': 0.5519613010529768,\n 'NB': 0.6207705823123246,\n 'SVM': 0.5374004648150265}\n\n\n\n\n```python\n# conclusion: using LDA/LR + alter_vec_concatenate produced relatively high results\n```\n\n## Bake-off [1 point]\n\nThe goal of the bake-off is to achieve the highest macro-average F1 score on __word_disjoint__, on a test set that we will make available at the start of the bake-off on May 6. The announcement will go out on Piazza. To enter, you'll be asked to run `nli.bake_off_evaluation` on the output of your chosen `nli.wordentail_experiment` run. \n\nTo enter the bake-off, upload this notebook on Canvas:\n\nhttps://canvas.stanford.edu/courses/99711/assignments/187250\n\nThe cells below this one constitute your bake-off entry.\n\nThe rules described in the [Your original system](#Your-original-system-[4-points]) homework question are also in effect for the bake-off.\n\nSystems that enter will receive the additional homework point, and systems that achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.\n\nThe bake-off will close at 4:30 pm on May 8. Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.\n\n\n```python\n# Enter your bake-off assessment code into this cell. 
\n# Please do not remove this comment.\ntest_data_filename = os.path.join(\n NLIDATA_HOME,\n \"bakeoff4-wordentail-data\",\n \"nli_wordentail_bakeoff_data-test.json\")\n\nmodel = LinearDiscriminantAnalysis()\n\nword_disjoint_experiment = nli.wordentail_experiment(\n train_data=wordentail_data['word_disjoint']['train'],\n assess_data=wordentail_data['word_disjoint']['dev'], \n model=model, \n vector_func=glove_vec,\n vector_combo_func=alter_vec_concatenate)\n\nnli.bake_off_evaluation(\n word_disjoint_experiment,\n test_data_filename) \n\n```\n\n precision recall f1-score support\n \n 0 0.91 0.97 0.94 1910\n 1 0.54 0.28 0.37 239\n \n micro avg 0.89 0.89 0.89 2149\n macro avg 0.73 0.62 0.65 2149\n weighted avg 0.87 0.89 0.88 2149\n \n precision recall f1-score support\n \n 0 0.86 0.95 0.90 1767\n 1 0.68 0.38 0.49 446\n \n micro avg 0.84 0.84 0.84 2213\n macro avg 0.77 0.67 0.69 2213\n weighted avg 0.82 0.84 0.82 2213\n \n\n\n\n```python\n# On an otherwise blank line in this cell, please enter\n# your macro-avg f1 value as reported by the code above. 
\n# Please enter only a number between 0 and 1 inclusive.\n# Please do not remove this comment.\n0.69\n```\n", "meta": {"hexsha": "4941b89aba39c6391efc1c91f4a8c040a6ddf36a", "size": 47431, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "hw4_wordentai_with_bakeoff.ipynb", "max_stars_repo_name": "yiyang7/cs224u", "max_stars_repo_head_hexsha": "3e360a15640f3ba5fc24f34ad45fc8284aabd26d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hw4_wordentai_with_bakeoff.ipynb", "max_issues_repo_name": "yiyang7/cs224u", "max_issues_repo_head_hexsha": "3e360a15640f3ba5fc24f34ad45fc8284aabd26d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw4_wordentai_with_bakeoff.ipynb", "max_forks_repo_name": "yiyang7/cs224u", "max_forks_repo_head_hexsha": "3e360a15640f3ba5fc24f34ad45fc8284aabd26d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.376109215, "max_line_length": 574, "alphanum_fraction": 0.5261748645, "converted": true, "num_tokens": 8780, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.5, "lm_q2_score": 0.2751297238231752, "lm_q1q2_score": 0.1375648619115876}} {"text": "Probabilistic Programming\n=====\nand Bayesian Methods for Hackers \n========\n\n##### Version 0.1\n\n`Original content created by Cam Davidson-Pilon`\n\n`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`\n___\n\n\nWelcome to *Bayesian Methods for Hackers*. 
The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. 
\n\nThe Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nFor this to be clearer, we consider an alternative interpretation of probability: *Frequentists*, adherents of the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that across all these realities, the frequency of occurrences defines the probability. \n\nBayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, of an event occurring. Simply put, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. 
Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you that candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs about events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's result. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probabilities is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. 
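The frequentist notion of probability as long-run frequency is easy to demonstrate by simulation. The sketch below (mine, not the book's) flips a simulated fair coin many times and watches the running frequency of heads settle toward 1/2:

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded so the run is reproducible
flips = rng.integers(0, 2, size=100_000)  # 0 = tails, 1 = heads

# Running frequency of heads after each flip.
running_freq = np.cumsum(flips) / np.arange(1, flips.size + 1)

print("frequency after 10 flips:      %.3f" % running_freq[9])
print("frequency after 100,000 flips: %.3f" % running_freq[-1])
```

Early frequencies wander noticeably; only the long run stabilizes near 1/2 — which is why one-off events such as a single election strain the frequentist interpretation.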
\n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. 
This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. 
As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. 
Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead in the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\")\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using an argument similar to Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to })\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. 
Bayesian inference merely uses it to connect prior probabilities $P(A)$ with the updated posterior probability $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?\n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).\n\n\n```python\n\"\"\"\nThe book uses a custom matplotlibrc file, which provides the unique styles for\nmatplotlib plots. If executing this book, and you wish to use the book's\nstyling, provided are two options:\n 1. Overwrite your own matplotlibrc file with the rc-file provided in the\n book's styles/ dir. See http://matplotlib.org/users/customizing.html\n 2. Also in the styles is the bmh_matplotlibrc.json file. This can be used to\n update the styles in only this notebook. Try running the following code:\n\n import json\n s = json.load(open(\"../styles/bmh_matplotlibrc.json\"))\n matplotlib.rcParams.update(s)\n\n\"\"\"\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. 
LOOK AT PICTURE, MICHAEL!\n%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj. prior.\nfor k, N in enumerate(n_trials):\n sx = plt.subplot(len(n_trials)//2, 2, k+1)\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials)-1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). 
As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
\n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e., they are a combination of the above two categories. \n\n### Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability that $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots $$\n\n$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\\lambda$ can be any positive number. By increasing $\\lambda$, we add more probability to larger values, and conversely by decreasing $\\lambda$ we add more probability to smaller values. One can describe $\\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. 
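The mass function above can be checked directly. The short sketch below (my own, using `scipy.stats`, which also appears in the plotting code later) computes $P(Z=k)$ by hand, compares it with `scipy.stats.poisson`, and confirms that the probabilities over all non-negative integers sum to one:

```python
from math import exp, factorial

import numpy as np
import scipy.stats as stats

lam = 4.25
k_values = np.arange(60)  # the tail beyond k = 60 is vanishingly small

# P(Z = k) = lam**k * exp(-lam) / k!, computed term by term
pmf_by_hand = np.array([lam**k * exp(-lam) / factorial(k) for k in k_values])

# The hand computation matches scipy's implementation...
assert np.allclose(pmf_by_hand, stats.poisson.pmf(k_values, lam))

# ...and the probabilities (essentially) sum to one.
print("total probability: %.6f" % pmf_by_hand.sum())  # prints 1.000000
```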
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\");\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. 
But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. 
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. 
Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. 
Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC3\n-----\n\nPyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. 
One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.\n\nWe will model the problem above using PyMC3. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC3 code is easy to read. The only novel thing should be the syntax. 
Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables.\n\n\n```python\nimport pymc3 as pm\n\nwith pm.Model() as model:\n alpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\n lambda_1 = pm.Exponential(\"lambda_1\", alpha)\n lambda_2 = pm.Exponential(\"lambda_2\", alpha)\n \n tau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data - 1)\n```\n\n Applied log-transform to lambda_1 and added transformed lambda_1_log_ to model.\n Applied log-transform to lambda_2 and added transformed lambda_2_log_ to model.\n\n\nIn the code above, we create the PyMC3 variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC3's *stochastic variables*, so-called because they are treated by the back end as random number generators.\n\n\n```python\nwith model:\n idx = np.arange(n_count_data) # Index\n lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`.\n\nNote that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n\n```python\nwith model:\n observation = pm.Poisson(\"obs\", lambda_, observed=count_data)\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword. \n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. 
The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```python\n### Mysterious code to be explained in Chapter 3.\nwith model:\n step = pm.Metropolis()\n trace = pm.sample(10000, tune=5000, step=step, return_inferencedata=False)\n```\n\n 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 10000/10000 [00:02<00:00, 4511.50it/s]\n\n\n\n```python\nlambda_1_samples = trace['lambda_1']\nlambda_2_samples = trace['lambda_2']\ntau_samples = trace['tau']\n```\n\n\n```python\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", density=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", density=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in 
days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. 
By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. 
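The averaging rule described above can be written out explicitly. With $N$ posterior samples indexed by $i$, the estimated expected rate on day $t$ is

$$E[\\lambda \\;|\\; t] \\;\\approx\\; \\frac{1}{N}\\sum_{i=1}^{N}\\left(\\lambda_{1,i}\\,\\mathbf{1}_{\\{t < \\tau_i\\}} + \\lambda_{2,i}\\,\\mathbf{1}_{\\{t \\geq \\tau_i\\}}\\right)$$

where $\\mathbf{1}$ denotes the indicator function. This is exactly the quantity the loop below accumulates, one day at a time.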
\n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. 
(In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\n#type your code here.\n```\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_2_samples/lambda_1_samples`. Note that this quantity is very different from `lambda_2_samples.mean()/lambda_1_samples.mean()`.\n\n\n```python\n#type your code here.\n```\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45? That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\n#type your code here.\n```\n\n### References\n\n\n- [1] Gelman, Andrew. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg). Web, 22 Jan 2013.\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Salvatier J, Wiecki TV, and Fonnesbeck C. (2016) Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55.\n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" Online posting, 24 Mar 2013.
\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n    styles = open(\"../styles/custom.css\", \"r\").read()\n    return HTML(styles)\ncss_styling()\n```\n\n\n```python\n\n```\n"} {"text": "```python\n%matplotlib inline\n```\n\n# 592B Fall 2019 Problem Set 2 due Thurs 09/19 11:59PM\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.io.wavfile as wavfile\n\nfrom ipywidgets import interactive\nfrom IPython.display import Audio, display\n```\n\nThe audio file in the directory that this notebook is in, `Fa50.wav`, comes from [Sameer ud Dowla Khan](http://www.reed.edu/linguistics/khan/index.html) at Reed College. It is an utterance of Bengali illustrating how Bengali intonational events chunk utterances into prosodic phrases. See page 101 of Khan's [dissertation on Bengali intonation](http://www.reed.edu/linguistics/khan/assets/Khan%202008%20disseration%20Intonational%20phonology%20and%20focus%20prosody%20of%20Bengali.pdf).\n\nHere's an annotated f0 contour of `Fa50.wav` from that page.\n\n\nYou will be doing some work with this audio file in this problem set.\n\n## Problem 1: converting from samples to time\n\nWrite a function that:\n- plots the audio data from `t_start` to `t_stop`, with time in seconds on the x-axis\n- creates an Audio object you can play, playing the audio data from `t_start` to `t_stop`.\n- if you like, you can generate an interactive \"widget\" like we saw in the Class 2.2 notebook, where you can manipulate sliders and then see the plot and have the audio change.\n\nYou can take a look at the sample code for converting from samples to time in Class 2.2's notebook, but remember that there are problems with that code. 
You can do way better!\n\n## Problem 2: resampling audio data and writing it to file\n\nThe purpose of this problem is to:\n- introduce you to [`scipy.signal`](https://docs.scipy.org/doc/scipy/reference/signal.html), which we will be continuing to use during the course.\n- have you figure out how to write data to audio files\n- give you experience with figuring out how to use unfamiliar functions on your own\n- get you thinking more about the effect of sampling rates on representing signals\n\n*n.b. in my past experience, people have sometimes had problems using `scipy.signal`'s resampling function on their machine. If that ends up being the case for you, you might try instead [`librosa`](https://librosa.github.io/librosa/)'s resampling function.*\n\n\n1. Resample the Bengali audio data using `scipy.signal.resample`, plot the resampled data, and also create a playable Audio object of the resampled data. Do this for two sampling rates: two times the original sampling rate of the file, and half of the original sampling rate of the file. Note: you may need to use `round()` and `int()` to coerce the number of samples to be an integer.\n2. Try to explain why the audio sounds the way it does for the upsampled and downsampled audio.\n3. Write the re-sampled audio to WAV files in the current directory using `scipy.io.wavfile.write`.\n4. Use `scipy.io.wavfile.read` to read in your re-sampled files and check that the sampling rate is what you expect.\n\n\n## Problem 3: Fourier series of a square wave\n\nWe will be going over the introductory material you need for this on Tuesday 09/17, so feel free to wait until then to get started on this problem if you like.\n\n### Define a square wave.\n1. Define a function for a [square wave](http://mathworld.wolfram.com/SquareWave.html) with a period of 1, with y = 1 from x = 0 to 0.5, and y = 0 from x = 0.5 to 1. You might see if there are any functions in `scipy.signal` that could help.\n2. 
Make a plot of your square wave showing 5 periods.\n\n### Reconstruct the square wave using a Fourier series.\n1. Calculate the Fourier coefficients for a square wave. Note: there are an infinite number of\ncoefficients, so just calculate the first six. Try to find a\npattern for the coefficient values of the infinite series.\n2. Plot the individual Fourier series terms, i.e., make plots of each of the following:\n\\begin{align}\n a_0\\\\\n a_1\\cos(2\\pi t) + b_1\\sin(2\\pi t)\\\\\n a_2\\cos(4\\pi t) + b_2\\sin(4\\pi t)\\\\\n a_3\\cos(6\\pi t) + b_3\\sin(6\\pi t)\\\\\n a_4\\cos(8\\pi t) + b_4\\sin(8\\pi t)\\\\\n a_5\\cos(10\\pi t) + b_5\\sin(10\\pi t)\n\\end{align}\n\n3. Plot the reconstruction of the square wave as you add in successive terms in the Fourier series, i.e., make plots of each of the following (I am subsuming $a_0$ as a term in the sum by having the sum start from $n=0$):\n\n\\begin{align}\n \\displaystyle\\sum\\limits_{n=0}^0 \\left(a_n\\cos(2\\pi nt) + b_n\\sin(2\\pi nt)\\right)\\\\\n \\displaystyle\\sum\\limits_{n=0}^1 \\left(a_n\\cos(2\\pi nt) + b_n\\sin(2\\pi nt)\\right)\\\\\n \\displaystyle\\sum\\limits_{n=0}^2 \\left(a_n\\cos(2\\pi nt) + b_n\\sin(2\\pi nt)\\right)\\\\\n \\displaystyle\\sum\\limits_{n=0}^3 \\left(a_n\\cos(2\\pi nt) + b_n\\sin(2\\pi nt)\\right)\\\\\n \\displaystyle\\sum\\limits_{n=0}^4 \\left(a_n\\cos(2\\pi nt) + b_n\\sin(2\\pi nt)\\right)\\\\\n \\displaystyle\\sum\\limits_{n=0}^5 \\left(a_n\\cos(2\\pi nt) + b_n\\sin(2\\pi nt)\\right)\\\\\n\\end{align}\n\n\n\n```python\n\n```\n"} {"text": "```python\n%matplotlib inline\n```\n\n\nTrain a Mario-playing RL Agent\n===============================\n\nAuthors: `Yuansong Feng `__, `Suraj\nSubramanian `__, `Howard\nWang `__, `Steven\nGuo `__.\n\nTranslated by: `Taeyoung Kim `__.
\n\n\uc774\ubc88 \ud29c\ud1a0\ub9ac\uc5bc\uc5d0\uc11c\ub294 \uc2ec\uce35 \uac15\ud654 \ud559\uc2b5\uc758 \uae30\ubcf8 \uc0ac\ud56d\ub4e4\uc5d0 \ub300\ud574 \uc774\uc57c\uae30\ud574\ubcf4\ub3c4\ub85d \ud558\uaca0\uc2b5\ub2c8\ub2e4.\n\ub9c8\uc9c0\ub9c9\uc5d0\ub294, \uc2a4\uc2a4\ub85c \uac8c\uc784\uc744 \ud560 \uc218 \uc788\ub294 AI \uae30\ubc18 \ub9c8\ub9ac\uc624\ub97c \n(`Double Deep Q-Networks `__ \uc0ac\uc6a9) \n\uad6c\ud604\ud558\uac8c \ub429\ub2c8\ub2e4.\n\n\uc774 \ud29c\ud1a0\ub9ac\uc5bc\uc5d0\uc11c\ub294 RL\uc5d0 \ub300\ud55c \uc0ac\uc804 \uc9c0\uc2dd\uc774 \ud544\uc694\ud558\uc9c0 \uc54a\uc9c0\ub9cc, \n\uc774\ub7ec\ud55c `\ub9c1\ud06c `__\n\ub97c \ud1b5\ud574 RL \uac1c\ub150\uc5d0 \uce5c\uc219\ud574 \uc9c8 \uc218 \uc788\uc73c\uba70,\n\uc5ec\uae30 \uc788\ub294\n`\uce58\ud2b8\uc2dc\ud2b8 `__\n\ub97c \ud65c\uc6a9\ud560 \uc218\ub3c4 \uc788\uc2b5\ub2c8\ub2e4. \ud29c\ud1a0\ub9ac\uc5bc\uc5d0\uc11c \uc0ac\uc6a9\ud558\ub294 \uc804\uccb4 \ucf54\ub4dc\ub294\n`\uc5ec\uae30 `__\n\uc5d0\uc11c \ud655\uc778 \ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n.. 
figure:: /_static/img/mario.gif\n :alt: mario\n\n\n\n```python\n# !pip install gym-super-mario-bros==7.3.0\n\nimport torch\nfrom torch import nn\nfrom torchvision import transforms as T\nfrom PIL import Image\nimport numpy as np\nfrom pathlib import Path\nfrom collections import deque\nimport random, datetime, os, copy\n\n# Gym\uc740 \uac15\ud654\ud559\uc2b5\uc744 \uc704\ud55c OpenAI \ud234\ud0b7\uc785\ub2c8\ub2e4.\nimport gym\nfrom gym.spaces import Box\nfrom gym.wrappers import FrameStack\n\n# OpenAI Gym\uc744 \uc704\ud55c NES \uc5d0\ubbac\ub808\uc774\ud130\nfrom nes_py.wrappers import JoypadSpace\n\n# OpenAI Gym\uc5d0\uc11c\uc758 \uc288\ud37c \ub9c8\ub9ac\uc624 \ud658\uacbd \uc138\ud305\nimport gym_super_mario_bros\n```\n\n\uac15\ud654\ud559\uc2b5 \uac1c\ub150\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\n**\ud658\uacbd(Environment)** : \uc5d0\uc774\uc804\ud2b8\uac00 \uc0c1\ud638\uc791\uc6a9\ud558\uba70 \uc2a4\uc2a4\ub85c \ubc30\uc6b0\ub294 \uc138\uacc4\uc785\ub2c8\ub2e4.\n\n**\ud589\ub3d9(Action)** $a$ : \uc5d0\uc774\uc804\ud2b8\uac00 \ud658\uacbd\uc5d0 \uc5b4\ub5bb\uac8c \uc751\ub2f5\ud558\ub294\uc9c0 \ud589\ub3d9\uc744 \ud1b5\ud574 \ub098\ud0c0\ub0c5\ub2c8\ub2e4. 
\n\uac00\ub2a5\ud55c \ubaa8\ub4e0 \ud589\ub3d9\uc758 \uc9d1\ud569\uc744 *\ud589\ub3d9 \uacf5\uac04* \uc774\ub77c\uace0 \ud569\ub2c8\ub2e4.\n\n**\uc0c1\ud0dc(State)** $s$ : \ud658\uacbd\uc758 \ud604\uc7ac \ud2b9\uc131\uc744 \uc0c1\ud0dc\ub97c \ud1b5\ud574 \ub098\ud0c0\ub0c5\ub2c8\ub2e4.\n\ud658\uacbd\uc774 \uc788\uc744 \uc218 \uc788\ub294 \ubaa8\ub4e0 \uac00\ub2a5\ud55c \uc0c1\ud0dc \uc9d1\ud569\uc744 *\uc0c1\ud0dc \uacf5\uac04* \uc774\ub77c\uace0 \ud569\ub2c8\ub2e4.\n\n**\ud3ec\uc0c1(Reward)** $r$ : \ud3ec\uc0c1\uc740 \ud658\uacbd\uc5d0\uc11c \uc5d0\uc774\uc804\ud2b8\ub85c \uc804\ub2ec\ub418\ub294 \ud575\uc2ec \ud53c\ub4dc\ubc31\uc785\ub2c8\ub2e4.\n\uc5d0\uc774\uc804\ud2b8\uac00 \ud559\uc2b5\ud558\uace0 \ud5a5\ud6c4 \ud589\ub3d9\uc744 \ubcc0\uacbd\ud558\ub3c4\ub85d \uc720\ub3c4\ud558\ub294 \uac83\uc785\ub2c8\ub2e4.\n\uc5ec\ub7ec \uc2dc\uac04 \ub2e8\uacc4\uc5d0 \uac78\uce5c \ud3ec\uc0c1\uc758 \ud569\uc744 **\ub9ac\ud134(Return)** \uc774\ub77c\uace0 \ud569\ub2c8\ub2e4.\n\n**\ucd5c\uc801\uc758 \ud589\ub3d9-\uac00\uce58 \ud568\uc218(Action-Value function)** $Q^*(s,a)$ : \uc0c1\ud0dc $s$\n\uc5d0\uc11c \uc2dc\uc791\ud558\uba74 \uc608\uc0c1\ub418\ub294 \ub9ac\ud134\uc744 \ubc18\ud658\ud558\uace0, \uc784\uc758\uc758 \ud589\ub3d9 $a$\n\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4. \uadf8\ub9ac\uace0 \uac01\uac01\uc758 \ubbf8\ub798\uc758 \ub2e8\uacc4\uc5d0\uc11c \ud3ec\uc0c1\uc758 \ud569\uc744 \uadf9\ub300\ud654\ud558\ub294 \ud589\ub3d9\uc744 \uc120\ud0dd\ud558\ub3c4\ub85d \ud569\ub2c8\ub2e4.\n$Q$ \ub294 \uc0c1\ud0dc\uc5d0\uc11c \ud589\ub3d9\uc758 \u201c\ud488\uc9c8\u201d \n\uc744 \ub098\ud0c0\ub0c5\ub2c8\ub2e4. 
\uc6b0\ub9ac\ub294 \uc774 \ud568\uc218\ub97c \uadfc\uc0ac \uc2dc\ud0a4\ub824\uace0 \ud569\ub2c8\ub2e4.\n\n\n\n\n\ud658\uacbd(Environment)\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\n\ud658\uacbd \ucd08\uae30\ud654\ud558\uae30\n------------------------\n\n\ub9c8\ub9ac\uc624 \uac8c\uc784\uc5d0\uc11c \ud658\uacbd\uc740 \ud29c\ube0c, \ubc84\uc12f, \uadf8 \uc774\uc678 \ub2e4\ub978 \uc5ec\ub7ec \uc694\uc18c\ub4e4\ub85c \uad6c\uc131\ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4.\n\n\ub9c8\ub9ac\uc624\uac00 \ud589\ub3d9\uc744 \ucde8\ud558\uba74, \ud658\uacbd\uc740 \ubcc0\uacbd\ub41c (\ub2e4\uc74c)\uc0c1\ud0dc, \ud3ec\uc0c1 \uadf8\ub9ac\uace0\n\ub2e4\ub978 \uc815\ubcf4\ub4e4\ub85c \uc751\ub2f5\ud569\ub2c8\ub2e4.\n\n\n\n\n\n```python\n# \uc288\ud37c \ub9c8\ub9ac\uc624 \ud658\uacbd \ucd08\uae30\ud654\ud558\uae30\nenv = gym_super_mario_bros.make(\"SuperMarioBros-1-1-v0\")\n\n# \uc0c1\ud0dc \uacf5\uac04\uc744 2\uac00\uc9c0\ub85c \uc81c\ud55c\ud558\uae30\n# 0. \uc624\ub978\ucabd\uc73c\ub85c \uac77\uae30\n# 1. 
\uc624\ub978\ucabd\uc73c\ub85c \uc810\ud504\ud558\uae30\nenv = JoypadSpace(env, [[\"right\"], [\"right\", \"A\"]])\n\nenv.reset()\nnext_state, reward, done, info = env.step(action=0)\nprint(f\"{next_state.shape},\\n {reward},\\n {done},\\n {info}\")\n```\n\n\ud658\uacbd \uc804\ucc98\ub9ac \uacfc\uc815 \uac70\uce58\uae30\n------------------------\n\n``\ub2e4\uc74c \uc0c1\ud0dc(next_state)`` \uc5d0\uc11c \ud658\uacbd \ub370\uc774\ud130\uac00 \uc5d0\uc774\uc804\ud2b8\ub85c \ubc18\ud658\ub429\ub2c8\ub2e4.\n\uc55e\uc11c \uc0b4\ud3b4\ubcf4\uc558\ub4ef\uc774, \uac01\uac01\uc758 \uc0c1\ud0dc\ub294 ``[3, 240, 256]`` \uc758 \ubc30\uc5f4\ub85c \ub098\ud0c0\ub0b4\uace0 \uc788\uc2b5\ub2c8\ub2e4.\n\uc885\uc885 \uc0c1\ud0dc\uac00 \uc81c\uacf5\ud558\ub294 \uac83\uc740 \uc5d0\uc774\uc804\ud2b8\uac00 \ud544\uc694\ub85c \ud558\ub294 \uac83\ubcf4\ub2e4 \ub354 \ub9ce\uc740 \uc815\ubcf4\uc785\ub2c8\ub2e4.\n\uc608\ub97c \ub4e4\uc5b4, \ub9c8\ub9ac\uc624\uc758 \ud589\ub3d9\uc740 \ud30c\uc774\ud504\uc758 \uc0c9\uae54\uc774\ub098 \ud558\ub298\uc758 \uc0c9\uae54\uc5d0 \uc88c\uc6b0\ub418\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4!\n\n\uc544\ub798\uc5d0 \uc124\uba85\ud560 \ud074\ub798\uc2a4\ub4e4\uc740 \ud658\uacbd \ub370\uc774\ud130\ub97c \uc5d0\uc774\uc804\ud2b8\uc5d0 \ubcf4\ub0b4\uae30 \uc804 \ub2e8\uacc4\uc5d0\uc11c \uc804\ucc98\ub9ac \uacfc\uc815\uc5d0 \uc0ac\uc6a9\ud560\n**\ub798\ud37c(Wrappers)** \uc785\ub2c8\ub2e4.\n\n``GrayScaleObservation`` \uc740 RGB \uc774\ubbf8\uc9c0\ub97c \ud751\ubc31 \uc774\ubbf8\uc9c0\ub85c \ubc14\uafb8\ub294 \uc77c\ubc18\uc801\uc778 \ub798\ud37c\uc785\ub2c8\ub2e4.\n``GrayScaleObservation`` \ud074\ub798\uc2a4\ub97c \uc0ac\uc6a9\ud558\uba74 \uc720\uc6a9\ud55c \uc815\ubcf4\ub97c \uc783\uc9c0 \uc54a\uace0 \uc0c1\ud0dc\uc758 \ud06c\uae30\ub97c \uc904\uc77c \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n``GrayScaleObservation`` \ub97c \uc801\uc6a9\ud558\uba74 \uac01\uac01 \uc0c1\ud0dc\uc758 \ud06c\uae30\ub294\n``[1, 240, 256]`` \uc774 \ub429\ub2c8\ub2e4.\n\n``ResizeObservation`` \uc740 
\uac01\uac01\uc758 \uc0c1\ud0dc(Observation)\ub97c \uc815\uc0ac\uac01\ud615 \uc774\ubbf8\uc9c0\ub85c \ub2e4\uc6b4 \uc0d8\ud50c\ub9c1\ud569\ub2c8\ub2e4.\n\uc774 \ub798\ud37c\ub97c \uc801\uc6a9\ud558\uba74 \uac01\uac01 \uc0c1\ud0dc\uc758 \ud06c\uae30\ub294 ``[1, 84, 84]`` \uc774 \ub429\ub2c8\ub2e4.\n\n``SkipFrame`` \uc740 ``gym.Wrapper`` \uc73c\ub85c\ubd80\ud130 \uc0c1\uc18d\uc744 \ubc1b\uc740 \uc0ac\uc6a9\uc790 \uc9c0\uc815 \ud074\ub798\uc2a4\uc774\uace0,\n``step()`` \ud568\uc218\ub97c \uad6c\ud604\ud569\ub2c8\ub2e4. \uc65c\ub0d0\ud558\uba74 \uc5f0\uc18d\ub418\ub294 \ud504\ub808\uc784\uc740 \ud070 \ucc28\uc774\uac00 \uc5c6\uae30 \ub54c\ubb38\uc5d0\nn\uac1c\uc758 \uc911\uac04 \ud504\ub808\uc784\uc744 \ud070 \uc815\ubcf4\uc758 \uc190\uc2e4 \uc5c6\uc774 \uac74\ub108\ub6f8 \uc218 \uc788\uae30 \ub54c\ubb38\uc785\ub2c8\ub2e4.\nn\ubc88\uc9f8 \ud504\ub808\uc784\uc740 \uac74\ub108\ub6f4 \uac01 \ud504\ub808\uc784\uc5d0 \uac78\uccd0 \ub204\uc801\ub41c \ud3ec\uc0c1\uc744\n\uc9d1\uacc4\ud569\ub2c8\ub2e4.\n\n``FrameStack`` \uc740 \ud658\uacbd\uc758 \uc5f0\uc18d \ud504\ub808\uc784\uc744\n\ub2e8\uc77c \uad00\ucc30 \uc9c0\uc810\uc73c\ub85c \ubc14\uafb8\uc5b4 \ud559\uc2b5 \ubaa8\ub378\uc5d0 \uc81c\uacf5\ud560 \uc218 \uc788\ub294 \ub798\ud37c\uc785\ub2c8\ub2e4.\n\uc774\ub807\uac8c \ud558\uba74 \ub9c8\ub9ac\uc624\uac00 \ucc29\uc9c0 \uc911\uc774\uc600\ub294\uc9c0 \ub610\ub294 \uc810\ud504 \uc911\uc774\uc5c8\ub294\uc9c0\n\uc774\uc804 \uba87 \ud504\ub808\uc784\uc758 \uc6c0\uc9c1\uc784 \ubc29\ud5a5\uc5d0 \ub530\ub77c \ud655\uc778\ud560 \uc218\n\uc788\uc2b5\ub2c8\ub2e4.\n\n\n\n\n\n```python\nclass SkipFrame(gym.Wrapper):\n def __init__(self, env, skip):\n \"\"\"\ubaa8\ub4e0 `skip` \ud504\ub808\uc784\ub9cc \ubc18\ud658\ud569\ub2c8\ub2e4.\"\"\"\n super().__init__(env)\n self._skip = skip\n\n def step(self, action):\n \"\"\"\ud589\ub3d9\uc744 \ubc18\ubcf5\ud558\uace0 \ud3ec\uc0c1\uc744 \ub354\ud569\ub2c8\ub2e4.\"\"\"\n total_reward = 0.0\n done = False\n for i in range(self._skip):\n # 
\ud3ec\uc0c1\uc744 \ub204\uc801\ud558\uace0 \ub3d9\uc77c\ud55c \uc791\uc5c5\uc744 \ubc18\ubcf5\ud569\ub2c8\ub2e4.\n obs, reward, done, info = self.env.step(action)\n total_reward += reward\n if done:\n break\n return obs, total_reward, done, info\n\n\nclass GrayScaleObservation(gym.ObservationWrapper):\n def __init__(self, env):\n super().__init__(env)\n obs_shape = self.observation_space.shape[:2]\n self.observation_space = Box(low=0, high=255, shape=obs_shape, dtype=np.uint8)\n\n def permute_orientation(self, observation):\n # [H, W, C] \ubc30\uc5f4\uc744 [C, H, W] \ud150\uc11c\ub85c \ubc14\uafc9\ub2c8\ub2e4.\n observation = np.transpose(observation, (2, 0, 1))\n observation = torch.tensor(observation.copy(), dtype=torch.float)\n return observation\n\n def observation(self, observation):\n observation = self.permute_orientation(observation)\n transform = T.Grayscale()\n observation = transform(observation)\n return observation\n\n\nclass ResizeObservation(gym.ObservationWrapper):\n def __init__(self, env, shape):\n super().__init__(env)\n if isinstance(shape, int):\n self.shape = (shape, shape)\n else:\n self.shape = tuple(shape)\n\n obs_shape = self.shape + self.observation_space.shape[2:]\n self.observation_space = Box(low=0, high=255, shape=obs_shape, dtype=np.uint8)\n\n def observation(self, observation):\n transforms = T.Compose(\n [T.Resize(self.shape), T.Normalize(0, 255)]\n )\n observation = transforms(observation).squeeze(0)\n return observation\n\n\n# \ub798\ud37c\ub97c \ud658\uacbd\uc5d0 \uc801\uc6a9\ud569\ub2c8\ub2e4.\nenv = SkipFrame(env, skip=4)\nenv = GrayScaleObservation(env)\nenv = ResizeObservation(env, shape=84)\nenv = FrameStack(env, num_stack=4)\n```\n\n\uc55e\uc11c \uc18c\uac1c\ud55c \ub798\ud37c\ub97c \ud658\uacbd\uc5d0 \uc801\uc6a9\ud55c \ud6c4,\n\ucd5c\uc885 \ub798\ud551 \uc0c1\ud0dc\ub294 \uc67c\ucabd \uc544\ub798 \uc774\ubbf8\uc9c0\uc5d0 \ud45c\uc2dc\ub41c \uac83\ucc98\ub7fc 4\uac1c\uc758 \uc5f0\uc18d\ub41c \ud751\ubc31 
\ud504\ub808\uc784\uc73c\ub85c \n\uad6c\uc131\ub429\ub2c8\ub2e4. \ub9c8\ub9ac\uc624\uac00 \ud589\ub3d9\uc744 \ud560 \ub54c\ub9c8\ub2e4,\n\ud658\uacbd\uc740 \uc774 \uad6c\uc870\uc758 \uc0c1\ud0dc\ub85c \uc751\ub2f5\ud569\ub2c8\ub2e4.\n\uad6c\uc870\ub294 ``[4, 84, 84]`` \ud06c\uae30\uc758 3\ucc28\uc6d0 \ubc30\uc5f4\ub85c \uad6c\uc131\ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4.\n\n.. figure:: /_static/img/mario_env.png\n :alt: picture\n\n\n\n\n\n\uc5d0\uc774\uc804\ud2b8(Agent)\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\n``Mario`` \ub77c\ub294 \ud074\ub798\uc2a4\ub97c \uc774 \uac8c\uc784\uc758 \uc5d0\uc774\uc804\ud2b8\ub85c \uc0dd\uc131\ud569\ub2c8\ub2e4.\n\ub9c8\ub9ac\uc624\ub294 \ub2e4\uc74c\uacfc \uac19\uc740 \uae30\ub2a5\uc744 \ud560 \uc218 \uc788\uc5b4\uc57c \ud569\ub2c8\ub2e4.\n\n- **\ud589\ub3d9(Act)** \uc740 (\ud658\uacbd\uc758) \ud604\uc7ac \uc0c1\ud0dc\ub97c \uae30\ubc18\uc73c\ub85c \n \ucd5c\uc801\uc758 \ud589\ub3d9 \uc815\ucc45\uc5d0 \ub530\ub77c \uc815\ud574\uc9d1\ub2c8\ub2e4.\n\n- \uacbd\ud5d8\uc744 **\uae30\uc5b5(Remember)** \ud558\ub294 \uac83. \n \uacbd\ud5d8\uc740 (\ud604\uc7ac \uc0c1\ud0dc, \ud604\uc7ac \ud589\ub3d9, \ud3ec\uc0c1, \ub2e4\uc74c \uc0c1\ud0dc) \ub85c \uc774\ub8e8\uc5b4\uc838 \uc788\uc2b5\ub2c8\ub2e4. 
  Mario *caches* his experiences first, and later *recalls* them to update his action policy.

- **Learn** a better action policy over time.




```python
class Mario:
    def __init__(self):
        pass

    def act(self, state):
        """Given a state, choose an epsilon-greedy action."""
        pass

    def cache(self, experience):
        """Add the experience to memory."""
        pass

    def recall(self):
        """Sample experiences from memory."""
        pass

    def learn(self):
        """Update the online action value (Q) function with a batch of experiences."""
        pass
```

In the following sections, we will populate Mario's parameters and define his functions.




Act
---

For any given state, an agent can choose to exploit the optimal action or to explore by selecting a random action.

Mario uses ``self.exploration_rate`` when he chooses a random action; when he chooses to exploit, he relies on ``MarioNet`` (implemented in the ``Learn`` section) to provide the optimal action.




```python
class Mario:
    def __init__(self, state_dim, action_dim, save_dir):
        self.state_dim = state_dim
        self.action_dim = action_dim
        self.save_dir = save_dir

        self.use_cuda = torch.cuda.is_available()

        # Mario's DNN predicts the optimal action - we implement this in the Learn section.
        self.net = MarioNet(self.state_dim, self.action_dim).float()
        if self.use_cuda:
            self.net = self.net.to(device="cuda")

        self.exploration_rate = 1
        self.exploration_rate_decay = 0.99999975
        self.exploration_rate_min = 0.1
        self.curr_step = 0

        self.save_every = 5e5  # no. of experiences between saving Mario Net

    def act(self, state):
        """
        Given a state, choose an epsilon-greedy action and update the step counter.

        Inputs:
        state (LazyFrame): a single observation of the current state.
        Its dimension is (state_dim).
        Outputs:
        action_idx (int): an integer representing which action Mario will perform.
        """
        # EXPLORE: choose a random action
        if np.random.rand() < self.exploration_rate:
            action_idx = np.random.randint(self.action_dim)

        # EXPLOIT: choose the optimal action
        else:
            state = state.__array__()
            if self.use_cuda:
                state = torch.tensor(state).cuda()
            else:
                state = torch.tensor(state)
            state = state.unsqueeze(0)
            action_values = self.net(state, model="online")
            action_idx = torch.argmax(action_values, axis=1).item()

        # decrease exploration_rate
        self.exploration_rate *= self.exploration_rate_decay
        self.exploration_rate = max(self.exploration_rate_min, self.exploration_rate)

        # increment step
        self.curr_step += 1
        return action_idx
```

Cache and Recall
----------------

These two functions serve as Mario's "memory" process.

``cache()``: each time Mario performs an action, he stores the ``experience`` to his memory.
His experience includes the current *state*, the *action* performed in that state, the *reward* obtained from the action, the *next state*, and whether the game is *done*.

``recall()``: Mario randomly samples a batch of experiences from his memory, and uses that to learn the game.




```python
class Mario(Mario):  # subclassing for continuity
    def __init__(self, state_dim, action_dim, save_dir):
        super().__init__(state_dim, action_dim, save_dir)
        self.memory = deque(maxlen=100000)
        self.batch_size = 32

    def cache(self, state, next_state, action, reward, done):
        """
        Store the experience to self.memory (replay buffer)

        Inputs:
        state (LazyFrame),
        next_state (LazyFrame),
        action (int),
        reward (float),
        done (bool)
        """
        state = state.__array__()
        next_state = next_state.__array__()

        if self.use_cuda:
            state = torch.tensor(state).cuda()
            next_state = torch.tensor(next_state).cuda()
            action = torch.tensor([action]).cuda()
            reward = torch.tensor([reward]).cuda()
            done = torch.tensor([done]).cuda()
        else:
            state = torch.tensor(state)
            next_state = torch.tensor(next_state)
            action = torch.tensor([action])
            reward = torch.tensor([reward])
            done = torch.tensor([done])

        self.memory.append((state, next_state, action, reward, done))

    def recall(self):
        """
        Retrieve a batch of experiences from memory.
        """
        batch = random.sample(self.memory, self.batch_size)
        state, next_state, action, reward, done = map(torch.stack, zip(*batch))
        return state, next_state, action.squeeze(), reward.squeeze(), done.squeeze()
```

Learn
-----

Mario uses the DDQN algorithm under the hood. DDQN uses two ConvNets ($Q_{online}$ and $Q_{target}$) that independently approximate the optimal action-value function.

In our implementation, we share the feature generator ``features`` across $Q_{online}$ and $Q_{target}$, but maintain separate FC classifiers for each. $\theta_{target}$ (the parameters of $Q_{target}$) is frozen to prevent updates by backpropagation. Instead, it is periodically synced with $\theta_{online}$.
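The frozen-target-plus-periodic-sync pattern can be sanity-checked in isolation. Below is a minimal pure-Python sketch (the two-element parameter vectors, the gradient, and the shrunken sync period are all made up for illustration; this is not the real networks) of how the target parameters stay fixed between syncs while the online parameters keep moving:

```python
# Toy parameter vectors standing in for theta_online and theta_target.
theta_online = [0.5, -0.3]
theta_target = list(theta_online)  # start out identical

SYNC_EVERY = 4  # analogous to self.sync_every, shrunk for illustration

def gradient_step(theta, grad, lr=0.1):
    """One SGD update; only the online parameters ever receive it."""
    return [t - lr * g for t, g in zip(theta, grad)]

in_sync = []
for step in range(1, 9):
    theta_online = gradient_step(theta_online, grad=[1.0, -1.0])
    # theta_target is frozen, except for the periodic hard copy.
    if step % SYNC_EVERY == 0:
        theta_target = list(theta_online)
    in_sync.append(theta_online == theta_target)

print(in_sync)  # [False, False, False, True, False, False, False, True]
```

Between syncs the two sets of parameters diverge; only the hard copy at every ``SYNC_EVERY``-th step brings them back together, which is exactly what ``sync_Q_target()`` does below with ``load_state_dict``.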
(More on this later.)

Neural Network
~~~~~~~~~~~~~~




```python
class MarioNet(nn.Module):
    """A mini CNN structure:
    input -> (conv2d + relu) x 3 -> flatten -> (dense + relu) x 2 -> output
    """

    def __init__(self, input_dim, output_dim):
        super().__init__()
        c, h, w = input_dim

        if h != 84:
            raise ValueError(f"Expecting input height: 84, got: {h}")
        if w != 84:
            raise ValueError(f"Expecting input width: 84, got: {w}")

        self.online = nn.Sequential(
            nn.Conv2d(in_channels=c, out_channels=32, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(3136, 512),
            nn.ReLU(),
            nn.Linear(512, output_dim),
        )

        self.target = copy.deepcopy(self.online)

        # The parameters of Q_target are frozen.
        for p in self.target.parameters():
            p.requires_grad = False

    def forward(self, input, model):
        if model == "online":
            return self.online(input)
        elif model == "target":
            return self.target(input)
```

TD Estimate & TD Target
~~~~~~~~~~~~~~~~~~~~~~~

Two values are involved in learning:

**TD Estimate** - the predicted optimal $Q^*$ for a given state $s$.

\begin{align}{TD}_e = Q_{online}^*(s,a)\end{align}

**TD Target** - the aggregation of the current reward and the estimated $Q^*$ in the next state $s'$.

\begin{align}a' = argmax_{a} Q_{online}(s', a)\end{align}

\begin{align}{TD}_t = r + \gamma Q_{target}^*(s',a')\end{align}

Because we don't know what the next action $a'$ will be, we use the action $a'$ that maximizes $Q_{online}$ in the next state $s'$.

We use the ``@torch.no_grad()`` decorator on ``td_target()`` to disable gradient calculation here (because we don't need to backpropagate on $\theta_{target}$).




```python
class Mario(Mario):
    def __init__(self, state_dim, action_dim, save_dir):
        super().__init__(state_dim, action_dim, save_dir)
        self.gamma = 0.9

    def td_estimate(self, state, action):
        current_Q = self.net(state, model="online")[
            np.arange(0, self.batch_size), action
        ]  # Q_online(s,a)
        return current_Q

    @torch.no_grad()
    def td_target(self, reward, next_state, done):
        next_state_Q = self.net(next_state, model="online")
        best_action = torch.argmax(next_state_Q, axis=1)
        next_Q = self.net(next_state, model="target")[
            np.arange(0, self.batch_size), best_action
        ]
        return (reward + (1 - done.float()) * self.gamma * next_Q).float()
```

Updating the model
~~~~~~~~~~~~~~~~~~

As Mario samples inputs from his replay buffer, we compute $TD_t$ and $TD_e$.
We then backpropagate this loss down $Q_{online}$ to update its parameters $\theta_{online}$ ($\alpha$ is the learning rate ``lr`` passed to the ``optimizer``).

\begin{align}\theta_{online} \leftarrow \theta_{online} + \alpha \nabla(TD_e - TD_t)\end{align}

$\theta_{target}$ does not update through backpropagation. Instead, we periodically copy $\theta_{online}$ to $\theta_{target}$.

\begin{align}\theta_{target} \leftarrow \theta_{online}\end{align}




```python
class Mario(Mario):
    def __init__(self, state_dim, action_dim, save_dir):
        super().__init__(state_dim, action_dim, save_dir)
        self.optimizer = torch.optim.Adam(self.net.parameters(), lr=0.00025)
        self.loss_fn = torch.nn.SmoothL1Loss()

    def update_Q_online(self, td_estimate, td_target):
        loss = self.loss_fn(td_estimate, td_target)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return loss.item()

    def sync_Q_target(self):
        self.net.target.load_state_dict(self.net.online.state_dict())
```

Save a checkpoint
~~~~~~~~~~~~~~~~~




```python
class Mario(Mario):
    def save(self):
        save_path = (
            self.save_dir / f"mario_net_{int(self.curr_step // self.save_every)}.chkpt"
        )
        torch.save(
            dict(model=self.net.state_dict(), exploration_rate=self.exploration_rate),
            save_path,
        )
        print(f"MarioNet saved to {save_path} at step {self.curr_step}")
```

Putting it all together
~~~~~~~~~~~~~~~~~~~~~~~




```python
class Mario(Mario):
    def __init__(self, state_dim, action_dim, save_dir):
        super().__init__(state_dim, action_dim, save_dir)
        self.burnin = 1e4  # min. experiences before training
        self.learn_every = 3  # no. of experiences between updates to Q_online
        self.sync_every = 1e4  # no. of experiences between Q_target & Q_online sync

    def learn(self):
        if self.curr_step % self.sync_every == 0:
            self.sync_Q_target()

        if self.curr_step % self.save_every == 0:
            self.save()

        if self.curr_step < self.burnin:
            return None, None

        if self.curr_step % self.learn_every != 0:
            return None, None

        # Sample from memory
        state, next_state, action, reward, done = self.recall()

        # Get the TD Estimate
        td_est = self.td_estimate(state, action)

        # Get the TD Target
        td_tgt = self.td_target(reward, next_state, done)

        # Backpropagate the loss through Q_online
        loss = self.update_Q_online(td_est, td_tgt)

        return (td_est.mean().item(), loss)
```

Logging
-------




```python
import numpy as np
import time, datetime
import matplotlib.pyplot as plt


class MetricLogger:
    def __init__(self, save_dir):
        self.save_log = save_dir / "log"
        with open(self.save_log, "w") as f:
            f.write(
                f"{'Episode':>8}{'Step':>8}{'Epsilon':>10}{'MeanReward':>15}"
                f"{'MeanLength':>15}{'MeanLoss':>15}{'MeanQValue':>15}"
                f"{'TimeDelta':>15}{'Time':>20}\n"
            )
        self.ep_rewards_plot = save_dir / "reward_plot.jpg"
        self.ep_lengths_plot = save_dir / "length_plot.jpg"
        self.ep_avg_losses_plot = save_dir / "loss_plot.jpg"
        self.ep_avg_qs_plot = save_dir / "q_plot.jpg"

        # Metric lists
        self.ep_rewards = []
        self.ep_lengths = []
        self.ep_avg_losses = []
        self.ep_avg_qs = []

        # Moving averages, computed for every call to record()
        self.moving_avg_ep_rewards = []
        self.moving_avg_ep_lengths = []
        self.moving_avg_ep_avg_losses = []
        self.moving_avg_ep_avg_qs = []

        # Metrics for the current episode
        self.init_episode()

        # Timing
        self.record_time = time.time()

    def log_step(self, reward, loss, q):
        self.curr_ep_reward += reward
        self.curr_ep_length += 1
        if loss:
            self.curr_ep_loss += loss
            self.curr_ep_q += q
            self.curr_ep_loss_length += 1

    def log_episode(self):
        "Mark the end of an episode."
        self.ep_rewards.append(self.curr_ep_reward)
        self.ep_lengths.append(self.curr_ep_length)
        if self.curr_ep_loss_length == 0:
            ep_avg_loss = 0
            ep_avg_q = 0
        else:
            ep_avg_loss = np.round(self.curr_ep_loss / self.curr_ep_loss_length, 5)
            ep_avg_q = np.round(self.curr_ep_q / self.curr_ep_loss_length, 5)
        self.ep_avg_losses.append(ep_avg_loss)
        self.ep_avg_qs.append(ep_avg_q)

        self.init_episode()

    def init_episode(self):
        self.curr_ep_reward = 0.0
        self.curr_ep_length = 0
        self.curr_ep_loss = 0.0
        self.curr_ep_q = 0.0
        self.curr_ep_loss_length = 0

    def record(self, episode, epsilon, step):
        mean_ep_reward = np.round(np.mean(self.ep_rewards[-100:]), 3)
        mean_ep_length = np.round(np.mean(self.ep_lengths[-100:]), 3)
        mean_ep_loss = np.round(np.mean(self.ep_avg_losses[-100:]), 3)
        mean_ep_q = np.round(np.mean(self.ep_avg_qs[-100:]), 3)
        self.moving_avg_ep_rewards.append(mean_ep_reward)
        self.moving_avg_ep_lengths.append(mean_ep_length)
        self.moving_avg_ep_avg_losses.append(mean_ep_loss)
        self.moving_avg_ep_avg_qs.append(mean_ep_q)
        last_record_time = self.record_time
        self.record_time = time.time()
        time_since_last_record = np.round(self.record_time - last_record_time, 3)

        print(
            f"Episode {episode} - "
            f"Step {step} - "
            f"Epsilon {epsilon} - "
            f"Mean Reward {mean_ep_reward} - "
            f"Mean Length {mean_ep_length} - "
            f"Mean Loss {mean_ep_loss} - "
            f"Mean Q Value {mean_ep_q} - "
            f"Time Delta {time_since_last_record} - "
            f"Time {datetime.datetime.now().strftime('%Y-%m-%dT%H:%M:%S')}"
        )

        with open(self.save_log, "a") as f:
            f.write(
                f"{episode:8d}{step:8d}{epsilon:10.3f}"
                f"{mean_ep_reward:15.3f}{mean_ep_length:15.3f}{mean_ep_loss:15.3f}{mean_ep_q:15.3f}"
                f"{time_since_last_record:15.3f}"
                f"{datetime.datetime.now().strftime('%Y-%m-%dT%H:%M:%S'):>20}\n"
            )

        for metric in ["ep_rewards", "ep_lengths", "ep_avg_losses", "ep_avg_qs"]:
            plt.plot(getattr(self, f"moving_avg_{metric}"))
            plt.savefig(getattr(self, f"{metric}_plot"))
            plt.clf()
```

Let's play!
"""""""""""

In this example we run the training loop for 10 episodes, but for Mario to truly learn the ways of his world, we suggest running the loop for at least 40,000 episodes!




```python
use_cuda = torch.cuda.is_available()
print(f"Using CUDA: {use_cuda}")
print()

save_dir = Path("checkpoints") / datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
save_dir.mkdir(parents=True)

mario = Mario(state_dim=(4, 84, 84), action_dim=env.action_space.n, save_dir=save_dir)

logger = MetricLogger(save_dir)

episodes = 10
for e in range(episodes):

    state = env.reset()

    # Play the game!
    while True:

        # Run the agent on the current state
        action = mario.act(state)

        # The agent performs the action
        next_state, reward, done, info = env.step(action)

        # Remember
        mario.cache(state, next_state, action, reward, done)

        # Learn
        q, loss = mario.learn()

        # Logging
        logger.log_step(reward, loss, q)

        # Update the state
        state = next_state

        # Check if the game has ended
        if done or info["flag_get"]:
            break

    logger.log_episode()

    if e % 20 == 0:
        logger.record(episode=e, epsilon=mario.exploration_rate, step=mario.curr_step)
```

Conclusion
""""""""""

In this tutorial, we saw how to use PyTorch to train a game-playing AI. You can use the same methods to train an AI to play any of the games at OpenAI gym.
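To make that claim concrete, here is a small self-contained sketch (a stub environment and a trivial random agent stand in for a real gym game and ``Mario``; both are hypothetical placeholders, for illustration only) showing that the play loop above depends only on the classic ``reset()``/``step()`` interface:

```python
import random

class StubEnv:
    """Stands in for any gym-style environment exposing reset() and step()."""
    def __init__(self, episode_length=5):
        self.episode_length = episode_length
        self.t = 0

    def reset(self):
        self.t = 0
        return 0  # initial state

    def step(self, action):
        self.t += 1
        done = self.t >= self.episode_length
        # next_state, reward, done, info - the same 4-tuple the loop above unpacks
        return self.t, 1.0, done, {"flag_get": False}

class RandomAgent:
    """Stands in for Mario: random actions, no-op cache/learn."""
    def act(self, state):
        return random.randint(0, 1)

    def cache(self, state, next_state, action, reward, done):
        pass

    def learn(self):
        return None, None

env, agent = StubEnv(), RandomAgent()
total_reward = 0.0
state = env.reset()
while True:
    action = agent.act(state)                          # run agent on the state
    next_state, reward, done, info = env.step(action)  # agent performs action
    agent.cache(state, next_state, action, reward, done)
    agent.learn()
    total_reward += reward
    state = next_state
    if done or info["flag_get"]:                       # same exit condition as above
        break

print(total_reward)  # 5.0
```

Any environment that returns the same ``(next_state, reward, done, info)`` tuple can be dropped into the loop unchanged; only the state preprocessing wrappers would need adjusting per game.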
We hope this tutorial has been helpful, and feel free to reach out to the authors at our Github repository!


Probabilistic Programming
=====
and Bayesian Methods for Hackers 
========

##### Version 0.1

`Original content created by Cam Davidson-Pilon`

`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`
___


Welcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!

### Save and reload state


```python
import dill

# Load a previously saved session
dill.load_session('notebook_env.db')
```

    /Users/nick/anaconda3/envs/py36/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
      from ._conv import register_converters as _register_converters


```python
# Save the current session
dill.dump_session('notebook_env.db')
```

Chapter 1
======
***

The Philosophy of Bayesian Inference
------
 
> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...

If you think this way, then congratulations, you already are thinking Bayesian!
Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives.


### The Bayesian state of mind


Bayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians.

The Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability.

For this to be clearer, we consider an alternative interpretation of probability: the *frequentist* interpretation, known as the more *classical* version of statistics, assumes that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once!
Frequentists get around this by invoking alternative realities and saying that, across all these realities, the frequency of occurrences defines the probability.

Bayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, in an event occurring. Simply put, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of the event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of the plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you that candidate *A* will win?

Notice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs about events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:

- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads?
My knowledge of the outcome has not changed the coin's result. Thus we assign different probabilities to the result.

- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug.

- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease it is, but a second doctor may have slightly different beliefs.


This philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist.

To align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.

John Maynard Keynes, a great economist and thinker, said "When the facts change, I change my mind. What do you do, sir?" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:

1\. $P(A): \;\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\;\;$ You look at the coin, observe that a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.

2\. $P(A): \;\;$ This big, complex code likely has a bug in it.
$P(A | X): \;\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.

3\. $P(A):\;\;$ The patient could have any number of diseases. $P(A | X):\;\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.


It's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$; rather, we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others).

By introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*.


### Bayesian Inference in Practice

If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average), whereas the Bayesian function would return *probabilities*.

For example, in our debugging problem above, calling the frequentist function with the argument "My code passed all $X$ tests; is my code bug-free?" would return a *YES*. On the other hand, asking our Bayesian function "Often my code has bugs. My code passed all $X$ tests; is my code bug-free?" would return something very different: probabilities of *YES* and *NO*. The function might return:

> *YES*, with probability 0.8; *NO*, with probability 0.2

This is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *"Often my code has bugs"*. This parameter is the *prior*.
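As a sketch of where numbers like these could come from (the prior and the likelihoods below are made up purely for illustration), Bayes' rule combines the prior belief with how likely each hypothesis is to produce $X$ passing tests:

```python
# Made-up numbers, for illustration only.
p_bugfree = 0.5            # prior belief that the code has no bug
p_pass_if_bugfree = 1.0    # bug-free code always passes the tests
p_pass_if_buggy = 0.125    # buggy code can still pass X tests by luck

# Total probability of observing "all X tests pass".
p_pass = p_pass_if_bugfree * p_bugfree + p_pass_if_buggy * (1 - p_bugfree)

# Posterior belief that the code is bug-free, given that it passed.
posterior = p_pass_if_bugfree * p_bugfree / p_pass
print(round(posterior, 3))  # 0.889
```

Changing the prior or the likelihoods moves the posterior accordingly; a more pessimistic prior would pull the *YES* probability back down.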
By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see that excluding it has its own consequences.


#### Incorporating evidence

As we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like "I expect the sun to explode today", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.


Denote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \rightarrow \infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset.

One may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:

> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions).
But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\")\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. 
We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:

\begin{align}
 P( A | X ) = & \frac{ P(X | A) P(A) } {P(X) } \\\\[5pt]
& \propto P(X | A) P(A)\;\; (\propto \text{is proportional to })
\end{align}

The above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. Bayesian inference merely uses it to connect prior probabilities $P(A)$ with updated posterior probabilities $P(A | X )$.

##### Example: Mandatory coin-flip example

Every statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be.

We begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?

Below we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).


```python
"""
The book uses a custom matplotlibrc file, which provides the unique styles for
matplotlib plots. If executing this book, and you wish to use the book's
styling, provided are two options:
    1. Overwrite your own matplotlibrc file with the rc-file provided in the
       book's styles/ dir. See http://matplotlib.org/users/customizing.html
    2. Also in the styles is bmh_matplotlibrc.json file. This can be used to
       update the styles in only this notebook. 
Try running the following code:

    import json
    s = json.load(open("../styles/bmh_matplotlibrc.json"))
    matplotlib.rcParams.update(s)

"""

# The code below can be passed over, as it is currently not important, plus it
# uses advanced topics we have not covered yet. LOOK AT PICTURE, MICHAEL!
%matplotlib inline
from IPython.core.pylabtools import figsize
import numpy as np
from matplotlib import pyplot as plt
figsize(11, 9)

import scipy.stats as stats

dist = stats.beta
n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
data = stats.bernoulli.rvs(0.5, size=n_trials[-1])
x = np.linspace(0, 1, 100)

# For the already prepared, I'm using Binomial's conj. prior.
for k, N in enumerate(n_trials):
    sx = plt.subplot(len(n_trials)//2, 2, k+1)
    plt.xlabel("$p$, probability of heads") \
        if k in [0, len(n_trials)-1] else None
    plt.setp(sx.get_yticklabels(), visible=False)
    heads = data[:N].sum()
    y = dist.pdf(x, 1 + heads, 1 + N - heads)
    plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads))
    plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4)
    plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)

    leg = plt.legend()
    leg.get_frame().set_alpha(0.4)
    plt.autoscale(tight=True)


plt.suptitle("Bayesian updating of posterior probabilities",
             y=1.02,
             fontsize=14)

plt.tight_layout()
```

The posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line).

Notice that the plots are not always *peaked* at 0.5. There is no reason they should be: recall we assumed we did not have a prior opinion of what $p$ is. 
In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet **$A$** denote the event that our code has **no bugs** in it. Let **$X$** denote the event that the **code passes all debugging tests**. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in **$P(A|X)$**, i.e. **the probability of no bugs, given our debugging tests $X$**. To use the formula above, we need to compute some quantities.\n\nWhat is **$P(X | A)$**, i.e., **the probability that the code passes $X$ tests *given* there are no bugs**? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. 
Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? \n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and numbers of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...

- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.

- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. they are a combination of the above two categories. 

### Discrete Case
If $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:

$$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots $$

$\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the *intensity* of the Poisson distribution. 

Unlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0, 1, 2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. 
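
To make the mass function above concrete, here is a quick numerical check (my own sketch, not part of the original text): it evaluates $P(Z=k)$ directly with NumPy, compares the result against `scipy.stats.poisson`, and confirms that a larger $\lambda$ places more probability on larger values of $k$.

```python
import numpy as np
import scipy.stats as stats
from scipy.special import factorial

k = np.arange(16)
lam = 4.25

# Evaluate P(Z = k) = lambda^k * exp(-lambda) / k! directly from the formula...
pmf_direct = lam**k * np.exp(-lam) / factorial(k)

# ...and compare against scipy's built-in implementation.
pmf_scipy = stats.poisson.pmf(k, lam)
print(np.allclose(pmf_direct, pmf_scipy))  # True

# The mass over all non-negative integers sums to 1; the first sixteen
# terms already capture nearly all of it for this lambda.
print(pmf_direct.sum())

# Increasing lambda shifts probability toward larger values of k.
print(stats.poisson.pmf(8, 1.5) < stats.poisson.pmf(8, 4.25))  # True
```

The same check works for any positive $\lambda$; only the number of terms needed to capture most of the mass changes.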
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\");\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. 
But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\n**Bayesian inference is concerned with *beliefs* about what $\\lambda$ might be. 
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.**\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. 
Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. 
Our initial guess at $\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:

$$\frac{1}{N}\sum_{i=0}^{N-1} \;C_i \approx E[\; \lambda \; |\; \alpha ] = \frac{1}{\alpha}$$ 

An alternative, and something I encourage the reader to try, would be to have two priors: one for each $\lambda_i$. Creating two exponential distributions with different $\alpha$ values reflects our prior belief that the rate changed at some point during the observations.

What about $\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying

\begin{align}
& \tau \sim \text{DiscreteUniform(1,70) }\\\\
& \Rightarrow P( \tau = k ) = \frac{1}{70}
\end{align}

So after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.

We next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. 


Introducing our first hammer: PyMC3
-----

PyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. 
One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.\n\nWe will model the problem above using PyMC3. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC3 code is easy to read. The only novel thing should be the syntax. Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables.\n\n\n```python\nimport pymc3 as pm\nimport theano.tensor as tt\n\nwith pm.Model() as model:\n # alpha is the hyperparameter that determines the prior distribution of the\n # two lambda variables (i.e. the one before and after the switch point)\n alpha = 1.0/count_data.mean()\n \n # The two lambda parameters. 
(We want to see if their posterior distributions change)
    lambda_1 = pm.Exponential("lambda_1", alpha)
    lambda_2 = pm.Exponential("lambda_2", alpha)

    # Tau is the point at which the texting behaviour changed. We don't know
    # anything about this, so we assign a uniform prior belief
    tau = pm.DiscreteUniform("tau", lower=0, upper=n_count_data - 1)
```


```python
tau
```




$tau \sim \text{DiscreteUniform}(\mathit{lower}=f(f()), \mathit{upper}=f(f()))$



In the code above, we create the PyMC3 variables corresponding to $\lambda_1$ and $\lambda_2$. We assign them to PyMC3's *stochastic variables*, so-called because they are treated by the back end as random number generators.


```python
with model:
    idx = np.arange(n_count_data)  # Index
    lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)
```


```python
lambda_
```




    Elemwise{switch,no_inplace}.0



This code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\lambda$ from above. The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`.

Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. 
We are **not** fixing any variables yet.


```python
with model:
    observation = pm.Poisson("obs", lambda_, observed=count_data)
```


```python
observation
```




$obs \sim \text{Poisson}(\mathit{mu}=f(f(f(tau),array),f(lambda\_1),f(lambda\_2)))$



The variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword. 

The code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\lambda_1, \lambda_2$ and $\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. **Below, we collect the samples (called *traces* in the MCMC literature) into histograms.**


```python
### Mysterious code to be explained in Chapter 3.
with model:
    step = pm.Metropolis()
    trace = pm.sample(10000, tune=5000, step=step)
```

    100%|██████████| 15000/15000 [00:08<00:00, 1773.12it/s]



```python
lambda_1_samples = trace['lambda_1']
lambda_2_samples = trace['lambda_2']
tau_samples = trace['tau']
```


```python
print(''.join(["{} {} {}\n".format(x, y, z)
               for (x, y, z) in zip(tau_samples, lambda_1_samples, lambda_2_samples)]))
```


```python
figsize(12.5, 10)
# histogram of the samples:

ax = plt.subplot(311)
ax.set_autoscaley_on(False)

plt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,
         label="posterior of $\lambda_1$", color="#A60628", density=True)
plt.legend(loc="upper left")
plt.title(r"""Posterior distributions of the variables
    $\lambda_1,\;\lambda_2,\;\tau$""")
plt.xlim([15, 30])
plt.xlabel("$\lambda_1$ value")

ax = plt.subplot(312)
ax.set_autoscaley_on(False)
plt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,
         label="posterior of $\lambda_2$", color="#7A68A6", density=True)
plt.legend(loc="upper left")
plt.xlim([15, 30])
plt.xlabel("$\lambda_2$ value")

plt.subplot(313)
w = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)
plt.hist(tau_samples, bins=n_count_data, alpha=1,
         label=r"posterior of $\tau$",
         color="#467821", weights=w, rwidth=2.)
plt.xticks(np.arange(n_count_data))

plt.legend(loc="upper left")
plt.ylim([0, .75])
plt.xlim([35, len(count_data)-20])
plt.xlabel(r"$\tau$ (in days)")
plt.ylabel("probability");
```

### Interpretation

Recall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\lambda$s and $\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\lambda_1$ is around 18 and $\lambda_2$ is around 23. The posterior distributions of the two $\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.

What other observations can you make? If you look at the original data again, do these results seem reasonable? 

Notice also that the posterior distributions for the $\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. **In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK!** This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. 
Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. 
\n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");\n```\n\n\nOur analysis shows strong support for believing the user's behavior did change ($\lambda_1$ would have been close in value to $\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\tau$'s strongly peaked posterior distribution). 
We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\n#type your code here.\n\nprint(\"Lambda1 mean: {} \\nLambda2 mean: {}\".format(np.mean(lambda_1_samples), np.mean(lambda_2_samples) ) )\n```\n\n Lambda1 mean: 17.75942384685025 \n Lambda2 mean: 22.72799947930949\n\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```python\npct = (lambda_2_samples.mean() - lambda_1_samples.mean()) / lambda_1_samples.mean() * 100\nprint(\"Percentage change in text messaging rates: {}\".format(pct))\n```\n\n Percentage change in text messaging rates: 27.97712175409591\n\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45. That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\nlambda_1_samples[tau_samples < 45].mean()\n#tau_samples < 45\n```\n\n\n\n\n 17.752720861774307\n\n\n\n### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg).\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Salvatier, J, Wiecki TV, and Fonnesbeck C. 
(2016) Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55.\n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" Online posting to Google+. Web. 24 Mar 2013.\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n
## An Aside about Degrees of Freedom (N-1)\n\n\n```\ncoinflips = [0,1,1]\n\nmean = 2.0/3.0\n\ncoinflips = [1, 0, 1]\n```\n\n# Lambda School Data Science Module 143\n\n## Introduction to Bayesian Inference\n\n!['Detector! What would the Bayesian statistician say if I asked him whether the--' [roll] 'I AM A NEUTRINO DETECTOR, NOT A LABYRINTH GUARD. SERIOUSLY, DID YOUR BRAIN FALL OUT?' [roll] '... yes.'](https://imgs.xkcd.com/comics/frequentists_vs_bayesians.png)\n\n*[XKCD 1132](https://www.xkcd.com/1132/)*\n\n\n## Prepare - Bayes' Theorem and the Bayesian mindset\n\nBayes' theorem possesses a near-mythical quality - a bit of math that somehow magically evaluates a situation. But this mythicalness has more to do with its reputation and advanced applications than the actual core of it - deriving it is actually remarkably straightforward.\n\n### The Law of Total Probability\n\nBy definition, the total probability of all outcomes (events) of some variable (event space) $A$ is 1. That is:\n\n$$P(A) = \sum_n P(A_n) = 1$$\n\nThe law of total probability takes this further, considering two variables ($A$ and $B$) and relating their marginal probabilities (their likelihoods considered independently, without reference to one another) and their conditional probabilities (their likelihoods considered jointly). A marginal probability is simply notated as e.g. 
$P(A)$, while a conditional probability is notated $P(A|B)$, which reads \"probability of $A$ *given* $B$\".\n\nThe law of total probability states:\n\n$$P(A) = \\sum_n P(A | B_n) P(B_n)$$\n\nIn words - the total probability of $A$ is equal to the sum of the conditional probability of $A$ on any given event $B_n$ times the probability of that event $B_n$, and summed over all possible events in $B$.\n\n### The Law of Conditional Probability\n\nWhat's the probability of something conditioned on something else? To determine this we have to go back to set theory and think about the intersection of sets:\n\nThe formula for actual calculation:\n\n$$P(A|B) = \\frac{P(A \\cap B)}{P(B)}$$\n\n\n\nThink of the overall rectangle as the whole probability space, $A$ as the left circle, $B$ as the right circle, and their intersection as the red area. Try to visualize the ratio being described in the above formula, and how it is different from just the $P(A)$ (not conditioned on $B$).\n\nWe can see how this relates back to the law of total probability - multiply both sides by $P(B)$ and you get $P(A|B)P(B) = P(A \\cap B)$ - replaced back into the law of total probability we get $P(A) = \\sum_n P(A \\cap B_n)$.\n\nThis may not seem like an improvement at first, but try to relate it back to the above picture - if you think of sets as physical objects, we're saying that the total probability of $A$ given $B$ is all the little pieces of it intersected with $B$, added together. 
The conditional probability is then just that again, but divided by the probability of $B$ itself happening in the first place.\n\n\begin{align}\nP(A|B) &= \frac{P(A \cap B)}{P(B)}\\\n\Rightarrow P(A|B)P(B) &= P(A \cap B)\\\nP(B|A) &= \frac{P(B \cap A)}{P(A)}\\\n\Rightarrow P(B|A)P(A) &= P(B \cap A)\\\nP(A \cap B) &= P(B \cap A)\\\n\Rightarrow P(A|B)P(B) &= P(B|A)P(A)\\\n\Rightarrow P(A|B) &= \frac{P(B|A) \times P(A)}{P(B)}\n\end{align}\n \n### Bayes Theorem\n\n\n\nHere it is, the seemingly magic tool:\n\n$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$\n\nIn words - the probability of $A$ conditioned on $B$ is the probability of $B$ conditioned on $A$, times the probability of $A$ and divided by the probability of $B$. These unconditioned probabilities are referred to as \"prior beliefs\", and the conditioned probabilities as \"updated.\"\n\nWhy is this important? Scroll back up to the XKCD example - the Bayesian statistician draws a less absurd conclusion because their prior belief in the likelihood that the sun will go nova is extremely low. So, even when updated based on evidence from a detector that is $35/36 = 0.972$ accurate, the prior belief doesn't shift enough to change their overall opinion.\n\nThere are many examples of Bayes' theorem - one less absurd example is to apply it to [breathalyzer tests](https://www.bayestheorem.net/breathalyzer-example/). You may think that a breathalyzer test that is 100% accurate for true positives (detecting somebody who is drunk) is pretty good, but what if it also has 8% false positives (indicating somebody is drunk when they're not)? And furthermore, the rate of drunk driving (and thus our prior belief) is 1/1000.\n\nWhat is the likelihood somebody really is drunk if they test positive? Some may guess it's 92% - the difference between the true positives and the false positives. But we have a prior belief of the background/true rate of drunk driving. 
Sounds like a job for Bayes' theorem!\n\n$$\n\begin{aligned}\nP(Drunk | Positive) &= \frac{P(Positive | Drunk)P(Drunk)}{P(Positive)} \\\n&= \frac{1 \times 0.001}{0.08} \\\n&= 0.0125\n\end{aligned}\n$$\n\nIn other words, the likelihood that somebody is drunk given they tested positive with a breathalyzer in this situation is only 1.25% - probably much lower than you'd guess. (Here $P(Positive)$ is approximated by the false-positive rate $0.08$; the exact denominator is $1 \times 0.001 + 0.08 \times 0.999 \approx 0.0809$, which gives $\approx 0.0124$.) This is why, in practice, it's important to have a repeated test to confirm (the probability of two false positives in a row is $0.08 * 0.08 = 0.0064$, much lower), and Bayes' theorem has been relevant in court cases where proper consideration of evidence was important.\n\n\n```\nfirst_test = (1*.001)/.08\nprint(first_test)\n```\n\n 0.0125\n\n\n\n```\nsecond_test = (1*.0125)/.08\nprint(second_test)\n```\n\n 0.15625\n\n\n\n```\nthird_test = (1*.15625)/.08\nprint(third_test)\n```\n\n 1.953125\n\n\nNote that the third value, 1.953125, is not a valid probability: keeping the denominator fixed at $0.08$ is only a reasonable approximation for the first update. On each repeated test, $P(Positive)$ must be recomputed from the current prior, as the full-denominator functions in the assignment section below do.\n\n## Live Lecture - Deriving Bayes' Theorem, Calculating Bayesian Confidence\n\nNotice that $P(A|B)$ appears in the above laws - in Bayesian terms, this is the belief in $A$ updated for the evidence $B$. So all we need to do is solve for this term to derive Bayes' theorem. 
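As a numerical sanity check before the derivation, here is a tiny made-up joint distribution (the four cell values are arbitrary, chosen only so they sum to 1) on which both sides of Bayes' theorem can be evaluated and compared:

```python
import numpy as np

# Made-up joint distribution over A in {0, 1} and B in {0, 1};
# rows index A, columns index B, and the four cells sum to 1.
joint = np.array([[0.10, 0.30],
                  [0.40, 0.20]])

P_A = joint.sum(axis=1)   # marginal of A
P_B = joint.sum(axis=0)   # marginal of B

# Conditional straight from the definition P(X|Y) = P(X and Y) / P(Y)
P_A1_given_B1 = joint[1, 1] / P_B[1]

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
P_B1_given_A1 = joint[1, 1] / P_A[1]
bayes_rhs = P_B1_given_A1 * P_A[1] / P_B[1]

print(P_A1_given_B1, bayes_rhs)  # both equal 0.4 for these numbers
```

Both quantities agree exactly, as they must, since Bayes' theorem is just the two definitions of conditional probability combined.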
Let's do it together!\n\n\n```\n# Activity 2 - Use SciPy to calculate Bayesian confidence intervals\n# https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.bayes_mvs.html#scipy.stats.bayes_mvs\n\nfrom scipy import stats\nimport numpy as np\n\nnp.random.seed(seed=42)\n\ncoinflips = np.random.binomial(n=1, p=.5, size=100)\nprint(coinflips)\n```\n\n [0 1 1 1 0 0 0 1 1 1 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 1 0 1 0 0 1 1 1 0\n 0 1 0 0 0 0 1 0 1 0 1 1 0 1 1 1 1 1 1 0 0 0 0 0 0 1 0 0 1 0 1 0 1 1 0 0 1\n 1 1 1 0 0 0 1 1 0 0 0 0 1 1 1 0 0 1 1 1 1 0 1 0 0 0]\n\n\n\n```\ndef confidence_interval(data, confidence=.95):\n n = len(data)\n mean = sum(data)/n\n data = np.array(data)\n stderr = stats.sem(data)\n interval = stderr * stats.t.ppf((1 + confidence) / 2.0, n-1)\n return (mean , mean-interval, mean+interval)\n```\n\n\n```\nconfidence_interval(coinflips, confidence=.95)\n```\n\n\n\n\n (0.47, 0.3704689875017368, 0.5695310124982632)\n\n\n\n\n```\n??stats.bayes_mvs\n```\n\n\n```\nstats.bayes_mvs(coinflips, alpha=.95)\n```\n\n\n\n\n (Mean(statistic=0.47, minmax=(0.37046898750173674, 0.5695310124982632)),\n Variance(statistic=0.25680412371134015, minmax=(0.1939698977025208, 0.3395533426586547)),\n Std_dev(statistic=0.5054540733507159, minmax=(0.44042013771229943, 0.5827120581030176)))\n\n\n\n\n```\ncoinflips_mean_dist, _, _ = stats.mvsdist(coinflips)\ncoinflips_mean_dist\n```\n\n\n\n\n \n\n\n\n\n```\ncoinflips_mean_dist.rvs(100)\n```\n\n\n\n\n array([0.47447628, 0.51541425, 0.54722018, 0.4589882 , 0.51501386,\n 0.53819192, 0.43382292, 0.53546659, 0.47026173, 0.44967562,\n 0.4621107 , 0.42691904, 0.37324325, 0.47531437, 0.46052277,\n 0.48711257, 0.52456771, 0.43332181, 0.49545882, 0.44671454,\n 0.47520117, 0.47047251, 0.41828918, 0.50159477, 0.42965501,\n 0.45273383, 0.48045849, 0.45342529, 0.48238344, 0.53966291,\n 0.48230241, 0.48073422, 0.48553525, 0.47962228, 0.41274185,\n 0.42892633, 0.5170948 , 0.42678096, 0.42249309, 0.51499109,\n 0.47059199, 0.39903942, 
0.41790336, 0.46406817, 0.42232382,\n 0.42163269, 0.47848227, 0.48232842, 0.4731858 , 0.51077244,\n 0.3957508 , 0.48504646, 0.49014295, 0.53252732, 0.45495376,\n 0.47883978, 0.60393033, 0.4492549 , 0.44797902, 0.54782121,\n 0.43380002, 0.5760073 , 0.36941266, 0.44467418, 0.4939245 ,\n 0.45278835, 0.55635162, 0.48695459, 0.39080983, 0.45948606,\n 0.2941779 , 0.35950718, 0.44805696, 0.4725126 , 0.42218381,\n 0.45985418, 0.47545393, 0.44317753, 0.46267013, 0.4458753 ,\n 0.44204707, 0.51334913, 0.50914181, 0.49923748, 0.46895674,\n 0.43892798, 0.45984946, 0.44984632, 0.53560791, 0.45865723,\n 0.48646824, 0.55937503, 0.41464303, 0.50701457, 0.46934196,\n 0.37681534, 0.42748113, 0.49812825, 0.48278895, 0.4964763 ])\n\n\n\n## Assignment - Code it up!\n\nMost of the above was pure math - now write Python code to reproduce the results! This is purposefully open ended - you'll have to think about how you should represent probabilities and events. You can and should look things up, and as a stretch goal - refactor your code into helpful reusable functions!\n\nSpecific goals/targets:\n\n1. Write a function `def prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk)` that reproduces the example from lecture, and use it to calculate and visualize a range of situations\n2. Explore `scipy.stats.bayes_mvs` - read its documentation, and experiment with it on data you've tested in other ways earlier this week\n3. Create a visualization comparing the results of a Bayesian approach to a traditional/frequentist approach\n4. 
In your own words, summarize the difference between Bayesian and Frequentist statistics\n\nIf you're unsure where to start, check out [this blog post of Bayes theorem with Python](https://dataconomy.com/2015/02/introduction-to-bayes-theorem-with-python/) - you could and should create something similar!\n\nStretch goals:\n\n- Apply a Bayesian technique to a problem you previously worked (in an assignment or project work) on from a frequentist (standard) perspective\n- Check out [PyMC3](https://docs.pymc.io/) (note this goes beyond hypothesis tests into modeling) - read the guides and work through some examples\n- Take PyMC3 further - see if you can build something with it!\n\n\n```\ndef prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk):\n return (prob_positive_drunk*prob_drunk_prior)/prob_positive\n```\n\n\n```\n# Reproduction of the example from class\n\nprob_drunk_given_positive(.001,.08,1)\n```\n\n\n\n\n 0.0125\n\n\n\n\n```\n# Create a more generalizable function with the full denominator\n\ndef proba_givenb(probb_givena, proba, probb_givennota):\n num = probb_givena * proba\n prob_nota = 1 - proba\n denom = num + (probb_givennota*prob_nota)\n return num/denom\n```\n\n\n```\n# Sanity check to make sure the new function gets the correct result in the case of \n# the breathalyzer example\n\nproba_givenb(1,.001, .08)\n```\n\n\n\n\n 0.012357884330202669\n\n\n\n\n```\n# Function to recurvsively do multiple tests\n\ndef recursive_proba_givenb(probb_givena, proba, probb_givennota, times_to_run):\n prob = proba_givenb(probb_givena, proba, probb_givennota)\n print('Test 1: ' + str(prob))\n for i in range(1,times_to_run):\n prob = proba_givenb(probb_givena, prob, probb_givennota)\n print('Test ' + str(i+1) + ': ' + str(prob))\n #return prob\n```\n\n\n```\nrecursive_proba_givenb(1,.001, .08, 6)\n```\n\n Test 1: 0.012357884330202669\n Test 2: 0.13525210993291495\n Test 3: 0.6615996951348605\n Test 4: 0.9606895076105054\n Test 5: 
0.9967371577896734\n Test 6: 0.9997381867081508\n\n\n\n```\n# Explore scipy.stats.bayes_mvs\n\n# Typing the example code to get a feel for the outputs\n\nfrom scipy import stats\ndata = [6,9,12,7,8,8,13]\nmean, var, std = stats.bayes_mvs(data, alpha=0.95)\nmean\n```\n\n\n\n\n Mean(statistic=9.0, minmax=(6.612058548265569, 11.38794145173443))\n\n\n\n\n```\nvar\n```\n\n\n\n\n Variance(statistic=10.0, minmax=(2.768285761244642, 32.3273011015803))\n\n\n\n\n```\nstd\n```\n\n\n\n\n Std_dev(statistic=2.9724954732045084, minmax=(1.6638166248852793, 5.685710254803731))\n\n\n\n\n```\n# OK. Lets get some data from the adult dataset.\n\nimport pandas as pd\n\ndf = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/adult.csv', na_values=\" ?\")\nprint(df.shape)\ndf.head(10)\n```\n\n (32561, 15)\n\n\n\n\n\n
| | age | workclass | fnlwgt | education | education-num | marital-status | occupation | relationship | race | sex | capital-gain | capital-loss | hours-per-week | country | salary |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 39 | State-gov | 77516 | Bachelors | 13 | Never-married | Adm-clerical | Not-in-family | White | Male | 2174 | 0 | 40 | United-States | <=50K |
| 1 | 50 | Self-emp-not-inc | 83311 | Bachelors | 13 | Married-civ-spouse | Exec-managerial | Husband | White | Male | 0 | 0 | 13 | United-States | <=50K |
| 2 | 38 | Private | 215646 | HS-grad | 9 | Divorced | Handlers-cleaners | Not-in-family | White | Male | 0 | 0 | 40 | United-States | <=50K |
| 3 | 53 | Private | 234721 | 11th | 7 | Married-civ-spouse | Handlers-cleaners | Husband | Black | Male | 0 | 0 | 40 | United-States | <=50K |
| 4 | 28 | Private | 338409 | Bachelors | 13 | Married-civ-spouse | Prof-specialty | Wife | Black | Female | 0 | 0 | 40 | Cuba | <=50K |
| 5 | 37 | Private | 284582 | Masters | 14 | Married-civ-spouse | Exec-managerial | Wife | White | Female | 0 | 0 | 40 | United-States | <=50K |
| 6 | 49 | Private | 160187 | 9th | 5 | Married-spouse-absent | Other-service | Not-in-family | Black | Female | 0 | 0 | 16 | Jamaica | <=50K |
| 7 | 52 | Self-emp-not-inc | 209642 | HS-grad | 9 | Married-civ-spouse | Exec-managerial | Husband | White | Male | 0 | 0 | 45 | United-States | >50K |
| 8 | 31 | Private | 45781 | Masters | 14 | Never-married | Prof-specialty | Not-in-family | White | Female | 14084 | 0 | 50 | United-States | >50K |
| 9 | 42 | Private | 159449 | Bachelors | 13 | Married-civ-spouse | Exec-managerial | Husband | White | Male | 5178 | 0 | 40 | United-States | <=50K |
\n\n\n\n\n```\n# Looking at capital gain\n\nmean, var, std = stats.bayes_mvs(df['capital-gain'], alpha=0.95)\n```\n\n\n```\nmean\n```\n\n\n\n\n Mean(statistic=1077.6488437087312, minmax=(997.4329861097451, 1157.8647013077173))\n\n\n\n\n```\nvar\n```\n\n\n\n\n Variance(statistic=54540864.09044188, minmax=(53703072.03858372, 55378656.14230005))\n\n\n\n\n```\nstd\n```\n\n\n\n\n Std_dev(statistic=7385.178676947626, minmax=(7328.457500080688, 7441.899853814563))\n\n\n\n\n```\ndf['capital-gain'].mean()\n```\n\n\n\n\n 1077.6488437087312\n\n\n\n\n```\ndf['capital-gain'].var()\n```\n\n\n\n\n 54542539.17839\n\n\n\n\n```\ndf['capital-gain'].std()\n```\n\n\n\n\n 7385.292084839299\n\n\n\n\n```\n# The results are very close.\n```\n\n\n```\nvotes = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data', header=None)\nvotes.head()\n```\n\n\n\n\n
| | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | republican | n | y | n | y | y | y | n | n | n | y | ? | y | y | y | n | y |
| 1 | republican | n | y | n | y | y | y | n | n | n | n | n | y | y | y | n | ? |
| 2 | democrat | ? | y | y | ? | y | y | n | n | n | n | y | n | y | y | n | n |
| 3 | democrat | n | y | y | n | ? | y | n | n | n | n | y | n | y | n | n | y |
| 4 | democrat | y | y | y | n | y | y | n | n | n | n | y | ? | y | y | y | y |
\n\n\n\n\n```\nfeature_names = ['Class Name',\n 'handicapped-infants',\n 'water-project-cost-sharing',\n 'adoption-of-the-budget-resolution',\n 'physician-fee-freeze',\n 'el-salvador-aid',\n 'religious-groups-in-schools',\n 'anti-satellite-test-ban',\n 'aid-to-nicaraguan-contras',\n 'mx-missile',\n 'immigration',\n 'synfuels-corporation-cutback',\n 'education-spending',\n 'superfund-right-to-sue',\n 'crime',\n 'duty-free-exports',\n 'export-administration-act-south-africa'\n ]\n```\n\n\n```\n# Get congressional data, fill '?' values with 0 to save time\n\nvotes.columns = feature_names\nvotes = votes.replace({'?':0, 'n':0, 'y':1})\nvotes.head()\n```\n\n\n\n\n
| | Class Name | handicapped-infants | water-project-cost-sharing | adoption-of-the-budget-resolution | physician-fee-freeze | el-salvador-aid | religious-groups-in-schools | anti-satellite-test-ban | aid-to-nicaraguan-contras | mx-missile | immigration | synfuels-corporation-cutback | education-spending | superfund-right-to-sue | crime | duty-free-exports | export-administration-act-south-africa |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | republican | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 |
| 1 | republican | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 |
| 2 | democrat | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 |
| 3 | democrat | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 |
| 4 | democrat | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 |
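Since the vote columns are 0/1 data, an exact conjugate Beta-Binomial credible interval is a natural cross-check on `stats.bayes_mvs` (which treats the data as continuous). This sketch goes beyond the original assignment; the counts 242 out of 435 are chosen to match the sample mean of about 0.5563 computed for `aid-to-nicaraguan-contras` below:

```python
from scipy import stats

# Hypothetical counts matching a 0/1 column with sample mean ~0.5563:
# 242 of the 435 entries equal 1.
k, n = 242, 435

# With a flat Beta(1, 1) prior, the posterior for the proportion p
# is Beta(1 + k, 1 + n - k); its quantiles give a 95% credible interval.
posterior = stats.beta(1 + k, 1 + n - k)
lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)
print(lo, k / n, hi)
```

For a sample this large the Beta-Binomial interval lands close to both the frequentist t-interval and the `bayes_mvs` interval, which is why the comparisons in this notebook come out so similar.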
\n\n\n\n\n```\nvotes['aid-to-nicaraguan-contras']\n```\n\n\n```\nmean, var, std = stats.bayes_mvs(votes['aid-to-nicaraguan-contras'], alpha=0.95)\n```\n\n\n```\nmean\n```\n\n\n\n\n Mean(statistic=0.5563218390804597, minmax=(0.5094498779265323, 0.6031938002343872))\n\n\n\n\n```\nvar\n```\n\n\n\n\n Variance(statistic=0.24854193273733502, minmax=(0.21751815577206396, 0.28391808852107286))\n\n\n\n\n```\nstd\n```\n\n\n\n\n Std_dev(statistic=0.4982513774472293, minmax=(0.4663884172790571, 0.5328396461610875))\n\n\n\n\n```\nconfidence_interval(votes['aid-to-nicaraguan-contras'])\n```\n\n\n\n\n (0.5563218390804597, 0.5094498779265323, 0.6031938002343872)\n\n\n\n\n```\n# OK, so it looks like I'm getting the same results for the mean\n# Maybe try some coin flips?\ncoinflips = np.random.binomial(n=1, p=.5, size=25)\nmean, var, std = stats.bayes_mvs(coinflips, alpha=0.95)\nmean\n```\n\n\n\n\n Mean(statistic=0.68, minmax=(0.48347754851147945, 0.8765224514885206))\n\n\n\n\n```\nconfidence_interval(coinflips)\n```\n\n\n\n\n (0.68, 0.48347754851147945, 0.8765224514885206)\n\n\n\n\n```\nbayes_min = [0.24471097896044025,0.030591282220162996,0.47984663964462176,0.25991367527459486,0.48347754851147945]\nbayes_max = [1.3552890210395598,0.769408717779837,0.9868200270220449,0.7400863247254051,0.8765224514885206]\nfreq_min = [0.24471097896044036,0.030591282220162996,0.4798466396446218,0.2599136752745949,0.48347754851147945]\nfreq_max = [1.3552890210395598,0.769408717779837,0.9868200270220447,0.740086324725405,0.8765224514885206]\nx=[5,10,15,20,25]\n```\n\n\n```\nimport matplotlib.pyplot as plt\nplt.plot(x, bayes_min)\n```\n\n\n```\n# freqlow = []\n# freqhigh = []\n# for x in range(5, 105, 5):\n# flip = confidence_interval(np.random.binomial(n=1, p=.5, size=x)\n# freqlow.append(flip[1])\n# freqhigh.append(flip[2]) \n```\n\n\n```\nfreqs=[]\nfreqlow, baylow = [], []\nfreqhigh, bayhigh = [], []\nfor x in range(5, 105, 5):\n freqs.append(confidence_interval(coinflips[:x]))\n bay = 
stats.bayes_mvs(coinflips[:x], alpha=.999)\n #bay_mean = bay[0][0]\n #baylow, bayhigh = bay[0][1][0], bay[0][1][1]\n baylow.append(bay[0][1][0])\n bayhigh.append(bay[0][1][1])\n bays.append((bay_mean, bay_low, bay_high))\n```\n\n\n```\nbaylow\n```\n\n\n\n\n [-1.5220603162759303,\n -0.38071975602580255,\n -0.018727215031760447,\n 0.10677568080028821,\n 0.13804278711810153,\n 0.12765531944568637,\n 0.20565293165377158,\n 0.24047872001902548,\n 0.22318724538308576,\n 0.23016847028463414,\n 0.20150412908956256,\n 0.22569422202140438,\n 0.24660584664734833,\n 0.2789041220141196,\n 0.2941656071381257,\n 0.29528341196258784,\n 0.2848713461853316,\n 0.28669734807173103,\n 0.30910805628110316,\n 0.31960335271099655]\n\n\n\n\n```\nfreqs\n```\n\n\n\n\n [(0.2, -0.3552890210395599, 0.75528902103956),\n (0.4, 0.030591282220162996, 0.769408717779837),\n (0.5333333333333333, 0.24736177494440975, 0.819304891722257),\n (0.55, 0.31111712307712147, 0.7888828769228786),\n (0.52, 0.3095228192036529, 0.7304771807963472),\n (0.4666666666666667, 0.27719432002040323, 0.6561390133129301),\n (0.5142857142857142, 0.34009332777276696, 0.6884781007986616),\n (0.525, 0.36325767707151374, 0.6867423229284864),\n (0.4888888888888889, 0.337012356706235, 0.6407654210715428),\n (0.48, 0.3365737906626731, 0.6234262093373268),\n (0.43636363636363634, 0.3010582370482283, 0.5716690356790444),\n (0.45, 0.3203992003781563, 0.5796007996218437),\n (0.46153846153846156, 0.33705030893586463, 0.5860266141410585),\n (0.4857142857142857, 0.3656817058200476, 0.6057468656085239),\n (0.49333333333333335, 0.37752939020471143, 0.6091372764619553),\n (0.4875, 0.37556342415272176, 0.5994365758472783),\n (0.47058823529411764, 0.3622885411691642, 0.5788879294190711),\n (0.4666666666666667, 0.3615912882899227, 0.5717420450434106),\n (0.4842105263157895, 0.38186604186613715, 0.5865550107654418),\n (0.49, 0.39030929062808245, 0.5896907093719175)]\n\n\n\n\n```\nfreqs[0][1]\n```\n\n\n\n\n -0.3552890210395599\n\n\n\n\n```\nfor i 
in range (0,20):\n freqlow.append(freqs[i][1])\n freqhigh.append(freqs[i][2])\n```\n\n\n```\nfreqlow\n```\n\n\n\n\n [-0.3552890210395599,\n 0.030591282220162996,\n 0.24736177494440975,\n 0.31111712307712147,\n 0.3095228192036529,\n 0.27719432002040323,\n 0.34009332777276696,\n 0.36325767707151374,\n 0.337012356706235,\n 0.3365737906626731,\n 0.3010582370482283,\n 0.3203992003781563,\n 0.33705030893586463,\n 0.3656817058200476,\n 0.37752939020471143,\n 0.37556342415272176,\n 0.3622885411691642,\n 0.3615912882899227,\n 0.38186604186613715,\n 0.39030929062808245]\n\n\n\n\n```\n# Frequentist confidence interval for the coin flips is plotted in red,\n# the Bayesian interval in blue. Didn't have time to make this graph more polished.\n\nx = range(5,105,5)\nplt.plot(x, freqlow, color='r')\nplt.plot(x, freqhigh, color='r')\nplt.plot(x, baylow, color='b')\nplt.plot(x, bayhigh, color='b')\n```\n\n## Resources\n\n- [Worked example of Bayes rule calculation](https://en.wikipedia.org/wiki/Bayes'_theorem#Examples) (helpful as it fully breaks out the denominator)\n- [Source code for mvsdist in scipy](https://github.com/scipy/scipy/blob/90534919e139d2a81c24bf08341734ff41a3db12/scipy/stats/morestats.py#L139)\n
```python\n# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)\n\n# Toggle cell visibility\n\nfrom IPython.display import HTML\ntag = HTML('''\nToggle cell visibility here.''')\ndisplay(tag)\n\n# Hide the code completely\n\n# from IPython.display import HTML\n# tag = HTML('''''')\n# display(tag)\n\n```\n\n\n\nToggle cell visibility here.\n\n\n## Internal stability\n\nThe concept of stability captures the behaviour of a system state evolution when it is perturbed from an equilibrium condition: stability describes whether the state evolution arising after a perturbation from an equilibrium point diverges or not.\n\n### Definition\nGiven a time invariant dynamic system described by the state vector $x(t)\in \mathbb{R}^n$, a point of equilibrium $x_e$, an initial state $x_0$ and an initial time $t_0$ if\n$$\n\forall \, \epsilon \in \mathbb{R}, \, \epsilon > 0 \quad \exists \delta \in \mathbb{R}, \, \delta > 0 : \quad ||x_0-x_e|| < \delta \, \Rightarrow \, ||x(t)-x_e|| < \epsilon \quad \forall t \ge t_0\n$$\nthat could be read as: 
if a small enough initial perturbation $\\delta$ from the equilibrium point exists so that the state evolution $x(t)$ from the perturbed point does not get too far (more than $\\epsilon$) from the equilibrium itself, \nthen the equilibrium point is stable. \n\nIf it also happens that $\\lim_{t\\to\\infty}||x(t)-x_e|| = 0$, that can be read as: the state evolution returns back to the equilibrium point, then the equilibrium is said to be asymptotically stable.\n\nIn the case of linear time invariant systems of the type:\n\\begin{cases}\n\\dot{x} = Ax +Bu \\\\\ny = Cx + Du,\n\\end{cases}\n\nit is possible to prove that stability of one equilibrium point implies stability of all equilibrium points, thus we can talk about stability of the system even if, in general, the stability property is related to an equilibrium point. This linear system's peculiar feature is due to the fact that the evolution of this type of systems is strictly related to the eigenvalues of the dynamics matrix $A$, which are rotation-, translation-, initial-condition- and time-invariant.\n\nRecall what is explained in example [Modal Analysis](SS-02-Modal_Analysis.ipynb):\n\n> The closed form solution of the differential equation, from initial time $t_0$, with initial condition $x(t_0)$ is: \n$$\nx(t) = e^{A(t-t_0)}x(t_0).\n$$ The matrix $e^{A(t-t_0)}x(t_0)$ is composed of linear combinations of functions of time $t$, each one of the type: $$e^{\\lambda t},$$ where the $\\lambda$(s) are the eigenvalues of the matrix $A$; these functions are the modes of the system.\n\nthus:\n- a linear dynamic system is stable if and only if all its modes are not divergent,\n- a linear dynamic system is asymptotically stable if and only if all its modes are convergent,\n- a linear dynamic system is unstable if it has at least one divergent mode. 
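Whether the modes converge or diverge can be checked numerically from the eigenvalues of the dynamics matrix $A$. This is an illustrative sketch, not part of the notebook's widget code; it deliberately skips the Jordan-block inspection that is still required for eigenvalues with zero real part:

```python
import numpy as np

def classify_stability(A, tol=1e-9):
    """Classify an LTI system x' = Ax by the real parts of eig(A).

    For eigenvalues exactly on the imaginary axis this is only a
    necessary check: the Jordan blocks must still be inspected.
    """
    re = np.linalg.eigvals(np.asarray(A, dtype=float)).real
    if np.all(re < -tol):
        return "asymptotically stable"
    if np.any(re > tol):
        return "unstable"
    return "marginally stable (inspect Jordan blocks)"

print(classify_stability([[-1.0, 0.0], [0.0, -2.0]]))  # all modes converge
print(classify_stability([[0.0, 1.0], [-1.0, 0.0]]))   # eigenvalues +/- j
print(classify_stability([[1.0, 0.0], [0.0, -3.0]]))   # one divergent mode
```

The harmonic-oscillator matrix in the second call has purely imaginary eigenvalues, so the function can only flag it as a boundary case rather than decide stability outright.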
\n\nand, with respect to the dynamic matrix eigenvalues, these three conditions hold, respectively, if:\n- all the eigenvalues of $A$ belong to the closed left half complex plane (that is they have negative or zero real part) and, in case they have zero real part, their algebraic multiplicity is the same as their geometric multiplicity or, equivalently, they have scalar blocks in the Jordan form; \n- all the eigenvalues belong to the open left half complex plane, that is they have strictly negative real part;\n- at least one eigenvalue has positive real part, or eigenvalues with zero real part and non-scalar Jordan blocks exist.\n\nThis interactive example presents an editable dynamic matrix $A$ and shows the system free response with the corresponding eigenvalues.\n\n### How to use this notebook?\n- Try to change the eigenvalues and the initial condition $x_0$ and see how the response changes.\n\n\n```python\n%matplotlib inline\n#%matplotlib notebook \nimport control as control\nimport numpy\nimport sympy as sym\nfrom IPython.display import display, Markdown\nimport ipywidgets as widgets\nimport matplotlib.pyplot as plt\n\n\n#print a matrix latex-like\ndef bmatrix(a):\n    \"\"\"Returns a LaTeX bmatrix - by Damir Arbula (ICCT project)\n\n    :a: numpy array\n    :returns: LaTeX bmatrix as a string\n    \"\"\"\n    if len(a.shape) > 2:\n        raise ValueError('bmatrix can at most display two dimensions')\n    lines = str(a).replace('[', '').replace(']', '').splitlines()\n    rv = [r'\\begin{bmatrix}']\n    rv += [' ' + ' & '.join(l.split()) + r'\\\\' for l in lines]\n    rv += [r'\\end{bmatrix}']\n    return '\n'.join(rv)\n\n\n# Display formatted matrix: \ndef vmatrix(a):\n    if len(a.shape) > 2:\n        raise ValueError('vmatrix can at most display two dimensions')\n    lines = str(a).replace('[', '').replace(']', '').splitlines()\n    rv = [r'\\begin{vmatrix}']\n    rv += [' ' + ' & '.join(l.split()) + r'\\\\' for l in lines]\n    rv += [r'\\end{vmatrix}']\n    return '\n'.join(rv)\n\n\n#matrixWidget is a matrix looking widget built with a VBox of 
HBox(es) that returns a numPy array as value !\nclass matrixWidget(widgets.VBox):\n def updateM(self,change):\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.M_[irow,icol] = self.children[irow].children[icol].value\n #print(self.M_[irow,icol])\n self.value = self.M_\n\n def dummychangecallback(self,change):\n pass\n \n \n def __init__(self,n,m):\n self.n = n\n self.m = m\n self.M_ = numpy.matrix(numpy.zeros((self.n,self.m)))\n self.value = self.M_\n widgets.VBox.__init__(self,\n children = [\n widgets.HBox(children = \n [widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)]\n ) \n for j in range(n)\n ])\n \n #fill in widgets and tell interact to call updateM each time a children changes value\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.children[irow].children[icol].value = self.M_[irow,icol]\n self.children[irow].children[icol].observe(self.updateM, names='value')\n #value = Unicode('example@example.com', help=\"The email value.\").tag(sync=True)\n self.observe(self.updateM, names='value', type= 'All')\n \n def setM(self, newM):\n #disable callbacks, change values, and reenable\n self.unobserve(self.updateM, names='value', type= 'All')\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.children[irow].children[icol].unobserve(self.updateM, names='value')\n self.M_ = newM\n self.value = self.M_\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.children[irow].children[icol].value = self.M_[irow,icol]\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.children[irow].children[icol].observe(self.updateM, names='value')\n self.observe(self.updateM, names='value', type= 'All') \n\n #self.children[irow].children[icol].observe(self.updateM, names='value')\n\n \n#overlaod class for state space systems that DO NOT remove \"useless\" states (what \"professor\" of automatic control would do this?)\nclass sss(control.StateSpace):\n def 
__init__(self,*args):\n #call base class init constructor\n control.StateSpace.__init__(self,*args)\n #disable function below in base class\n def _remove_useless_states(self):\n pass\n```\n\n\n```python\n# Preparatory cell\n\nA = numpy.matrix([[0,1],[-2/5,-1/5]])\nX0 = numpy.matrix('5; 3')\n\nAw = matrixWidget(2,2)\nAw.setM(A)\nX0w = matrixWidget(2,1)\nX0w.setM(X0)\n```\n\n\n```python\n# Misc\n\n#create dummy widget \nDW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))\n\n#create button widget\nSTART = widgets.Button(\n description='Test',\n disabled=False,\n button_style='', # 'success', 'info', 'warning', 'danger' or ''\n tooltip='Test',\n icon='check'\n)\n \ndef on_start_button_clicked(b):\n #This is a workaround to have intreactive_output call the callback:\n # force the value of the dummy widget to change\n if DW.value> 0 :\n DW.value = -1\n else: \n DW.value = 1\n pass\nSTART.on_click(on_start_button_clicked)\n```\n\n\n```python\n# Main cell\n\ndef main_callback(A, X0, DW):\n sols = numpy.linalg.eig(A)\n sys = sss(A,[[1],[0]],[0,1],0)\n pole = control.pole(sys)\n if numpy.real(pole[0]) != 0:\n p1r = abs(numpy.real(pole[0]))\n else:\n p1r = 1\n if numpy.real(pole[1]) != 0:\n p2r = abs(numpy.real(pole[1]))\n else:\n p2r = 1\n if numpy.imag(pole[0]) != 0:\n p1i = abs(numpy.imag(pole[0]))\n else:\n p1i = 1\n if numpy.imag(pole[1]) != 0:\n p2i = abs(numpy.imag(pole[1]))\n else:\n p2i = 1\n \n print('A\\'s eigenvalues are:',round(sols[0][0],4),'and',round(sols[0][1],4))\n \n #T = numpy.linspace(0, 60, 1000)\n T, yout, xout = control.initial_response(sys,X0=X0,return_x=True)\n \n fig = plt.figure(\"A's eigenvalues\", figsize=(16,16))\n ax = fig.add_subplot(311,title='Poles (Real vs Imag)')\n #plt.axis(True)\n # Move left y-axis and bottim x-axis to centre, passing through (0,0)\n # Eliminate upper and right axes\n ax.spines['left'].set_position(('data',0.0))\n ax.spines['bottom'].set_position(('data',0.0))\n 
ax.spines['right'].set_color('none')\n ax.spines['top'].set_color('none')\n ax.set_xlim(-max([p1r+p1r/3,p2r+p2r/3]),\n max([p1r+p1r/3,p2r+p2r/3]))\n ax.set_ylim(-max([p1i+p1i/3,p2i+p2i/3]),\n max([p1i+p1i/3,p2i+p2i/3]))\n \n plt.plot([numpy.real(pole[0]),numpy.real(pole[1])],[numpy.imag(pole[0]),numpy.imag(pole[1])],'o')\n plt.grid()\n\n ax1 = fig.add_subplot(312,title='Free response')\n plt.plot(T,xout[0])\n plt.grid()\n ax1.set_xlabel('time [s]')\n ax1.set_ylabel('$x_1$')\n ax1.axvline(x=0,color='black',linewidth='0.8')\n ax1.axhline(y=0,color='black',linewidth='0.8')\n ax2 = fig.add_subplot(313)\n plt.plot(T,xout[1])\n plt.grid()\n ax2.set_xlabel('time [s]')\n ax2.set_ylabel('$x_2$')\n ax2.axvline(x=0,color='black',linewidth='0.8')\n ax2.axhline(y=0,color='black',linewidth='0.8')\n \n #plt.show()\n \n\n \nalltogether = widgets.HBox([widgets.VBox([widgets.Label('$A$:',border=3),\n Aw]),\n widgets.Label(' ',border=3),\n widgets.VBox([widgets.Label('$X_0$:',border=3),\n X0w]),\n START])\nout = widgets.interactive_output(main_callback, {'A':Aw, 'X0':X0w, 'DW':DW})\nout.layout.height = '1000px'\ndisplay(out, alltogether)\n```\n\n\n Output(layout=Layout(height='1000px'))\n\n\n\n HBox(children=(VBox(children=(Label(value='$A$:'), matrixWidget(children=(HBox(children=(FloatText(value=0.0, \u2026\n\n", "meta": {"hexsha": "98db9f20771eb7ae4930348bb89d29a0b6db2472", "size": 16045, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ICCT_en/examples/04/.ipynb_checkpoints/SS-17-Internal_stability-checkpoint.ipynb", "max_stars_repo_name": "ICCTerasmus/ICCT", "max_stars_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-05-22T18:42:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-03T14:10:22.000Z", "max_issues_repo_path": "ICCT_en/examples/04/SS-17-Internal_stability.ipynb", "max_issues_repo_name": "ICCTerasmus/ICCT", 
"max_issues_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICCT_en/examples/04/SS-17-Internal_stability.ipynb", "max_forks_repo_name": "ICCTerasmus/ICCT", "max_forks_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-24T11:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-29T16:36:18.000Z", "avg_line_length": 40.0124688279, "max_line_length": 485, "alphanum_fraction": 0.5237768775, "converted": true, "num_tokens": 2973, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4960938294709195, "lm_q2_score": 0.27512971787959806, "lm_q1q2_score": 0.13649015534414352}} {"text": "```python\n%matplotlib inline\n```\n\n\n```python\nimport sympy\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n# High-School Maths Exercise\n## Getting to Know Jupyter Notebook. Python Libraries and Best Practices. Basic Workflow\n\n### Problem 1. Markdown\nJupyter Notebook is a very light, beautiful and convenient way to organize your research and display your results. Let's play with it for a while.\n\nFirst, you can double-click each cell and edit its content. If you want to run a cell (that is, execute the code inside it), use Cell > Run Cells in the top menu or press Ctrl + Enter.\n\nSecond, each cell has a type. There are two main types: Markdown (which is for any kind of free text, explanations, formulas, results... you get the idea), and code (which is, well... for code :D).\n\nLet me give you a...\n#### Quick Introduction to Markdown\n##### Text and Paragraphs\nThere are several things that you can do. As you already saw, you can write paragraph text just by typing it. 
In order to create a new paragraph, just leave a blank line. See how this works below:\n```\nThis is some text.\nThis text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).\n\nThis text is displayed in a new paragraph.\n\nAnd this is yet another paragraph.\n```\n**Result:**\n\nThis is some text.\nThis text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).\n\nThis text is displayed in a new paragraph.\n\nAnd this is yet another paragraph.\n\n##### Headings\nThere are six levels of headings. Level one is the highest (largest and most important), and level 6 is the smallest. You can create headings of several types by prefixing the header line with one to six \"#\" symbols (this is called a pound sign if you are ancient, or a sharp sign if you're a musician... or a hashtag if you're too young :D). Have a look:\n```\n# Heading 1\n## Heading 2\n### Heading 3\n#### Heading 4\n##### Heading 5\n###### Heading 6\n```\n\n**Result:**\n\n# Heading 1\n## Heading 2\n### Heading 3\n#### Heading 4\n##### Heading 5\n###### Heading 6\n\nIt is recommended that you have **only one** H1 heading - this should be the header of your notebook (or scientific paper). Below that, you can add your name or just jump to the explanations directly.\n\n##### Emphasis\nYou can create emphasized (stronger) text by using a **bold** or _italic_ font. You can do this in several ways (using asterisks (\\*) or underscores (\\_)). In order to \"escape\" a symbol, prefix it with a backslash (\\). 
You can also strike through your text in order to signify a correction.\n```\n**bold** __bold__\n*italic* _italic_\n\nThis is \\*\\*not\\*\\* bold.\n\nI ~~didn't make~~ a mistake.\n```\n\n**Result:**\n\n**bold** __bold__\n*italic* _italic_\n\nThis is \\*\\*not\\*\\* bold.\n\nI ~~didn't make~~ a mistake.\n\n##### Lists\nYou can add two types of lists: ordered and unordered. Lists can also be nested inside one another. To do this, press Tab once (it will be converted to 4 spaces).\n\nTo create an ordered list, just type the numbers. Don't worry if your numbers are wrong - Jupyter Notebook will create them properly for you. Well, it's better to have them properly numbered anyway...\n```\n1. This is\n2. A list\n10. With many\n9. Items\n    1. Some of which\n    2. Can\n    3. Be nested\n42. You can also\n    * Mix \n    * list\n    * types\n```\n\n**Result:**\n1. This is\n2. A list\n10. With many\n9. Items\n    1. Some of which\n    2. Can\n    3. Be nested\n42. You can also\n    * Mix \n    * list\n    * types\n \nTo create an unordered list, type an asterisk, plus or minus at the beginning:\n```\n* This is\n* An\n    + Unordered\n    - list\n```\n\n**Result:**\n* This is\n* An\n    + Unordered\n    - list\n \n##### Links\nThere are many ways to create links but we mostly use one of them: we present links with some explanatory text. See how it works:\n```\nThis is [a link](http://google.com) to Google.\n```\n\n**Result:**\n\nThis is [a link](http://google.com) to Google.\n\n##### Images\nThey are very similar to links. Just prefix the image with an exclamation mark. The alt(ernative) text will be displayed if the image is not available. Have a look (hover over the image to see the title text):\n```\n Do you know that \"taco cat\" is a palindrome? Thanks to The Oatmeal :)\n```\n\n**Result:**\n\n Do you know that \"taco cat\" is a palindrome? Thanks to The Oatmeal :)\n\nIf you want to resize images or do some more advanced stuff, just use HTML. \n\nDid I mention these cells support HTML, CSS and JavaScript? 
Now I did.\n\n##### Tables\nThese are a pain because they need to be formatted (somewhat) properly. Here's a good [table generator](http://www.tablesgenerator.com/markdown_tables). Just select File > Paste table data... and provide a tab-separated list of values. It will generate a good-looking ASCII-art table for you.\n```\n| Cell1 | Cell2 | Cell3 |\n|-------|-------|-------|\n| 1.1 | 1.2 | 1.3 |\n| 2.1 | 2.2 | 2.3 |\n| 3.1 | 3.2 | 3.3 |\n```\n\n**Result:**\n\n| Cell1 | Cell2 | Cell3 |\n|-------|-------|-------|\n| 1.1 | 1.2 | 1.3 |\n| 2.1 | 2.2 | 2.3 |\n| 3.1 | 3.2 | 3.3 |\n\n##### Code\nJust use triple backtick symbols. If you provide a language, it will be syntax-highlighted. You can also use inline code with single backticks.\n
\n```python\ndef square(x):\n    return x ** 2\n```\nThis is `inline` code. No syntax highlighting here.\n
\n\n**Result:**\n```python\ndef square(x):\n return x ** 2\n```\nThis is `inline` code. No syntax highlighting here.\n\n**Now it's your turn to have some Markdown fun.** In the next cell, try out some of the commands. You can just throw in some things, or do something more structured (like a small notebook).\n\n# Simeon I of Bulgaria\n\n## Background and early life\nSimeon was born in 864 or 865, as the third son of [Knyaz Boris](https://en.wikipedia.org/wiki/Boris_I_of_Bulgaria) I of [Krum](https://en.wikipedia.org/wiki/Krum)'s dynasty.\nAs Boris was the ruler who Christianized Bulgaria in 865, Simeon was a Christian all his life.\n\n```python\nprint(4 + 2)\n```\n\n\n### Problem 2. Formulas and LaTeX\nWriting math formulas has always been hard. But scientists don't like difficulties and prefer standards. So, thanks to Donald Knuth (a very popular computer scientist, who also invented a lot of algorithms), we have a nice typesetting system, called LaTeX (pronounced _lah_-tek). We'll be using it mostly for math formulas, but it has a lot of other things to offer.\n\nThere are two main ways to write formulas. You could enclose them in single `$` signs like this: `$ ax + b $`, which will create an **inline formula**: $ ax + b $. You can also enclose them in double `$` signs `$$ ax + b $$` to produce $$ ax + b $$.\n\nMost commands start with a backslash and accept parameters either in square brackets `[]` or in curly braces `{}`. For example, to make a fraction, you typically would write `$$ \\frac{a}{b} $$`: $$ \\frac{a}{b} $$.\n\n[Here's a resource](http://www.stat.pitt.edu/stoffer/freetex/latex%20basics.pdf) where you can look up the basics of the math syntax. You can also search StackOverflow - there are all sorts of solutions there.\n\nYou're on your own now. Research and recreate all formulas shown in the next cell. Try to make your cell look exactly the same as mine. 
It's an image, so don't try to cheat by copy/pasting :D.\n\nNote that you **do not** need to understand the formulas, what's written there or what it means. We'll have fun with these later in the course.\n\n\n\nEquation of a line:\n$$ y = ax + b $$\nRoots of the quadratic equation $ ax^2 + bx + c = 0: $\n$$ x_{1,2} = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a} $$\nTaylor series expansion:\n$$ f(x)|_{x=a} = f(a) + f'(a)(x - a) + \\frac{f''(a)}{2!} (x - a)^2 + ... + \\frac{f^{(n)}(a)}{n!}(x - a)^n + ... $$\nBinomial theorem:\n$$ (x + y)^n = \\begin{pmatrix}n\\\\0\\end{pmatrix} x^n y^0 + \\begin{pmatrix}n\\\\1\\end{pmatrix} x^{n-1} y^1 + ... + \\begin{pmatrix}n\\\\n\\end{pmatrix} x^0 y^n = \\sum_{k=0}^{n} \\begin{pmatrix}n\\\\k\\end{pmatrix} x^{n-k} y^k $$\nAn integral:\n$$ \\int_{-\\infty}^{+\\infty} e^{-x^2} dx = \\sqrt{\\pi} $$\nA short matrix:\n$$\\begin{pmatrix}2 & 1 & 3 \\\\ 2 & 6 & 8 \\\\ 6 & 8 & 18 \\end{pmatrix}$$\nA long matrix:\n$$ A = \\begin{pmatrix}a_{11} & a_{12} & \\cdots & a_{1n} \\\\ a_{21} & a_{22} & \\cdots & a_{2n} \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ a_{m1} & a_{m2} & \\cdots & a_{mn} \\end{pmatrix}$$\n\n### Problem 3. Solving with Python\nLet's first do some symbolic computation. We need to import `sympy` first. \n\n**Should your imports be in a single cell at the top or should they appear as they are used?** There's not a single valid best practice. Most people seem to prefer imports at the top of the file though. **Note: If you write new code in a cell, you have to re-execute it!**\n\nLet's use `sympy` to give us a quick symbolic solution to our equation. First import `sympy` (you can use the second cell in this notebook): \n```python \nimport sympy \n```\n\nNext, create symbols for all variables and parameters. You may prefer to do this in one pass or separately:\n```python \nx = sympy.symbols('x')\na, b, c = sympy.symbols('a b c')\n```\n\nNow solve:\n```python \nsympy.solve(a * x**2 + b * x + c)\n```\n\nHmmmm... we didn't expect that :(. 
We got an expression for $a$ because the library tried to solve for the first symbol it saw. This is an equation and we have to solve for $x$. We can provide it as a second parameter:\n```python \nsympy.solve(a * x**2 + b * x + c, x)\n```\n\nFinally, if we use `sympy.init_printing()`, we'll get a LaTeX-formatted result instead of a typed one. This is very useful because it produces better-looking formulas.\n\n\n```python\nsympy.init_printing()\nx, a, b, c = sympy.symbols('x a b c')\n\n```\n\n\n```python\n\nsympy.solve(a * x**2 + b * x + c, x)\n```\n\nHow about a function that takes $a, b, c$ (assume they are real numbers, you don't need to do additional checks on them) and returns the **real** roots of the quadratic equation?\n\nRemember that in order to calculate the roots, we first need to see whether the expression under the square root sign is non-negative.\n\nIf $b^2 - 4ac > 0$, the equation has two real roots: $x_1, x_2$\n\nIf $b^2 - 4ac = 0$, the equation has one real root: $x_1 = x_2$\n\nIf $b^2 - 4ac < 0$, the equation has zero real roots\n\nWrite a function which returns the roots. In the first case, return a list of 2 numbers: `[2, 3]`. In the second case, return a list of only one number: `[2]`. In the third case, return an empty list: `[]`.\n\n\n```python\ndef solve_quadratic_equation(a, b, c):\n    \"\"\"\n    Returns the real solutions of the quadratic equation ax^2 + bx + c = 0.\n    \"\"\"\n    discriminant = b**2 - 4 * a * c\n    if discriminant > 0:\n        x1 = (-b - math.sqrt(discriminant)) / (2 * a)\n        x2 = (-b + math.sqrt(discriminant)) / (2 * a)\n        return [x1, x2]\n    elif discriminant == 0:\n        # a double root, still returned as a list to match the spec above\n        return [-b / (2 * a)]\n    else:\n        return []\n```\n\n\n```python\n# Testing: Execute this cell. The outputs should match the expected outputs. 
Feel free to write more tests\nprint(solve_quadratic_equation(1, -1, -2)) # [-1.0, 2.0]\nprint(solve_quadratic_equation(1, -8, 16)) # [4.0]\nprint(solve_quadratic_equation(1, 1, 1)) # []\n```\n\n [-1.0, 2.0]\n [4.0]\n []\n\n\n**Bonus:** Last time we saw how to solve a linear equation. Remember that linear equations are just like quadratic equations with $a = 0$. In this case, however, division by 0 will throw an error. Extend your function above to support solving linear equations (in the same way we did it last time).\n\n### Problem 4. Equation of a Line\nLet's go back to our linear equations and systems. There are many ways to define what \"linear\" means, but they all boil down to the same thing.\n\nThe equation $ax + b = 0$ is called *linear* because the function $f(x) = ax+b$ is a linear function. There are several ways to describe what a particular function does. One of them is to just write the expression for it, as we did above. Another way is to **plot** it. This is one of the most exciting parts of maths and science - when we have to fiddle around with beautiful plots (although not so beautiful in this case).\n\nThe function produces a straight line and we can see it.\n\nHow do we plot functions in general? We know that functions take many (possibly infinitely many) inputs. We can't draw all of them. We could, however, evaluate the function at some points and connect them with tiny straight lines. If the points are too many, we won't notice - the plot will look smooth.\n\nNow, let's take a function, e.g. $y = 2x + 3$ and plot it. For this, we're going to use `numpy` arrays. This is a special type of array which has two characteristics:\n* All elements in it must be of the same type\n* All operations are **broadcast**: if `x = [1, 2, 3, 10]` and we write `2 * x`, we'll get `[2, 4, 6, 20]`. That is, all operations are performed at all indices. 
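As a quick sanity check of the broadcasting behaviour just described (a minimal sketch using the same example values):

```python
import numpy as np

x = np.array([1, 2, 3, 10])

# every arithmetic operation is applied element-wise, without an explicit loop
print(2 * x)   # [ 2  4  6 20]
print(x + 1)   # [ 2  3  4 11]
print(x ** 2)  # [  1   4   9 100]
```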
This is very powerful, easy to use and saves us A LOT of looping.\n\nThere's one more thing: it's blazingly fast because all computations are done in C, instead of Python.\n\nFirst let's import `numpy`. Since the name is a bit long, a common convention is to give it an **alias**:\n```python\nimport numpy as np\n```\n\nImport that at the top cell and don't forget to re-run it.\n\nNext, let's create a range of values, e.g. $[-3, 5]$. There are two ways to do this. `np.arange(start, stop, step)` will give us evenly spaced numbers with a given step, while `np.linspace(start, stop, num)` will give us `num` samples. You see, one uses a fixed step, the other uses a number of points to return. When plotting functions, we usually use the latter. Let's generate, say, 1000 points (we know a straight line only needs two but we're generalizing the concept of plotting here :)).\n```python\nx = np.linspace(-3, 5, 1000)\n```\nNow, let's generate our function variable\n```python\ny = 2 * x + 3\n```\n\nWe can print the values if we like but we're more interested in plotting them. To do this, first let's import a plotting library. `matplotlib` is the most commonly used one and we usually give it an alias as well.\n```python\nimport matplotlib.pyplot as plt\n```\n\nNow, let's plot the values. To do this, we just call the `plot()` function. Notice that the top-most part of this notebook contains a \"magic string\": `%matplotlib inline`. This hints Jupyter to display all plots inside the notebook. However, it's a good practice to call `show()` after our plot is ready.\n```python\nplt.plot(x, y)\nplt.show()\n```\n\n\n```python\nx = np.linspace(-100, 100, 5000)\ny = 2 * x + 3  # broadcasting: no explicit loop needed\n```\n\n\n```python\nplt.plot(x, y)\nplt.show()\n```\n\nIt doesn't look too bad but we can do much better. See how the axes don't look like they should? Let's move them to zero. This can be done using the \"spines\" of the plot (i.e. 
the borders).\n\nAll `matplotlib` figures can have many plots (subfigures) inside them. That's why when performing an operation, we have to specify a target figure. There is a default one and we can get it by using `plt.gca()`. We usually call it `ax` for \"axis\".\nLet's save it in a variable (in order to prevent multiple calculations and to make code prettier). Let's now move the bottom and left spines to the origin $(0, 0)$ and hide the top and right one.\n```python\nax = plt.gca()\nax.spines[\"bottom\"].set_position(\"zero\")\nax.spines[\"left\"].set_position(\"zero\")\nax.spines[\"top\"].set_visible(False)\nax.spines[\"right\"].set_visible(False)\n```\n\n**Note:** All plot manipulations HAVE TO be done before calling `show()`. It's up to you whether they should be before or after the function you're plotting.\n\nThis should look better now. We can, of course, do much better (e.g. remove the double 0 at the origin and replace it with a single one), but this is left as an exercise for the reader :).\n\n\n```python\nax = plt.gca()\nax.spines[\"bottom\"].set_position(\"zero\")\nax.spines[\"left\"].set_position(\"zero\")\nax.spines[\"top\"].set_visible(False)\nax.spines[\"right\"].set_visible(False)\nplt.plot(x, y)\nplt.show()\n```\n\n### * Problem 5. Linearizing Functions\nWhy is the line equation so useful? The main reason is because it's so easy to work with. Scientists actually try their best to linearize functions, that is, to make linear functions from non-linear ones. There are several ways of doing this. One of them involves derivatives and we'll talk about it later in the course. \n\nA commonly used method for linearizing functions is through algebraic transformations. Try to linearize \n$$ y = ae^{bx} $$\n\nHint: The inverse operation of $e^{x}$ is $\\ln(x)$. Start by taking $\\ln$ of both sides and see what you can do. Your goal is to transform the function into another, linear function. 
You can look up more hints on the Internet :).\n\n$\\ln y = \\ln a + bx$, which is linear in $x$ with slope $b$ and intercept $\\ln a$.\n\n### * Problem 6. Generalizing the Plotting Function\nLet's now use the power of Python to generalize the code we created to plot. In Python, you can pass functions as parameters to other functions. We'll utilize this to pass the math function that we're going to plot.\n\nNote: We can also pass *lambda expressions* (anonymous functions) like this: \n```python\nlambda x: x + 2\n```\nThis is a shorter way to write\n```python\ndef some_anonymous_function(x):\n    return x + 2\n```\n\nWe'll also need a range of x values. We may also provide other optional parameters which will help set up our plot. These may include titles, legends, colors, fonts, etc. Let's stick to the basics now.\n\nWrite a Python function which takes another function, x range and number of points, and plots the function graph by evaluating it at every point.\n\n**BIG hint:** If you want to use not only `numpy` functions for `f` but any one function, a very useful (and easy) thing to do, is to vectorize the function `f` (e.g. to allow it to be used with `numpy` broadcasting):\n```python\nf_vectorized = np.vectorize(f)\ny = f_vectorized(x)\n```\n\n\n```python\ndef plot_math_function(f, min_x, max_x, num_points):\n    # Write your code here\n    pass\n```\n\n\n```python\nplot_math_function(lambda x: 2 * x + 3, -3, 5, 1000)\nplot_math_function(lambda x: -x + 8, -1, 10, 1000)\nplot_math_function(lambda x: x**2 - x - 2, -3, 4, 1000)\nplot_math_function(lambda x: np.sin(x), -np.pi, np.pi, 1000)\nplot_math_function(lambda x: np.sin(x) / x, -4 * np.pi, 4 * np.pi, 1000)\n```\n\n### * Problem 7. Solving Equations Graphically\nNow that we have a general plotting function, we can use it for more interesting things. Sometimes we don't need to know what the exact solution is, just to see where it lies. We can do this by plotting the two functions around the \"=\" sign and seeing where they intersect. Take, for example, the equation $2x + 3 = 0$. 
The two functions are $f(x) = 2x + 3$ and $g(x) = 0$. Since they should be equal, the point of their intersection is the solution of the given equation. We don't need to bother marking the point of intersection right now, just showing the functions.\n\nTo do this, we'll need to improve our plotting function once more. This time we'll need to take multiple functions and plot them all on the same graph. Note that we still need to provide the $[x_{min}; x_{max}]$ range and it's going to be the same for all functions.\n\n```python\nvectorized_fs = [np.vectorize(f) for f in functions]\nys = [vectorized_f(x) for vectorized_f in vectorized_fs]\n```\n\n\n```python\ndef plot_math_functions(functions, min_x, max_x, num_points):\n    # Write your code here\n    pass\n```\n\n\n```python\nplot_math_functions([lambda x: 2 * x + 3, lambda x: 0], -3, 5, 1000)\nplot_math_functions([lambda x: 3 * x**2 - 2 * x + 5, lambda x: 3 * x + 7], -2, 3, 1000)\n```\n\nThis is also a way to plot the solutions of systems of equations, like the one we solved last time. Let's actually try it.\n\n\n```python\nplot_math_functions([lambda x: (-4 * x + 7) / 3, lambda x: (-3 * x + 8) / 5, lambda x: (-x - 1) / -2], -1, 4, 1000)\n```\n\n### Problem 8. Trigonometric Functions\nWe already saw the graph of the function $y = \\sin(x)$. But, how do we define the trigonometric functions once again? Let's quickly review that.\n\n\n\nThe two basic trigonometric functions are defined as the ratio of two sides:\n$$ \\sin(x) = \\frac{\\text{opposite}}{\\text{hypotenuse}} $$\n$$ \\cos(x) = \\frac{\\text{adjacent}}{\\text{hypotenuse}} $$\n\nAnd also:\n$$ \\tan(x) = \\frac{\\text{opposite}}{\\text{adjacent}} = \\frac{\\sin(x)}{\\cos(x)} $$\n$$ \\cot(x) = \\frac{\\text{adjacent}}{\\text{opposite}} = \\frac{\\cos(x)}{\\sin(x)} $$\n\nThis is fine, but using this \"right-triangle\" definition, we're able to calculate the trigonometric functions of angles up to $90^\\circ$. But we can do better. 
Let's now imagine a circle centered at the origin of the coordinate system, with radius $r = 1$. This is called a \"unit circle\".\n\n\n\nWe can now see exactly the same picture. The $x$-coordinate of the point in the circle corresponds to $\\cos(\\alpha)$ and the $y$-coordinate - to $\\sin(\\alpha)$. What did we get? We're now able to define the trigonometric functions for all degrees up to $360^\\circ$. After that, the same values repeat: these functions are **periodic**: \n$$ \\sin(k.360^\\circ + \\alpha) = \\sin(\\alpha), k = 0, 1, 2, \\dots $$\n$$ \\cos(k.360^\\circ + \\alpha) = \\cos(\\alpha), k = 0, 1, 2, \\dots $$\n\nWe can, of course, use this picture to derive other identities, such as:\n$$ \\sin(90^\\circ + \\alpha) = \\cos(\\alpha) $$\n\nA very important property of the sine and cosine is that they accept values in the range $(-\\infty; \\infty)$ and produce values in the range $[-1; 1]$. The two other functions accept any value **except those where their denominators are zero** and produce values in the whole range $(-\\infty; \\infty)$. \n\n#### Radians\nA degree is a geometric object, $1/360$th of a full circle. This is quite inconvenient when we work with angles. There is another, natural and intrinsic measure of angles. It's called the **radian** and can be written as $\\text{rad}$ or without any designation, so $\\sin(2)$ means \"sine of two radians\".\n\n\nIt's defined as *the central angle of an arc with length equal to the circle's radius* and $1\\text{rad} \\approx 57.296^\\circ$.\n\nWe know that the circle circumference is $C = 2\\pi r$, therefore we can fit exactly $2\\pi$ arcs with length $r$ in $C$. The angle corresponding to this is $360^\\circ$ or $2\\pi\\ \\text{rad}$. Also, $\\pi\\ \\text{rad} = 180^\\circ$.\n\n(Some people prefer using $\\tau = 2\\pi$ to avoid confusion with always multiplying by 2 or 0.5 but we'll use the standard notation here.)\n\n**NOTE:** All trigonometric functions in `math` and `numpy` accept radians as arguments. 
In order to convert between radians and degrees, you can use the relations $\\text{[deg]} = 180/\\pi.\\text{[rad]}, \\text{[rad]} = \\pi/180.\\text{[deg]}$. This can be done using `np.rad2deg()` and `np.deg2rad()` respectively.\n\n#### Inverse trigonometric functions\nAll trigonometric functions have their inverses. If you plug in, say $\\pi/4$ in the $\\sin(x)$ function, you get $\\sqrt{2}/2$. The inverse functions (also called arc-functions) take arguments in the interval $[-1; 1]$ (for arcsine and arccosine) and return the angle that they correspond to. Take arcsine for example:\n$$ \\arcsin(y) = x : \\sin(x) = y $$\n$$ \\arcsin\\left(\\frac{\\sqrt{2}}{2}\\right) = \\frac{\\pi}{4} $$\n\nPlease note that this is NOT entirely correct. From the relations we found:\n$$\\sin(x) = \\sin(2k\\pi + x), k = 0, 1, 2, \\dots $$\n\nit follows that $\\arcsin(x)$ has infinitely many values, separated by $2k\\pi$ radians each:\n$$ \\arcsin\\left(\\frac{\\sqrt{2}}{2}\\right) = \\frac{\\pi}{4} + 2k\\pi, k = 0, 1, 2, \\dots $$\n\nIn most cases, however, we're interested in the first value (when $k = 0$). It's called the **principal value**.\n\nNote 1: There are inverse functions for all four basic trigonometric functions: $\\arcsin$, $\\arccos$, $\\arctan$, $\\text{arccot}$. These are sometimes written as $\\sin^{-1}(x)$, $\\cos^{-1}(x)$, etc. These definitions are completely equivalent. \n\nJust notice the difference between $\\sin^{-1}(x) := \\arcsin(x)$ and $\\sin(x^{-1}) = \\sin(1/x)$.\n\n#### Exercise\nUse the plotting function you wrote above to plot the inverse trigonometric functions. Use `numpy` (look up how to use inverse trigonometric functions).\n\n\n```python\nx = np.linspace(0.3, 0.9, 30)\ny = np.arcsin(x)\nplt.plot(x, y)\nplt.show()\n```\n\n### ** Problem 9. Perlin Noise\nThis algorithm has many applications in computer graphics and can serve to demonstrate several things... and help us learn about math, algorithms and Python :).\n#### Noise\nNoise is just random values. 
We can generate noise by just calling a random generator. Note that these are actually called *pseudorandom generators*. We'll talk about this later in this course.\nWe can generate noise in however many dimensions we want. For example, to generate one-dimensional noise, we just pick $N$ random values and call it a day. To generate a 2D noise space, we can take an approach similar to what we already did with `np.meshgrid()`.\n\n$$ \\text{noise}(x, y) = N, N \\in [n_{min}, n_{max}] $$\n\nThis function takes two coordinates and returns a single number $N$ between $n_{min}$ and $n_{max}$. (This is what we call a \"scalar field\".)\n\nRandom variables are always connected to **distributions**. We'll talk about these a great deal, but for now let's just say that a distribution defines what our noise will look like. In the most basic case, we can have \"uniform noise\": each value in the range $[n_{min}, n_{max}]$ has an equal chance (probability) of being selected.\n\n#### Perlin noise\nThere are many more distributions, but right now we want to have a look at a particular one. **Perlin noise** is a kind of noise which looks smooth. It looks cool, especially if it's colored. The output may be tweaked to look like clouds, fire, etc. 3D Perlin noise is most widely used to generate random terrain.\n\n#### Algorithm\n... Now you're on your own :). Research how the algorithm is implemented (note that this will require understanding some other basic concepts like vectors and gradients).\n\n#### Your task\n1. Research the problem. See what articles, papers, Python notebooks, demos, etc. other people have created\n2. Create a new notebook and document your findings. Include any assumptions, models, formulas, etc. that you're using\n3. Implement the algorithm. Try not to copy others' work; rather, try to do it on your own using the model you've created\n4. Test and improve the algorithm\n5. (Optional) Create a cool demo :), e.g. 
using Perlin noise to simulate clouds. You can even do an animation (hint: you'll need gradients not only in space but also in time)\n6. Communicate the results (e.g. in the Softuni forum)\n\nHint: [This](http://flafla2.github.io/2014/08/09/perlinnoise.html) is a very good resource. It can show you both how to organize your notebook (which is important) and how to implement the algorithm.\n
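The \"uniform noise\" scalar field described above can be sketched in a few lines of `numpy`; the grid size, value range, and function name here are illustrative choices, not part of the problem statement:

```python
import numpy as np

def uniform_noise(width, height, n_min=0.0, n_max=1.0, seed=None):
    # noise(x, y) = N, with each N drawn independently and uniformly from [n_min, n_max]
    rng = np.random.default_rng(seed)
    return n_min + (n_max - n_min) * rng.random((height, width))

field = uniform_noise(4, 3, n_min=-1.0, n_max=1.0, seed=0)
print(field.shape)                                       # (3, 4)
print(bool(field.min() >= -1.0 and field.max() <= 1.0))  # True
```

Perlin noise then replaces this per-point independence with interpolated gradient values, which is what produces the smooth look.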
```python\n%autosave 0\n```\n\n# MCPC rehearsal problem Oct 25 2017 at UCSY\n\n## Problem E: Stacking Plates\n\n### Input format\n\n- 1st Line: 1 integer, Number of Test Cases; each Test Case has the following data\n + 1 Line: 1 integer, **n** (Number of Stacks)\n + **n** Lines: first integer: **h** (Number of Plates), and **h** integers (Plate size)\n \n\n### Output format\n\nCase (test case number):(Number of Operations)\n\n\n### Sample Input\n\n```\n3\n2\n3 1 2 4\n2 3 5\n3\n4 1 1 1 1\n4 1 1 1 1\n4 1 1 1 1\n2\n15 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3\n15 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3\n```\n\n\n### Sample Output\n\n```\nCase 1:5\nCase 2:2\nCase 3:5\n```\n\n### Explanation of sample I/O\n\n- 3 test cases\n + Stack of (1,2,4) and (3,5)\n + Stack of 3 (1,1,1,1)\n + Stack of 2 (1,1,1,1,1,2,2,2,2,2,3,3,3,3,3)\n \n- 1st case:\nSplit between 2 and 4, 3 and 5, Move 4 on 5, 3 on 4, (1,2) on 3 ==> Total 5 operations\n\n- 2nd case:\nMove 1st stack (1,1,1,1,1) on 2nd stack, move (1st+2nd stack) on 3rd stack ==> Total 2 operations\n\n- 3rd case:\nSplit between 1 and 2 of 1st stack, between 2 and 3 of 2nd stack, move (2,2,2,2,2,3,3,3,3,3) of 1st stack on (3,3,3,3,3) of 2nd, move (1,1,1,1,1,2,2,2,2,2) of 2nd stack on it, move (1,1,1,1,1) on top ==> Total 5 operations\n\n### Specific vs Abstract, Find a General Rule from Detail\n\nWhen solving problems and puzzles where you cannot immediately see how to proceed, it is important to think specifically first and then abstractly. Finding general rules from specific examples gives you a path to the answer.\n\n- Think about simple cases\n- Find a general pattern (idea, rule) from there\n- Prove the rule (if possible)\n- Extend the rule to more complex cases\n\n### How to calculate the number of operations (movements)\n\nIf there are N stacks and each contains just 1 plate (no Split is necessary), (N-1) operations are required. 
(N-1) is the minimum number of operations.\n\nFor each Split operation, one extra Join operation is required to end up with a single stack, so the total number of operations increases by 2 for each Split (let S be the number of Splits). The order of Splits and Joins does not affect the total number of movements: (Split-Split-Join-Join) = (Split-Join-Split-Join). \\begin{equation} \\text{Number Of Movements} = 2S + (N-1) \\end{equation}\n\nRuns of same-size plates in the original stacks (Case 2 and Case 3) can be treated as a single plate. Case 2: 3 stacks of (1); Case 3: 2 stacks of (1,2,3).\n\n### Optimized movement\n\nReverse thinking is sometimes very effective. Construct the final stack and check each boundary between adjacent plates. If a boundary combination already exists in the original stacks, it can be reused (no split is necessary there). **Stack IDs need to be checked; see details later**.\n\n$S = (\\text{Maximum Number Of Splits}) - (\\text{Number Of Reused Boundaries})$\n\n- Case 1: [1,2,3,4,5] is the final form. The boundary [1,2] exists in original Stack-1, so $S=(2+1)-1=2, Movement=2 \\cdot 2+(2-1)=5$\n- Case 2: Convert the original stacks to [1], so $Movement=2 \\cdot 0+(3-1)=2$\n- Case 3: Convert the original stacks to [1,2,3]; the final form is [1,1,2,2,3,3]. The boundaries [1,2] and [2,3] exist, so $S=(2+2)-2=2$\n\n### Sample I/O gives hints\n\nSample Input/Output often gives a great hint for solving problems. 
Equal plate sizes within an original stack cause a problem for the idea above, but a run of equal sizes can be treated as a single plate, so the input data is converted to eliminate consecutive duplicates.\n\n### Stack ID checking\n\n- Assign stack IDs\n- Merge all plates and sort by radius (plate size)\n- Maintain the list of candidates for boundary reuse (top and bottom)\n- Boundary assignment can be done greedily: if there is only 1 combination between the top of a stack and the next, use it\n\n\n\n```python\n#!/usr/bin/python3\n# -*- coding: utf-8 -*-\n\n'''\n 2017 MCPC at UCSY\n Problem-E: Stacking Plates\n'''\nimport sys\n\n\nclass TestCase():\n pass\n\n\nclass Plates():\n # Group of same-radius plates in merged stack\n # id_list: list of stack IDs\n def __init__(self, radius, id_list):\n self.radius = radius\n self.id_list = id_list\n self.top = None\n self.bottom = None\n\n def match_prev(self, prev_bottom):\n self.top = list()\n for stack_id in self.id_list:\n if stack_id in prev_bottom:\n self.top.append(stack_id)\n self.bottom = self.id_list.copy()\n if len(self.top) == 1 and len(self.bottom) != 1:\n self.bottom.remove(self.top[0])\n\n return\n\n def __repr__(self):\n return ('Plates {}: {}, top: {} bottom: {}'.format(self.radius, self.id_list, self.top, self.bottom))\n\n\ndef parse_tc(tc):\n '''\n Input: Test Case\n Update: \n Return: None\n '''\n\n tc.n = int(tc.stdin.readline())\n tc.stacks = list()\n\n for i in range(tc.n):\n stack = tc.stdin.readline().split()[1:] # 2d List, 1st=len\n tc.stacks.append(stack)\n\n return\n\n\ndef reform_stack(org):\n '''\n Input: tc.stacks\n Output: consolidated stacks (no prefix, no duplicate)\n '''\n\n stacks = list()\n stack_id = 0\n\n for stack in org:\n prev_radius = None\n new_stack = list()\n\n for radius in stack:\n if radius != prev_radius:\n new_stack.append((radius, stack_id))\n prev_radius = radius\n\n stacks.append(new_stack)\n stack_id += 1\n\n return stacks\n\n\ndef merge(stacks):\n '''\n stacks: 2D List of tuple(radius, id)\n Return: 1D sorted List\n '''\n\n 
merged_stack = list()\n\n for stack in stacks:\n merged_stack.extend(stack)\n\n merged_stack.sort()\n\n return merged_stack\n\n\ndef stack2plates(merged_stack):\n '''\n merged_stack: List of Tuple(radius, id)\n return: List of Plates\n '''\n\n plates_list = list()\n id_list = list()\n prev_size = None\n\n for plate in merged_stack:\n radius, plate_id = plate\n if radius != prev_size:\n if id_list:\n plates_list.append(Plates(prev_size, id_list))\n id_list = [plate_id]\n else:\n id_list.append(plate_id)\n\n prev_size = radius\n\n if id_list:\n plates_list.append(Plates(radius, id_list))\n\n return plates_list\n\n\ndef max_reuse(plates_list):\n\n reuse = 0\n prev_bottom = list()\n\n for plates in plates_list:\n plates.match_prev(prev_bottom)\n if plates.top: reuse += 1\n prev_bottom = plates.bottom\n #print(plates, file=sys.stderr)\n\n return reuse\n\ndef solve(tc):\n '''\n Input: Test Case\n Return: Number of movements\n '''\n\n parse_tc(tc)\n stacks = reform_stack(tc.stacks)\n #print(stacks)\n num_merge = len(stacks) - 1 ## Join Stacks\n for stack in stacks:\n num_merge += (len(stack) - 1) * 2 ## Split and Join\n\n merged_stack = merge(stacks)\n plates_list = stack2plates(merged_stack) # list of Plates\n\n #return (num_merge - check_bound(merged_stack, stack_bound) * 2)\n return (num_merge - max_reuse(plates_list) * 2)\n\n```\n\n\n```python\n### Main routine\n\ninfile = open('reh_e.in', 'r')\n\ntc = TestCase()\ntc.stdin = infile\ntc.t = int(tc.stdin.readline())\n\nfor i in range(tc.t):\n print('Case {}:{}'.format(i+1, solve(tc)))\n```\n\n Case 1:5\n Case 2:2\n Case 3:5\n Case 4:8\n Case 5:3\n\n\n\n```python\n\n```\n
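As a quick sanity check, the counting formula from above reproduces the sample outputs; the (N, S) pairs below are taken from the worked explanations of the three sample cases:

```python
def movements(n_stacks, splits):
    # Total operations: each split costs one extra join, plus (N - 1) final joins
    return 2 * splits + (n_stacks - 1)

# (N, S) pairs from the three sample cases analysed above
print(movements(2, 2))  # Case 1 -> 5
print(movements(3, 0))  # Case 2 -> 2
print(movements(2, 2))  # Case 3 -> 5
```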
+ This notebook is part of lecture 10 *The four fundamental subspaces* in the OCW MIT course 18.06 by Prof Gilbert Strang [1]\n+ Created by me, Dr Juan H Klopper\n + Head of Acute Care Surgery\n + Groote Schuur Hospital\n + University Cape Town\n + Email me with your thoughts, comments, suggestions and corrections \n
Linear Algebra OCW MIT18.06 IPython notebook [2] study notes by Dr Juan H Klopper is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.\n\n+ [1] OCW MIT 18.06\n+ [2] Fernando Pérez, Brian E. Granger, IPython: A System for Interactive Scientific Computing, Computing in Science and Engineering, vol. 9, no. 3, pp. 21-29, May/June 2007, doi:10.1109/MCSE.2007.53. URL: http://ipython.org\n\n\n```python\nfrom IPython.core.display import HTML, Image\ncss_file = 'style.css'\nHTML(open(css_file, 'r').read())\n```\n\n\n```python\n#import numpy as np\nfrom sympy import init_printing, Matrix, symbols\n#import matplotlib.pyplot as plt\n#import seaborn as sns\n#from IPython.display import Image\nfrom warnings import filterwarnings\n\ninit_printing(use_latex = 'mathjax') # Pretty Latex printing to the screen\n#%matplotlib inline\nfilterwarnings('ignore')\n```\n\n# The four fundamental subspaces\n# Introducing the matrix space\n\n## The four fundamental subspaces\n\n* Columnspace, C(A)\n* Rowspace\n * All linear combinations of the rows\n * All the linear combinations of the columns of Aᵀ, C(Aᵀ)\n* Nullspace, N(A)\n* Nullspace of Aᵀ, N(Aᵀ) (the left nullspace of A)\n\n## Where are these spaces for a matrix A of size m × n?\n\n* C(A) is in ℝᵐ\n* N(A) is in ℝⁿ\n* C(Aᵀ) is in ℝⁿ\n* N(Aᵀ) is in ℝᵐ\n\n## Calculating basis and dimension\n\n### For C(A)\n\n* The basis vectors are the pivot columns\n* The dimension is the rank *r*\n\n### For N(A)\n\n* The basis vectors are the special solutions (one for every free variable, *n* - *r* of them)\n* The dimension is *n* - *r*\n\n### For C(Aᵀ)\n\n* If A undergoes row reduction to row echelon form (R), then C(R) ≠ C(A), but rowspace(R) = rowspace(A) (or C(Rᵀ) = C(Aᵀ))\n* A basis for the rowspace of A (or R) is the first *r* rows of R\n * So we row reduce A and take the pivot rows and transpose them\n* The dimension is also equal to the rank *r*\n\n### For N(Aᵀ)\n\n* It is also called the left nullspace, because it ends up
on the left (as seen below)\n* Here we have Aᵀ**y** = **0**\n * **y**ᵀ(Aᵀ)ᵀ = **0**ᵀ\n * **y**ᵀA = **0**ᵀ\n * The basis is (again) given by the pivot columns of Aᵀ (after row reduction)\n* The dimension is *m* - *r*\n\n## Example problems\n\n### Consider this example matrix and calculate the bases and dimension for all four fundamental spaces\n\n\n```python\nA = Matrix([[1, 2, 3, 1], [1, 1, 2, 1], [1, 2, 3, 1]]) # We note that rows 1 and 3 are identical and that\n# column 3 is the addition of columns 1 and 2, and column 1 equals column 4\nA\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & 2 & 3 & 1\\\\1 & 1 & 2 & 1\\\\1 & 2 & 3 & 1\\end{matrix}\\right]$$\n\n\n\n#### Columnspace\n\n\n```python\nA.rref() # Remember that the columnspace contains the pivot columns as a basis\n```\n\n\n\n\n$$\\begin{pmatrix}\\left[\\begin{matrix}1 & 0 & 1 & 1\\\\0 & 1 & 1 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right], & \\begin{bmatrix}0, & 1\\end{bmatrix}\\end{pmatrix}$$\n\n\n\n* The basis is thus:\n$$ \\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\\\ 0 & 0 \\end{bmatrix} $$\n* It is indeed in ℝ³ (rows of A = *m* = 3, i.e. 
each column vector is in 3-space or has 3 components)\n\n* The rank (number of pivot columns) is 2, thus dim(A) = 2\n\n#### Nullspace\n\n\n```python\nA.nullspace() # Calculating the nullspace vectors\n```\n\n\n\n\n$$\\begin{bmatrix}\\left[\\begin{matrix}-1\\\\-1\\\\1\\\\0\\end{matrix}\\right], & \\left[\\begin{matrix}-1\\\\0\\\\0\\\\1\\end{matrix}\\right]\\end{bmatrix}$$\n\n\n\n* So, indeed the basis is in ℝ⁴ (A has *n* = 4 columns)\n\n\n```python\nA.rref() # No pivots for columns 3 and 4\n```\n\n\n\n\n$$\\begin{pmatrix}\\left[\\begin{matrix}1 & 0 & 1 & 1\\\\0 & 1 & 1 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right], & \\begin{bmatrix}0, & 1\\end{bmatrix}\\end{pmatrix}$$\n\n\n\n* The dimension is two (there are 2 column vectors, which is indeed *n* - *r* = 4 - 2 = 2)\n\n#### Rowspace C(Aᵀ)\n\n* So we are looking for the pivot columns of Aᵀ\n\n\n```python\nA.rref()\n```\n\n\n\n\n$$\\begin{pmatrix}\\left[\\begin{matrix}1 & 0 & 1 & 1\\\\0 & 1 & 1 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right], & \\begin{bmatrix}0, & 1\\end{bmatrix}\\end{pmatrix}$$\n\n\n\n* The pivot rows are rows 1 and 2\n* We take them and transpose them\n$$ \\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\\\ 1 & 1 \\\\ 1 & 0 \\end{bmatrix} $$\n\n* As stated above, it is in ℝ⁴\n\n* The dimension is *r* = 2\n\n#### Nullspace of Aᵀ\n\n\n```python\nA.transpose().nullspace()\n```\n\n\n\n\n$$\\begin{bmatrix}\\left[\\begin{matrix}-1\\\\0\\\\1\\end{matrix}\\right]\\end{bmatrix}$$\n\n\n\n* Which is indeed in ℝ³\n\n* The dimension is 1, since *m* - *r* = 3 - 2 = 1 (remember that the rank is the number of pivot columns)\n\n### Consider this example matrix (in LU form) and calculate the bases and dimension for all four fundamental spaces\n\n\n```python\nL = Matrix([[1, 0, 0], [2, 1, 0], [-1, 0, 1]])\nU = Matrix([[5, 0, 3], [0, 1, 1], [0, 0, 0]])\nA = L * U\nL, U, A\n```\n\n\n\n\n$$\\begin{pmatrix}\\left[\\begin{matrix}1 & 0 & 0\\\\2 & 1 & 0\\\\-1 & 0 &
1\\end{matrix}\\right], & \\left[\\begin{matrix}5 & 0 & 3\\\\0 & 1 & 1\\\\0 & 0 & 0\\end{matrix}\\right], & \\left[\\begin{matrix}5 & 0 & 3\\\\10 & 1 & 7\\\\-5 & 0 & -3\\end{matrix}\\right]\\end{pmatrix}$$\n\n\n\n#### Columnspace of A\n\n\n```python\nA.rref()\n```\n\n\n\n\n$$\\begin{pmatrix}\\left[\\begin{matrix}1 & 0 & \\frac{3}{5}\\\\0 & 1 & 1\\\\0 & 0 & 0\\end{matrix}\\right], & \\begin{bmatrix}0, & 1\\end{bmatrix}\\end{pmatrix}$$\n\n\n\n* The basis is thus:\n$$ \\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\\\ 0 & 0 \\end{bmatrix} $$\n* Another basis would be the pivot columns of L:\n$$ \\begin{bmatrix} 1 & 0 \\\\ 2 & 1 \\\\ -1 & 0 \\end{bmatrix} $$\n* It is in ℝ³, since *m* = 3\n* It has a rank of 2 (two pivot columns)\n* Since the dimension of the columnspace is equal to the rank, dim(A) = 2\n * Note that it is also equal to the number of pivot columns in U\n\n#### Nullspace of A\n\n\n```python\nA.nullspace()\n```\n\n\n\n\n$$\\begin{bmatrix}\\left[\\begin{matrix}- \\frac{3}{5}\\\\-1\\\\1\\end{matrix}\\right]\\end{bmatrix}$$\n\n\n\n* The nullspace is in ℝ³, since *n* = 3\n* The basis is the special solution(s), which is one column vector for every free variable\n * Since we only have a single free variable, we have a single nullspace column vector\n * This fits in with the fact that it needs to be *n* - *r*\n * It can also be calculated by taking U, setting the free variable to 1 and solving for the other rows by setting each equal to zero\n* The dimension of the nullspace is also 1 (*n* - *r*, i.e. 
a single column)\n * It is also the number of free variables\n\n#### The rowspace\n\n* This is the columnspace of Aᵀ\n* Don't take the transpose first!\n* Row reduce, identify the rows with pivots and transpose them\n\n\n```python\nA.rref()\n```\n\n\n\n\n$$\\begin{pmatrix}\\left[\\begin{matrix}1 & 0 & \\frac{3}{5}\\\\0 & 1 & 1\\\\0 & 0 & 0\\end{matrix}\\right], & \\begin{bmatrix}0, & 1\\end{bmatrix}\\end{pmatrix}$$\n\n\n\n* The basis can also be written down by identifying the rows with pivots in U and writing them down as columns (getting their transpose)\n$$ \\begin{bmatrix} 5 & 0 \\\\ 0 & 1 \\\\ 3 & 1 \\end{bmatrix} $$\n* It is in ℝ³, since *n* = 3\n* The rank *r* = 2, which is equal to the dimension, i.e. dim(Aᵀ) = 2\n\n#### The nullspace of Aᵀ\n\n\n```python\nA.transpose().nullspace()\n```\n\n\n\n\n$$\\begin{bmatrix}\\left[\\begin{matrix}1\\\\0\\\\1\\end{matrix}\\right]\\end{bmatrix}$$\n\n\n\n* It is indeed in ℝ³, since *m* = 3\n* A good way to do it is to take the inverse of L, such that L⁻¹A = U\n * Now the free variable row in U is row three\n * Take the corresponding row in L⁻¹ and transpose it\n* The dimension is *m* - *r* = 3 - 2 = 1\n\n## The matrix space\n\n* The square matrices also form a 'vector' space, because they obey the vector space rules of addition and scalar multiplication\n* Subspaces (of the same) would include\n * Upper triangular matrices\n * Symmetric matrices\n
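The dimension counts above can be verified directly with `sympy`; this is just a quick check using the first example matrix, with rank-nullity giving dim N(A) = n - r and dim N(Aᵀ) = m - r:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 1], [1, 1, 2, 1], [1, 2, 3, 1]])
m, n = A.shape
r = A.rank()

dim_col = r                       # dim C(A)
dim_null = len(A.nullspace())     # dim N(A)
dim_row = A.T.rank()              # dim C(A^T)
dim_left = len(A.T.nullspace())   # dim N(A^T)

print(dim_col, dim_null, dim_row, dim_left)  # 2 2 2 1
assert dim_null == n - r and dim_left == m - r
```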
```python\nfrom IPython.core.display import display_html\nfrom urllib.request import urlopen\n\ncssurl = 'http://j.mp/1DnuN9M'\ndisplay_html(urlopen(cssurl).read(), raw=True)\n```\n\n\n\n\n\n\n\n\n\n\n# Examen Final\n\n## Modelado de Carro Pendulo\n\n\n\n### Variables de estado, posiciones y velocidades\n\nUtilizaremos el enfoque de Euler-Lagrange, el cual nos dice que el primer paso para conseguir el modelo matem\u00e1tico es calcular el Lagrangiano $L$ del sistema, definido por:\n\n$$\nL = K - U\n$$\n\nen donde $K$ es la energ\u00eda cin\u00e9tica del sistema y $U$ es la energia potencial del sistema.\n\nEl estado del sistema estar\u00e1 descrito por una distancia $x$ del centro del carro a un marco de referencia y el angulo $\\theta$ del pendulo con respecto a la horizontal.\n\n$$\nq = \\begin{pmatrix} x \\\\ \\theta \\end{pmatrix} = \\begin{pmatrix} q_1 \\\\ q_2 \\end{pmatrix} \\implies\n\\dot{q} = \\begin{pmatrix} \\dot{x} \\\\
\\dot{\\theta} \\end{pmatrix} = \\begin{pmatrix} \\dot{q}_1 \\\\ \\dot{q}_2 \\end{pmatrix} \\implies\n\\ddot{q} = \\begin{pmatrix} \\ddot{x} \\\\ \\ddot{\\theta} \\end{pmatrix} = \\begin{pmatrix} \\ddot{q}_1 \\\\ \\ddot{q}_2 \\end{pmatrix}\n$$\n\n### Energ\u00eda cin\u00e9tica\n\nPara calcular la energ\u00eda cin\u00e9tica del sistema, obtenemos $K_1$ y $K_2$ asociadas al carro y al pendulo, en donde $K_i = \\frac{1}{2} m_i v_i^2$, por lo que tenemos:\n\n$$\nK_1 = \\frac{1}{2} m_1 v_1^2 = \\frac{1}{2} m_1 \\dot{x}^2 = \\frac{1}{2} m_1 \\dot{q}_1^2\n$$\n\n$$\nK_2 = \\frac{1}{2} m_2 v_2^2 = \\frac{1}{2} m_2 \\left[ \\left( \\dot{x} + \\dot{x}_2 \\right)^2 + \\dot{y}_2^2 \\right]\n$$\n\ncon $x_2 = l \\cos{\\theta}$ y $y_2 = l \\sin{\\theta}$, por lo que sus derivadas son $\\dot{x}_2 = -\\dot{\\theta} l \\sin{\\theta}$ y $\\dot{y}_2 = \\dot{\\theta} l \\cos{\\theta}$, por lo que $K_2$ queda:\n\n$$\n\\begin{align}\nK_2 &= \\frac{1}{2} m_2 \\left[ \\left( \\dot{x} -\\dot{\\theta} l \\sin{\\theta} \\right)^2 + \\left( \\dot{\\theta} l \\cos{\\theta} \\right)^2 \\right] \\\\\n&= \\frac{1}{2} m_2 \\left[ \\left( \\dot{x} -\\dot{\\theta} l \\sin{\\theta} \\right)^2 + \\left( \\dot{\\theta} l \\cos{\\theta} \\right)^2 \\right] \\\\\n&= \\frac{1}{2} m_2 \\left[ \\left( \\dot{q}_1 -\\dot{q}_2 l \\sin{q_2} \\right)^2 + \\left( \\dot{q}_2 l \\cos{q_2} \\right)^2 \\right]\n\\end{align}\n$$\n\nEntonces la energ\u00eda cin\u00e9tica ser\u00e1:\n\n$$\nK = \\frac{1}{2} \\left[ (m_1 + m_2) \\dot{q}_1^2 + m_2 l^2 \\dot{q}_2^2 - 2 m_2 l \\sin{q_2} \\dot{q}_1 \\dot{q}_2 \\right]\n$$\n\nLo cual puede ser escrito como una forma matricial cuadratica:\n\n$$\nK = \\frac{1}{2}\n\\begin{pmatrix}\n\\dot{q}_1 & \\dot{q}_2\n\\end{pmatrix}\n\\begin{pmatrix}\nm_1 + m_2 & -m_2 l \\sin{q_2} \\\\\n-m_2 l \\sin{q_2} & m_2 l^2\n\\end{pmatrix}\n\\begin{pmatrix}\n\\dot{q}_1 \\\\\n\\dot{q}_2\n\\end{pmatrix}\n$$\n\nEn este punto, introducimos una variable intermedia, tan solo para reducir un poco la 
notaci\u00f3n:\n\n$$\n\\lambda = l \\sin{q_2}\n$$\n\ny su derivada con respecto al tiempo, la representamos como:\n\n$$\n\\dot{\\lambda} = \\lambda' \\dot{q}_2 \\implies \\lambda' = l \\cos{q_2}\n$$\n\npor lo que la energ\u00eda cinetica queda como:\n\n$$\nK = \\frac{1}{2}\n\\begin{pmatrix}\n\\dot{q}_1 & \\dot{q}_2\n\\end{pmatrix}\n\\begin{pmatrix}\nm_1 + m_2 & -m_2 \\lambda \\\\\n-m_2 \\lambda & m_2 l^2\n\\end{pmatrix}\n\\begin{pmatrix}\n\\dot{q}_1 \\\\\n\\dot{q}_2\n\\end{pmatrix}\n$$\n\nen donde el termino matricial es la matriz de masa $M(q)$ y $K(q, \\dot{q}) = \\dot{q}^T M(q) \\dot{q}$.\n\n### Energ\u00eda potencial\n\nPor otro lado, para calcular la energ\u00eda potencial del sistema, tenemos que:\n\n$$\nU_1 = m_1 g h_1 = 0\n$$\n\n$$\nU_2 = m_2 g h_2 = m_2 g l \\sin{q_1} = m_2 g \\lambda\n$$\n\npor lo que la energ\u00eda potencial del sistema ser\u00e1:\n\n$$\nU = m_2 g \\lambda\n$$\n\n### Lagrangiano\n\nEl Lagrangiano del sistema, esta dado por la expresi\u00f3n:\n\n$$\nL = K - U\n$$\n\npor lo que solo queda sumar las dos expresiones y obtener las condiciones de optimalidad para la energ\u00eda del sistema por medio de la ecuaci\u00f3n de Euler-Lagrange\n\n### Euler-Lagrange\n\nCuando aplicamos la primer condici\u00f3n de optimalidad al Lagrangiano $L(t, q(t), \\dot{q}(t)) = K(q(t), \\dot{q}(t)) - U(q)$, tenemos la ecuaci\u00f3n de Euler-Lagrage, la cual nos dice que:\n\n$$\n\\frac{d}{dt} L_{\\dot{q}} - L_q = 0\n$$\n\npor lo que debemos encontrar la derivada del Lagrangiano, con respecto a $q$, $\\dot{q}$ y derivar esta ultima con respecto al tiempo. 
Empecemos con la derivada con respecto a $q$:\n\n$$\nL_q = K_q - U_q\n$$\n\nen donde:\n\n$$\n\\begin{align}\nK_q &= \\frac{\\partial}{\\partial q} \\left\\{ \\frac{1}{2} \\left[ (m_1 + m_2) \\dot{q}_1^2 + m_2 l^2 \\dot{q}_2^2 - 2 m_2 \\lambda \\dot{q}_1 \\dot{q}_2 \\right] \\right\\} \\\\\n&= \\frac{1}{2}\n\\begin{pmatrix}\n\\frac{\\partial}{\\partial q_1} \\left\\{ \\left[ (m_1 + m_2) \\dot{q}_1^2 + m_2 l^2 \\dot{q}_2^2 - 2 m_2 \\lambda \\dot{q}_1 \\dot{q}_2 \\right] \\right\\} \\\\\n\\frac{\\partial}{\\partial q_2} \\left\\{ \\left[ (m_1 + m_2) \\dot{q}_1^2 + m_2 l^2 \\dot{q}_2^2 - 2 m_2 \\lambda \\dot{q}_1 \\dot{q}_2 \\right] \\right\\}\n\\end{pmatrix} \\\\\n&= \\frac{1}{2}\n\\begin{pmatrix}\n0 \\\\\n- 2 m_2 \\lambda' \\dot{q}_1 \\dot{q}_2\n\\end{pmatrix} = - m_2 \\lambda'\n\\begin{pmatrix}\n0 & 0 \\\\\n\\dot{q}_2 & 0\n\\end{pmatrix}\n\\begin{pmatrix}\n\\dot{q}_1 \\\\\n\\dot{q}_2\n\\end{pmatrix}\n\\end{align}\n$$\n\ny la derivada de la energ\u00eda potencial con respecto a $q$:\n\n$$\nU_q = \\frac{\\partial}{\\partial q} \\left\\{ m_2 g \\lambda \\right\\} =\n\\begin{pmatrix}\n\\frac{\\partial}{\\partial q_1} \\left\\{ m_2 g \\lambda \\right\\} \\\\\n\\frac{\\partial}{\\partial q_2} \\left\\{ m_2 g \\lambda \\right\\}\n\\end{pmatrix} =\n\\begin{pmatrix}\n0 \\\\\nm_2 g \\lambda'\n\\end{pmatrix}\n$$\n\nAhora obtenemos la derivada con respecto a $\\dot{q}$:\n\n$$\nL_{\\dot{q}} = K_{\\dot{q}} - U_{\\dot{q}}\n$$\n\nen donde:\n\n$$\nK_{\\dot{q}} = \\frac{1}{2} \\frac{\\partial}{\\partial \\dot{q}} \\left\\{ \\dot{q}^T M(q) \\dot{q} \\right\\} = M(q) \\dot{q}\n$$\n\n$$\nU_{\\dot{q}} = 0\n$$\n\nDerivando con respeto al tiempo estas ultimas expresiones, obtenemos:\n\n$$\n\\frac{d}{dt} K_{\\dot{q}} = \\dot{M}(q, \\dot{q}) \\dot{q} + M(q) \\ddot{q}\n$$\n\nen donde\n\n$$\n\\begin{align}\n\\dot{M}(q, \\dot{q}) &= \\frac{d}{dt} M(q) = \\frac{d}{dt}\n\\begin{pmatrix}\nm_1 + m_2 & -m_2 \\lambda \\\\\n-m_2 \\lambda & m_2 l^2\n\\end{pmatrix} \\\\\n&=\n\\begin{pmatrix}\n0 & -m_2 
\\lambda' \\dot{q}_2 \\\\\n-m_2 \\lambda' \\dot{q}_2 & 0\n\\end{pmatrix} = -m_2 \\lambda'\n\\begin{pmatrix}\n0 & \\dot{q}_2 \\\\\n\\dot{q}_2 & 0\n\\end{pmatrix}\n\\end{align}\n$$\n\ny ya tenemos todos los elementos que integran nuestra ecuaci\u00f3n de Euler-Lagrange:\n\n$$\n\\begin{align}\n\\frac{d}{dt} L_{\\dot{q}} - L_q &= 0 \\\\\nM(q) \\ddot{q} + \\dot{M}(q, \\dot{q}) \\dot{q} - K_q + U_q &= 0\n\\end{align}\n$$\n\nTan solo cabe recalcar que el segundo y tercer termino se pueden reducir a uno solo:\n\n$$\n\\begin{align}\n\\dot{M}(q, \\dot{q}) \\dot{q} - K_q &= -m_2 \\lambda'\n\\begin{pmatrix}\n0 & \\dot{q}_2 \\\\\n\\dot{q}_2 & 0\n\\end{pmatrix}\n\\begin{pmatrix}\n\\dot{q}_1 \\\\\n\\dot{q}_2\n\\end{pmatrix} + m_2 \\lambda'\n\\begin{pmatrix}\n0 & 0 \\\\\n\\dot{q}_2 & 0\n\\end{pmatrix}\n\\begin{pmatrix}\n\\dot{q}_1 \\\\\n\\dot{q}_2\n\\end{pmatrix} \\\\\n&= -m_2 \\lambda'\n\\begin{pmatrix}\n0 & \\dot{q}_2 \\\\\n0 & 0\n\\end{pmatrix}\n\\begin{pmatrix}\n\\dot{q}_1 \\\\\n\\dot{q}_2\n\\end{pmatrix} = C(q, \\dot{q})\\dot{q}\n\\end{align}\n$$\n\nPor lo que finalmente para el sistema libre tenemos:\n\n$$\nM(q) \\ddot{q} + C(q, \\dot{q}) \\dot{q} + U_q = 0\n$$\n\ncon:\n\n$$\nM(q) =\n\\begin{pmatrix}\nm_1 + m_2 & -m_2 \\lambda \\\\\n-m_2 \\lambda & m_2 l^2\n\\end{pmatrix} \\quad\nC(q, \\dot{q}) = -m_2 \\lambda'\n\\begin{pmatrix}\n0 & \\dot{q}_2 \\\\\n0 & 0\n\\end{pmatrix} \\quad\nU_q =\n\\begin{pmatrix}\n0 \\\\\nm_2 g \\lambda'\n\\end{pmatrix}\n$$\n\n### Se\u00f1ales de control\n\nY para el sistema bajo control tan solo tenemos que agregar la se\u00f1al de control asociada al primer grado de libertad, por lo que tendremos:\n\n$$\nM(q) \\ddot{q} + C(q, \\dot{q}) \\dot{q} + U_q = G u\n$$\n\ncon $G = \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix}$.\n\n### Simulaci\u00f3n\n\n\n```python\ndef f(estado, tiempo):\n from numpy import zeros, matrix, sin, cos\n \n m1 = 0.2\n m2 = 0.4\n g = 9.81\n l = 0.6\n \n q1, q2, q1p, q2p = estado\n \n q = matrix([[q1], [q2]])\n qp = matrix([[q1p], 
[q2p]])\n \n \u03bb = l*sin(q2)\n \u03bb\u0307 = l*cos(q2)\n \n M = matrix([[m1 + m2, -m2*\u03bb], [-m2*\u03bb, m2*l**2]])\n C = -m2*\u03bb\u0307*matrix([[0, q2p], [0, 0]])\n U = matrix([[0], [m2*g*\u03bb\u0307]])\n \n qpp = M.I*(-C*qp - U)\n \n dydx = zeros(4)\n \n dydx[0] = q1p\n dydx[1] = q2p\n dydx[2] = qpp[0]\n dydx[3] = qpp[1]\n \n return dydx\n```\n\n\n```python\nfrom scipy.integrate import odeint\nfrom numpy import linspace, arange, sin, cos\nfrom matplotlib.pyplot import figure, style, plot\nfrom matplotlib import animation\nfrom matplotlib.patches import Rectangle, Circle\n```\n\n\n```python\nts = linspace(0, 4, 100)\nestado_inicial = [0, 0.1, 0, 0]\n```\n\n\n```python\nestados = odeint(f, estado_inicial, ts)\nq1, q2 = estados[:, 0], estados[:, 1]\n```\n\n\n```python\nfig = figure(figsize=(8, 6))\n\nax = fig.add_subplot(111, autoscale_on=False,\n xlim=(-0.8333, 1.8333), ylim=(-1.25, 0.75))\n\nax.axes.get_xaxis().set_visible(False)\nax.axes.get_yaxis().set_visible(False)\n\nax.axes.spines[\"right\"].set_color(\"none\")\nax.axes.spines[\"left\"].set_color(\"none\")\nax.axes.spines[\"top\"].set_color(\"none\")\nax.axes.spines[\"bottom\"].set_color(\"none\")\n\nax.set_axis_bgcolor('#F2F1EC')\n\nlinea, = ax.plot([], [], 'o-', lw=1.5, color='#393F40')\ncarro = Rectangle((10,10), 0.8, 0.35, lw=1.5, fc='#E5895C')\nguia = Rectangle((10, 10), 2.6666, 0.1, lw=1.5, fc='#A4B187')\npendulo = Circle((10, 10), 0.125, lw=1.5, fc='#F3D966')\n\ndef init():\n linea.set_data([], [])\n guia.set_xy((-0.8333, -0.05))\n carro.set_xy((-0.4, -0.175))\n pendulo.center = (1, 0)\n ax.add_patch(guia)\n ax.add_patch(carro)\n ax.add_patch(pendulo)\n return linea, carro, pendulo\n\ndef animate(i):\n xs = [q1[i], q1[i] + cos(q2[i])]\n ys = [0, sin(q2[i])]\n\n linea.set_data(xs, ys)\n carro.set_xy((xs[0] - 0.4, ys[0] - 0.175))\n pendulo.center = (xs[1], ys[1])\n return linea, carro, pendulo\n\nani = animation.FuncAnimation(fig, animate, arange(1, len(q1)), interval=25,\n blit=True, 
init_func=init)\n\nani.save('./imagenes/carropendulolibre.gif', writer='imagemagick');\n```\n\n\n\nEspero te hayas divertido con esta larga explicaci\u00f3n y al final sepas un truco mas.\n\nSi deseas compartir este Notebook de IPython utiliza la siguiente direcci\u00f3n:\n\nhttp://bit.ly/1M2tenc\n\no bien el siguiente c\u00f3digo QR:\n\n\n\n\n```python\n# Codigo para generar codigo :)\nfrom qrcode import make\nimg = make(\"http://bit.ly/1M2tenc\")\nimg.save(\"./codigos/carropendulo.jpg\")\n```\n", "meta": {"hexsha": "e5982335d8730bfdfba9e478e597a2731abe3faa", "size": 21253, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "IPythonNotebooks/Control Optimo/Examen Final/Carro Pendulo.ipynb", "max_stars_repo_name": "robblack007/DCA", "max_stars_repo_head_hexsha": "0ea5f8b613e2dabe1127b857c7bfe9be64c52d20", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "IPythonNotebooks/Control Optimo/Examen Final/Carro Pendulo.ipynb", "max_issues_repo_name": "robblack007/DCA", "max_issues_repo_head_hexsha": "0ea5f8b613e2dabe1127b857c7bfe9be64c52d20", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "IPythonNotebooks/Control Optimo/Examen Final/Carro Pendulo.ipynb", "max_forks_repo_name": "robblack007/DCA", "max_forks_repo_head_hexsha": "0ea5f8b613e2dabe1127b857c7bfe9be64c52d20", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-20T12:44:13.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-20T12:44:13.000Z", "avg_line_length": 28.8371777476, "max_line_length": 216, "alphanum_fraction": 0.4552298499, "converted": true, "num_tokens": 4736, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.48047867804790706, "lm_q2_score": 0.28140560140262283, "lm_q1q2_score": 0.13520939135720847}} {"text": "```python\n# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)\n\n# Toggle cell visibility\n\nfrom IPython.display import HTML\ntag = HTML('''\nPromijeni vidljivost ovdje.''')\ndisplay(tag)\n\n# Hide the code completely\n\n# from IPython.display import HTML\n# tag = HTML('''''')\n# display(tag)\n```\n\n\n\nPromijeni vidljivost ovdje.\n\n\n\n```python\n# Examples: \n# Factored form: 1/(x**2*(x**2 + 1))\n# Expanded form: 1/(x**4+x**2)\n\nimport sympy as sym\nfrom IPython.display import Latex, display, Markdown, Javascript, clear_output\nfrom ipywidgets import widgets, Layout # Interactivity module\n```\n\n## Rastav na parcijalne razlomke\n\nKada se Laplaceova transformacija koristi za analizu sustava, Laplaceova transformacija izlaznog signala dobiva se u obliku umno\u0161ka prijenosne funkcije i Laplaceove transformacije ulaznog signala. Rezultat ovog mno\u017eenja \u010desto je slo\u017een za shvatiti. Da bismo izvr\u0161ili inverznu Laplaceovu transformaciju, prvo se izvodi postupak rastava na parcijalne razlomke. Ovaj primjer demonstrira taj postupak.\n\n---\n\n### Kako koristiti ovaj interaktivni primjer?\nIzaberite jednu izme\u0111u ponu\u0111enih opcija: *Unos funkcije* ili *Unos koeficijenata polinoma*.\n\n1. *Unos funkcije*:\n * Primjer: Za unos funkcije $\\frac{1}{x^2(x^2 + 1)}$ (faktorizirani oblik) upi\u0161ite 1/(x\\*\\*2\\*(x\\*\\*2 + 1)); za unos iste funkcije u pro\u0161irenom obliku ($\\frac{1}{x^4+x^2}$) upi\u0161ite 1/(x\\*\\*4+x\\*\\*2).\n\n2. 
*Unos koeficijenata polinoma*:\n * Pomo\u0107u kliza\u010da odaberite red brojnika i nazivnika \u017eeljene funkcije.\n * Upi\u0161ite koeficijente za brojnik i nazivnik u odgovaraju\u0107a unosna polja te kliknite na *Potvrdi*.\n\n\n```python\n## System selector buttons\nstyle = {'description_width': 'initial'}\ntypeSelect = widgets.ToggleButtons(\n options=[('Unos funkcije', 0), ('Unos koeficijenata polinoma', 1),],\n description='Odaberi: ',style={'button_width':'230px'})\n\nbtnReset=widgets.Button(description=\"Reset\")\n\n# function\ntextbox=widgets.Text(description=('Unos funkcije:'),style=style)\nbtnConfirmFunc=widgets.Button(description=\"Potvrdi\") # ex btnConfirm\n\n# poly\nbtnConfirmPoly=widgets.Button(description=\"Potvrdi\") # ex btn\n\ndisplay(typeSelect)\n\ndef on_button_clickedReset(ev):\n display(Javascript(\"Jupyter.notebook.execute_cells_below()\"))\n\ndef on_button_clickedFunc(ev):\n eq = sym.sympify(textbox.value)\n\n if eq==sym.factor(eq):\n display(Markdown('Ulazna funkcija $%s$ je zapisana u faktoriziranom obliku. ' %sym.latex(eq) + 'Njezina pro\u0161irena verzija je $%s$.' %sym.latex(sym.expand(eq))))\n \n else:\n display(Markdown('Ulazna funkcija $%s$ je zapisana u pro\u0161irenom obliku. ' %sym.latex(eq) + 'Njezin faktorizirani oblik je $%s$.' 
%sym.latex(sym.factor(eq))))\n \n display(Markdown('Rezultat rastava na parcijalne razlomke: $%s$' %sym.latex(sym.apart(eq)) + '.'))\n display(btnReset)\n \ndef transfer_function(num,denom):\n num = np.array(num, dtype=np.float64)\n denom = np.array(denom, dtype=np.float64)\n len_dif = len(denom) - len(num)\n if len_dif<0:\n temp = np.zeros(abs(len_dif))\n denom = np.concatenate((temp, denom))\n transferf = np.vstack((num, denom))\n elif len_dif>0:\n temp = np.zeros(len_dif)\n num = np.concatenate((temp, num))\n transferf = np.vstack((num, denom))\n return transferf\n\ndef f(orderNum, orderDenom):\n global text1, text2\n text1=[None]*(int(orderNum)+1)\n text2=[None]*(int(orderDenom)+1)\n display(Markdown('2. Upi\u0161ite koeficijente za brojnik.'))\n for i in range(orderNum+1):\n text1[i]=widgets.Text(description=(r'a%i'%(orderNum-i)))\n display(text1[i])\n display(Markdown('3. Upi\u0161ite koeficijente za nazivnik.')) \n for j in range(orderDenom+1):\n text2[j]=widgets.Text(description=(r'b%i'%(orderDenom-j)))\n display(text2[j])\n global orderNum1, orderDenom1\n orderNum1=orderNum\n orderDenom1=orderDenom\n\ndef on_button_clickedPoly(btn):\n clear_output()\n global num,denom\n enacbaNum=\"\"\n enacbaDenom=\"\"\n num=[None]*(int(orderNum1)+1)\n denom=[None]*(int(orderDenom1)+1)\n for i in range(int(orderNum1)+1):\n if text1[i].value=='' or text1[i].value=='Upi\u0161ite koeficijent':\n text1[i].value='Upi\u0161ite koeficijent'\n else:\n try:\n num[i]=int(text1[i].value)\n except ValueError:\n if text1[i].value!='' or text1[i].value!='Upi\u0161ite koeficijent':\n num[i]=sym.var(text1[i].value)\n \n for i in range (len(num)-1,-1,-1):\n if i==0:\n enacbaNum=enacbaNum+str(num[len(num)-i-1])\n elif i==1:\n enacbaNum=enacbaNum+\"+\"+str(num[len(num)-i-1])+\"*x+\"\n elif i==int(len(num)-1):\n enacbaNum=enacbaNum+str(num[0])+\"*x**\"+str(len(num)-1)\n else:\n enacbaNum=enacbaNum+\"+\"+str(num[len(num)-i-1])+\"*x**\"+str(i) \n \n for j in range(int(orderDenom1)+1):\n if 
text2[j].value=='' or text2[j].value=='Upi\u0161ite koeficijent':\n text2[j].value='Upi\u0161ite koeficijent'\n else:\n try:\n denom[j]=int(text2[j].value)\n except ValueError:\n if text2[j].value!='' or text2[j].value!='Upi\u0161ite koeficijent':\n denom[j]=sym.var(text2[j].value)\n \n for i in range (len(denom)-1,-1,-1):\n if i==0:\n enacbaDenom=enacbaDenom+\"+\"+str(denom[len(denom)-i-1])\n elif i==1:\n enacbaDenom=enacbaDenom+\"+\"+str(denom[len(denom)-i-1])+\"*x\"\n elif i==int(len(denom)-1):\n enacbaDenom=enacbaDenom+str(denom[0])+\"*x**\"+str(len(denom)-1)\n else:\n enacbaDenom=enacbaDenom+\"+\"+str(denom[len(denom)-i-1])+\"*x**\"+str(i)\n \n funcSym=sym.sympify('('+enacbaNum+')/('+enacbaDenom+')')\n\n DenomSym=sym.sympify(enacbaDenom)\n NumSym=sym.sympify(enacbaNum)\n DenomSymFact=sym.factor(DenomSym);\n funcFactSym=NumSym/DenomSymFact;\n \n if DenomSym==sym.expand(enacbaDenom):\n if DenomSym==DenomSymFact:\n display(Markdown('Zadana funkcija je: $%s$. Brojnik se ne mo\u017ee faktorizirati.' %sym.latex(funcSym)))\n else:\n display(Markdown('Zadana funkcija je: $%s$. Brojnik se ne mo\u017ee faktorizirati. Ista funkcija s faktoriziranim nazivnikom mo\u017ee se zapisati kao: $%s$.' %(sym.latex(funcSym), sym.latex(funcFactSym))))\n\n if sym.apart(funcSym)==funcSym:\n display(Markdown('Nemogu\u0107e je napraviti rastav na parcijalne razlomke.'))\n else:\n display(Markdown('Rezultat rastava na parcijalne razlomke je: $%s$' %sym.latex(sym.apart(funcSym)) + '.'))\n \n btnReset.on_click(on_button_clickedReset)\n display(btnReset)\n \ndef partial_frac(index):\n\n if index==0:\n x = sym.Symbol('x') \n display(widgets.HBox((textbox, btnConfirmFunc)))\n btnConfirmFunc.on_click(on_button_clickedFunc)\n btnReset.on_click(on_button_clickedReset)\n \n elif index==1:\n display(Markdown('1. 
Definirajte red brojnika (orderNum) i nazivnika (orderDenom).'))\n widgets.interact(f, orderNum=widgets.IntSlider(min=0,max=10,step=1,value=0),\n orderDenom=widgets.IntSlider(min=0,max=10,step=1,value=0));\n btnConfirmPoly.on_click(on_button_clickedPoly)\n display(btnConfirmPoly) \n\ninput_data=widgets.interactive_output(partial_frac,{'index':typeSelect})\ndisplay(input_data)\n```\n\n\n ToggleButtons(description='Odaberi: ', options=(('Unos funkcije', 0), ('Unos koeficijenata polinoma', 1)), sty\u2026\n\n\n\n Output()\n\n", "meta": {"hexsha": "919467a484d0b19a2b8c2841966ae61ec7926800", "size": 11622, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ICCT_hr/examples/02/.ipynb_checkpoints/TD-09-Rastav_na_parcijalne_razlomke-checkpoint.ipynb", "max_stars_repo_name": "ICCTerasmus/ICCT", "max_stars_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-05-22T18:42:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-03T14:10:22.000Z", "max_issues_repo_path": "ICCT_hr/examples/02/.ipynb_checkpoints/TD-09-Rastav_na_parcijalne_razlomke-checkpoint.ipynb", "max_issues_repo_name": "ICCTerasmus/ICCT", "max_issues_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICCT_hr/examples/02/.ipynb_checkpoints/TD-09-Rastav_na_parcijalne_razlomke-checkpoint.ipynb", "max_forks_repo_name": "ICCTerasmus/ICCT", "max_forks_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-24T11:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-29T16:36:18.000Z", "avg_line_length": 37.7337662338, 
"max_line_length": 406, "alphanum_fraction": 0.5311478231, "converted": true, "num_tokens": 2428, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.39606818053136394, "lm_q2_score": 0.3380771174808128, "lm_q1q2_score": 0.1339015887999137}} {"text": "+ This notebook is part of lecture 9 *Independence, basis and dimension* in the OCW MIT course 18.06 by Prof Gilbert Strang [1]\n+ Created by me, Dr Juan H Klopper\n + Head of Acute Care Surgery\n + Groote Schuur Hospital\n + University Cape Town\n + Email me with your thoughts, comments, suggestions and corrections \n
Linear Algebra OCW MIT18.06 IPython notebook [2] study notes by Dr Juan H Klopper is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.\n\n+ [1] OCW MIT 18.06\n+ [2] Fernando P\u00e9rez, Brian E. Granger, IPython: A System for Interactive Scientific Computing, Computing in Science and Engineering, vol. 9, no. 3, pp. 21-29, May/June 2007, doi:10.1109/MCSE.2007.53. URL: http://ipython.org\n\n\n```python\nfrom IPython.core.display import HTML, Image\ncss_file = 'style.css'\nHTML(open(css_file, 'r').read())\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n```python\n#import numpy as np\nfrom sympy import init_printing, Matrix, symbols\n#import matplotlib.pyplot as plt\n#import seaborn as sns\n#from IPython.display import Image\nfrom warnings import filterwarnings\n\ninit_printing(use_latex = 'mathjax') # Pretty Latex printing to the screen\n#%matplotlib inline\nfilterwarnings('ignore')\n```\n\n# Independence\n# Spanning\n# Basis\n# Dimension\n\n## Independence\n\n* Vectors are linearly independent if\n * No combination of these vectors results in the zero vector (except the zero combinations)\n$$ { c }_{ 1 }{ x }_{ 1 }+{ c }_{ 2 }{ x }_{ 2 }+\\dots +{ c }_{ n }{ x }_{ n }\\neq 0,\\quad { c }_{ i }\\neq 0 $$\n * In 2-space, this means that they should noy be on the same line through the origin\n * In 3-space they should not be on the same line through the origin or on a plane through the origin\n * In higher-dimensional space they should not be on the same line through the origin or a hyperplane through the origin\n\n* If they are independent by the constraints above, only a zero combination of them will results in zero\n* If there are vectors in the nullspace (apart from the zero vector), then (there is a linear combination that will give zero and) then the vectors are not linearly independent\n\n\n```python\nA= Matrix([[1, 2, 4], [3, 1, 4]])\nA\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & 2 & 4\\\\3 & 1 & 4\\end{matrix}\\right]$$\n\n\n\n* Here 
we will have a rank of 2 (2 pivots) and 3 unknowns and 2 rows\n* Thus, *r* = *m* (full row rank)\n* We are left with *n* - *r* freen variable, i.e. 3 - 2 = 1 and will have one vector in the nullspace\n\n\n```python\nA.rref()\n```\n\n\n\n\n$$\\begin{pmatrix}\\left[\\begin{matrix}1 & 0 & \\frac{4}{5}\\\\0 & 1 & \\frac{8}{5}\\end{matrix}\\right], & \\begin{bmatrix}0, & 1\\end{bmatrix}\\end{pmatrix}$$\n\n\n\n\n```python\nA.nullspace()\n```\n\n\n\n\n$$\\begin{bmatrix}\\left[\\begin{matrix}- \\frac{4}{5}\\\\- \\frac{8}{5}\\\\1\\end{matrix}\\right]\\end{bmatrix}$$\n\n\n\n### Another way to state independence\n\n* Consider the columns of the matrix A as vectors v1, v2, ..., vn\n\n* If *r* = *n* then the nullspace only contains the zero vector and the column vectors are linearly independent\n\n## Spanning\n\n* If we have a set of linearly independent vector that all their linear combinations (including zero) span a subspace ( in this instance a column space)\n\n* We are particularly interested in a set of (column) vectors (in a matrix) that are linearly independent and span a subspace\n* This leads us to the next topic, **basis**\n\n## Basis\n\n* A set of vectors (in a space *W*) with the properties\n * They are linearly independent\n * They span the space (linear combinations of them fill the space)\n\n* Up until now we looked at columns in a matrix A\n* It is more common in textbooks to look at a space first and ask about basis vectors, spanning vectors, dimension, etc\n\n* So let's look at ℝ3\n* The obvious set of basis vectors are\n$$ \\hat {i}, \\quad \\hat {j}, \\quad \\hat {k} $$\n\n* What about\n$$ \\begin{bmatrix} 1 \\\\ 1 \\\\ 2 \\end{bmatrix},\\quad \\begin{bmatrix} 2 \\\\ 2 \\\\ 5 \\end{bmatrix} $$\n* So, are they linearly independent and do they span ℝ3?|\n\n\n```python\nA = Matrix([[1, 2], [1, 2], [2, 5]])\nA\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & 2\\\\1 & 2\\\\2 & 5\\end{matrix}\\right]$$\n\n\n\n* Here we will have *r* = 2, *n* = 2 and thus a (*n* - *r* = 
0) zero nullspace\n\n\n```python\nA.nullspace()\n```\n\n\n\n\n$$\\begin{bmatrix}\\end{bmatrix}$$\n\n\n\n\n```python\nA.rref()\n```\n\n\n\n\n$$\\begin{pmatrix}\\left[\\begin{matrix}1 & 0\\\\0 & 1\\\\0 & 0\\end{matrix}\\right], & \\begin{bmatrix}0, & 1\\end{bmatrix}\\end{pmatrix}$$\n\n\n\n* For now, our intuition is that they will not span ℝ3\n* This intuition is correct, because all their linear combinations will only fill a plane through the origin\n* The zero combination does result in the zero vector, though, so they do fill a subspace of ℝ3\n* Some textbooks refer to this as *V* = ℝn, with *W* a subspace of *V*\n\n* If we added a column vector that is a linear combination of these, it will also fall in the plane and thus not be linearly independent (there will be a vector in the nullspace other than the zero vector\n\n\n```python\nA = Matrix([[1, 2, 3], [1, 2, 3], [2, 5, 7]])\nA.nullspace()\n```\n\n\n\n\n$$\\begin{bmatrix}\\left[\\begin{matrix}-1\\\\-1\\\\1\\end{matrix}\\right]\\end{bmatrix}$$\n\n\n\n\n```python\nA.rref()\n```\n\n\n\n\n$$\\begin{pmatrix}\\left[\\begin{matrix}1 & 0 & 1\\\\0 & 1 & 1\\\\0 & 0 & 0\\end{matrix}\\right], & \\begin{bmatrix}0, & 1\\end{bmatrix}\\end{pmatrix}$$\n\n\n\n* Indeed, we have a column without a pivot and thus a free variable\n\n* Let's add another, such that we have\n\n\n```python\nA = Matrix([[1, 2, 3], [1, 2, 3], [2, 5, 8]])\nA.rref()\n```\n\n\n\n\n$$\\begin{pmatrix}\\left[\\begin{matrix}1 & 0 & -1\\\\0 & 1 & 2\\\\0 & 0 & 0\\end{matrix}\\right], & \\begin{bmatrix}0, & 1\\end{bmatrix}\\end{pmatrix}$$\n\n\n\n* Again, a column without a pivot and sure enough, we'll find a vector (other than the zero vector) in the nullspace\n\n\n```python\nA.nullspace()\n```\n\n\n\n\n$$\\begin{bmatrix}\\left[\\begin{matrix}1\\\\-2\\\\1\\end{matrix}\\right]\\end{bmatrix}$$\n\n\n\n### The special case of a square matrix\n\n* If we now end up with a square matrix, we need only look at it's determinant, i.e., is it invertible\n\n\n```python\nA.det() 
# .det() calculates the determinant\n```\n\n\n\n\n$$0$$\n\n\n\n* Indeed the determinant is zero as expected\n\n## Dimension\n\n* Given a (sub)space, every basis for that (sub)space has the same number of vectors (there are usally more than one basis for every (sub)space)\n* This called the dimension of a (sub)space\n\n## Important point to remember\n\n* The (sub)space which a set of (column) vectors (matrix of coefficients, A) span, is the set of possible **b**-values\n\n## More examples\n\n#### Example problem\n\n* Consider the column space\n\n\n```python\nA = Matrix([[1, 2, 3, 1], [1, 1, 2, 1], [1, 2, 3, 1]])\n```\n\n\n```python\nA\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & 2 & 3 & 1\\\\1 & 1 & 2 & 1\\\\1 & 2 & 3 & 1\\end{matrix}\\right]$$\n\n\n\n* There are *n* = 4 unknowns, *m* = 3 unknowns\n* We note that column 1 = column 4\n* We note that with 4 unknowns we are dealing with ℝ4\n* In essence, there are at most three independent columns, thus the matrix cannot be a basis for ℝ4\n* It is possible for them to span the column space (don't get confused by column space (ℝ4) and matrix here)\n\n\n```python\nA.nullspace()\n```\n\n\n\n\n$$\\begin{bmatrix}\\left[\\begin{matrix}-1\\\\-1\\\\1\\\\0\\end{matrix}\\right], & \\left[\\begin{matrix}-1\\\\0\\\\0\\\\1\\end{matrix}\\right]\\end{bmatrix}$$\n\n\n\n\n```python\nA.rref()\n```\n\n\n\n\n$$\\begin{pmatrix}\\left[\\begin{matrix}1 & 0 & 1 & 1\\\\0 & 1 & 1 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right], & \\begin{bmatrix}0, & 1\\end{bmatrix}\\end{pmatrix}$$\n\n\n\n* As we can see here (columns three and four have free variables, i.e. 
no pivots)\n$$ { x }_{ 1 }+0{ x }_{ 2 }+{ x }_{ 3 }+{ x }_{ 4 }=0\\\\ 0{ x }_{ 1 }+1{ x }_{ 2 }+{ x }_{ 3 }+{ 0 }_{ 4 }=0\\\\ { x }_{ 4 }={ c }_{ 2 }\\\\ { x }_{ 3 }={ c }_{ 1 }\\\\ \\therefore \\quad { x }_{ 2 }=-{ c }_{ 1 }\\\\ \\therefore \\quad { x }_{ 1 }=-{ c }_{ 1 }-{ c }_{ 2 }\\\\ \\begin{bmatrix} { x }_{ 1 } \\\\ { x }_{ 2 } \\\\ { x }_{ 3 } \\\\ { x }_{ 4 } \\end{bmatrix}=\\begin{bmatrix} -{ c }_{ 1 }-{ c }_{ 2 } \\\\ -{ c }_{ 1 } \\\\ { c }_{ 1 } \\\\ { c }_{ 2 } \\end{bmatrix}=\\begin{bmatrix} -{ c }_{ 1 } \\\\ -{ c }_{ 1 } \\\\ { c }_{ 1 } \\\\ 0 \\end{bmatrix}+\\begin{bmatrix} -{ c }_{ 2 } \\\\ 0 \\\\ 0 \\\\ { c }_{ 2 } \\end{bmatrix}={ c }_{ 1 }\\begin{bmatrix} -1 \\\\ -1 \\\\ 1 \\\\ 0 \\end{bmatrix}+{ c }_{ 2 }\\begin{bmatrix} -1 \\\\ 0 \\\\ 0 \\\\ 1 \\end{bmatrix} $$\n\n* The rank of the matrix is two (it is the number of pivot columns)\n* This matrix space thus have two basis vectors (column vectors 1 and 2) and we say the dimension of this space is two\n* Remember, a matrix has a rank, which is the dimension of a column space (the column space representing the space 'produced' by the column vectors)\n* We talk about the rank of a matrix, rank(A) and the column space of a matrix, C(A)\n\n* In summary we have two basis above (they span a space)\n * Any two vectors that are not linearly dependent will also span this space, they can't help but to\n * dimC(A)= *r*\n * The nullspace will have *n* - *r* vectors (the dimension of the null space equal the number of free variables)\n\n\n```python\n\n```\n", "meta": {"hexsha": "73b0582178443ddc8de81b29047686b3aaac4ab4", "size": 23713, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "forks/MIT_OCW_Linear_Algebra_18_06-master/I_10_Independence_Spanning_Basis_Dimension.ipynb", "max_stars_repo_name": "solomonxie/jupyter-notebooks", "max_stars_repo_head_hexsha": "65999f179e037242138de72f512dda4bf00c7379", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, 
"max_stars_repo_stars_event_min_datetime": "2021-02-13T05:52:05.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-08T09:52:35.000Z", "max_issues_repo_path": "forks/MIT_OCW_Linear_Algebra_18_06-master/I_10_Independence_Spanning_Basis_Dimension.ipynb", "max_issues_repo_name": "solomonxie/jupyter-notebooks", "max_issues_repo_head_hexsha": "65999f179e037242138de72f512dda4bf00c7379", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "forks/MIT_OCW_Linear_Algebra_18_06-master/I_10_Independence_Spanning_Basis_Dimension.ipynb", "max_forks_repo_name": "solomonxie/jupyter-notebooks", "max_forks_repo_head_hexsha": "65999f179e037242138de72f512dda4bf00c7379", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.1005714286, "max_line_length": 788, "alphanum_fraction": 0.4874541391, "converted": true, "num_tokens": 3774, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.48047867804790706, "lm_q2_score": 0.27825679968760103, "lm_q1q2_score": 0.13369645927173981}} {"text": "T\u00e0i li\u1ec7u n\u00e0y mang gi\u1ea5y ph\u00e9p Creative Commons Attribution (CC BY). (c) Nguy\u1ec5n Ng\u1ecdc S\u00e1ng, Zhukovsky 06/2019.\n\n[@SangVn](https://github.com/SangVn) [@VnCFD](https://vncfdgroup.wordpress.com/)\n\n*Th\u1ef1c h\u00e0nh CFD v\u1edbi Python!*\n\n## Ph\u1ea7n 3: H\u1ec7 ph\u01b0\u01a1ng tr\u00ecnh Euler hai chi\u1ec1u, vncfd_2D\n`N\u1ebfu kh\u00f4ng \u0111\u00e0o b\u1edbi th\u00ec c\u00f2n g\u00ec l\u00e0 th\u00fa v\u1ecb!`\n\nB\u1ea1n h\u00e3y t\u01b0\u1edfng t\u01b0\u1ee3ng vi\u1ec7c `nghi\u00ean c\u1ee9u CFD` gi\u1ed1ng nh\u01b0 h\u00e0nh tr\u00ecnh `\u0111i t\u00ecm kho b\u00e1u`. 
\u0110\u1ee9ng trong m\u1ed9t khu r\u1eebng nhi\u1ec7t \u0111\u1edbi r\u1eadm r\u1ea1p, tr\u00ean b\u00e3i bi\u1ec3n c\u1ee7a m\u1ed9t h\u00f2n \u0111\u1ea3o hoang s\u01a1 hay \u0111\u1ee9ng tr\u01b0\u1edbc \u0111\u1ea1i d\u01b0\u01a1ng r\u1ed9ng l\u1edbn, ta t\u1ef1 h\u1ecfi kh\u00f4ng bi\u1ebft n\u00ean l\u00e0m nh\u1eefng g\u00ec, \u0111i \u0111\u1ebfn \u0111\u00e2u... Ch\u00fang ta c\u1ea7n nh\u1eefng l\u1eddi ch\u1ec9 d\u1eabn, c\u1ea7n `m\u1ed9t chi\u1ebfc b\u1ea3n \u0111\u1ed3 kho b\u00e1u`. Nh\u1eefng b\u00e0i b\u00e1o, nh\u1eefng quy\u1ec3n s\u00e1ch v\u1ec1 l\u01b0u ch\u1ea5t, ph\u01b0\u01a1ng ph\u00e1p t\u00ednh, v\u1ec1 CFD ch\u00ednh l\u00e0 t\u1edd b\u1ea3n \u0111\u1ed3 kho b\u00e1u \u0111\u00f3. Ch\u00fang ta \u0111\u1ecdc, nghi\u00ean c\u1ee9u v\u00e0 \u0111i theo, ch\u00fang ta \u0111\u00e3 t\u1edbi \u0111\u01b0\u1ee3c `nh\u1eefng n\u01a1i \u0111\u01b0\u1ee3c \u0111\u00e1nh d\u1ea5u` tr\u00ean b\u1ea3n \u0111\u1ed3. Th\u1ebf nh\u01b0ng, trong tay ta `kh\u00f4ng c\u00f3 m\u1ed9t t\u1ea5c s\u1eaft`, kh\u00f4ng c\u00f3 m\u1ed9t `chi\u1ebfc x\u1ebbng` \u0111\u1ec3 \u0111\u00e0o b\u1edbi... `Ch\u01b0\u01a1ng tr\u00ecnh CFD` ch\u00ednh l\u00e0 chi\u1ebfc x\u1ebbng \u0111\u1ec3 ta kh\u00e1m ph\u00e1 kho b\u00e1u, \u0111\u00e0o hay ch\u00f4n. M\u00e0 `n\u1ebfu kh\u00f4ng \u0111\u00e0o b\u1edbi th\u00ec c\u00f2n g\u00ec l\u00e0 th\u00fa v\u1ecb!`. Ch\u00fang ta c\u1ea7n r\u00e8n m\u1ed9t c\u00e1i x\u1ebbng. N\u00f3 c\u00f3 th\u1ec3 \u0111\u01b0\u1ee3c r\u00e8n b\u1eb1ng Python, b\u1eb1ng C++, b\u1eb1ng FORTRAN, MATLAB... hay m\u1ed9t `v\u1eadt li\u1ec7u` n\u00e0o kh\u00e1c. Khi ch\u00fang ta \u0111\u00e3 bi\u1ebft c\u00e1ch r\u00e8n x\u1ebbng, bi\u1ebft c\u00e1ch s\u1eed d\u1ee5ng m\u1ed9t d\u1ea1ng v\u1eadt li\u1ec7u th\u00ec vi\u1ec7c chuy\u1ec3n sang r\u00e8n b\u1eb1ng m\u1ed9t v\u1eadt li\u1ec7u kh\u00e1c kh\u00f4ng qu\u00e1 ph\u1ee9c t\u1ea1p. 
\n\nT\u00f3m l\u1ea1i l\u00e0 ta c\u1ea7n m\u1ed9t **c\u00f4ng c\u1ee5** \u0111\u1ec3 \u0111\u00e0o s\u00e2u nghi\u00ean c\u1ee9u CFD.\n\nKh\u00f3a h\u1ecdc **Th\u1ef1c h\u00e0nh CFD v\u1edbi Python!** s\u1ebd cho b\u1ea1n nh\u1eefng l\u1eddi ch\u1ec9 d\u1eabn, m\u1ed9t m\u1ea3nh c\u1ee7a t\u1ea5m b\u1ea3n \u0111\u1ed3 v\u00e0 c\u00e1ch r\u00e8n x\u1ebbng cho h\u00e0nh tr\u00ecnh `\u0111i t\u00ecm kho b\u00e1u`.\n \nCh\u00fang ta ch\u1ec9 m\u1edbi \u0111i \u0111\u01b0\u1ee3c 2 b\u01b0\u1edbc: \u1edf **[ph\u1ea7n I](https://github.com/SangVn/CFD_Notebook_P1) v\u00e0 [ph\u1ea7n II](https://github.com/SangVn/CFD_Notebook_P2)** ta \u0111\u00e3 t\u00ecm hi\u1ec3u v\u1ec1 ph\u01b0\u01a1ng ph\u00e1p t\u00ednh, \u1ee9ng d\u1ee5ng gi\u1ea3i c\u00e1c ph\u01b0\u01a1ng tr\u00ecnh, h\u1ec7 ph\u01b0\u01a1ng tr\u00ecnh \u0111\u1eb7c tr\u01b0ng t\u1eeb \u0111\u01a1n gi\u1ea3n t\u1edbi ph\u1ee9c t\u1ea1p, ta \u0111\u00e3 d\u1eebng l\u1ea1i \u1edf h\u1ec7 ph\u01b0\u01a1ng tr\u00ecnh Euler m\u1ed9t chi\u1ec1u. C\u00f2n \u0111\u00f3 h\u1ec7 ph\u01b0\u01a1ng tr\u00ecnh Euler hai chi\u1ec1u, ba chi\u1ec1u; h\u1ec7 ph\u01b0\u01a1ng tr\u00ecnh Navier-Stokes; c\u00e1c m\u00f4 h\u00ecnh r\u1ed1i Spalart-Allmaras, k-omega, k-epsilon, SST, LES, DNS....\n\nTh\u1ef1c ra 3D so v\u1edbi 2D th\u00ec `ch\u1ec9 l\u00e0` th\u00eam 1D, Navier-Stokes hay c\u00e1c m\u00f4 h\u00ecnh r\u1ed1i so v\u1edbi Euler th\u00ec ch\u1ec9 l\u00e0 th\u00eam m\u1ed9t bi\u1ebfn, hai bi\u1ebfn. T\u1ea5t nhi\u00ean trong c\u00e1i t\u1eeb `ch\u1ec9 l\u00e0` \u1ea5y c\u00f2n nhi\u1ec1u kh\u00e1c bi\u1ec7t, nh\u01b0ng khi ta \u0111\u00e3 c\u00f3 n\u1ec1n t\u1ea3ng \u0111\u1ec3 ph\u00e1t tri\u1ec3n ta c\u00f3 th\u1ec3 l\u00e0m \u0111\u01b0\u1ee3c nhi\u1ec1u th\u1ee9.\n\nPh\u1ea7n 3 `Th\u1ef1c h\u00e0nh CFD v\u1edbi Python!` s\u1ebd xoay quanh vi\u1ec7c x\u00e2y d\u1ef1ng ch\u01b0\u01a1ng tr\u00ecnh gi\u1ea3i h\u1ec7 ph\u01b0\u01a1ng tr\u00ecnh Euler 2D. 
Vi\u1ec7c chuy\u1ec3n t\u1eeb 1D sang 2D l\u00e0 m\u1ed9t b\u01b0\u1edbc \u0111i quan tr\u1ecdng v\u00e0 th\u00fa v\u1ecb. Ph\u1ea7n n\u00e0y t\u1eadp trung gi\u1ea3i th\u00edch c\u1ea5u tr\u00fac ch\u01b0\u01a1ng tr\u00ecnh, c\u1ea5u tr\u00fac d\u1eef li\u1ec7u, nhi\u1ec1u ph\u1ea7n l\u00fd thuy\u1ebft ch\u1ec9 d\u1eebng l\u1ea1i \u1edf vi\u1ec7c gi\u1edbi thi\u1ec7u t\u00e0i li\u1ec7u \u0111\u1ec3 b\u1ea1n \u0111\u1ecdc t\u1ef1 tham kh\u1ea3o. Ch\u01b0\u01a1ng tr\u00ecnh s\u1eed d\u1ee5ng **ph\u01b0\u01a1ng ph\u00e1p th\u1ec3 t\u00edch h\u1eefu h\u1ea1n; ph\u01b0\u01a1ng ph\u00e1p t\u00ednh d\u00f2ng Godunov, Roe; s\u01a1 \u0111\u1ed3 hi\u1ec7n, b\u1eadc m\u1ed9t theo th\u1eddi gian; t\u00e1i c\u1ea5u tr\u00fac nghi\u1ec7m b\u1eadc m\u1ed9t Godunov, l\u01b0\u1edbi c\u00f3 c\u1ea5u tr\u00fac** v\u00e0 \u0111\u01b0\u1ee3c vi\u1ebft b\u1eb1ng ng\u00f4n ng\u1eef `python2.7`. Ta c\u0169ng s\u1ebd l\u00e0m quen v\u1edbi ph\u1ea7n m\u1ec1m `ParaView` \u0111\u1ec3 bi\u1ec3u di\u1ec5n v\u00e0 x\u1eed l\u00fd k\u1ebft qu\u1ea3. C\u00e1c b\u00e0i to\u00e1n v\u00ed d\u1ee5 bao g\u1ed3m: **d\u00f2ng ch\u1ea3y tr\u00ean \u00e2m qua d\u1ed1c; d\u00f2ng ch\u1ea3y bao h\u00ecnh tr\u1ee5, NACA profile v\u00e0 t\u00e0u v\u0169 tr\u1ee5 Crew Dragon**.\n\n\n\n\u0110\u1ec3 chu\u1ea9n b\u1ecb cho ph\u1ea7n 3, c\u00e1c b\u1ea1n h\u00e3y c\u00e0i \u0111\u1eb7t v\u00e0 h\u1ecdc c\u00e1ch s\u1eed d\u1ee5ng **PyCharm** \u0111\u1ec3 vi\u1ebft code v\u00e0 **ParaView** \u0111\u1ec3 bi\u1ec3u di\u1ec5n v\u00e0 x\u1eed l\u00fd d\u1eef li\u1ec7u CFD.\n\n**T\u00e0i li\u1ec7u tham kh\u1ea3o:**\n- Eleuterio F. Toro `Riemann Solvers and Numerical Methods for Fluid Dynamics`\n- Randall J. Leveque `Finite-Volume Methods for Hyperbolic Problems`\n- H. K. Versteeg, W. Malalasekera `An introduction to Computational Fluid Dynamics. The Finite Volume Method`\n- Katake Masatsuka `I do like CFD. Governing Equations and Exact Solutions`\n- F. Moukalled, L. Mangani, M. 
Darwish `The Finite Volume Method in Computational Fluid Dynamics. An Advanced Introduction with OpenFOAM and Matlab`\n\n\n# B\u00e0i 18. H\u1ec7 ph\u01b0\u01a1ng tr\u00ecnh Euler hai chi\u1ec1u, ph\u01b0\u01a1ng ph\u00e1p gi\u1ea3i\n\nC\u00e1c ki\u1ebfn th\u1ee9c c\u01a1 b\u1ea3n \u0111\u00e3 \u0111\u01b0\u1ee3c gi\u1edbi thi\u1ec7u trong ph\u1ea7n II, sau \u0111\u00e2y ta t\u00f3m t\u1eaft m\u1ed9t s\u1ed1 \u0111i\u1ec3m c\u01a1 b\u1ea3n.\n\n# 1. H\u1ec7 ph\u01b0\u01a1ng tr\u00ecnh Euler hai chi\u1ec1u (2D)\n\n\u0110\u00e3 \u0111\u01b0\u1ee3c gi\u1edbi thi\u1ec7u trong [b\u00e0i 16, ph\u1ea7n II](https://nbviewer.jupyter.org/github/SangVn/CFD_Notebook_P2/blob/master/Bai_16.ipynb)\n\nD\u00f2ng ch\u1ea3y c\u1ee7a kh\u00ed l\u00fd t\u01b0\u1edfng trong hai chi\u1ec1u kh\u00f4ng gian \u0111\u01b0\u1ee3c m\u00f4 t\u1ea3 b\u1edfi h\u1ec7 ph\u01b0\u01a1ng tr\u00ecnh Euler 2D:\n$$\\frac{\\partial U}{\\partial t} + \\frac{\\partial F}{\\partial x} + \\frac{\\partial G}{\\partial y} = 0\\qquad(1)$$\nv\u1edbi \n\\begin{align}\nU & = \\begin{pmatrix} \\rho \\\\ \\rho u \\\\ \\rho v \\\\ \\rho e\\end{pmatrix}, &\nF & = \\begin{pmatrix} \\rho u \\\\ \\rho u^2 + p \\\\ \\rho u v \\\\ \\rho uh\\end{pmatrix} &\nG & = \\begin{pmatrix} \\rho v \\\\ \\rho uv \\\\ \\rho v^2 + p \\\\ \\rho vh\\end{pmatrix} &\n\\end{align}\nv\u00e0 $h = e + \\frac{p}{\\rho}$, $e = \\varepsilon + \\frac{u^2+v^2}{2}$, $\\varepsilon = \\frac{p}{(\\gamma-1)\\rho}$;\n\ntrong \u0111\u00f3: $\\rho$ - kh\u1ed1i l\u01b0\u1ee3ng ri\u00eang; $u, v$ - v\u1eadn t\u1ed1c theo ph\u01b0\u01a1ng x, y; $p$ - \u00e1p su\u1ea5t; $e$ - n\u0103ng l\u01b0\u1ee3ng m\u1ed9t \u0111\u01a1n v\u1ecb kh\u1ed1i l\u01b0\u1ee3ng ch\u1ea5t kh\u00ed; $\\varepsilon$ - n\u1ed9i n\u0103ng; $h$ - enthalpy.\n\n# 2. 
Ph\u01b0\u01a1ng ph\u00e1p th\u1ec3 t\u00edch h\u1eefu h\u1ea1n\n\n\u0110\u00e3 \u0111\u01b0\u1ee3c gi\u1edbi thi\u1ec7u trong [b\u00e0i 11, ph\u1ea7n II](https://nbviewer.jupyter.org/github/SangVn/CFD_Notebook_P2/blob/master/Bai_11.ipynb)\n\n\n\nX\u00e9t `th\u1ec3 t\u00edch h\u1eefu h\u1ea1n` ABCD (h\u00ecnh ph\u1ea3i) kh\u00f4ng n\u1eb1m tr\u00ean bi\u00ean, c\u00f3 c\u00e1c `b\u1ec1 m\u1eb7t` AB, BC, CD, DA; l\u1ea5y t\u00edch ph\u00e2n ph\u01b0\u01a1ng tr\u00ecnh (1):\n\n$$\\int_{ABCD} \\left(\\frac{\\partial U}{\\partial t} + \\frac{\\partial F}{\\partial x} + \\frac{\\partial G}{\\partial y} \\right)dxdy = 0$$\n\n\u00e1p d\u1ee5ng \u0111\u1ecbnh l\u00fd Green ta thu \u0111\u01b0\u1ee3c ph\u01b0\u01a1ng tr\u00ecnh d\u1ea1ng t\u00edch ph\u00e2n:\n$$\\frac{d}{dt}\\int U dV + \\oint_{ABCD} \\vec F. \\vec n ds = 0 \\qquad (2)$$\nv\u1edbi $\\vec F = (F, G)$, $\\vec n$ - vector ph\u00e1p tuy\u1ebfn \u0111\u01a1n v\u1ecb, $\\vec n ds = (dy, -dx)$.\n\nPh\u01b0\u01a1ng tr\u00ecnh (2) m\u00f4 t\u1ea3 \u0111\u1ecbnh lu\u1eadt `b\u1ea3o to\u00e0n` kh\u1ed1i l\u01b0\u1ee3ng, \u0111\u1ed9ng l\u01b0\u1ee3ng v\u00e0 n\u0103ng l\u01b0\u1ee3ng. 
N\u00f3 g\u1ed3m hai th\u00e0nh ph\u1ea7n t\u01b0\u01a1ng \u1ee9ng s\u1ef1 bi\u1ebfn \u0111\u1ed5i theo th\u1eddi gian v\u00e0 d\u00f2ng \u0111i qua c\u00e1c b\u1ec1 m\u1eb7t.\n\n### S\u01a1 \u0111\u1ed3 sai ph\u00e2n \nS\u01a1 \u0111\u1ed3 sai ph\u00e2n c\u1ee7a ph\u01b0\u01a1ng tr\u00ecnh (2) v\u1edbi x\u1ea5p x\u1ec9 th\u1eddi gian b\u1eadc m\u1ed9t c\u00f3 d\u1ea1ng:\n$$\\frac{U^{n+1} - U^n}{\\Delta t} + \\frac{1}{V_{ABCD}} (\\vec F.\\vec S_{AB} + \\vec F.\\vec S_{BC} + \\vec F.\\vec S_{CD} + \\vec F.\\vec S_{DA}) = 0 \\qquad (3)$$\nv\u1edbi:\n\n$U^n = \\frac{1}{V_{ABCD}}\\int U dV$ - gi\u00e1 tr\u1ecb trung b\u00ecnh c\u1ee7a U trong th\u1ec3 t\u00edch \u0111ang x\u00e9t t\u1ea1i th\u1eddi \u0111i\u1ec3m $t$, $V_{ABCD}$ - th\u1ec3 t\u00edch h\u1eefu h\u1ea1n (tr\u01b0\u1eddng h\u1ee3p 2D - di\u1ec7n t\u00edch ABCD);\n\n$\\vec S = \\vec n . S$ - vector ph\u00e1p tuy\u1ebfn b\u1ec1 m\u1eb7t S, c\u00f3 \u0111\u1ed9 l\u1edbn b\u1eb1ng di\u1ec7n t\u00edch b\u1ec1 m\u1eb7t (2D - \u0111\u1ed9 d\u00e0i AB, BC, CD, DA).\n\nTa c\u00f3 c\u00f4ng th\u1ee9c x\u00e1c \u0111\u1ecbnh $U^{n+1}$ t\u1ea1i th\u1eddi \u0111i\u1ec3m $t+\\Delta t$:\n\n$$U^{n+1} = U^n + \\frac{\\Delta t}{V_{ABCD}}\\sum {\\vec F.\\vec S_n} \\qquad (4)$$\nv\u1edbi $\\vec S_n = -\\vec n . S = -(nx, ny).S$\n\n### C\u00f4ng th\u1ee9c t\u00ednh d\u00f2ng qua b\u1ec1 m\u1eb7t\nC\u00f4ng th\u1ee9c t\u00ednh d\u00f2ng qua m\u1ed9t \u0111\u01a1n v\u1ecb di\u1ec7n t\u00edch b\u1ec1 m\u1eb7t:\n\\begin{align}\nFlux = \\vec F.(\\vec n) = (F, G).(nx,ny) & = \\begin{pmatrix} \\rho(un_x+vn_y) \\\\ \\rho u(un_x+vn_y) + pn_x \\\\ \\rho v(un_x+vn_y) + pn_y \\\\ \\rho h(un_x+vn_y)\\end{pmatrix}, &\n\\end{align}\n\nTrong \u0111\u00f3 $un_x+vn_y = V_n$ - v\u1eadn t\u1ed1c c\u1ee7a d\u00f2ng ch\u1ea3y theo ph\u01b0\u01a1ng vu\u00f4ng g\u00f3c v\u1edbi b\u1ec1 m\u1eb7t. 
We then have:

\begin{align}
Flux & = \begin{pmatrix} \rho V_n \\ \rho uV_n + pn_x \\ \rho vV_n + pn_y \\ \rho hV_n\end{pmatrix}. &
\end{align}

## Points to note

Compared with solving the 1D Euler equations in part 2, when solving the system in two dimensions (2D) we will need to pay careful attention to: the geometry of the problem; the mesh; the data structures; the boundary conditions; and the display and post-processing of results.

We will begin the practical work with mesh generation for the supersonic flow problem in the next lesson.

# [Bài 19. Constructing a structured mesh](Bai_19.ipynb)
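As a closing aside, the update rule (4) can be sketched in plain Python for a single quadrilateral cell. Everything here is illustrative (the cell geometry, time step, and state value are made up), and a simple scalar advection flux $F = (uU, vU)$ with constant velocity stands in for the full Euler flux vector above:

```python
import numpy as np

# Illustrative only: one quadrilateral cell ABCD (counterclockwise) and a
# scalar conserved quantity U advected with constant velocity (u, v)
verts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # A, B, C, D
u_vel, v_vel = 1.0, 0.5
U = 2.0          # cell-average value U^n
dt = 0.01

# Cell area (shoelace formula) -- this is V_ABCD in equation (3)
x, y = verts[:, 0], verts[:, 1]
area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Sum F.S over the faces AB, BC, CD, DA, using n ds = (dy, -dx),
# which is the outward normal times the face length for a CCW polygon
total_flux = 0.0
for i in range(4):
    xa, ya = verts[i]
    xb, yb = verts[(i + 1) % 4]
    Sx, Sy = yb - ya, -(xb - xa)       # outward normal times face length
    Vn_S = u_vel * Sx + v_vel * Sy     # normal velocity times face length
    total_flux += U * Vn_S             # scalar flux of U through this face

# Equation (4) with S_n = -n S, i.e. U^{n+1} = U^n - dt/V * sum(F.S)
U_new = U - dt / area * total_flux
print(U_new)  # 2.0
```

For a constant state in a divergence-free velocity field the face fluxes cancel, so $U^{n+1} = U^n$ -- a handy sanity check for the face-normal convention $\vec n \, ds = (dy, -dx)$.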
There is an infinite amount of resources out there, for instance [here](https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/).

# Notebook intro

## navigating the notebook

There are three types of cells:
1. input cells - contain the actual code
2. output cells - display the results of the computation
3. markdown cells - provide documentation and instructions

The Jupyter Notebook has two different keyboard input modes.
1. Edit mode allows you to type code or text into a cell and is indicated by a green cell border.
2. Command mode binds the keyboard to notebook-level commands and is indicated by a grey cell border with a blue left margin.

Command mode is activated by hitting `ESC`.
You can switch back to edit mode by hitting `ENTER`.

Some useful shortcuts are
- `ESC`+`dd` - delete cell
- `ESC`+`a` - add cell above
- `ESC`+`b` - add cell below
- `ESC`+`l` - toggle line numbers
- `SHIFT`+`ENTER` - execute cell
- `ENTER` - enter edit mode

To get more help, open the shortcut help by hitting `ESC` followed by `h`.

## imports, packages and magic commands

In almost any case you will use existing packages.
A common good practice is to load them at the beginning of the notebook using the `import` command.

```python
import numpy as np  # widely used python library for data manipulation; 'as' allows you to rename the package on import
from scipy import constants  # this is how you get just specific subpackages or functions
import sys  # get some information about the current OS
```

```python
sys.version  # show the python version
```

    '3.7.3 | packaged by conda-forge | (default, Mar 27 2019, 23:01:00) \n[GCC 7.3.0]'

```python
sys.executable  # show the path to the python executable - very useful to check when things don't work as expected!
```

    '/home/jgieseler/anaconda3/bin/python'

[magic commands](https://ipython.readthedocs.io/en/stable/interactive/magics.html) are special commands that give some extended functionality to the notebook.

```python
# show figures inline in the notebook
%matplotlib inline

# the following command reloads external packages that have been changed externally without the need to restart the kernel
%load_ext autoreload
%autoreload 2
```

This will list all magic commands

```python
%lsmagic
```

    Available line magics:
    %aimport %alias %alias_magic %autoawait %autocall %automagic %autoreload %autosave %bookmark %cat %cd %clear %colors %conda %config %connect_info %cp %debug %dhist %dirs %doctest_mode %ed %edit %env %gui %hist %history %killbgscripts %ldir %less %lf %lk %ll %load %load_ext %loadpy %logoff %logon %logstart %logstate %logstop %ls %lsmagic %lx %macro %magic %man %matplotlib %mkdir %more %mv %notebook %page %pastebin %pdb %pdef %pdoc %pfile %pinfo %pinfo2 %pip %popd %pprint %precision %prun %psearch %psource %pushd %pwd %pycat %pylab %qtconsole %quickref %recall %rehashx %reload_ext %rep %rerun %reset %reset_selective %rm %rmdir %run %save %sc %set_env %store %sx %system %tb %time %timeit %unalias %unload_ext %who %who_ls %whos %xdel %xmode
    Available cell magics:
    %%! %%HTML %%SVG %%bash %%capture %%debug %%file %%html %%javascript %%js %%latex %%markdown %%perl %%prun %%pypy %%python %%python2 %%python3 %%ruby %%script %%sh %%svg %%sx %%system %%time %%timeit %%writefile

    Automagic is ON, % prefix IS NOT needed for line magics.

```python
a = 'this is a string'
b = 1  # this is an integer

# list variables
%who str
%who int
```

    a
    b

The `!` allows you to execute shell commands directly from the notebook. You can also [execute different kernels](https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/) in a single notebook!

```python
!ls ../../images/nbextensions.png
```

    ../../images/nbextensions.png

## extensions

You can install notebook extensions; if everything is installed, you should get the following tab.

One extension that I find particularly useful is the Table of Contents, which provides a TOC based on the titles and subtitles in the notebook.

## Latex, Markdown and HTML

You can write beautiful notebooks using LaTeX and [Markdown](https://www.markdownguide.org/cheat-sheet/). Just open any of the cells in this notebook to see the underlying markdown code.

LaTeX is rendered as expected, for example: $\alpha$.

You can also have inline equations:

$$
\alpha = \int_0^\infty \sin(x) dx
$$

and numbered equations

\begin{align}
\alpha &= \int_0^\infty \sin(x) dx \\
\beta &= \frac{\partial}{\partial y} \cos(y)
\end{align}

To navigate the notebook you can also create internal links within the notebook using regular HTML code:

- `<a href="#my_label">some text</a>` - links to `#my_label`
- `<a id="my_label"></a>` - defines the target of the link.

For example, this link brings you back to the beginning of the notebook.
Note that links "don't work" when the cell is still in edit mode!

Note: HTML also allows you to add some color to your notebooks.
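For instance, a code cell can render styled HTML through the IPython display machinery (the specific snippet and styling below are just an illustration):

```python
from IPython.display import HTML

# The styling here is just an example; any inline CSS works
note = HTML('<span style="color:red">This note is red.</span>')
note  # the last expression of a cell is what gets displayed
```

In a markdown cell you can get the same effect by typing the `<span>` tag directly.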

# Python intro

There are more than enough resources on the internet. Here, I will just recall some specifically pythonic code snippets that I find useful.

## functions

Functions are defined as follows. You can also create classes and more complicated objects.

```python
def simple_function(my_number, my_string='there is nothing'):
    """
    This is the docstring. It contains the documentation of the function. You can access it with `?`
    """
    return my_string + str(my_number)
```

```python
?simple_function
```

```python
# a short version of writing the above is using a lambda function
simple_function = lambda my_number, my_string='there is nothing': my_string + str(my_number)
```

## dictionaries

Dictionaries are very useful for writing human-friendly code:

```python
simple_dict = {'my_number': 1, 'my_string': 'hello'}

# an alternative way to define the same dictionary
# (this one is useful when you want to turn a bunch of definitions into a dict)
simple_dict = dict(
    my_number = 1,
    my_string = 'hello'
)

simple_dict
```

    {'my_number': 1, 'my_string': 'hello'}

you can use dicts to pass many arguments in a compact way

```python
# this unpacks the dictionary; the two stars mean that the unpacking is as key:value
# check what happens if you only have one star
simple_function(**simple_dict)
```

    'hello1'

## loops

```python
for k, v in simple_dict.items():
    print('this is the value:', v, ' and this is the key:', k)
```

    this is the value: 1  and this is the key: my_number
    this is the value: hello  and this is the key: my_string

```python
# zip allows you to bundle different data quickly together
for k, v in zip(['a', 'b'], [1, 2]):
    print('this is the value:', v, ' and this is the key:', k)
```

    this is the value: 1  and this is the key: a
    this is the value: 2  and this is the key: b

```python
{k: v for k, v in zip(['a', 'b'], [1, 2])}  # you can also use loops to create dictionaries
```

    {'a': 1, 'b': 2}

## paths

python provides a convenient pathlib library

```python
from pathlib import Path  # working with path objects - useful for OS-independent code
```

```python
image_path = Path('../../images/')  # define the path - useful to define a global path at the beginning of the notebook
```

```python
[f for f in image_path.glob('*')]  # glob allows you to search the path
```

    [PosixPath('../../images/MC.jpg'),
     PosixPath('../../images/motivation.svg'),
     PosixPath('../../images/nbextensions.png'),
     PosixPath('../../images/motivation.png'),
     PosixPath('../../images/distro-01-1.png'),
     PosixPath('../../images/MC.png'),
     PosixPath('../../images/PyCharm-Github.png')]

```python
image_path/'motivation.svg'  # appending to a path
```

    PosixPath('../../images/motivation.svg')

```python
image_path.name  # the name of the final path component
```

    'images'

```python
image_path.exists()  # check if the path exists
```

    True

```python
image_path.is_dir()  # check if it is a directory
```

    True

```python
(image_path/'motivation.svg').exists()  # check if the file exists
```

    True

```python
(image_path/'motivation.svg').is_dir()
```

    False

```python
image_path.absolute()  # get the absolute path
```

    PosixPath('/home/jgieseler/PycharmProjects/edaipynb/edaipynb/notebooks/../../images')

```python
image_path.parent  # get the parent
```

    PosixPath('../..')

## strings

you can easily format strings using the format function

```python
'this is a bare string, that takes a float here: {:0.3f}'.format(0.2)
```

    'this is a bare string, that takes a float here: 0.200'

```python
s = 'thies is ...'
s.replace('thies', 'this')
```

    'this is ...'

```python
s.split(' ')  # breaking up a string creates a list
```

    ['thies', 'is', '...']

```python
# this is useful to encode and decode information into filenames

filename = 'run_{:02d}_pressure_{:0.2e}_frequency_{:0.2f}Hz'.format(1, 1.2e-4, 102024)

print(filename)
```

    run_01_pressure_1.20e-04_frequency_102024.00Hz

```python
# now we extract the information
run = int(filename.split('_')[1])
pressure = float(filename.split('_')[3])
frequency = float(filename.split('_')[5].split('Hz')[0])

run, pressure, frequency
```

    (1, 0.00012, 102024.0)

# save notebook as html

```python
from edaipynb import save_notebook_as_html
save_notebook_as_html('../../html')
```

    ../../html/1) Notebook_and_Python_1-0-1.html saved
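As a side note, the index-based `split('_')` parsing in the run/pressure/frequency example above breaks silently if the naming scheme ever changes. A regular expression -- a sketch assuming the same `run_.._pressure_.._frequency_..Hz` pattern -- makes the intent explicit:

```python
import re

filename = 'run_01_pressure_1.20e-04_frequency_102024.00Hz'

# Named groups document which part of the name carries which quantity
pattern = r'run_(?P<run>\d+)_pressure_(?P<p>[0-9.eE+-]+)_frequency_(?P<f>[0-9.]+)Hz'
m = re.match(pattern, filename)
run = int(m.group('run'))
pressure = float(m.group('p'))
frequency = float(m.group('f'))

print(run, pressure, frequency)  # 1 0.00012 102024.0
```

If the filename does not match, `re.match` returns `None`, so a malformed name fails loudly instead of producing wrong numbers.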
# Class 4

Note: The notes that follow are largely those of Mark Krumholz (ANU) who led the Bootcamp last in 2015. You can find the 2015 lectures [here](https://sites.google.com/a/ucsc.edu/krumholz/teaching-and-courses/python-15)

```python
# These are to display images in-line
from IPython.display import Image
from IPython.core.display import HTML

# Imports
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```

# Files: The Basics

## Writing text files

Thus far we have mostly dealt with inputs and outputs by keyboard and screen, with the exception of using files to define functions and to write out plots. However, for the vast majority of scientific (and other) applications we want to be able to handle files. Handling files allows us to read in and process data in an automated fashion, and to write out results in a form that is easy to save, and is suitable for something too complex to just print to the screen. Thus half of today's class is devoted to file handling.

Files are opened with the open() command, which works as follows:

```python
f = open('testfile.txt', 'w')
```

The first argument is just the name of the file to be opened. The second argument gives the mode in which it is to be opened. We'll learn about more modes as we go, but a quick summary is that files can be opened for writing, reading, appending, or both reading and writing. A file that is opened for writing is created as an empty file; if a file of the same name already exists, it is automatically deleted.
Append mode is the same, except that material is added to the end of the existing file instead of replacing it. Files that are opened for writing or appending can only be written, not read as inputs. Conversely, files that are opened for reading can be read in, but cannot be altered. Files that are opened for both reading and writing can be interacted with in both ways.

Once a file has been opened, we can write text into it by invoking the write function that is associated with that file:

```python
f.write('Output written to a file\n')
a = 2
b = 3.0
f.write(str(a)+' '+str(b)+'\n')
```

    6

There are two subtle things to notice here. First, we've ended the lines we're writing out with '\n'. The \n is the code for a line break. If we don't include that, the output will all be on a single line. By default the write command does not add a line break after writing something, so if you want line breaks, you need to put them in yourself. Second, the write command takes a string as an argument, and only a string: in this example, f.write(a) would produce an error, because a is an integer not a string. To write it out, we must turn a into a string first by doing str(a).

Once we're done with the file, we can close it by invoking the close() method:

```python
f.close()
```

Now let's look at the file from the command line using the more command:

```python
# more is a bash (terminal) command; using the sign ! allows you to write bash commands in the notebook (try !ls)
!more testfile.txt
```

    Output written to a file
    2 3.0

We've successfully written to a file.

## String formatting

When writing numerical output to a file, it is often helpful to have a bit more control about how numbers are converted into text. This capability is provided by the format command.
Here's a very simple example of how string formatting works:

```python
s = "Here is an integer: {}; here is a float: {}".format(a, b)
print(s)
```

    Here is an integer: 2; here is a float: 3.0

The basic usage of the format command is as follows: we put a string in quotes, which contains one or more pairs of curly brackets {}. At the end of the string, we put .format(arg1, arg2, arg3, ...) where arg1, arg2, arg3, ... are some number of arguments that are to be formatted. These arguments will be matched up with the curly brackets, which are used to specify where the data are to be placed in the output. Each of these arguments will be converted to a string and inserted in the appropriate location. In this example, the variable a was converted to the string "2" and the variable b was converted to the string "3.0". The resulting string is returned, and may be assigned to a variable, printed on the screen, or given to the write command, as we like.

The power of the format command comes in the fact that we can put codes inside the brackets to control how the numerical values are converted to strings. For example:

```python
print("Here is an integer: {:5}; here is a float: {:5.3f}".format(a, b))
```

    Here is an integer:     2; here is a float: 3.000

Let's break down what appeared inside the brackets. First, there is a colon, which says that we are now going to specify a format. Then in both cases we put the number 5. This specifies that the number is to be converted into a space at least 5 characters wide, padding it with spaces if necessary. Then for the second argument, we had ".3f".
The period followed by a number and f says that we want a floating point number to be printed out using a certain minimum number of decimal places, padding with zeros if necessary.

Some other options:

```python
print("Here is an integer: {:05}; here is a float: {:^10.2e}".format(a, b))
```

    Here is an integer: 00002; here is a float:  3.00e+00 

In the first location, we put 05 instead of 5. The leading 0 says that the extra spaces are to be filled with zeros instead of spaces. In the second argument, the ^10 says that the string is to be 10 spaces wide, and that the results are to be centered within it, as opposed to right-justified (which is the default). The ".2e" says that we want two decimal places, and that we want the output to be printed out in exponential (scientific) notation instead of floating point notation.

Numerous other options for string formatting exist: see http://docs.python.org/2.7/library/string.html#formatspec.

Let's use this capability to print out some data describing a sin function:

```python
x = np.arange(0, 2*np.pi, 0.01)
f = open('sin.txt', 'w')
for val in x:
    f.write("{:.4f} {:7.4f}\n".format(val, np.sin(val)))
f.close()
```

Now we can look at the file from the command line; since we have created a large file, we will only print the first 20 lines using the bash command head:

```python
!head -20 sin.txt
```

    0.0000  0.0000
    0.0100  0.0100
    0.0200  0.0200
    0.0300  0.0300
    0.0400  0.0400
    0.0500  0.0500
    0.0600  0.0600
    0.0700  0.0699
    0.0800  0.0799
    0.0900  0.0899
    0.1000  0.0998
    0.1100  0.1098
    0.1200  0.1197
    0.1300  0.1296
    0.1400  0.1395
    0.1500  0.1494
    0.1600  0.1593
    0.1700  0.1692
    0.1800  0.1790
    0.1900  0.1889

We've printed out the sin function to four digits of precision in both the x and y values.
For the y values, we specified that the column was to be 7 spaces wide, so that the decimal points would line up.

## Reading text files

Next we come to the inverse operation: reading in a text file. Let's open up the file we just wrote in read mode:

```python
f = open('sin.txt', 'r')
```

We can read a single line from the file in as a string with the readline() command:

```python
f.readline()
```

    '0.0000  0.0000\n'

```python
f.readline()
```

    '0.0100  0.0100\n'

```python
f.readline()
```

    '0.0200  0.0200\n'

Note that each call to readline() automatically moves the pointer in the file forward, so that the next invocation of readline() reads in the next line. If one reaches the end of the file, then calling readline() just produces an empty string.

(Note to those familiar with languages like C++: there is no EOF-checking function in python, and reading beyond the end of a file does not produce an error. Python's general philosophy about things like this is "It is better to ask forgiveness than permission", meaning that we don't handle things like the ends of files by testing for them, we handle them by trying the operation we want to perform, and seeing if it succeeds.)

If you want to go back to the beginning, you can use the seek() function:

```python
f.seek(0)
```

    0

The argument is the offset from the start of the file, measured in bytes. Thus a value of 0 points back to the start of the file.

One can also read the file contents in a couple of other ways. The function readlines(), as opposed to readline(), reads all the lines of the file into a list, with each item in the list corresponding to a separate line.
For example:

```python
contents = f.readlines()
contents[0]
```

    '0.0000  0.0000\n'

```python
contents[1]
```

    '0.0100  0.0100\n'

```python
contents[2]
```

    '0.0200  0.0200\n'

```python
len(contents)
```

    629

One can also iterate over a file using a for loop, which reads in the lines one by one. Here's an example of using this approach to read our sin function back into a pair of x and y arrays that we can then plot:

```python
f.seek(0)           # Back to the beginning
xinput = []         # Create an empty list to receive the x values
yinput = []         # Ditto for the y values
for line in f:      # Read the file
    spl = line.split()                # Break up the line into two parts
    xinput.append(float(spl[0]))      # Add first part to xinput list
    yinput.append(float(spl[1]))      # Add second part to yinput list

f.close()                   # We're done with this file
xinput = np.array(xinput)   # Convert x list to array
yinput = np.array(yinput)   # Same for y list
plt.plot(xinput, yinput)    # Plot
```

Let's break down this block of code to understand what it does. The first line just returns us to the start of the file. The next two lines create empty lists into which we will place the x and y values we're reading. The last three lines convert the x and y lists into arrays and plot them.

The fourth line is the meat of the operation. The construct "for line in f:" iterates over the file just like we can iterate over a list, with each line of the file acting like an element of the list. (We would have gotten exactly the same result by doing "for line in contents:", but the construct "for line in f:" is generally preferable, because it will work even for files that are too large to hold the entire thing in memory at once.) Inside the loop, we use the string function split() to break the string up into two pieces separated by whitespace, and then we append those two pieces to the x and y lists, respectively.
When we're done, we have all the lines of the file read into those lists.

Reading text files is a common enough operation that, not surprisingly, people have written code to perform operations of this sort before. One particularly flexible bit of code for reading in tables of data stored in text form is the ASCII table reader that is included in the [astropy](http://www.astropy.org/) package of python routines for astronomy. To read the data using astropy, we can do the following:

```python
from astropy.io import ascii

data = ascii.read('sin.txt')
type(data)
```

    astropy.table.table.Table

The first line imports the ascii package from astropy.io (the input/output part of astropy). The second reads the data table. Astropy is smart enough to guess many common formats, and has no trouble with the simple one we've constructed. The resulting object is stored in a variable called data, and we can see that it is of type Table.

In a table like ours without any headers, the columns of the table are by default named col1, col2, etc. We can access these from the data object using an interface like a dict, where we give the name of the column. The data can then be plotted. For example, we can do

```python
plt.clf()
plt.plot(data['col1'], data['col2'])
```

## Binary files: writing

The files we've dealt with so far are text files, consisting of sets of characters. While text files have the virtue of being mostly human-readable, text format is very inefficient in terms of the number of bytes required to store data. For example, let's think about a standard python floating point value, which on most modern architectures is 64 bits (= 8 bytes) long. How much space does it take to store that information as text?

For the way floats are stored, 64 bits translates to 15 decimal digits of precision and 3 digits in the base-10 exponent.
For example, on my laptop, the largest representable float (obtained by typing sys.float_info at the python prompt, for those interested) is 1.7976931348623157e+308. (For the astute, notice that this is 16 decimal places; the last one is not precise because we cannot accurately place arbitrary values in it.) Thus in text format, to represent an arbitrary floating point number in exponential notation we need something of the form X.XXXXXXXXXXXXXXXe+XXX. This is 22 characters, and each ASCII text character requires 1 byte to represent it, so representing this 8 byte number in ASCII requires 22 bytes of storage -- and that's not counting the extra spaces, carriage returns, or other formatting stuff we might need to render a list of such numbers legible.

Thus storing floats as ASCII requires a minimum of 2.75 times as much space as the actual data itself. Clearly this is a waste of disk space, and will also unnecessarily slow down file reading, transfer, etc. For this reason, data sets of any substantial size are always stored as binary rather than text data.

Python allows one to handle binary as well as text data. To deal with binary data, when opening a file one must specify that it is to be opened in binary mode. The main difference between binary and text mode is that, in binary mode, one can write data other than strings, and it will not be converted to any sort of string representation before it is written. (There are other, subtle, platform-dependent differences as well, which we will not get into.) To open a file in binary mode, just append a "b" after the "r", "w", or "a" specifying the read/write/append mode.

As an example, we can write out our sin function data to a binary file as follows:

```python
f = open('sin.dat', 'wb')
f.write(x)
f.write(np.sin(x))
f.close()
```

These commands will create a binary file called sin.dat, which contains the values stored in the array x, followed by the values for sin(x).
We can't look at this data directly using the more command, because it's not stored in a format that is human-readable, but we can verify that the right amount of data is there by doing the following at the command line (note that this assumes mac or unix):

```python
!ls -l sin.dat
```

    -rw-r--r-- 1 bruno bruno 10064 Jan 12 17:06 sin.dat

The number 10064 is the number of bytes in the file, and you can very quickly verify that this is 629 * 2 * 8 -- here 629 is the number of elements in x, 2 is for two arrays (x and sin(x)), and 8 is the number of bytes for each float. So the file is the right size at least. If you compare this to the size of the text file sin.txt, you will notice that sin.txt is actually almost exactly the same size (about 5% smaller), but this is just because we only wrote out the text data to 4 decimal places, and thereby threw away a huge amount of information. The binary data file contains 15 digits of precision, not 4, in the same amount of disk space.

## Binary files: reading

The downside to the compactness of binary format is that reading the data back in is considerably more complex. The problem is that a binary file is nothing but a string of bytes. There's no formatting information specifying whether they are floats, integers, or something else. Thus you have to know how to interpret them in order to turn them into something useful. In python, this procedure is done in two steps.

The first step is to read in the raw data using the read() command. Let's read in the array of x values for our example file:

```python
f = open('sin.dat', 'rb')
xraw = f.read(629*8)
type(xraw)
```

    bytes

Note that the read command takes as an argument the number of bytes to read in; if one omits this argument, the entire file is read. In this example, we know that the file contains two arrays of 629 numbers each, corresponding to the array of x values, and then the array of sin(x) values.
Thus 629*8 is the number of bytes used to store the x array, and the command f.read(629*8) reads in this data.

The data are placed into the variable xraw. This variable is a bytes object, which is really just a representation of the stream of bits that encodes our array. To turn it into a set of floating point numbers that we can actually use, we need to tell python explicitly how to interpret this stream of bits. Python provides a module called struct for performing this sort of operation. Here's how we can use it:

```python
import struct

xin = np.array(struct.unpack('d'*629, xraw))
```

After importing the struct module, we call the function it provides called unpack(). The unpack() function converts streams of raw bits like the one we just read into numbers. The second argument to unpack() is the stream of bits to be converted, while the first argument is a string specifying how they are to be interpreted. In this example, the letter d means that they are to be interpreted as double precision numbers (meaning 64-bit floats; this is called double precision for [historical reasons](http://en.wikipedia.org/wiki/Double-precision_floating-point_format) having to do with computer architecture), while the *629 makes a string of 629 d's, since there are 629 such numbers. Finally, the array() command says to take these 629 numbers, which come out as a tuple, and convert them to a numpy array suitable for plotting.

To finish up, we can do the same for the sin(x) array, and then plot:

```python
yraw = f.read(629*8)
f.close()
yin = np.array(struct.unpack('d'*629, yraw))
plt.clf()
plt.plot(xin, yin)
```

The result is the same as when we stored the data in text format (or at least indistinguishable by eye -- the text data is much less precise because we only stored 4 decimal places, but the difference is too small to see):

# Specialized File Formats

The choice between binary and text files is somewhat unappetizing.
Text files are easy to read by humans, but are impractically large and inefficient for substantial amounts of data. Binary files are compact, but you have to know exactly what they contain to get anything useful out of them. Is there an alternative? Yes!\n\nThis is what specialized file formats are for. They are file formats that consist (usually) of all or mostly binary data, but in an agreed-upon format that is self-documenting, meaning that, if you know the format, the file itself contains enough information to interpret the rest of it. A simple example of self-documentation would be if we started our example binary file by writing out the number of elements in the x array, and then writing the data, so that we wouldn't have to know in advance how long the arrays of x and sin(x) values are. This would only add a very small amount to the file size, but would make the data much more useful.\n\nThe number of file formats in the world is immense, but there are a few that are particularly useful for astronomical and/or python applications, which we'll discuss briefly.\n\n## Numpy array files\n\nThe numpy library provides a standard tool for saving the contents of numpy arrays, and reading them back in: the save() and load() functions. Usage is very simple. To save an array, just do\n\n\n```python\nnp.save( 'x.npy', x )\nnp.save( 'sinx.npy', np.sin(x) )\n```\n\nThe first argument is the file name to save to. If you don't type in the extension .npy manually, one will be added for you. The second argument is the array to save.\n\nThen to read the arrays back in, just do\n\n\n```python\nxin = np.load( 'x.npy' )\nsinxin = np.load( 'sinx.npy' )\n```\n\nThat's it.\n\n## Pickling and pickle files\n\nThe numpy array format is fine for saving numpy arrays, but suppose that we want to save something else. Is there a way to do that? Yes!\n\nPython provides a technique called pickling.
The [pickle](http://docs.python.org/2/library/pickle.html) module provides tools to take an arbitrary variable or object in python, turn it into a stream of bytes that are then saved to a file, and reconstruct the object from a file. The procedure of taking an object and turning it into a file that can be saved is called pickling, and the procedure of unpacking the file to get back a python object that can be manipulated in a python program or session is called unpickling. Any built-in python variable can be pickled and unpickled, as can most user-defined objects, aside from some subtle complications.\n\nHere's an example. Suppose we make a dict that contains an array of x values and then its sine, cosine, and tangent:\n\n\n```python\ntrigfuncs = {'x': x, 'sin': np.sin(x), 'cos': np.cos(x), 'tan': np.tan(x)}\n```\n\nWe can save this object using pickle as follows:\n\n\n\n```python\nimport pickle\n\nf = open('trigfuncs.pkl', 'wb')\npickle.dump(trigfuncs, f)\nf.close()\n```\n\nThe first command here imports the pickle module. The second opens a file for writing in binary format; the standard extension for pickle files is .pkl. The third line invokes the pickle.dump() method, which writes the data to the file. The first argument is the data to be written, and the second is the file where it is to go. The final line closes the file.\n\nTo reconstruct the object, we just open the file again, and read it in using the pickle.load() command:\n\n\n```python\nf = open('trigfuncs.pkl', 'rb')\ndata = pickle.load(f)\nf.close()\n```\n\nThe first command opens the file, the second loads its contents into an object called data, and the third closes the file.
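As an aside, the same machinery works entirely in memory: pickle.dumps() returns the byte string directly, and pickle.loads() reconstructs the object from it, which is handy for quick round-trip checks without touching the disk. A minimal sketch (the array contents here are purely illustrative):

```python
import pickle

import numpy as np

x = np.linspace(0, 2 * np.pi, 10)
trigfuncs = {'x': x, 'sin': np.sin(x), 'cos': np.cos(x)}

# dumps()/loads() mirror dump()/load() but operate on byte strings,
# so no intermediate file is needed.
blob = pickle.dumps(trigfuncs)
restored = pickle.loads(blob)

assert set(restored) == {'x', 'sin', 'cos'}
assert np.allclose(restored['sin'], np.sin(x))
```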
We can readily verify that the object we've created, called data, is a dict that contains the same information as the original object trigfuncs:\n\n\n```python\ntype(data)\n```\n\n\n\n\n dict\n\n\n\n\n```python\nplt.plot( data['x'], data['cos'] )\n```\n\n## FITS image files\n\nA second very common format in astronomical applications is FITS, which stands for Flexible Image Transport System. FITS is a file format that was originally designed to store astronomical images. It differs from most standard image formats in that it was set up to store, in addition to the image itself, a lot of metadata describing things like what instrument the image came from, how that instrument was configured, where in the sky it was pointed, etc. FITS is supported these days by the Goddard Space Flight Center (GSFC); more information about FITS can be found at http://fits.gsfc.nasa.gov/.\n\nTo experiment with FITS data, we need some to play with. You can download an example file [here](https://sites.google.com/a/ucsc.edu/krumholz/teaching-and-courses/python14/class-4/w0ck0101t_c0h.fit?attredirects=0&d=1). This particular file is an example from the GSFC website, at http://fits.gsfc.nasa.gov/fits_nraodata.html.\n\nOnce we've got the file downloaded and moved into the directory where we're running python, we're ready to look at it. The first step is to load the required libraries, which are part of the [astropy](http://www.astropy.org/) package of python routines for astronomy:\n\n\n```python\nfrom astropy.io import fits\n```\n\nThen we open the data file.\n\n\n```python\nhdulist = fits.open('w0ck0101t_c0h.fit')\n```\n\nThe open command returns a list of HDUs, short for header data units. An HDU is a block of data. The most common format for FITS files, which this example follows, is that there are two HDUs, one containing an image, and one containing metadata describing that image. 
To see a summary of what the HDUs contain, we can do\n\n\n```python\nhdulist.info()\n```\n\n Filename: w0ck0101t_c0h.fit\n No. Name Ver Type Cards Dimensions Format\n 0 PRIMARY 1 PrimaryHDU 196 (800, 800, 4) int16 (rescales to float32) \n 1 w0ck0101t_cvt.c0h.tab 1 TableHDU 194 4R x 37C [D25.16, D25.16, D25.16, E15.7, E15.7, E15.7, E15.7, E15.7, E15.7, E15.7, E15.7, I11, E15.7, I11, I11, A24, A24, A8, A8, A8, I11, E15.7, E15.7, E15.7, E15.7, I11, I11, I11, I11, I11, I11, I11, A24, E15.7, E15.7, E15.7, E15.7] \n\n\nThis output tells us that there are two HDUs. The first one, called PRIMARY, consists of an array of 800 x 800 x 4 floating point numbers. The type of HDU is PrimaryHDU, which is a type that corresponds to an astronomical image, or, in this case, four images. (There are four because this particular data is from an instrument on the Hubble Space Telescope that had four CCD chips, and the data are recorded separately for each one.) The second HDU, called w0ck0101t_cvt.c0h.tab, is of type TableHDU. It is a table with 4 rows and 37 columns; the 194 in the Cards column counts the header entries that describe it.
That's the metadata.\n\nWe can get a look at the metadata by printing out the header information associated with the table:\n\n\n```python\nhdulist[1].header\n```\n\n\n\n\n XTENSION= 'TABLE ' / FITS STANDARD \n BITPIX = 8 / 8-bits per 'pixels' \n NAXIS = 2 / Simple 2-D matrix \n NAXIS1 = 584 / No of characters per row \n NAXIS2 = 4 / The number of rows \n PCOUNT = 0 / No 'random' parameters \n GCOUNT = 1 / Only one group \n TFIELDS = 37 / Number of fields per row \n EXTNAME = 'w0ck0101t_cvt.c0h.tab' / Name of table \n TTYPE1 = 'CRVAL1 ' / \n TBCOL1 = 1 / \n TFORM1 = 'D25.16 ' / %25.16g \n TUNIT1 = ' ' / \n TTYPE2 = 'CRVAL2 ' / \n TBCOL2 = 27 / \n TFORM2 = 'D25.16 ' / %25.16g \n TUNIT2 = ' ' / \n TTYPE3 = 'CRVAL3 ' / \n TBCOL3 = 53 / \n TFORM3 = 'D25.16 ' / %25.16g \n TUNIT3 = ' ' / \n TTYPE4 = 'CRPIX1 ' / \n TBCOL4 = 79 / \n TFORM4 = 'E15.7 ' / %15.7g \n TUNIT4 = ' ' / \n TTYPE5 = 'CRPIX2 ' / \n TBCOL5 = 95 / \n TFORM5 = 'E15.7 ' / %15.7g \n TUNIT5 = ' ' / \n TTYPE6 = 'CD1_1 ' / \n TBCOL6 = 111 / \n TFORM6 = 'E15.7 ' / %15.7g \n TUNIT6 = ' ' / \n TTYPE7 = 'CD1_2 ' / \n TBCOL7 = 127 / \n TFORM7 = 'E15.7 ' / %15.7g \n TUNIT7 = ' ' / \n TTYPE8 = 'CD2_1 ' / \n TBCOL8 = 143 / \n TFORM8 = 'E15.7 ' / %15.7g \n TUNIT8 = ' ' / \n TTYPE9 = 'CD2_2 ' / \n TBCOL9 = 159 / \n TFORM9 = 'E15.7 ' / %15.7g \n TUNIT9 = ' ' / \n TTYPE10 = 'DATAMIN ' / \n TBCOL10 = 175 / \n TFORM10 = 'E15.7 ' / %15.7g \n TUNIT10 = ' ' / \n TTYPE11 = 'DATAMAX ' / \n TBCOL11 = 191 / \n TFORM11 = 'E15.7 ' / %15.7g \n TUNIT11 = ' ' / \n TTYPE12 = 'MIR_REVR' / \n TBCOL12 = 207 / \n TFORM12 = 'I11 ' / %11d \n TUNIT12 = ' ' / \n TTYPE13 = 'ORIENTAT' / \n TBCOL13 = 219 / \n TFORM13 = 'E15.7 ' / %15.7g \n TUNIT13 = ' ' / \n TTYPE14 = 'FILLCNT ' / \n TBCOL14 = 235 / \n TFORM14 = 'I11 ' / %11d \n TUNIT14 = ' ' / \n TTYPE15 = 'ERRCNT ' / \n TBCOL15 = 247 / \n TFORM15 = 'I11 ' / %11d \n TUNIT15 = ' ' / \n TTYPE16 = 'FPKTTIME' / \n TBCOL16 = 259 / \n TFORM16 = 'A24 ' / %-24s \n TUNIT16 = 'CH*24 ' / \n TTYPE17 = 
'LPKTTIME' / \n TBCOL17 = 284 / \n TFORM17 = 'A24 ' / %-24s \n TUNIT17 = 'CH*24 ' / \n TTYPE18 = 'CTYPE1 ' / \n TBCOL18 = 309 / \n TFORM18 = 'A8 ' / %-8s \n TUNIT18 = 'CH*8 ' / \n TTYPE19 = 'CTYPE2 ' / \n TBCOL19 = 318 / \n TFORM19 = 'A8 ' / %-8s \n TUNIT19 = 'CH*8 ' / \n TTYPE20 = 'CTYPE3 ' / \n TBCOL20 = 327 / \n TFORM20 = 'A8 ' / %-8s \n TUNIT20 = 'CH*8 ' / \n TTYPE21 = 'DETECTOR' / \n TBCOL21 = 336 / \n TFORM21 = 'I11 ' / %11d \n TUNIT21 = ' ' / \n TTYPE22 = 'DEZERO ' / \n TBCOL22 = 348 / \n TFORM22 = 'E15.7 ' / %15.7g \n TUNIT22 = ' ' / \n TTYPE23 = 'GOODMIN ' / \n TBCOL23 = 364 / \n TFORM23 = 'E15.7 ' / %15.7g \n TUNIT23 = ' ' / \n TTYPE24 = 'GOODMAX ' / \n TBCOL24 = 380 / \n TFORM24 = 'E15.7 ' / %15.7g \n TUNIT24 = ' ' / \n TTYPE25 = 'DATAMEAN' / \n TBCOL25 = 396 / \n TFORM25 = 'E15.7 ' / %15.7g \n TUNIT25 = ' ' / \n TTYPE26 = 'GPIXELS ' / \n TBCOL26 = 412 / \n TFORM26 = 'I11 ' / %11d \n TUNIT26 = ' ' / \n TTYPE27 = 'SOFTERRS' / \n TBCOL27 = 424 / \n TFORM27 = 'I11 ' / %11d \n TUNIT27 = ' ' / \n TTYPE28 = 'CALIBDEF' / \n TBCOL28 = 436 / \n TFORM28 = 'I11 ' / %11d \n TUNIT28 = ' ' / \n TTYPE29 = 'STATICD ' / \n TBCOL29 = 448 / \n TFORM29 = 'I11 ' / %11d \n TUNIT29 = ' ' / \n TTYPE30 = 'ATODSAT ' / \n TBCOL30 = 460 / \n TFORM30 = 'I11 ' / %11d \n TUNIT30 = ' ' / \n TTYPE31 = 'DATALOST' / \n TBCOL31 = 472 / \n TFORM31 = 'I11 ' / %11d \n TUNIT31 = ' ' / \n TTYPE32 = 'BADPIXEL' / \n TBCOL32 = 484 / \n TFORM32 = 'I11 ' / %11d \n TUNIT32 = ' ' / \n TTYPE33 = 'PHOTMODE' / \n TBCOL33 = 496 / \n TFORM33 = 'A24 ' / %-24s \n TUNIT33 = 'CH*24 ' / \n TTYPE34 = 'PHOTFLAM' / \n TBCOL34 = 521 / \n TFORM34 = 'E15.7 ' / %15.7g \n TUNIT34 = ' ' / \n TTYPE35 = 'PHOTZPT ' / \n TBCOL35 = 537 / \n TFORM35 = 'E15.7 ' / %15.7g \n TUNIT35 = ' ' / \n TTYPE36 = 'PHOTPLAM' / \n TBCOL36 = 553 / \n TFORM36 = 'E15.7 ' / %15.7g \n TUNIT36 = ' ' / \n TTYPE37 = 'PHOTBW ' / \n TBCOL37 = 569 / \n TFORM37 = 'E15.7 ' / %15.7g \n TUNIT37 = ' ' / \n CRVAL1 = ' right ascension of reference pixel' / 
\n CRVAL2 = ' declination of reference pixel' / \n CRVAL3 = ' first packet time' / \n CRPIX1 = ' x-coordinate of reference pixel' / \n CRPIX2 = ' y-coordinate of reference pixel' / \n CD1_1 = ' partial of the right ascension w.r.t. x' / \n CD1_2 = ' partial of the right ascension w.r.t. y' / \n CD2_1 = ' partial of the declination w.r.t. x' / \n CD2_2 = ' partial of the declination w.r.t. y' / \n DATAMIN = ' minimum value of the data' / \n DATAMAX = ' maximum value of the data' / \n MIR_REVR= ' is the image mirror reversed?' / \n ORIENTAT= ' orientation of the image in degrees' / \n FILLCNT = ' number of segments containing fill' / \n ERRCNT = ' number of segments containing errors' / \n FPKTTIME= ' time of the first packet' / \n LPKTTIME= ' time of the last packet' / \n CTYPE1 = ' first coordinate type' / \n CTYPE2 = ' second coordinate type' / \n CTYPE3 = ' third coordinate type' / \n DETECTOR= ' CCD detector: WFC 1-4, PC 5-8' / \n DEZERO = ' Bias level from EED extended register' / \n GOODMIN = ' minumum value of the \"good\" pixels' / \n GOODMAX = ' maximum value of the \"good\" pixels' / \n DATAMEAN= ' mean value of the \"good\" pixels' / \n GPIXELS = ' number of \"good\" pixels (DQF=0)' / \n SOFTERRS= ' number of \"soft error\" pixels (DQF=1)' / \n CALIBDEF= ' number of \"calibration defect\" pixels (DQF=2)' / \n STATICD = ' number of \"static defect\" pixels (DQF=4)' / \n ATODSAT = ' number of \"AtoD saturated\" pixels (DQF=8)' / \n DATALOST= ' number of \"data lost\" pixels (DQF=16)' / \n BADPIXEL= ' number of \"generic bad\" pixels (DQF=32)' / \n PHOTMODE= ' Photometry mode' / \n PHOTFLAM= ' Inverse Sensitivity' / \n PHOTZPT = ' Zero point' / \n PHOTPLAM= ' Pivot wavelength' / \n PHOTBW = ' RMS bandwidth of the filter' / \n\n\n\nThe output here tells us about the entries in the table. We can access them using the attribute data, and asking about particular data values. 
For example, two of the entries are CRVAL1 and CRVAL2, which give the right ascension and declination (celestial coordinates) of a reference pixel. We can print those out as follows:\n\n\n```python\nhdulist[1].data['CRVAL1']\n```\n\n\n\n\n array([291.02797853, 291.02797853, 291.02797853, 291.02797853])\n\n\n\n\n```python\nhdulist[1].data['CRVAL2']\n```\n\n\n\n\n array([-22.01122139, -22.01122139, -22.01122139, -22.01122139])\n\n\n\nThere are four entries because there are four images, though the reference pixel positions are the same for all of them in this example. This data says that the right ascension of the reference pixel is 291.02797853 degrees, and the declination is -22.01122139 degrees.\n\nThe image itself is stored in the first HDU, and we can use the imshow command to display it. Let's do that, using a logarithmic color scale:\n\n\n```python\nplt.clf()\nplt.imshow( np.log(hdulist[0].data[0,:,:]) )\n```\n\nNote that the [0,:,:] is to specify that we want to see the first of the four images. Also notice the warning; that occurred because some of the pixel values are negative due to instrument noise, and taking the log of a negative number is of course undefined. However, we don't have to worry about that.\n\nThis particular example is an image of Saturn, one of the first ever made by HST.\n\nWhen we're done with the FITS file, we close it via\n\n\n```python\nhdulist.close()\n```\n\n# Basic Statistics\n \n\n## Descriptive statistics\n\nWe now move to the second topic of today's class. Now that we know how to get data in and out of a python session, what can we do with it? We can do many things of course, but one of the most basic is to perform some statistical analysis of it.\n\nLet's start by making ourselves some data to play with.
Since we want this to be like real data, we'll be sure to add some noise to it.\n\n\n```python\nnoise = np.random.normal( scale=0.25, size=len(x) )\ny = np.sin(x) + noise\nplt.clf()\nplt.plot( x, y, 'o' )\n```\n\nTo make the noise, we used the numpy function random.normal(). This picks random numbers from a normal (Gaussian) distribution with a dispersion sigma given by the keyword scale. The keyword size specifies how many random values to pick, and in this case we set it to as many elements as there were in x. Then we made a y value by setting it equal to sin(x) plus the noise. \n\nNote that there are other numpy random routines that will generate random numbers with different distributions.\n\nOnce we have this data, there are numerous numpy routines that we can use to analyze it. The functions below all come from the [numpy statistics package](http://docs.scipy.org/doc/numpy/reference/routines.statistics.html).\n\nWe've already encountered the amin and amax routines, which find the minimum and maximum value. We can also find the mean, median, and standard deviation just as easily:\n\n\n```python\nnp.mean(y)\n```\n\n\n\n\n 2.2357252709125184e-05\n\n\n\n\n```python\nnp.median(y)\n```\n\n\n\n\n -0.013349835524341103\n\n\n\n\n```python\nnp.std(y)\n```\n\n\n\n\n 0.7653375596903697\n\n\n\nWe can also find percentiles, i.e., the value below which p percent of the sample lies. For example,\n\n\n```python\nnp.percentile( y, [25,50,70] )\n```\n\n\n\n\n array([-0.64566341, -0.01334984, 0.57541746])\n\n\n\nThis gives the value below which 25%, 50%, and 70% of the values in y lie. Note that the 50th percentile is the same as the median.\n\nFinally, we can put the data in bins and count occurrences.
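The claim that the 50th percentile coincides with the median is easy to verify on data where every percentile is known exactly:

```python
import numpy as np

# For the integers 0..100, the p-th percentile is simply p
# (with numpy's default linear interpolation).
y = np.arange(101)

assert np.percentile(y, 50) == np.median(y)
assert list(np.percentile(y, [25, 50, 70])) == [25.0, 50.0, 70.0]
```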
The command to do this is called [histogram()](http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html), and it works as follows:\n\n\n\n```python\nhist, edges = np.histogram( y, range=(-1.5, 1.5), bins=30 )\n```\n\n\n```python\nedges\n```\n\n\n\n\n array([-1.5, -1.4, -1.3, -1.2, -1.1, -1. , -0.9, -0.8, -0.7, -0.6, -0.5,\n -0.4, -0.3, -0.2, -0.1, 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6,\n 0.7, 0.8, 0.9, 1. , 1.1, 1.2, 1.3, 1.4, 1.5])\n\n\n\n\n```python\nhist\n```\n\n\n\n\n array([ 4, 8, 16, 20, 22, 18, 22, 30, 36, 31, 29, 23, 22, 19, 20, 20, 15,\n 20, 18, 22, 32, 30, 30, 26, 40, 16, 14, 11, 10, 3])\n\n\n\nThe histogram command takes as input the data to be histogrammed, and then keywords specifying the range for the bins (in this case -1.5 to 1.5) and the number of bins (in this case 30, so that each bin will be 0.1 wide). One can also specify the edges of the bins manually. The output is a tuple of two items. The first is the histogram itself, which we've called hist, and the second is the values of the bin edges, which we've called edges. Upon return, hist contains the number of array elements in each bin.\n\nWe can plot this data as a bar plot, using the bar command in matplotlib.\n\n\n\n```python\nplt.clf()\nplt.bar( edges[:-1], hist, width=0.1 )\n```\n\n## Curve fitting\n\nWhat we've been doing thus far is descriptive statistics: taking a set of data and calculating various quantities that describe it. The final topic for today is model fitting: using a data set to derive a model for how the underlying phenomenon behaves. The simplest application of this is to fit curves through data. Before discussing how to do this in python, we must review the basic theory of curve fitting first.\n\nCurve fitting is a problem of finding the minimum of a function. Suppose that we have a set of measured data points (x1, y1), (x2, y2), (x3, y3), .... The data are not perfect. They consist of a measurement plus some amount of noise. 
We have some model of the data, which we can describe by a function y(x; p), where p is a set of one or more parameters describing the model. For example, we might wish to measure the strength of the surface gravitational field on a planet. We know that a point mass dropped in a constant gravitational field in vacuum will fall a distance\n\n\\begin{equation}\nd = \\frac{1}{2} g t^2\n\\end{equation}\n\nin a time $t$. To measure $g$, we can drop an object and record the distance $d$ it has travelled at a variety of times $t$. Given these measurements, we want to find the value of $g$ that we should infer from the data; here $g$ is the parameter to be fit given the ($t$, $d$) data.\n\nUnder fairly general assumptions about the nature of the error in the measurements, one can show that the \"best\" fit is given by the value of $g$ that minimizes the squared distance between the model and the data. That is, we can define a function for the error by\n\n\\begin{equation}\ne^2(g) = \\sum \\left[ \\frac{1}{2} g t_i^2 - d_i \\right]^2\n\\end{equation}\n\nwhere the sum runs over all the measurements $i$. The best value of $g$ is the one for which $e^2(g)$ reaches its minimum value.\n\nMore generally, for a set of measurements (x1, y1), (x2, y2), (x3, y3), ....
and a model y(x; **p**), we wish to find the set of values **p** that minimizes the multi-dimensional function\n\n\\begin{equation}\ne^2(\\mathbf{p}) = \\sum \\left[ y_i - y(x_i; \\mathbf{p}) \\right]^2.\n\\end{equation}\n\n\nIf the data points do not all have the same error, so that some are more reliable than others, this can be generalized to minimizing\n\n\\begin{equation}\ne^2(\\mathbf{p}) = \\sum \\frac{ \\left[ y_i - y(x_i; \\mathbf{p}) \\right]^2 }{ \\sigma_i^2},\n\\end{equation}\n\nwhere $\\sigma_i$ is the error on data point $i$.\n\nIn the case where $\\mathbf{p}$ is only a single parameter, this problem is fairly straightforward, although for sufficiently complex models it can still be tricky as the function may have more than one minimum, and our goal is to find the global minimum. In multiple dimensions the problem is considerably trickier, and the problem that there might be multiple local minima that we must sort through to find the global minimum is significantly worse.\n\nThe operation of attempting to find the best-fitting parameter is sufficiently common that numpy and scipy provide routines to do it. The simplest of these fits a particular functional form $y(x; \\mathbf{p})$: a polynomial. That is, a function of the form\n\n\\begin{equation}\ny(x; \\mathbf{p}) = p_0 + p_1 x + p_2 x^2 + p_3 x^3 + ... + p_N x^N.\n\\end{equation}\n\nHere the value $N$ is the degree of the polynomial. The numpy routine that performs this operation is called [polyfit](http://docs.scipy.org/doc/numpy/reference/generated/numpy.polyfit.html). Here is an example of how it can be used:\n\n\n```python\nfit = np.polyfit( x, y, 1 )\nfit\n```\n\n\n\n\n array([-0.31065589, 0.97548184])\n\n\n\nThe first argument is the array of x values, the second is the array of y values, and the third is the degree of the polynomial to be fit. In this case we have fit a first-order polynomial, i.e., a straight line.
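Returning to the free-fall example for a moment: with a single parameter the minimization can be done in closed form, which provides a useful cross-check on any numerical fitter. Setting the derivative of $e^2(g)$ to zero gives $g = 2 \sum_i d_i t_i^2 / \sum_i t_i^4$. A sketch with synthetic data (the value $g = 9.8$, the time grid, and the noise level are made up for illustration):

```python
import numpy as np

# Synthetic free-fall measurements: d = (1/2) g t^2 with g = 9.8,
# plus Gaussian measurement noise on the distances.
rng = np.random.default_rng(0)
t = np.linspace(0.5, 3.0, 20)
d = 0.5 * 9.8 * t**2 + rng.normal(scale=0.1, size=len(t))

# Closed-form least-squares estimate from setting d/dg of e^2(g) to zero.
g_hat = 2.0 * np.sum(d * t**2) / np.sum(t**4)

assert abs(g_hat - 9.8) < 0.1
```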
The output is the array of polynomial coefficients, starting with the highest-order one and proceeding to the lowest. Thus polyfit has returned a best-fitting function\n\n\n```python\nfunction = 'y = {} + {} x '.format( fit[1], fit[0] )\nprint( function )\n```\n\n y = 0.9754818447791317 + -0.3106558877472683 x \n\n\nTo see how this compares to the data, we can do\n\n\n```python\nplt.clf()\nplt.plot( x, y, 'o' )\nplt.plot( x, fit[0]*x + fit[1], lw=5 )\n```\n\nClearly this is not a very good fit, which illustrates an important point: just because you can find a best fit doesn't mean it's a good one.\n\nWe can try a somewhat more complex function, say a 12th-order polynomial, and see if that does better:\n\n\n```python\nfit_2 = np.polyfit( x, y, 12 )\ny_fit = np.zeros( len(x) )\nfor i in range( 13 ):\n y_fit = y_fit + fit_2[i]*x**(12-i) \n\nplt.clf()\nplt.plot( x, y, 'o' ) \nplt.plot( x, y_fit, lw=5 )\n```\n\nClearly this is a better fit.\n\nWe can also fit functional forms other than polynomials. For example, suppose we knew, perhaps based on some theoretical model, that the data we are looking at should follow a sine function, but one with unknown wavelength, phase, amplitude, and offset from zero. That is, we want to fit to a functional form\n\n\\begin{equation}\ny(x; \\mathbf{p}) = p_0 + p_1 \\sin[p_2 (x - p_3)].\n\\end{equation}\n\nThe scipy routine curve_fit provides this capability. We use this routine in two steps. First, we must define the function we want to fit:\n\n\n```python\ndef sinfunc(x, p0, p1, p2, p3):\n return p0 + p1*np.sin(p2*(x-p3))\n```\n\nThis function must take an argument x giving the value of the independent variable, and then some arbitrary number of additional parameters.
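Incidentally, the explicit loop used above to evaluate the 12th-order fit can be replaced by numpy's polyval, which uses the same highest-order-first coefficient convention that polyfit returns:

```python
import numpy as np

coeffs = np.array([2.0, -3.0, 1.0])   # represents 2x^2 - 3x + 1

# polyval evaluates the polynomial at the given points,
# reading coefficients from the highest order down to the constant.
assert np.polyval(coeffs, 0.0) == 1.0
assert np.polyval(coeffs, 1.0) == 0.0   # 2 - 3 + 1
assert np.polyval(coeffs, 2.0) == 3.0   # 8 - 6 + 1
```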
Then we import the curve_fit function from the scipy.optimize module, and call it:\n\n\n```python\nfrom scipy.optimize import curve_fit\n\np, pcov = curve_fit( sinfunc, x, y )\n```\n\nThe curve_fit function returns a tuple of two values, which here we have stored to p and pcov. The first is the set of optimized parameters p. The second is what is called the [covariance matrix](http://en.wikipedia.org/wiki/Covariance_matrix), which is a 2d array that describes the variance in the parameter estimates.\n\n\n```python\np\n```\n\n\n\n\n array([ 2.12779163e-05, 1.02433660e+00, 9.99284812e-01, -1.15036741e-02])\n\n\n\nWe can see $\mathbf{p}$ is a very good fit to what we put in: the offset is almost zero, the amplitude is almost one, the wavelength is almost $2 \pi$ (corresponding to $p_2 = 1$), and the phase shift is almost zero. A plot of the fit looks very good too:\n\n\n```python\nplt.clf()\nplt.plot( x, y, 'o' ) \nplt.plot(x, sinfunc(x, p[0], p[1], p[2], p[3]), lw=5)\n```
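One practical note on curve_fit: multi-parameter sine fits can land in poor local minima, and passing an initial guess via the p0 keyword (a documented curve_fit option) makes convergence much more reliable. A self-contained sketch on synthetic data (the seed and noise level are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def sinfunc(x, a, b, c, d):
    # offset + amplitude * sin(frequency * (x - phase))
    return a + b * np.sin(c * (x - d))

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 629)
y = np.sin(x) + rng.normal(scale=0.25, size=len(x))

# p0= supplies starting values for the four parameters:
# offset, amplitude, angular frequency, phase.
p, pcov = curve_fit(sinfunc, x, y, p0=[0.0, 1.0, 1.0, 0.0])

assert abs(p[1] - 1.0) < 0.1   # amplitude close to the true value 1
assert abs(p[2] - 1.0) < 0.1   # angular frequency close to 1
```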
Probabilistic Programming\n=====\nand Bayesian Methods for Hackers \n========\n\n##### Version 0.1\n\n`Original content created by Cam Davidson-Pilon`\n\n`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`\n___\n\n\nWelcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident.
Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. \n\nThe Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nFor this to be clearer, we consider an alternative interpretation of probability: *Frequentists*, practitioners of the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that across all these realities, the frequency of occurrences defines the probability. \n\nBayesians, on the other hand, have a more intuitive approach.
Bayesians interpret a probability as a measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you that candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result.
\n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. \n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. 
$P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. 
Technically this parameter in the Bayesian function is optional, but we will see that excluding it has its own consequences.


#### Incorporating evidence

As we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like "I expect the sun to explode today", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.


Denote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \rightarrow \infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small-$N$ dataset.

One may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally simpler frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:

> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is "large enough," you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.).
$N$ is never enough because if it were "enough" you'd already be on to the next problem for which you need more data.

### Are frequentist methods incorrect then?

**No.**

Frequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.


#### A note on *Big Data*
Paradoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead in the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask "Do I really have big data?")

The much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big-enough* datasets.


### Our Bayesian framework

We are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.

Secondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability.
Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:

\begin{align}
 P( A | X ) = & \frac{ P(X | A) P(A) } {P(X) } \\[5pt]
& \propto P(X | A) P(A)\;\; (\propto \text{is proportional to })
\end{align}

The above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. Bayesian inference merely uses it to connect the prior probability $P(A)$ with the updated posterior probability $P(A | X )$.

##### Example: Mandatory coin-flip example

Every statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but you have no prior opinion on what $p$ might be.

We begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?

Below we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).


```python
"""
The book uses a custom matplotlibrc file, which provides the unique styles for
matplotlib plots. If executing this book, and you wish to use the book's
styling, provided are two options:
    1. Overwrite your own matplotlibrc file with the rc-file provided in the
       book's styles/ dir. See http://matplotlib.org/users/customizing.html
    2. Also in the styles is bmh_matplotlibrc.json file. This can be used to
       update the styles in only this notebook.
Try running the following code:

    import json
    s = json.load(open("../styles/bmh_matplotlibrc.json"))
    matplotlib.rcParams.update(s)

"""

# The code below can be passed over, as it is currently not important, plus it
# uses advanced topics we have not covered yet. LOOK AT PICTURE, MICHAEL!
%matplotlib inline
from IPython.core.pylabtools import figsize
import numpy as np
from matplotlib import pyplot as plt
figsize(11, 9)

import scipy.stats as stats

dist = stats.beta
n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
data = stats.bernoulli.rvs(0.5, size=n_trials[-1])
x = np.linspace(0, 1, 100)

# For the already prepared, I'm using the Binomial's conjugate prior.
for k, N in enumerate(n_trials):
    sx = plt.subplot(len(n_trials)//2, 2, k+1)  # integer division: subplot counts must be ints
    plt.xlabel("$p$, probability of heads") \
        if k in [0, len(n_trials)-1] else None
    plt.setp(sx.get_yticklabels(), visible=False)
    heads = data[:N].sum()
    y = dist.pdf(x, 1 + heads, 1 + N - heads)
    plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads))
    plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4)
    plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)

    leg = plt.legend()
    leg.get_frame().set_alpha(0.4)
    plt.autoscale(tight=True)


plt.suptitle("Bayesian updating of posterior probabilities",
             y=1.02,
             fontsize=14)

plt.tight_layout()
```

The posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line).

Notice that the plots are not always *peaked* at 0.5. There is no reason they should be: recall we assumed we did not have a prior opinion of what $p$ is.
In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 7 tails and 1 head?). As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.

The next example is a simple demonstration of the mathematics of Bayesian inference.

##### Example: Bug, or just sweet, unintended feature?


Let $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$.

We are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.

What is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for code with no bugs will pass all tests.

$P(X)$ is a little bit trickier: the event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\sim A\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:

\begin{align}
P(X ) & = P(X \text{ and } A) + P(X \text{ and } \sim A) \\[5pt]
 & = P(X|A)P(A) + P(X | \sim A)P(\sim A)\\[5pt]
& = P(X|A)p + P(X | \sim A)(1-p)
\end{align}

We have already computed $P(X|A)$ above. On the other hand, $P(X | \sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability of a bug being present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\sim A) = 0.5$.
Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? \n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...

- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.

- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous outcomes, i.e. they combine the above two categories.

### Discrete Case
If $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability that $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$; that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's start with a very useful one. We say $Z$ is *Poisson*-distributed if:

$$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots $$

$\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the *intensity* of the Poisson distribution.

Unlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0, 1, 2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members.
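As a quick sanity check of the mass function above, we can implement it directly and compare it against `scipy.stats.poisson` (a small sketch, not part of the original analysis; the $\lambda$ values are just for illustration):

```python
import numpy as np
import scipy.stats as stats
from scipy.special import factorial

def poisson_pmf(k, lam):
    """P(Z = k) = lam**k * exp(-lam) / k!  -- the formula above, verbatim."""
    return lam**k * np.exp(-lam) / factorial(k)

k = np.arange(100)
for lam in (1.5, 4.25):
    # the hand-rolled formula agrees with scipy's implementation...
    assert np.allclose(poisson_pmf(k, lam), stats.poisson.pmf(k, lam))
    # ...and the probabilities over k = 0, 1, 2, ... sum to 1
    # (here numerically, since we truncate at k = 99 where the tail is negligible)
    assert abs(poisson_pmf(k, lam).sum() - 1.0) < 1e-12
```

The second assertion is why a probability *mass* function is easy to work with: summing it over all admissible $k$ must give 1.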

If a random variable $Z$ has a Poisson mass distribution, we denote this by writing

$$Z \sim \text{Poi}(\lambda) $$

One useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:

$$E\large[ \;Z\; | \; \lambda \;\large] = \lambda $$

We will use this property often, so it's useful to remember. Below, we plot the probability mass function for different $\lambda$ values. The first thing to notice is that by increasing $\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.


```python
figsize(12.5, 4)

import scipy.stats as stats
a = np.arange(16)
poi = stats.poisson
lambda_ = [1.5, 4.25]
colours = ["#348ABD", "#A60628"]

plt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],
        label="$\lambda = %.1f$" % lambda_[0], alpha=0.60,
        edgecolor=colours[0], lw="3")

plt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],
        label="$\lambda = %.1f$" % lambda_[1], alpha=0.60,
        edgecolor=colours[1], lw="3")

plt.xticks(a)  # bars are centred on each integer k in modern matplotlib
plt.legend()
plt.ylabel("probability of $k$")
plt.xlabel("$k$")
plt.title("Probability mass function of a Poisson random variable; differing \
$\lambda$ values");
```

### Continuous Case
Instead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of a continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:

$$f_Z(z | \lambda) = \lambda e^{-\lambda z }, \;\; z\ge 0$$

Like a Poisson random variable, an exponential random variable can take on only non-negative values.
But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. 
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. 
Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\lambda$ increases at some point during the observations. (Recall that a higher value of $\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)

How can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\tau$), the parameter $\lambda$ suddenly jumps to a higher value. So we really have two $\lambda$ parameters: one for the period before $\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:

$$
\lambda =
\begin{cases}
\lambda_1 & \text{if } t \lt \tau \cr
\lambda_2 & \text{if } t \ge \tau
\end{cases}
$$


If, in reality, no sudden change occurred and indeed $\lambda_1 = \lambda_2$, then the $\lambda$s' posterior distributions should look about equal.

We are interested in inferring the unknown $\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\lambda$. What would be good prior probability distributions for $\lambda_1$ and $\lambda_2$? Recall that $\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\alpha$.

\begin{align}
&\lambda_1 \sim \text{Exp}( \alpha ) \\
&\lambda_2 \sim \text{Exp}( \alpha )
\end{align}

$\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters.
Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: Pyro\n-----\n\n\n\n```python\nfrom jax import random\nimport jax\nimport jax.numpy as np\nimport numpyro as pyro\nimport numpyro.distributions as dist\n```\n\n\n```python\ncount_data = jax.numpy.array(count_data)\n```\n\n\n```python\ndef model(data):\n alpha = (1. 
/ data.mean())\n lambda1 = pyro.sample(\"lambda_1\", dist.Exponential(rate=alpha))\n lambda2 = pyro.sample(\"lambda_2\", dist.Exponential(rate=alpha))\n\n tau = pyro.sample(\"tau\", dist.Uniform(0, 1))\n\n lambda12 = np.where(np.arange(len(data)) < tau * len(data), lambda1, lambda2)\n pyro.sample('obs', dist.Poisson(lambda12), obs=data)\n```\n\n\n```python\nnuts_kernel = pyro.infer.NUTS(model)\nposterior = pyro.infer.MCMC(nuts_kernel, num_samples=10000, num_warmup=5000) # 100x faster than pyro\nrng_key = random.PRNGKey(0)\nposterior.run(rng_key, count_data)\n```\n\n sample: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 15000/15000 [00:26<00:00, 569.63it/s, 1023 steps of size 2.41e-03. acc. prob=0.75]\n\n\n\n```python\nlambda_1_samples = posterior.get_samples()['lambda_1']\nlambda_2_samples = posterior.get_samples()['lambda_2']\ntau_samples = posterior.get_samples()['tau']\n```\n\n\n```python\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", density=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", density=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\ntau_samples = np.array((tau_samples * count_data.size + 1), dtype=np.int32)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper 
left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. 
By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. 

```python
figsize(12.5, 5)
# tau_samples, lambda_1_samples, lambda_2_samples contain
# N samples from the corresponding posterior distribution
N = tau_samples.shape[0]
expected_texts_per_day = np.zeros(n_count_data)
for day in range(0, n_count_data):
    # ix is a bool index of the tau samples for which the switchpoint falls
    # *after* 'day', i.e. 'day' is still in the lambda1 "regime"
    ix = day < tau_samples
    # Each posterior sample corresponds to a value for tau.
    # For each day, that value of tau indicates whether we're "before"
    # (in the lambda1 "regime") or
    # "after" (in the lambda2 "regime") the switchpoint.
    # By taking the posterior sample of lambda1/2 accordingly, we can average
    # over all samples to get an expected value for lambda on that day.
    # As explained, the "message count" random variable is Poisson distributed,
    # and therefore lambda (the Poisson parameter) is the expected value of
    # "message count".
    # JAX arrays are immutable, so we update via .at[...].set(...)
    # (the former jax.ops.index_update has been removed from JAX).
    expected_texts_per_day = expected_texts_per_day.at[day].set(
        (lambda_1_samples[ix].sum() + lambda_2_samples[~ix].sum()) / N)


plt.plot(range(n_count_data), expected_texts_per_day, lw=4, color="#E24A33",
         label="expected number of text-messages received")
plt.xlim(0, n_count_data)
plt.xlabel("Day")
plt.ylabel("Expected # text-messages")
plt.title("Expected number of text-messages received")
plt.ylim(0, 60)
plt.bar(np.arange(len(count_data)), count_data, color="#348ABD", alpha=0.65,
        label="observed texts per day")

plt.legend(loc="upper left");
```

Our analysis shows strong support for believing the user's behavior did change ($\lambda_1$ would have been close in value to $\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship.
(In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\n#type your code here.\n```\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```python\n#type your code here.\n```\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45? That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the inference. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\n#type your code here.\n```\n\n### References\n\n\n- [1] Gelman, Andrew. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg). Blog post; accessed 22 Jan 2013.\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Salvatier J, Wiecki TV, and Fonnesbeck C. 2016. Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55.\n- [4] Lin, Jimmy, and Alek Kolcz. 2012. Large-Scale Machine Learning at Twitter. *Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012)*, pages 793-804, Scottsdale, Arizona.\n- [5] Cronin, Beau. 2013. "Why Probabilistic Programming Matters." Blog post, 24 March 2013.
\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n    styles = open(\"../styles/custom.css\", \"r\").read()\n    return HTML(styles)\ncss_styling()\n```\n
```python\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pylab as plt\nimport seaborn as sns\nsns.set_context('notebook')\n\nRANDOM_SEED = 20090425\n```\n\n---\n\n# Comparing Two Groups with a Continuous or Binary Outcome\n\nStatistical inference is a process of learning from incomplete or imperfect (error-contaminated) data. We can account for this \"imperfection\" using either a sampling model or a measurement error model.\n\n### Statistical hypothesis testing\n\nThe *de facto* standard for statistical inference is statistical hypothesis testing. The goal of hypothesis testing is to evaluate a **null hypothesis**. There are two possible outcomes:\n\n- reject the null hypothesis\n- fail to reject the null hypothesis\n\nRejection occurs when a chosen test statistic is higher than some pre-specified threshold value; non-rejection occurs otherwise.\n\n\n\nNotice that neither outcome says anything about the quantity of interest, the **research hypothesis**. \n\nSetting up a statistical test involves several subjective choices by the user that are rarely justified based on the problem or decision at hand:\n\n- statistical test to use\n- null hypothesis to test\n- significance level\n\nChoices are often based on arbitrary criteria, including \"statistical tradition\" (Johnson 1999). The resulting evidence is indirect, incomplete, and typically overstates the evidence against the null hypothesis (Goodman 1999).\n\nMost importantly to applied users, the results of statistical hypothesis tests are very easy to misinterpret. \n\n### Estimation \n\nInstead of testing, a more informative and effective approach for inference is based on **estimation** (be it frequentist or Bayesian). 
That is, rather than testing whether two groups are different, we instead pursue an estimate of *how different* they are, which is fundamentally more informative. \n\nAdditionally, we include an estimate of **uncertainty** associated with that difference, which includes uncertainty due to our lack of knowledge of the model parameters (*epistemic uncertainty*) and uncertainty due to the inherent stochasticity of the system (*aleatory uncertainty*).\n\n# An Introduction to Bayesian Statistical Analysis\n\nThough many of you will have taken a statistics course or two during your undergraduate (or graduate) education, most of those who have will likely not have had a course in *Bayesian* statistics. Most introductory courses, particularly for non-statisticians, still do not cover Bayesian methods at all. Even today, Bayesian courses (similarly to statistical computing courses!) are typically tacked onto the curriculum, rather than being integrated into the program.\n\nIn fact, Bayesian statistics is not just a particular method, or even a class of methods; it is an entirely **different paradigm** for doing statistical analysis.\n\n> Practical methods for making inferences from data using probability models for quantities we observe and about which we wish to learn.\n*-- Gelman et al. 2013*\n\nA Bayesian model is described by parameters; uncertainty in those parameters is described using probability distributions.\n\nAll conclusions from Bayesian statistical procedures are stated in terms of **probability statements**.\n\n\n\nThis confers several benefits to the analyst, including:\n\n- ease of interpretation, summarization of uncertainty\n- can incorporate uncertainty in parent parameters\n- easy to calculate summary statistics\n\n### Bayesian vs Frequentist Statistics: *What's the difference?*\n\nAny statistical inference paradigm, Bayesian or otherwise, involves at least the following: \n\n1. 
Some **unknown quantities** about which we are interested in learning or testing. We call these *parameters*.\n2. Some **data** which have been observed, and hopefully contain information about them.\n3. One or more **models** that relate the data to the parameters, and serve as the instrument used to learn.\n\n\n\n### The Frequentist World View\n\n\n\n- The **data** that have been observed are considered **random**, because they are realizations of random processes, and hence will vary each time one goes to observe the system.\n- Model **parameters** are considered **fixed**. A parameter's true value is unknown and fixed, and so we *condition* on it.\n\nIn mathematical notation, this implies a (very) general model of the following form:\n\n<div style="font-size: 150%;">
\n\\\\[f(y | \\theta)\\\\]\n
\n\nHere, the model \\\\(f\\\\) accepts data values \\\\(y\\\\) as an argument, conditional on particular values of \\\\(\\theta\\\\).\n\nFrequentist inference typically involves deriving **estimators** for the unknown parameters. Estimators are formulae that return estimates for particular estimands, as a function of data. They are selected based on some chosen optimality criterion, such as *unbiasedness*, *variance minimization*, or *efficiency*.\n\n> For example, let's say that we have collected some data on the prevalence of autism spectrum disorder (ASD) in some defined population. Our sample includes \\\\(n\\\\) sampled children, \\\\(y\\\\) of them having been diagnosed with autism. A frequentist estimator of the prevalence \\\\(p\\\\) is:\n\n><div style="font-size: 150%;">
\n> $$\\hat{p} = \\frac{y}{n}$$\n>
\n\n> Why this particular function? Because it can be shown to be unbiased and minimum-variance.\n\nIt is important to note that, in a frequentist world, new estimators need to be derived for every estimand that is introduced.\n\n### The Bayesian World View\n\n\n\n- Data are considered **fixed**. They used to be random, but once they were written into your lab notebook/spreadsheet/IPython notebook they do not change.\n- Model parameters themselves may not be random, but Bayesians use probability distributions to describe their uncertainty in parameter values, and parameters are therefore treated as **random**. In some cases, it is useful to consider parameters as having been sampled from probability distributions.\n\nThis implies the following form:\n\n<div style="font-size: 150%;">
\n\\\\[p(\\theta | y)\\\\]\n
\n\nThis formulation used to be referred to as ***inverse probability***, because it infers from observations to parameters, or from effects to causes.\n\nBayesians do not seek new estimators for every estimation problem they encounter. There is only one estimator for Bayesian inference: **Bayes' Formula**.\n\n## Bayes' Formula\n\nNow that we have some probability under our belt, we turn to Bayes' formula. While frequentist statistics uses different estimators for different problems, Bayes' formula is the **only estimator** that Bayesians need to obtain estimates of unknown quantities that we care about. \n\n\n\nThe equation expresses how our belief about the value of \\\\(\\theta\\\\), as expressed by the **prior distribution** \\\\(P(\\theta)\\\\), is reallocated following the observation of the data \\\\(y\\\\).\n\nThe innocuous denominator \\\\(P(y)\\\\) usually cannot be computed directly, and is actually the expression in the numerator, integrated over all \\\\(\\theta\\\\):\n\n<div style="font-size: 150%;">
\n\\\\[Pr(\\theta|y) = \\frac{Pr(y|\\theta)Pr(\\theta)}{\\int Pr(y|\\theta)Pr(\\theta) d\\theta}\\\\]\n
\n\nThe intractability of this integral is one of the factors that has contributed to the under-utilization of Bayesian methods by statisticians.\n\n### Priors\n\nOnce considered a controversial aspect of Bayesian analysis, the prior distribution characterizes what is known about an unknown quantity before observing the data from the present study. Thus, it represents the information state of that parameter. It can be used to reflect the information obtained in previous studies, to constrain the parameter to plausible values, or to represent the population of possible parameter values, of which the current study's parameter value can be considered a sample.\n\n### Likelihood functions\n\nThe likelihood represents the information in the observed data, and is used to update prior distributions to posterior distributions. This updating of belief is justified because of the **likelihood principle**, which states:\n\n> Following observation of \\\\(y\\\\), the likelihood \\\\(L(\\theta|y)\\\\) contains all experimental information from \\\\(y\\\\) about the unknown \\\\(\\theta\\\\).\n\nBayesian analysis satisfies the likelihood principle because the posterior distribution's dependence on the data is **only through the likelihood**. In comparison, most frequentist inference procedures violate the likelihood principle, because inference will depend on the design of the trial or experiment.\n\nRemember from the density estimation section that the likelihood is closely related to the probability density (or mass) function. The difference is that the likelihood varies the parameter while holding the observations constant, rather than *vice versa*.\n\n## Bayesian Inference, in 3 Easy Steps\n\n\n\nGelman et al. (2013) describe the process of conducting Bayesian statistical analysis in 3 steps.\n\n### Step 1: Specify a probability model\n\nAs was noted above, Bayesian statistics involves using probability models to solve problems. 
So, the first task is to *completely specify* the model in terms of probability distributions. This includes everything: unknown parameters, data, covariates, missing data, predictions. All must be assigned some probability density.\n\nThis step involves making choices.\n\n- what is the form of the sampling distribution of the data?\n- what form best describes our uncertainty in the unknown parameters?\n\n### Discrete Random Variables\n\n$$X = \\{0,1\\}$$\n\n$$Y = \\{\\ldots,-2,-1,0,1,2,\\ldots\\}$$\n\n**Probability Mass Function**: \n\nFor discrete $X$,\n\n$$Pr(X=x) = f(x|\\theta)$$\n\n\n\n***e.g. Poisson distribution***\n\nThe Poisson distribution models unbounded counts:\n\n
\n$$Pr(X=x)=\\frac{e^{-\\lambda}\\lambda^x}{x!}$$\n
\n\n* $X=\\{0,1,2,\\ldots\\}$\n* $\\lambda > 0$\n\n$$E(X) = \\text{Var}(X) = \\lambda$$\n\n\n```python\nfrom pymc3 import Poisson\n\nx = Poisson.dist(mu=1)\nsamples = x.random(size=10000)\n```\n\n\n```python\nsamples.mean()\n```\n\n\n\n\n 1.0073000000000001\n\n\n\n\n```python\nplt.hist(samples, bins=len(set(samples)));\n```\n\n### Continuous Random Variables\n\n$$X \\in [0,1]$$\n\n$$Y \\in (-\\infty, \\infty)$$\n\n**Probability Density Function**: \n\nFor continuous $X$,\n\n$$Pr(x \\le X \\le x + dx) = f(x|\\theta)dx \\, \\text{ as } \\, dx \\rightarrow 0$$\n\n\n\n***e.g. normal distribution***\n\n
\n$$f(x) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp\\left[-\\frac{(x-\\mu)^2}{2\\sigma^2}\\right]$$\n
\n\n* $X \\in \\mathbf{R}$\n* $\\mu \\in \\mathbf{R}$\n* $\\sigma>0$\n\n$$\\begin{align}E(X) &= \\mu \\cr\n\\text{Var}(X) &= \\sigma^2 \\end{align}$$\n\n\n```python\nfrom pymc3 import Normal\n\ny = Normal.dist(mu=-2, sd=4)\nsamples = y.random(size=10000)\n```\n\n\n```python\nsamples.mean()\n```\n\n\n\n\n -1.9495826503791522\n\n\n\n\n```python\nsamples.std()\n```\n\n\n\n\n 3.9997450711227618\n\n\n\n\n```python\nplt.hist(samples);\n```\n\n### Step 2: Calculate a posterior distribution\n\nThe mathematical form \\\\(p(\\theta | y)\\\\) that we associated with the Bayesian approach is referred to as a **posterior distribution**.\n\n> posterior /pos\u00b7ter\u00b7i\u00b7or/ (pos-t\u0113r\u00b4e-er) later in time; subsequent.\n\nWhy posterior? Because it tells us what we know about the unknown \\\\(\\theta\\\\) *after* having observed \\\\(y\\\\).\n\nThis posterior distribution is formulated as a function of the probability model that was specified in Step 1. Usually, we can write it down but we cannot calculate it analytically. In fact, the difficulty inherent in calculating the posterior distribution for most models of interest is perhaps the major contributing factor for the lack of widespread adoption of Bayesian methods for data analysis. Various strategies for doing so comprise this tutorial.\n\n**But**, once the posterior distribution is calculated, you get a lot for free:\n\n- point estimates\n- credible intervals\n- quantiles\n- predictions\n\n### Step 3: Check your model\n\nThough frequently ignored in practice, it is critical that the model and its outputs be assessed before using the outputs for inference. 
Models are specified based on assumptions that are largely unverifiable, so the least we can do is examine the output in detail, relative to the specified model and the data that were used to fit the model.\n\nSpecifically, we must ask:\n\n- does the model fit the data?\n- are the conclusions reasonable?\n- are the outputs sensitive to changes in model structure?\n\n\n\n## Estimation for one group\n\nBefore we compare two groups using Bayesian analysis, let's start with an even simpler scenario: statistical inference for one group.\n\nFor this we will use Gelman et al.'s (2007) radon dataset. In this dataset the amount of the radioactive gas radon has been measured among different households in all counties of several states. Radon gas is known to be the leading cause of lung cancer in non-smokers. It is believed to be more strongly present in households containing a basement and to differ in amount present among types of soil.\n\n> the US EPA has set an action level of 4 pCi/L. At or above this level of radon, the EPA recommends you take corrective measures to reduce your exposure to radon gas.\n\n\n\nLet's import the dataset:\n\n\n```python\nradon = pd.read_csv('../data/radon.csv', index_col=0)\nradon.head()\n```\n\n\n\n\n<div>
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
idnumstatestate2stfipszipregiontypebldgfloorroombasement...pcterradjwtdupflagzipflagcntyfipscountyfipsUppmcounty_codelog_radon
05081.0MNMN27.0557355.01.01.03.0N...9.71146.4991901.00.01.0AITKIN27001.00.50205400.832909
15082.0MNMN27.0557485.01.00.04.0Y...14.5471.3662230.00.01.0AITKIN27001.00.50205400.832909
25083.0MNMN27.0557485.01.00.04.0Y...9.6433.3167180.00.01.0AITKIN27001.00.50205401.098612
35084.0MNMN27.0564695.01.00.04.0Y...24.3461.6236700.00.01.0AITKIN27001.00.50205400.095310
45085.0MNMN27.0550113.01.00.04.0Y...13.8433.3167180.00.03.0ANOKA27003.00.42856511.163151
\n

5 rows \u00d7 29 columns

\n
\n\n\n\nLet's focus on the (log) radon levels measured in a single county (Hennepin). \n\nSuppose we are interested in:\n\n- whether the mean log-radon value is greater than 4 pCi/L in Hennepin county\n- the probability that any randomly-chosen household in Hennepin county has a reading of greater than 4\n\n\n```python\nhennepin_radon = radon.query('county==\"HENNEPIN\"').log_radon\nsns.distplot(hennepin_radon)\n```\n\n\n```python\nhennepin_radon.shape\n```\n\n\n\n\n (105,)\n\n\n\n### The model\n\nRecall that the first step in Bayesian inference is specifying a **full probability model** for the problem.\n\nThis consists of:\n\n- a likelihood function(s) for the observations\n- priors for all unknown quantities\n\nThe measurements look approximately normal, so let's start by assuming a normal distribution as the sampling distribution (likelihood) for the data. \n\n$$y_i \\sim N(\\mu, \\sigma^2)$$\n\n(don't worry, we can evaluate this assumption)\n\nThis implies that we have 2 unknowns in the model; the mean and standard deviation of the distribution. \n\n#### Prior choice\n\nHow do we choose distributions to use as priors for these parameters? \n\nThere are several considerations:\n\n- discrete vs continuous values\n- the support of the variable\n- the available prior information\n\nWhile there may likely be prior information about the distribution of radon values, we will assume no prior knowledge, and specify a **diffuse** prior for each parameter.\n\nSince the mean can take any real value (since it is on the log scale), we will use another normal distribution here, and specify a large variance to allow the possibility of very large or very small values:\n\n$$\\mu \\sim N(0, 10^2)$$\n\nFor the standard deviation, we know that the true value must be positive (no negative variances!). 
I will choose a uniform prior bounded from below at zero and from above at a value that is sure to be higher than any plausible value the true standard deviation (on the log scale) could take.\n\n$$\\sigma \\sim U(0, 10)$$\n\nWe can encode these in a Python model, using the PyMC3 package, as follows:\n\n\n```python\nfrom pymc3 import Model, Uniform\n\nwith Model() as radon_model:\n \n \u03bc = Normal('\u03bc', mu=0, sd=10)\n \u03c3 = Uniform('\u03c3', 0, 10)\n```\n\n> ## Software\n> Today there is an array of software choices for Bayesians, including both open source software (*e.g.*, Stan, PyMC, JAGS, emcee) and commercial (*e.g.*, SAS, Stata). These examples can be replicated in any of these environments.\n\nAll that remains is to add the likelihood, which takes $\\mu$ and $\\sigma$ as parameters, and the log-radon values as the set of observations:\n\n\n```python\nwith radon_model:\n \n y = Normal('y', mu=\u03bc, sd=\u03c3, observed=hennepin_radon)\n```\n\nNow, we will fit the model using a numerical approach called **variational inference**. This will estimate the posterior distribution using an optimized approximation, and then draw samples from it.\n\n\n```python\nfrom pymc3 import fit\n\nwith radon_model:\n\n samples = fit(random_seed=RANDOM_SEED).sample(1000)\n```\n\n Average Loss = 117.86: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 10000/10000 [00:00<00:00, 14385.86it/s]\n Finished [100%]: Average Loss = 117.86\n\n\n\n```python\nfrom pymc3 import plot_posterior\n\nplot_posterior(samples, varnames=['\u03bc'], ref_val=np.log(4), color='LightSeaGreen');\n```\n\nThe plot shows the posterior distribution of $\\mu$, along with an estimate of the 95% posterior **credible interval**. 
\n\nThe output\n\n 83.1% < 1.38629 < 16.9%\n \ninforms us that the probability of $\\mu$ being less than $\\log(4)$ is 83.1% and the corresponding probability of being greater than $\\log(4)$ is 16.9%.\n\n> The posterior probability that the mean level of household radon in Hennepin County is greater than 4 pCi/L is 0.17.\n\n### Prediction\n\nWhat is the probability that a given household has a radon level greater than 4 pCi/L (that is, a log-radon value greater than $\\log(4)$)? To answer this, we make use of the **posterior predictive distribution**.\n\n$$p(z|y) = \\int_{\\theta} p(z|\\theta) p(\\theta|y) d\\theta$$\n\nwhere here $z$ is the predicted value and $y$ is the data used to fit the model.\n\nWe can estimate this from the posterior samples of the parameters in the model.\n\n\n```python\nmus = samples['\u03bc']\nsigmas = samples['\u03c3']\n```\n\n\n```python\nradon_samples = Normal.dist(mus, sigmas).random()\n```\n\n\n```python\n(radon_samples > np.log(4)).mean()\n```\n\n\n\n\n 0.46999999999999997\n\n\n\n> The posterior probability that a randomly-selected household in Hennepin County contains radon levels in excess of 4 pCi/L is 0.47.\n\n### Model checking\n\nBut, ***how do we know this model is any good?***\n\nIt's important to check the fit of the model, to see if its assumptions are reasonable. One way to do this is to perform **posterior predictive checks**. 
This involves generating simulated data using the model that you built, and comparing that data to the observed data.\n\nOne can choose a particular statistic to compare, such as tail probabilities or quartiles, but here it is useful to compare them graphically.\n\nWe already have these simulations from the previous exercise!\n\n\n```python\nsns.distplot(radon_samples, label='simulated')\nsns.distplot(hennepin_radon, label='observed')\nplt.legend()\n```\n\n### Prior sensitivity\n\nIt's also important to check the sensitivity of the resulting inference to your choice of priors.\n\nHere is the same model, but with drastically different (though still uninformative) priors specified:\n\n\n```python\nfrom pymc3 import Flat, HalfCauchy\n\nwith Model() as prior_sensitivity:\n    \n    \u03bc = Flat('\u03bc')\n    \u03c3 = HalfCauchy('\u03c3', 5)\n    \n    dist = Normal('dist', mu=\u03bc, sd=\u03c3, observed=hennepin_radon)\n    \n    sensitivity_samples = fit(random_seed=RANDOM_SEED).sample(1000)\n```\n\n Average Loss = 114.32: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 10000/10000 [00:00<00:00, 14780.74it/s]\n Finished [100%]: Average Loss = 114.32\n\n\n\n```python\nplot_posterior(sensitivity_samples, varnames=['\u03bc'], ref_val=np.log(4), color='LightSeaGreen');\n```\n\nHere is the original model for comparison:\n\n\n```python\nplot_posterior(samples, varnames=['\u03bc'], ref_val=np.log(4), color='LightSeaGreen');\n```\n\n## Two Groups with Continuous Outcome\n\nTo illustrate how this Bayesian estimation approach works in practice, we will use a fictitious example from Kruschke (2012) concerning a clinical trial for drug evaluation. The trial aims to evaluate the efficacy of a \"smart drug\" that is supposed to increase intelligence by comparing IQ scores of individuals in a treatment arm (those receiving the drug) to those in a control arm (those receiving a placebo). 
There are 47 individuals and 42 individuals in the treatment and control arms, respectively.\n\n\n```python\ndrug = pd.DataFrame(dict(iq=(101,100,102,104,102,97,105,105,98,101,100,123,105,103,100,95,102,106,\n 109,102,82,102,100,102,102,101,102,102,103,103,97,97,103,101,97,104,\n 96,103,124,101,101,100,101,101,104,100,101),\n group='drug'))\nplacebo = pd.DataFrame(dict(iq=(99,101,100,101,102,100,97,101,104,101,102,102,100,105,88,101,100,\n 104,100,100,100,101,102,103,97,101,101,100,101,99,101,100,100,\n 101,100,99,101,100,102,99,100,99),\n group='placebo'))\n\ntrial_data = pd.concat([drug, placebo], ignore_index=True)\ntrial_data.hist('iq', by='group');\n```\n\nSince there appear to be extreme (\"outlier\") values in the data, we will choose a Student-t distribution to describe the distributions of the scores in each group. This sampling distribution adds **robustness** to the analysis, as a T distribution is less sensitive to outlier observations, relative to a normal distribution. \n\nThe three-parameter Student-t distribution allows for the specification of a mean $\\mu$, a precision (inverse-variance) $\\lambda$ and a degrees-of-freedom parameter $\\nu$:\n\n$$f(x|\\mu,\\lambda,\\nu) = \\frac{\\Gamma(\\frac{\\nu + 1}{2})}{\\Gamma(\\frac{\\nu}{2})} \\left(\\frac{\\lambda}{\\pi\\nu}\\right)^{\\frac{1}{2}} \\left[1+\\frac{\\lambda(x-\\mu)^2}{\\nu}\\right]^{-\\frac{\\nu+1}{2}}$$\n \nthe degrees-of-freedom parameter essentially specifies the \"normality\" of the data, since larger values of $\\nu$ make the distribution converge to a normal distribution, while small values (close to zero) result in heavier tails.\n\nThus, the likelihood functions of our model are specified as follows:\n\n$$\\begin{align}\ny^{(drug)}_i &\\sim T(\\nu, \\mu_1, \\sigma_1) \\\\\ny^{(placebo)}_i &\\sim T(\\nu, \\mu_2, \\sigma_2)\n\\end{align}$$\n\nAs a simplifying assumption, we will assume that the degree of normality $\\nu$ is the same for both groups. 
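The robustness claim can be made concrete by comparing tail probabilities directly. Below is a quick sketch, not part of the original model, of the two-sided tail mass $P(|X| > 3)$ under Student-T distributions with several values of $\nu$ versus a standard normal; it assumes `scipy.stats` is available (SciPy is a dependency of PyMC3):

```python
from scipy import stats

# Two-sided tail mass beyond +/-3 for Student-T with a few values of nu.
# Smaller nu means heavier tails, i.e., more tolerance for outliers.
for nu in (1, 3, 10, 30):
    t_tail = 2 * stats.t.sf(3, df=nu)
    print("nu = {:2d}: P(|T| > 3) = {:.4f}".format(nu, t_tail))

# The same tail mass under a standard normal, for comparison.
norm_tail = 2 * stats.norm.sf(3)
print("normal:  P(|Z| > 3) = {:.4f}".format(norm_tail))
```

Even at $\nu = 30$ the T distribution assigns more probability to extreme observations than the normal does, which is what makes the T likelihood less sensitive to the outlying IQ scores.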
\n\n### Exercise\n\nDraw 10000 samples from a Student-T distribution (`StudentT` in PyMC3) with parameter `nu=3` and compare the distribution of these values to a similar number of draws from a Normal distribution with parameters `mu=0` and `sd=1`.\n\n\n```python\nfrom pymc3 import StudentT\n\nt = StudentT.dist(nu=3).random(size=10000)\nn = Normal.dist(0, 1).random(size=10000)\n```\n\n\n```python\nsns.distplot(t, label='Student-T')\nsns.distplot(n, label='Normal')\nplt.legend()\nplt.xlim(-10,10);\n```\n\n\n### Prior choice\n\nSince the means are real-valued, we will apply normal priors. Since we know something about the population distribution of IQ values, we will center the priors at 100, and use a standard deviation that is more than wide enough to account for plausible deviations from this population mean:\n\n$$\\mu_k \\sim N(100, 10^2)$$\n\n\n```python\nwith Model() as drug_model:\n \n \u03bc_0 = Normal('\u03bc_0', 100, sd=10)\n \u03bc_1 = Normal('\u03bc_1', 100, sd=10)\n```\n\nSimilarly, we will use a uniform prior for the standard deviations, with an upper bound of 20.\n\n\n```python\nwith drug_model:\n \u03c3_0 = Uniform('\u03c3_0', lower=0, upper=20)\n \u03c3_1 = Uniform('\u03c3_1', lower=0, upper=20)\n```\n\nFor the degrees-of-freedom parameter $\\nu$, we will use an **exponential** distribution with a mean of 30; this allocates high prior probability over the regions of the parameter that describe the range from normal to heavy-tailed data under the Student-T distribution.\n\n\n```python\nfrom pymc3 import Exponential\n\nwith drug_model:\n \u03bd = Exponential('\u03bd_minus_one', 1/29.) 
+ 1\n\n```\n\n\n```python\nsns.distplot(Exponential.dist(1/29).random(size=10000), kde=False);\n```\n\n\n```python\nfrom pymc3 import StudentT\n\nwith drug_model:\n\n drug_like = StudentT('drug_like', nu=\u03bd, mu=\u03bc_1, lam=\u03c3_1**-2, observed=drug.iq)\n placebo_like = StudentT('placebo_like', nu=\u03bd, mu=\u03bc_0, lam=\u03c3_0**-2, observed=placebo.iq)\n```\n\nNow that the model is fully specified, we can turn our attention to tracking the posterior quantities of interest. Namely, we can calculate the difference in means between the drug and placebo groups.\n\nAs a joint measure of the groups, we will also estimate the \"effect size\", which is the difference in means scaled by the pooled estimates of standard deviation. This quantity can be harder to interpret, since it is no longer in the same units as our data, but it is a function of all four estimated parameters.\n\n\n```python\nfrom pymc3 import Deterministic\n\nwith drug_model:\n \n diff_of_means = Deterministic('difference of means', \u03bc_1 - \u03bc_0)\n \n effect_size = Deterministic('effect size', \n diff_of_means / np.sqrt((\u03c3_1**2 + \u03c3_0**2) / 2))\n\n\n```\n\n\n```python\nwith drug_model:\n \n drug_trace = fit(random_seed=RANDOM_SEED).sample(1000)\n```\n\n Average Loss = 231.2: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 10000/10000 [00:01<00:00, 7510.30it/s]\n Finished [100%]: Average Loss = 231.2\n\n\n\n```python\nplot_posterior(drug_trace[100:], \n varnames=['\u03bc_0', '\u03bc_1', '\u03c3_0', '\u03c3_1', '\u03bd_minus_one'],\n color='#87ceeb');\n```\n\n\n```python\nplot_posterior(drug_trace[100:], \n varnames=['difference of means', 'effect size'],\n ref_val=0,\n color='#87ceeb');\n```\n\n> The posterior probability that the mean IQ of subjects in the treatment group is greater than that of the control group is 0.99.\n\n### Exercise\n\nLoad the `nashville_precip.txt` dataset. Build a model to compare rainfall in January and July. 
\n\n- What's the probability that the expected rainfall in January is larger than in July?\n- What's the probability that January rainfall exceeds July rainfall in a given year?\n\n\n```python\nnash_precip = pd.read_table('../data/nashville_precip.txt', \n delimiter='\\s+', na_values='NA', index_col=0)\nnash_precip.head()\n```\n\n\n\n\n
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
JanFebMarAprMayJunJulAugSepOctNovDec
Year
18712.764.585.014.133.302.981.582.360.951.312.131.65
18722.322.113.145.913.095.176.101.654.501.582.252.38
18732.967.144.113.596.314.204.632.361.814.284.365.94
18745.229.235.3611.841.492.872.653.523.122.636.124.19
18756.153.068.144.221.735.638.121.603.791.255.464.30
\n
\n\n\n\n\n```python\n# %load ../exercises/rainfall.py\n```\n\n## Two Groups with Binary Outcome\n\nNow that we have seen how to generalize normally-distributed data to another distribution, we are equipped to analyze other data types. Binary outcomes are common in clinical research: \n\n- survival/death\n- true/false\n- presence/absence\n- positive/negative\n\n> *Never, ever dichotomize continuous or ordinal variables prior to statistical analysis*\n\nIn practice, binary outcomes are encoded as ones (for event occurrences) and zeros (for non-occurrence). A single binary variable is distributed as a **Bernoulli** random variable:\n\n$$f(x \\mid p) = p^{x} (1-p)^{1-x}$$\n\nSuch events are sometimes reported as sums of individual events, such as the number of individuals in a group who test positive for a condition of interest. Sums of Bernoulli events are distributed as **binomial** random variables.\n\n$$f(x \\mid n, p) = \\binom{n}{x} p^x (1-p)^{n-x}$$\n\nThe parameter in both models is $p$, the probability of the occurrence of an event. In terms of inference, we are typically interested in whether $p$ is larger or smaller in one group relative to another.\n\nTo demonstrate the comparison of two groups with binary outcomes using Bayesian inference, we will use a sample pediatric dataset. Data on 671 infants with very low (<1600 grams) birth weight from 1981-87 were collected at Duke University Medical Center. Of interest is the relationship between the outcome intra-ventricular hemorrhage (IVH) and predictor such as birth weight, gestational age, presence of pneumothorax and mode of delivery.\n\n\n\n\n```python\nvlbw = pd.read_csv('../data/vlbw.csv', index_col=0).dropna(axis=0, subset=['ivh', 'pneumo'])\nvlbw.head()\n```\n\n\n\n\n
*(output: the first 5 rows of the `vlbw` data frame — 5 rows × 26 columns, including the `bwt`, `gest`, `pneumo` and `ivh` variables)*
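The Bernoulli and binomial mass functions above are simple enough to check numerically. A minimal sketch using only the standard library (the helper names `bernoulli_pmf` and `binomial_pmf` are illustrative, not part of the notebook):

```python
from math import comb

def bernoulli_pmf(x, p):
    """f(x | p) = p^x (1-p)^(1-x), for x in {0, 1}."""
    return p**x * (1 - p)**(1 - x)

def binomial_pmf(x, n, p):
    """f(x | n, p) = C(n, x) p^x (1-p)^(n-x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

p = 0.3
# A binomial with n = 1 coincides with the Bernoulli distribution
print(binomial_pmf(1, 1, p), bernoulli_pmf(1, p))   # both print 0.3
# Probability of exactly 3 events in 10 trials with p = 0.3
print(round(binomial_pmf(3, 10, p), 4))             # prints 0.2668
```

The $n=1$ case is why the model can work directly with individual 0/1 outcomes rather than group counts.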
To demonstrate binary data analysis, we will try to estimate the difference in the probability of an intra-ventricular hemorrhage between infants with and without a pneumothorax. 

```python
pd.crosstab(vlbw.ivh, vlbw.pneumo)
```
| ivh      | pneumo = 0.0 | pneumo = 1.0 |
|----------|--------------|--------------|
| absent   | 359          | 73           |
| definite | 45           | 30           |
| possible | 6            | 4            |
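Before fitting anything, it is worth reading the raw proportions off the crosstab above (a sketch: counts transcribed by hand, with `definite` and `possible` combined into a single event category, as the analysis does):

```python
# Counts transcribed from the crosstab above, keyed by pneumothorax status
counts = {
    0: {'absent': 359, 'definite': 45, 'possible': 6},   # no pneumothorax
    1: {'absent': 73,  'definite': 30, 'possible': 4},   # pneumothorax
}

p_hat = {}
for group, c in counts.items():
    events = c['definite'] + c['possible']   # IVH events
    total = sum(c.values())                  # group size
    p_hat[group] = events / total
    print(f"pneumo={group}: {events}/{total} = {p_hat[group]:.3f}")

print(f"difference: {p_hat[1] - p_hat[0]:.3f}")
```

These crude estimates (roughly 0.12 vs. 0.32) give a sense of what the posterior for the group probabilities and their difference should look like.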
We will create a binary outcome by combining `definite` and `possible` into a single event category.

```python
ivh = vlbw.ivh.isin(['definite', 'possible']).astype(int).values
x = vlbw.pneumo.astype(int).values
```

### Prior choice

What should we choose as a prior distribution for $p$?

We could stick with a normal distribution, but note that the value of $p$ is **constrained** by the laws of probability. Namely, we cannot have values smaller than zero or larger than one. So, choosing a normal distribution would ascribe positive probability to unsupported values of the parameter. In many cases this will still work in practice, but it will be inefficient for calculating the posterior and will not accurately represent the prior information about the parameter.

A common choice in this context is the **beta distribution**, a continuous distribution with two parameters whose support is the unit interval:

$$ f(x \mid \alpha, \beta) = \frac{x^{\alpha - 1} (1 - x)^{\beta - 1}}{B(\alpha, \beta)}$$

- Support: $x \in (0, 1)$
- Mean: $\dfrac{\alpha}{\alpha + \beta}$
- Variance: $\dfrac{\alpha \beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$

```python
from pymc3 import Beta

params = (5, 1), (1, 3), (5, 5), (0.5, 0.5), (1, 1)

fig, axes = plt.subplots(1, len(params), figsize=(14, 4), sharey=True)
for ax, (alpha, beta) in zip(axes, params):
    sns.distplot(Beta.dist(alpha, beta).random(size=10000), ax=ax, kde=False)
    ax.set_xlim(0, 1)
    ax.set_title(r'$\alpha={0}, \beta={1}$'.format(alpha, beta));
```

So let's use a beta distribution to model our prior knowledge of the probabilities for both groups. Setting $\alpha = \beta = 1$ results in a uniform distribution of prior mass:

```python
with Model() as ivh_model:
    
    p = Beta('p', 1, 1, shape=2)
```

We can now use `p` as the parameter of our Bernoulli likelihood. 
Here, `x` is a vector of zeros and ones, which will extract the appropriate group probability for each subject:

```python
from pymc3 import Bernoulli

with ivh_model:
    
    bb_like = Bernoulli('bb_like', p=p[x], observed=ivh)
```

Finally, since we are interested in the difference between the probabilities, we will keep track of this difference:

```python
with ivh_model:
    
    p_diff = Deterministic('p_diff', p[1] - p[0])
```

```python
with ivh_model:
    ivh_trace = fit(random_seed=RANDOM_SEED).sample(1000)
```

    Average Loss = 226.28: 100%|██████████| 10000/10000 [00:00<00:00, 13352.85it/s]
    Finished [100%]: Average Loss = 226.28

```python
plot_posterior(ivh_trace[100:], varnames=['p'], color='#87ceeb');
```

We can see that `p` is larger for the pneumothorax group, with posterior probability essentially equal to one.

```python
plot_posterior(ivh_trace[100:], varnames=['p_diff'], ref_val=0, color='#87ceeb');
```

## References and Resources

- Goodman, S. N. (1999). Toward evidence-based medical statistics. 1: The P value fallacy. Annals of Internal Medicine, 130(12), 995–1004. http://doi.org/10.7326/0003-4819-130-12-199906150-00008
- Johnson, D. (1999). The insignificance of statistical significance testing. Journal of Wildlife Management, 63(3), 763–772.
- Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian Data Analysis, Third Edition. CRC Press.
- Kruschke, J. K. (2015). *Doing Bayesian Data Analysis, Second Edition: A Tutorial with R, JAGS, and Stan.* Academic Press / Elsevier. 
- O'Shea M., Savitz D.A., Hage M.L., Feinstein K.A.: *Prenatal events and the risk of subependymal / intraventricular haemorrhage in very low birth weight neonates*. 
**Paediatric and Perinatal Epidemiology** 1992;6:352-362

# Introduction to Python

## Introduction

[Python](https://www.python.org/) is a simple and expressive general-purpose language. It provides convenient access to a broad collection of libraries that are useful in every field of computing. 
Its use in science and technology keeps growing.

It is an interpreted language, with dynamic typing and automatic memory management, that can be used both to write programs in the traditional way and to experiment in an interactive environment. It includes the most important constructs of functional programming and supports object-oriented programming.

The syntax is simple and intuitive, but a few characteristics should be kept in mind:


- Blocks of statements in conditionals, loops and functions are delimited by "code indentation": no "end" keyword or `{` `}` braces are used.

- Indices into arrays or lists start at 0 and end at size-1. The sequences (*range*) used in loops or *list comprehensions* do not include the upper limit.

- Some functions have the traditional syntax `f(x)`, `g(x,a)`, while others are written as `x.f()`, `x.g(a)`, etc., indicating that the "object" `x` is modified in some way.

- Arrays and lists are "mutable": assigning them to another variable does **not** create a copy of the original object, but a "reference" through which the original structure can be modified.

- Functions can read the value of global variables directly, but to modify them they must be declared `global`. Assigning to variables inside a function creates local variables.

- The only way to create a variable scope is to define a function. 
Loop indices remain visible after the loop ends.

### Installation

[Anaconda](https://www.anaconda.com/distribution)

Starting from the minimal installation [miniconda](https://conda.io/miniconda.html) we need the following packages:

    > conda install jupyter numpy scipy sympy matplotlib

We start the notebook "server" with

    > jupyter notebook

Code cells are evaluated by pressing Shift-Enter.

## Simple types

Character strings:

```python
s = 'Hola' 
```

```python
s
```

    'Hola'

```python
print(s)
```

    Hola

```python
type(s)
```

    str

Different kinds of delimiters and multiline strings are supported.

```python
"Hola" + ''' amigos!'''
```

    'Hola amigos!'

Boolean variables:

```python
c = 3 < 4
```

```python
type(c)
```

    bool

```python
c and (2==1+1) or not (3 != 5)
```

    True

Real numbers, approximated in double-precision floating point:

```python
x = 3.5
```

```python
type(x)
```

    float

Integers have unlimited size:

```python
x = 20
```

```python
type(x)
```

    int

```python
x**x
```

    104857600000000000000000000

Complex numbers:

```python
(1+1j)*(1-1j)
```

    (2+0j)

```python
import cmath

cmath.sqrt(-1)
```

    1j

## Control

Conditionals:

```python
k = 7

if k%2 == 0:
    print(k," es par")
else:
    print(k," es impar")
    print("me gustan los impares")
```

    7  es impar
    me gustan los impares

Loops:

```python
for k in [1,2,3]:
    print(k)
```

    1
    2
    3

```python
for k in range(5):
    print(k)
```

    0
    1
    2
    3
    4

```python
k = 1
p = 1
while k < 5:
    p = p*k
    k = k+1
p
```

    24

## Containers

### Tuples

```python
t = (2,'rojo')
```

```python
t
```

    (2, 'rojo')

```python
t[0]
```

    2

They are immutable.

### Lists

```python
l = [1,-2,67,0,8,1,3]
```

```python
type(l)
```

    list

Lists also admit elements of different types, including other lists, tuples, or any other data type, although it is normal to work with homogeneous lists (elements of the same type) whose elements can all be processed in the same way with a loop.

Extracting elements ("indexing"), the length of the list and the sum of its elements work exactly as for tuples:

```python
l[2], len(l), sum(l)
```

    (67, 7, 78)

However, lists differ in one fundamental characteristic. They are **mutable**: we can add or remove elements from them.

```python
l.append(28)

l
```

    [1, -2, 67, 0, 8, 1, 3, 28]

```python
l += [-2,4]

l
```

    [1, -2, 67, 0, 8, 1, 3, 28, -2, 4]

```python
l.remove(0)

l
```

    [1, -2, 67, 8, 1, 3, 28, -2, 4]

```python
l[2] = 7

l
```

    [1, -2, 7, 8, 1, 3, 28, -2, 4]

```python
l.pop()
```

    4

```python
l
```

    [1, -2, 7, 8, 1, 3, 28, -2]

```python
del l[2]
```

```python
l
```

    [1, -2, 8, 1, 3, 28, -2]

```python
l.insert(3,100)

l
```

    [1, -2, 8, 100, 1, 3, 28, -2]

### Sets

The `set` type mirrors the mathematical concept of a set. It is built with braces, and duplicate elements are removed automatically.

```python
C = {1,2,7,1,8,2,1}
C
```

    {1, 2, 7, 8}

Set operations are available through symbols or through "methods" (functions in suffix form). 
Details can be found in the [documentation](https://docs.python.org/3.6/library/stdtypes.html?highlight=set#set).

```python
C.union({0,8})
```

    {0, 1, 2, 7, 8}

```python
C | {0,8}
```

    {0, 1, 2, 7, 8}

```python
C & {5,2}
```

    {2}

```python
C - {2,8,0,5}
```

    {1, 7}

```python
5 in C
```

    False

```python
{1,2} < {5,2,1}
```

    True

### Dictionaries

A dictionary is an associative array (the index can be any immutable type). It is a heavily used structure in Python.

```python
d = {'lunes': 8, 'martes' : [1,2,3], 3: 5}
```

```python
d['martes']
```

    [1, 2, 3]

```python
d.keys()
```

    dict_keys(['lunes', 'martes', 3])

```python
d.values()
```

    dict_values([8, [1, 2, 3], 5])

### Iterating over containers

If we want to process every element of a container we can write a loop and access each one with the indexing operation.

```python
lista = [1,2,3,4,5]

for k in range(len(lista)):
    print(lista[k])
```

    1
    2
    3
    4
    5

This construction is so common that in Python we can write it in a much more natural way:

```python
for x in lista:
    print(x)
```

    1
    2
    3
    4
    5

This even works with containers such as `set` that do not support indexing. 
Container types can be "traversed" directly with a `for` loop, visiting all of their elements.

### Conversion

The name of a container is at the same time a function that builds a container of that type from any other container.

```python
l = [4,2,2,3,3,3,3,1]

tuple(l)
```

    (4, 2, 2, 3, 3, 3, 3, 1)

```python
set(l)
```

    {1, 2, 3, 4}

```python
list({5,4,3})
```

    [3, 4, 5]

```python
list(range(10))
```

    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

This feature works with any other type, not only with containers:

```python
float(5)
```

    5.0

```python
int('54')
```

    54

If the conversion is not possible an error is raised.

### Subsequences

```python
l = list(range(20))

l
```

    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]

```python
l[:5]
```

    [0, 1, 2, 3, 4]

```python
l[4:]
```

    [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]

```python
l[-3:]
```

    [17, 18, 19]

```python
l[5:10:2]
```

    [5, 7, 9]

```python
l[::-1]
```

    [19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

```python
l[10:14] = [0,0,0]

l
```

    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 0, 0, 14, 15, 16, 17, 18, 19]

### *List comprehensions*

When we use a loop to traverse a list or to carry out a large number of computations, the intermediate results can be printed if desired, but in any case they are lost at the end.

The need often arises to build a list (or any other kind of container) from the elements of another one. 
One way to program it is to start with an empty list and iterate with a loop, appending elements.

Suppose we want to build a list with the first 100 square numbers $1,4,9,16,\ldots,10000$. At first sight it seems reasonable to do the following:

```python
r = []
for k in range(1,101):
    r.append(k**2)

print(r)
```

    [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576, 625, 676, 729, 784, 841, 900, 961, 1024, 1089, 1156, 1225, 1296, 1369, 1444, 1521, 1600, 1681, 1764, 1849, 1936, 2025, 2116, 2209, 2304, 2401, 2500, 2601, 2704, 2809, 2916, 3025, 3136, 3249, 3364, 3481, 3600, 3721, 3844, 3969, 4096, 4225, 4356, 4489, 4624, 4761, 4900, 5041, 5184, 5329, 5476, 5625, 5776, 5929, 6084, 6241, 6400, 6561, 6724, 6889, 7056, 7225, 7396, 7569, 7744, 7921, 8100, 8281, 8464, 8649, 8836, 9025, 9216, 9409, 9604, 9801, 10000]

That is not bad, but modern languages provide a much more elegant tool for expressing this kind of computation. 
It is known as a [list comprehension](https://en.wikipedia.org/wiki/List_comprehension) and tries to imitate the mathematical notation for defining sets:

$$ r = \{ k^2 \; : \; \forall k \in \mathbb{N}, \;1 \leq k \leq 100 \} $$

```python
r = [ k**2 for k in range(1,101) ]

print(r)
```

    [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576, 625, 676, 729, 784, 841, 900, 961, 1024, 1089, 1156, 1225, 1296, 1369, 1444, 1521, 1600, 1681, 1764, 1849, 1936, 2025, 2116, 2209, 2304, 2401, 2500, 2601, 2704, 2809, 2916, 3025, 3136, 3249, 3364, 3481, 3600, 3721, 3844, 3969, 4096, 4225, 4356, 4489, 4624, 4761, 4900, 5041, 5184, 5329, 5476, 5625, 5776, 5929, 6084, 6241, 6400, 6561, 6724, 6889, 7056, 7225, 7396, 7569, 7744, 7921, 8100, 8281, 8464, 8649, 8836, 9025, 9216, 9409, 9604, 9801, 10000]

```python
[ k for k in range(100) if k%7 == 0 ]
```

    [0, 7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 91, 98]

```python
[(a,b) for a in range(1,7) for b in range(1,7) if a + b >= 10 ]
```

    [(4, 6), (5, 5), (5, 6), (6, 4), (6, 5), (6, 6)]

```python
{ a+b for a in range(1,7) for b in range(1,7) }
```

    {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}

```python
sum([k**2 for k in range(100+1)])
```

    338350

### Destructuring

In Python it is possible to give names to the elements of a sequence in a very natural way.

Suppose we have a tuple such as

```python
t = (3,4,5)
```

and we want to operate with its elements. We can access them with an index:

```python
t[1] + t[2]
```

    9

There is nothing wrong with that, but index access becomes tedious if the elements appear several times in the code. In those cases it is better to give them names. 
We can do

```python
b = t[1]
c = t[2]

b+c
```

    9

However, Python allows something more elegant:

```python
_,b,c = t

b+c
```

    9

(The name `_` is customarily used when we do not need that element.)

Using this feature we can write several assignments at once:

```python
x,y = 23,45
```

A starred name captures all the remaining elements in a list:

```python
s = 'Alberto'

x, y, *z, w = s
```

```python
y
```

    'l'

```python
z
```

    ['b', 'e', 'r', 't']

Argument destructuring is very practical in combination with *list comprehensions*:

```python
l = [(k,k**2) for k in range(5)]
l
```

    [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16)]

```python
[a+b for a,b in l]
```

    [0, 2, 6, 12, 20]

## Functions

```python
def sp(n):
    r = n**2+n+41
    return r
```

```python
sp(5)
```

    71

Several results can be returned in a tuple:

```python
import math

def ecsec(a,b,c):
    d = math.sqrt(b**2- 4*a*c)
    s1 = (-b+d)/2/a
    s2 = (-b-d)/2/a
    return (s1,s2)
```

```python
ecsec(2,-6,4)
```

    (2.0, 1.0)

The tuple's parentheses are optional.

```python
a,b = ecsec(1,-3,2)

b
```

    1.0

Global variables are visible inside functions, and assignments create local variables (unless the name is declared `global`).

```python
a = 5

b = 8

def f(x):
    b = a+1
    return b

print(f(3))
print(b)
```

    6
    8

```python
a = 5

b = 8

def f(x):
    global b
    b = a+1
    return b

print(f(3))
print(b)
```

    6
    6

Default arguments:

```python
def incre(x,y=1):
    return x + y

print(incre(5))
print(incre(5,3))
```

    6
    8

Keyword arguments:

```python
incre(y=3, x=2)
```

    5

Documentation:

```python
# ? sum
help(sum)
```

    Help on built-in function sum in module builtins:
    
    sum(iterable, start=0, /)
        Return the sum of a 'start' value (default: 0) plus an iterable of numbers
    
        When the iterable is empty, return the start value.
        This function is intended specifically for use with numeric values and may
        reject non-numeric types.

```python
def fun(n):
    """A very simple function that computes the triple of its argument."""
    return 3*n
```

```python
help(fun)
```

    Help on function fun in module __main__:
    
    fun(n)
        A very simple function that computes the triple of its argument.

### Libraries

Functions defined in a file can be used directly with an `import`. There is a convention of defining a `main` function that runs when the file is launched as a program; it is typically used to run tests.

### Functional programming

In Python 3 the functional constructs create sequences "on demand".

```python
map(sp,range(5))
```

    <map object at 0x...>

```python
for k in map(sp,range(5)):
    print(k)
```

    41
    43
    47
    53
    61

```python
list(map(sp,range(5)))
```

    [41, 43, 47, 53, 61]

```python
list(filter(lambda x: x%2 == 1, range(10)))
```

    [1, 3, 5, 7, 9]

It is rare to use map and filter explicitly, since their effect is achieved more comfortably with list comprehensions:

```python
[k**2 for k in range(10) if k >5 ]
```

    [36, 49, 64, 81]

```python
def divis(n):
    return [k for k in range(2,n) if n%k==0]
```

```python
divis(12)
```

    [2, 3, 4, 6]

```python
divis(1001)
```

    [7, 11, 13, 77, 91, 143]

```python
def perfect(n):
    return sum(divis(n)) + 1 == n
```

```python
perfect(4)
```

    False

```python
perfect(6)
```

    True

```python
def prime(n):
    return divis(n)==[]
```

```python
[k for k in range(2,21) if prime(k)]
```

    [2, 3, 5, 7, 11, 13, 17, 19]

```python
from functools import reduce
import operator

def product(l):
    return reduce(operator.mul,l,1)
```

```python
product(range(1,10+1))
```

    3628800

A function that builds functions:

```python
def mkfun(y):
    return lambda x: x+y
```

```python
f = mkfun(1)
g = mkfun(5)

print(f(10))
print(g(10))
```

    11
    15

```python
fs = list(map(mkfun,range(1,6)))

print(fs[0](10))
print(fs[4](10))
```

    11
    15

## Arrays

A large part of Python's success is due to [numpy](http://www.numpy.org/).

```python
import numpy as np
```

Construction from lists (or other containers):

```python
m = np.array([[5,3, 2,10],
              [2,0, 7, 0],
              [1,1,-3, 6]])
```

```python
m[1,2]
```

    7

Inspecting its type and structure:

```python
type(m)
```

    numpy.ndarray

```python
m.dtype
```

    dtype('int64')

```python
m.shape
```

    (3, 4)

```python
m.ndim
```

    2

```python
len(m)
```

    3

```python
m.size
```

    12

Element-wise operations are automatic:

```python
5*m + 2
```

    array([[ 27,  17,  12,  52],
           [ 12,   2,  37,   2],
           [  7,   7, -13,  32]])

Special constructors:

```python
np.zeros([2,3])
```

    array([[0., 0., 0.],
           [0., 0., 0.]])

```python
np.ones([4])
```

    array([1., 1., 1., 1.])

```python
np.linspace(0,5,11)
```

    array([0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. ])

```python
np.arange(10)
```

    array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

```python
np.arange(1,10,0.5)
```

    array([1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. , 5.5, 6. , 6.5, 7. ,
           7.5, 8. , 8.5, 9. , 9.5])

```python
np.eye(7)
```

    array([[1., 0., 0., 0., 0., 0., 0.],
           [0., 1., 0., 0., 0., 0., 0.],
           [0., 0., 1., 0., 0., 0., 0.],
           [0., 0., 0., 1., 0., 0., 0.],
           [0., 0., 0., 0., 1., 0., 0.],
           [0., 0., 0., 0., 0., 1., 0.],
           [0., 0., 0., 0., 0., 0., 1.]])

Iteration, along the first dimension:

```python
for e in np.arange(4):
    print(e)
```

    0
    1
    2
    3

```python
for e in m:
    print(e)
```

    [ 5  3  2 10]
    [2 0 7 0]
    [ 1  1 -3  6]

```python
sum(m)
```

    array([ 8,  4,  6, 16])

```python
np.sum(m,axis=1)
```

    array([20,  9,  5])

Matrix operations:

```python
m.T
```

    array([[ 5,  2,  1],
           [ 3,  0,  1],
           [ 2,  7, -3],
           [10,  0,  6]])

```python
v = np.array([3,2,-5,8])
```

The matrix product, the dot product of vectors, and their generalization to multidimensional arrays are written with the `@` symbol (which stands for the `dot` function).

```python
m @ v
```

    array([ 91, -29,  68])

```python
np.diag([10,0,1]) @ m
```

    array([[ 50,  30,  20, 100],
           [  0,   0,   0,   0],
           [  1,   1,  -3,   6]])

The mathematical functions are optimized to operate on arrays element by element:

```python
x = np.linspace(0,2*np.pi,30)

x
```

    array([0.        , 0.21666156, 0.43332312, 0.64998469, 0.86664625,
           1.08330781, 1.29996937, 1.51663094, 1.7332925 , 1.94995406,
           2.16661562, 2.38327719, 2.59993875, 2.81660031, 3.03326187,
           3.24992343, 3.466585  , 3.68324656, 3.89990812, 4.11656968,
           4.33323125, 4.54989281, 4.76655437, 4.98321593, 5.1998775 ,
           5.41653906, 5.63320062, 5.84986218, 6.06652374, 6.28318531])

```python
y = np.sin(x) + np.cos(2*x)
y
```

    array([ 1.        ,  1.12254586,  1.06727539,  0.87270255,  0.60038006,
            0.32232498,  0.10669282,  0.00439546,  0.03917335,  0.20298123,
            0.45755084,  0.74183837,  0.9839623 ,  1.1153946 ,  1.08473957,
            0.86850154,  0.47679154, -0.04714542, -0.63356055, -1.19782715,
           -1.65497221, -1.93447969, -1.99267137, -1.82040717, -1.44469911,
           -0.92394405, -0.33764588,  0.22749718,  0.69260498,  1.        ])

Reconfiguring the elements:

```python
np.arange(12)
```

    array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])

```python
np.arange(12).reshape(3,2,2)
```

    array([[[ 0,  1],
            [ 2,  3]],
    
           [[ 4,  5],
            [ 6,  7]],
    
           [[ 8,  9],
            [10, 11]]])

### Block matrices

```python
np.append(m,[[100,200,300,400],
             [0,  10,  0,  1] ],axis=0)
```

    array([[  5,   3,   2,  10],
           [  2,   0,   7,   0],
           [  1,   1,  -3,   6],
           [100, 200, 300, 400],
           [  0,  10,   0,   1]])

```python
np.hstack([np.zeros([3,3]),np.ones([3,2])])
```

    array([[0., 0., 0., 1., 1.],
           [0., 0., 0., 1., 1.],
           [0., 0., 0., 1., 1.]])

```python
np.vstack([np.eye(3),5*np.ones([2,3])])
```

    array([[1., 0., 0.],
           [0., 1., 0.],
           [0., 0., 1.],
           [5., 5., 5.],
           [5., 5., 5.]])

numpy provides a special `matrix` type for 2-dimensional arrays, but [its use is discouraged](https://stackoverflow.com/questions/4151128/what-are-the-differences-between-numpy-arrays-and-matrices-which-one-should-i-u), or should be done with care.

### Automatic broadcasting

Element-wise operations require arguments with the same dimensions. 
Pero si alguna dimensi\u00f3n es igual a uno, se sobreentiende que los elementos se replican en esa dimensi\u00f3n para coincidir con el otro array.\n\n\n```python\nm = np.array([[1, 2, 3, 4]\n ,[5, 6, 7, 8]\n ,[9,10,11,12]])\n```\n\n\n```python\nm + [[10],\n [20],\n [30]]\n```\n\n\n\n\n array([[11, 12, 13, 14],\n [25, 26, 27, 28],\n [39, 40, 41, 42]])\n\n\n\n\n```python\nm + [100,200,300,400]\n```\n\n\n\n\n array([[101, 202, 303, 404],\n [105, 206, 307, 408],\n [109, 210, 311, 412]])\n\n\n\n\n```python\nnp.array([[1,2,3,4]]) + np.array([[100],\n [200],\n [300]])\n```\n\n\n\n\n array([[101, 102, 103, 104],\n [201, 202, 203, 204],\n [301, 302, 303, 304]])\n\n\n\n### Slices\n\nExtracci\u00f3n de elementos y \"submatrices\" o \"subarrays\", seleccionando intervalos de filas, columnas, etc.:\n\n\n```python\nm = np.arange(42).reshape(6,7)\nm\n```\n\n\n\n\n array([[ 0, 1, 2, 3, 4, 5, 6],\n [ 7, 8, 9, 10, 11, 12, 13],\n [14, 15, 16, 17, 18, 19, 20],\n [21, 22, 23, 24, 25, 26, 27],\n [28, 29, 30, 31, 32, 33, 34],\n [35, 36, 37, 38, 39, 40, 41]])\n\n\n\n\n```python\nm[1,2]\n```\n\n\n\n\n 9\n\n\n\n\n```python\nm[2:5,1:4]\n```\n\n\n\n\n array([[15, 16, 17],\n [22, 23, 24],\n [29, 30, 31]])\n\n\n\n\n```python\nm[:3, 4:]\n```\n\n\n\n\n array([[ 4, 5, 6],\n [11, 12, 13],\n [18, 19, 20]])\n\n\n\n\n```python\nm[[1,0,0,2,1],:]\n```\n\n\n\n\n array([[ 7, 8, 9, 10, 11, 12, 13],\n [ 0, 1, 2, 3, 4, 5, 6],\n [ 0, 1, 2, 3, 4, 5, 6],\n [14, 15, 16, 17, 18, 19, 20],\n [ 7, 8, 9, 10, 11, 12, 13]])\n\n\n\nLos \u00edndices negativos indican que se empieza a contar desde el final.\n\n\n```python\n# las dos \u00faltimas columnas y todas las filas menos las tres \u00faltimas.\nm[:-3,-2:]\n```\n\n\n\n\n array([[ 5, 6],\n [12, 13],\n [19, 20]])\n\n\n\n\n```python\n# la pen\u00faltima columna\nm[:,-2]\n```\n\n\n\n\n array([ 5, 12, 19, 26, 33, 40])\n\n\n\n\n```python\n# la pen\u00faltima columna pero como array 2D (matriz), para que se vea como un vector columna\nm[:,[-2]]\n```\n\n\n\n\n array([[ 
5],\n [12],\n [19],\n [26],\n [33],\n [40]])\n\n\n\n### Masks\n\nExtracci\u00f3n de elementos que cumplen una condici\u00f3n:\n\n\n```python\nn = np.arange(10)\n\nn\n```\n\n\n\n\n array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\n\n\n\n```python\nn < 5\n```\n\n\n\n\n array([ True, True, True, True, True, False, False, False, False,\n False])\n\n\n\n\n```python\nn[n<5]\n```\n\n\n\n\n array([0, 1, 2, 3, 4])\n\n\n\n\n```python\nk = np.arange(1,101)\n\n(k ** 2)[(k>10) & (k**3 < 2000)]\n```\n\n\n\n\n array([121, 144])\n\n\n\n### I/O\n\nLa funci\u00f3n `np.loadtxt` permite cargar los datos de los arrays a partir de ficheros de texto. Tambi\u00e9n es posible guardar y recuperar arrays en formato binario.\n\n## Gr\u00e1ficas\n\nUno de los paquetes gr\u00e1ficos m\u00e1s conocidos es `matplotlib`, que puede utilizarse con un interfaz muy parecido al de Matlab/Octave.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# para insertar los gr\u00e1ficos en el notebook\n%matplotlib inline\n\n# para generar ventanas independientes\n# %matplotlib qt\n# %matplotib tk\n```\n\n\n```python\nx=np.linspace(0,2*np.pi,200)\n```\n\n\n```python\nplt.plot(np.sin(x))\n```\n\n\n```python\nplt.plot(np.cos(x),np.sin(x)); plt.axis('equal');\n```\n\n\n```python\nplt.plot(x,np.sin(x), x,np.cos(x));\n```\n\n\n```python\nplt.plot(x,np.sin(x),color='red')\nplt.plot(x,np.sin(2*x),color='black')\nplt.plot([1,2.5],[-0.5,0],'.',markersize=15);\nplt.legend(['hola','fun','puntos']);\nplt.xlabel('x'); plt.ylabel('y'); plt.title('bonito plot'); plt.axis('tight');\n```\n\nEl gr\u00e1fico se puede exportar en el formato deseado:\n\n\n```python\n# plt.savefig('result.pdf') # o .svg, .png, .jpg, etc.\n```\n\n\n```python\nplt.plot(x,np.exp(x)); plt.axis([0,3,-1,5]);\n```\n\n\n```python\nfor k in [1,2,3]:\n plt.plot(x,np.sin(k*x))\nplt.grid()\n```\n\n\n```python\ndef espiral(n):\n t = np.linspace(0,n*2*np.pi,1000)\n r = 3 * t\n x = r * np.cos(t)\n y = r * np.sin(t)\n plt.plot(x,y)\n plt.axis('equal')\n 
plt.axis('off')\n\nespiral(4)\n```\n\n\n```python\nimport numpy.random as rnd\n\ndef randwalk(n,s):\n p = s*rnd.randn(n,2)\n r = np.cumsum(p,axis=0)\n x = r[:,0]\n y = r[:,1]\n plt.plot(x,y)\n plt.axis('equal');\n```\n\n\n```python\nplt.figure(figsize=(4,4))\nrandwalk(1000,1)\n```\n\n\n```python\nplt.figure(figsize=(8,8))\nx = np.linspace(0,6*np.pi,100);\n\nplt.subplot(2,2,1)\nplt.plot(x,np.sin(x),'r')\n\nplt.subplot(2,2,2)\nplt.plot(x,np.cos(x))\n\nplt.subplot(2,2,3)\nplt.plot(x,np.sin(2*x))\n\nplt.subplot(2,2,4)\nplt.plot(x,np.cos(2*x),'g');\n```\n\n\n```python\nx,y = np.mgrid[-3:3:0.2,-3:3:0.2]\n\nz = x**2-y**2-1\n```\n\n\n```python\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\n\nfig = plt.figure(figsize=(8,6))\nax = fig.add_subplot(111, projection='3d')\n\nax.plot_surface(x,y,x**2-y**2, cmap=cm.coolwarm, linewidth=0.5, rstride=2, cstride=2);\n```\n\n\n```python\nplt.figure(figsize=(6,6))\nplt.contour(x,y, z , colors=['k']);\nplt.axis('equal');\n```\n\n### Animations\n\n\n```python\n# if an error occurs:\n# conda install -c menpo ffmpeg\n\nfrom matplotlib import animation, rc\nfrom IPython.display import HTML\nrc('animation', html='html5')\n```\n\n\n```python\nx = np.linspace(0,2,100)\n\ndef wave(lam,freq,x,t):\n return 1*np.sin(2*np.pi*(x/lam - t*freq))\n```\n\n\n```python\nfig, ax = plt.subplots()\nplt.grid()\nplt.title('traveling wave')\nplt.xlabel('x');\nplt.close();\nax.set_xlim(( 0, 2))\nax.set_ylim((-1.1, 1.1))\n\nline1, = ax.plot([], [], '-')\n#line2, = ax.plot([], [], '.', markersize=20)\n\nlam = 0.8\nfreq = 1/4\n\ndef animate(i):\n t = i/25\n line1.set_data(x,wave(lam,freq,x,t))\n #line2.set_data(1,wave(lam,freq,1,t))\n return ()\n\nanimation.FuncAnimation(fig, animate, frames=100, interval=1000/25, blit=True)\n```\n\n\n\n\n\n\n\n\n### Data frames\n\nThe `pandas` module provides the "dataframe" type, widely used in data analysis. It can read data sets stored in files, even files located on a remote machine.\n\n\n```python\nimport pandas as pd\n\ndf = pd.read_table('https://robot.inf.um.es/material/data/ConstanteHubbleDatos-1.txt', sep='\\s+', comment='#')\ndf\n```\n\n\n\n\n
         V(km/s)  Redshift  Magnitud
     0     18287  0.060998     17.62
     1      5691  0.018983     15.00
     2     26382  0.088000     18.59
     3      5996  0.020000     15.54
     4     19202  0.064051     15.30
     5     23684  0.079000     16.56
     6     11702  0.039034     17.14
     7     17284  0.057653     13.50
     8     13491  0.045000     17.80
     9     10566  0.035244     15.25
     10    14718  0.049094     15.60
     11    13491  0.045000     14.52
     12    16325  0.054453     15.30
     13    20686  0.069000     16.80
     14     1808  0.006031     11.16
     15     7603  0.025361     15.18
     16     1018  0.003395     12.24
     17      321  0.001071     13.00
     18     3106  0.010360     12.49
     19     9426  0.031442     14.53
     20     7464  0.024897     15.21
     21    15143  0.050512     17.40
     22      407  0.001358     10.87
     23     7257  0.024207     14.60
     24     9193  0.030664     15.10
     25    12137  0.040485     14.75
     26     4264  0.014224     14.98
     27     4381  0.014615     14.15
     28    22484  0.075000     17.43
     29    15162  0.050575     16.50
     30    30000  0.101000     18.90
     31    12981  0.043300     15.23
     32     8803  0.029364     14.90
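Besides converting the DataFrame to a plain array, columns can also be selected directly by name. The following is a minimal offline sketch; the three rows are copied from the table above so that no network access is needed:

```python
import pandas as pd

# Three rows copied from the Hubble table above, so the example runs offline
df = pd.DataFrame({'V(km/s)':  [18287, 5691, 26382],
                   'Redshift': [0.060998, 0.018983, 0.088000],
                   'Magnitud': [17.62, 15.00, 18.59]})

# Columns are selected by name; the result is a pandas Series
v = df['V(km/s)']
print(v.mean())   # average recession velocity of these three rows
print(df.shape)   # (3, 3)
```

Selecting by column name avoids having to remember positional indices such as `A[:,0]` when the data set has many columns.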
\n\n\n\nIt can be converted into an ordinary array:\n\n\n```python\nA = np.array(df)\nA\n```\n\n\n\n\n array([[1.8287e+04, 6.0998e-02, 1.7620e+01],\n [5.6910e+03, 1.8983e-02, 1.5000e+01],\n [2.6382e+04, 8.8000e-02, 1.8590e+01],\n [5.9960e+03, 2.0000e-02, 1.5540e+01],\n [1.9202e+04, 6.4051e-02, 1.5300e+01],\n [2.3684e+04, 7.9000e-02, 1.6560e+01],\n [1.1702e+04, 3.9034e-02, 1.7140e+01],\n [1.7284e+04, 5.7653e-02, 1.3500e+01],\n [1.3491e+04, 4.5000e-02, 1.7800e+01],\n [1.0566e+04, 3.5244e-02, 1.5250e+01],\n [1.4718e+04, 4.9094e-02, 1.5600e+01],\n [1.3491e+04, 4.5000e-02, 1.4520e+01],\n [1.6325e+04, 5.4453e-02, 1.5300e+01],\n [2.0686e+04, 6.9000e-02, 1.6800e+01],\n [1.8080e+03, 6.0310e-03, 1.1160e+01],\n [7.6030e+03, 2.5361e-02, 1.5180e+01],\n [1.0180e+03, 3.3950e-03, 1.2240e+01],\n [3.2100e+02, 1.0710e-03, 1.3000e+01],\n [3.1060e+03, 1.0360e-02, 1.2490e+01],\n [9.4260e+03, 3.1442e-02, 1.4530e+01],\n [7.4640e+03, 2.4897e-02, 1.5210e+01],\n [1.5143e+04, 5.0512e-02, 1.7400e+01],\n [4.0700e+02, 1.3580e-03, 1.0870e+01],\n [7.2570e+03, 2.4207e-02, 1.4600e+01],\n [9.1930e+03, 3.0664e-02, 1.5100e+01],\n [1.2137e+04, 4.0485e-02, 1.4750e+01],\n [4.2640e+03, 1.4224e-02, 1.4980e+01],\n [4.3810e+03, 1.4615e-02, 1.4150e+01],\n [2.2484e+04, 7.5000e-02, 1.7430e+01],\n [1.5162e+04, 5.0575e-02, 1.6500e+01],\n [3.0000e+04, 1.0100e-01, 1.8900e+01],\n [1.2981e+04, 4.3300e-02, 1.5230e+01],\n [8.8030e+03, 2.9364e-02, 1.4900e+01]])\n\n\n\n\n```python\nx = A[:,0]\ny = A[:,2]\n\n# x,_,y = A.T\n\nplt.plot(x,y,'.');\n```\n\n## Scientific computing\n\n### Pseudorandom numbers and elementary statistics\n\n`numpy` can generate arrays of pseudorandom numbers with different kinds of distributions (uniform, normal, etc.).\n\nIt also has descriptive-statistics functions to compute features of data sets such as the mean, median, standard deviation, maximum and minimum, etc.\n\nAs an example, we can study the distribution of scores obtained when rolling 3 dice.\n\n\n```python\ndados = np.random.randint(1,6+1,(100,3))\ndados[:10]\n```\n\n\n\n\n array([[3, 6, 1],\n [6, 1, 1],\n [4, 4, 2],\n [4, 5, 2],\n [5, 5, 4],\n [2, 3, 2],\n [1, 6, 6],\n [4, 1, 2],\n [2, 3, 3],\n [2, 1, 1]])\n\n\n\n\n```python\ns = np.sum(dados,axis=1)\ns\n```\n\n\n\n\n array([10, 8, 10, 11, 14, 7, 13, 7, 8, 4, 14, 16, 10, 11, 14, 17, 8,\n 11, 9, 14, 17, 3, 14, 6, 14, 10, 7, 8, 12, 9, 9, 14, 11, 11,\n 8, 13, 12, 7, 12, 9, 9, 9, 7, 9, 12, 4, 13, 14, 17, 7, 9,\n 9, 11, 6, 13, 15, 11, 8, 14, 14, 12, 5, 9, 13, 13, 4, 8, 16,\n 13, 12, 11, 14, 7, 7, 7, 10, 10, 13, 11, 11, 10, 8, 11, 10, 7,\n 14, 13, 13, 10, 16, 6, 15, 9, 18, 11, 16, 7, 4, 7, 16])\n\n\n\n\n```python\nplt.hist(s,bins=np.arange(2,19)+0.5);\n```\n\n\n```python\ns.mean()\n```\n\n\n\n\n 10.6\n\n\n\n\n```python\ns.std()\n```\n\n\n\n\n 3.3763886032268267\n\n\n\n### Efficient implementation\n\nThe `numpy` operations are "optimized" (written internally in efficient C code).\n\n\n```python\nx = np.random.rand(10**8)\n```\n\n\n```python\nx\n```\n\n\n\n\n array([0.39520383, 0.36193568, 0.16302059, ..., 0.66497153, 0.07501657,\n 0.8121129 ])\n\n\n\n\n```python\n%%time\n\nnp.mean(x)\n```\n\n CPU times: user 129 ms, sys: 465 µs, total: 130 ms\n Wall time: 127 ms\n\n\n\n\n\n 0.500018403633832\n\n\n\n\n```python\n%%timeit\n\nnp.mean(x)\n```\n\n 85.5 ms ± 8.76 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\n\n\n```python\n%%timeit\n\nx @ x\n```\n\n 87.1 ms ± 6.98 ms per loop (mean ± std. dev. 
of 7 runs, 10 loops each)\n\n\nIf the same operation is carried out "manually" with ordinary Python statements, it takes much longer:\n\n\n```python\n%%time\n\ns = 0\nfor e in x:\n s += e\nprint(s/len(x))\n```\n\n 0.5000184036336627\n CPU times: user 17.2 s, sys: 0 ns, total: 17.2 s\n Wall time: 17.2 s\n\n\nTherefore, if we use the appropriate modules, Python programs need not be slower than those written in other programming languages. Python is "glue code": a glue for combining libraries of functions, written in any language, that efficiently solve specific problems.\n\n### Linear algebra\n\nThe `linalg` submodule offers the usual linear algebra operations.\n\n\n```python\nimport scipy.linalg as la\n```\n\nFor example, we can easily compute the norm of a vector:\n\n\n```python\nla.norm([1,2,3,4,5])\n```\n\n\n\n\n 7.416198487095663\n\n\n\nor the determinant of a matrix:\n\n\n```python\nla.det([[1,2],\n [3,4]])\n```\n\n\n\n\n -2.0\n\n\n\nNote that many of the functions that work with arrays also accept other containers such as lists or tuples, which are automatically converted into arrays.\n\nA very important problem is solving systems of linear equations. If we have to solve a system such as\n\n$$\n\\begin{align*}\nx + 2y &= 3\\\\\n3x+4y &= 5\n\\end{align*}\n$$\n\nwe express it in matrix form $AX=B$ and can solve it with the inverse of $A$, or directly with `solve`.\n\n\n```python\nm = np.array([[1,2],\n [3,4]])\n```\n\n\n```python\nm\n```\n\n\n\n\n array([[1, 2],\n [3, 4]])\n\n\n\n\n```python\nla.inv(m)\n```\n\n\n\n\n array([[-2. , 1. 
],\n [ 1.5, -0.5]])\n\n\n\n\n```python\nla.inv(m) @ np.array([3,5])\n```\n\n\n\n\n array([-1., 2.])\n\n\n\nIt is better (more efficient and numerically stable) to use the `solve` function:\n\n\n```python\nla.solve(m,[3,5])\n```\n\n\n\n\n array([-1., 2.])\n\n\n\nThe solution should really be displayed as a column, but in Python one-dimensional arrays are printed as a row because they do not always represent mathematical vectors. If we prefer, we can use single-column matrices.\n\n\n```python\nx = la.solve(m,[[3],\n [5]])\n\nx\n```\n\n\n\n\n array([[-1.],\n [ 2.]])\n\n\n\n\n```python\nm @ x\n```\n\n\n\n\n array([[3.],\n [5.]])\n\n\n\nIf the right-hand side of the matrix equation $A X = B$ is a matrix, the solution $X$ will be one too.\n\n### Matrix computations\n\nPython provides a [wide collection](https://docs.scipy.org/doc/scipy/reference/linalg.html) of numerical linear algebra functions.\n\n\n```python\nla.eigh([[1,2],\n [2,3]])\n```\n\n\n\n\n (array([-0.23606798, 4.23606798]), array([[-0.85065081, 0.52573111],\n [ 0.52573111, 0.85065081]]))\n\n\n\n### Least squares\n\nAs an example of using the linear algebra tools we will fit a polynomial model to some fictitious observations. We will find the least-squares solution of an overdetermined system of equations.\n\nFirst we generate artificial test data that simulate noise-contaminated observations of a nonlinear function.\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nx = np.linspace(0,2,30)\n\ny = np.sin(x) + 0.05*np.random.randn(x.size)\n\nplt.plot(x,y,'.');\n```\n\nWe are going to fit a model of the form $y = ax^2 + bx + c$. 
The unknown coefficients $a$, $b$ and $c$ can be obtained by solving a system of linear equations.\n\nThe coefficient matrix contains powers of $x$ up to the degree we are interested in.\n\n\n```python\nA = np.vstack([x**2, x, np.ones(x.size)]).T\n\nA\n```\n\n\n\n\n array([[0. , 0. , 1. ],\n [0.00475624, 0.06896552, 1. ],\n [0.01902497, 0.13793103, 1. ],\n [0.04280618, 0.20689655, 1. ],\n [0.07609988, 0.27586207, 1. ],\n [0.11890606, 0.34482759, 1. ],\n [0.17122473, 0.4137931 , 1. ],\n [0.23305589, 0.48275862, 1. ],\n [0.30439952, 0.55172414, 1. ],\n [0.38525565, 0.62068966, 1. ],\n [0.47562426, 0.68965517, 1. ],\n [0.57550535, 0.75862069, 1. ],\n [0.68489893, 0.82758621, 1. ],\n [0.80380499, 0.89655172, 1. ],\n [0.93222354, 0.96551724, 1. ],\n [1.07015458, 1.03448276, 1. ],\n [1.2175981 , 1.10344828, 1. ],\n [1.3745541 , 1.17241379, 1. ],\n [1.54102259, 1.24137931, 1. ],\n [1.71700357, 1.31034483, 1. ],\n [1.90249703, 1.37931034, 1. ],\n [2.09750297, 1.44827586, 1. ],\n [2.3020214 , 1.51724138, 1. ],\n [2.51605232, 1.5862069 , 1. ],\n [2.73959572, 1.65517241, 1. ],\n [2.97265161, 1.72413793, 1. ],\n [3.21521998, 1.79310345, 1. ],\n [3.46730083, 1.86206897, 1. ],\n [3.72889417, 1.93103448, 1. ],\n [4. , 2. , 1. 
]])\n\n\n\nThe right-hand side of the system is simply the vector with the values of $y$, the dependent variable of the model.\n\n\n```python\nB = np.array(y)\n\nB\n```\n\n\n\n\n array([-0.03376601, 0.1262018 , 0.10308713, 0.22008185, 0.22293447,\n 0.36197156, 0.33667892, 0.34638041, 0.51909713, 0.65396166,\n 0.60524298, 0.6565335 , 0.80283792, 0.80984674, 0.90612391,\n 0.86432539, 0.82420153, 0.8831923 , 0.96355321, 0.95857022,\n 1.00002704, 1.00481373, 0.99556629, 0.97193459, 0.9837597 ,\n 1.01496467, 0.95858499, 1.00338407, 0.92323694, 0.91704651])\n\n\n\nThe system to be solved is overdetermined: it has only three unknowns and as many equations as observations of the function.\n\n$$A \\begin{bmatrix}a\\\\b\\\\c\\end{bmatrix}= B$$\n\nThe [least squares](https://en.wikipedia.org/wiki/Least_squares) solution for the model coefficients is obtained directly:\n\n\n```python\nsol = la.lstsq(A,B)[0]\n\nsol\n```\n\n\n\n\n array([-0.39434454, 1.28360329, -0.05193268])\n\n\n\n\n```python\nye = A @ sol\n\nplt.plot(x,y,'.',x,ye,'r');\n```\n\nYou can experiment with polynomials of higher or lower degree.\n\n### Numerical solution of nonlinear equations\n\nSolve\n\n$$x^4=16$$\n\n\n```python\nnp.roots([1,0,0,0,-16])\n```\n\n\n\n\n array([-2.00000000e+00+0.j, 1.66533454e-16+2.j, 1.66533454e-16-2.j,\n 2.00000000e+00+0.j])\n\n\n\nSolve\n\n$$\\sin(x)+\\cos(2x)=0$$\n\n\n```python\nimport scipy.optimize as opt\n\nopt.fsolve(lambda x: np.sin(x) + np.cos(2*x), 0)\n```\n\n\n\n\n array([-0.52359878])\n\n\n\nSolve\n\n$$\n\\begin{align*}\nx^2 - 3y &= 10\\\\\n\\sin(x)+y &= 5\n\\end{align*}\n$$\n\n\n```python\ndef fun(z):\n x,y = z\n return [ x**2 - 3*y - 10\n , np.sin(x) + y - 5]\n\nopt.fsolve(fun,[0.1,-0.1])\n```\n\n\n\n\n array([5.2511881 , 5.85832548])\n\n\n\n### Minimization\n\nFind the $(x,y)$ that minimizes $(x-1)^2 + (y-2)^2-x+3y$\n\n\n```python\ndef fun(z):\n x,y = z\n 
return (x-1)**2 + (y-2)**2 - x + 3*y\n\nopt.minimize(fun,[0.1,-0.1])\n```\n\n\n\n\n fun: 2.500000000000014\n hess_inv: array([[ 0.57758622, -0.18103452],\n [-0.18103452, 0.92241375]])\n jac: array([ 0.00000000e+00, -2.38418579e-07])\n message: 'Optimization terminated successfully.'\n nfev: 12\n nit: 2\n njev: 3\n status: 0\n success: True\n x: array([1.49999999, 0.49999988])\n\n\n\n### Numerical differentiation\n\nCompute a numerical approximation to $f'(2)$ for $f(x) = \\sin(2x)\\exp(\\cos(x))$\n\n\n```python\nfrom scipy.misc import derivative\n\nderivative(lambda x: np.sin(2*x)*np.exp(np.cos(x)),2,1E-6)\n```\n\n\n\n\n -0.40836700757052036\n\n\n\n\n```python\n(lambda x: (-np.sin(x)*np.sin(2*x) + 2*np.cos(2*x))*np.exp(np.cos(x)))(2)\n```\n\n\n\n\n -0.40836700756782335\n\n\n\n### Numerical integration\n\nCompute a numerical approximation to the definite integral\n\n$$\\int_0^1 \\frac{4}{1+x^2}dx$$\n\n\n```python\nfrom scipy.integrate import quad\n\nquad(lambda x: 4/(1+x**2),0,1)\n```\n\n\n\n\n (3.1415926535897936, 3.4878684980086326e-14)\n\n\n\n### Differential equations\n\nSolve\n\n$$\\ddot{x}+0.95x+0.1\\dot{x}=0$$\n\nfor $x(0)=10$, $\\dot{x}(0)=0, t\\in[0,20]$\n\n\n```python\nfrom scipy.integrate import odeint\n\ndef xdot(z,t):\n x,v = z\n return [v,-0.95*x-0.1*v]\n\nt = np.linspace(0,20,1000)\nr = odeint(xdot,[10,0],t)\n# plt.plot(r);\nplt.plot(t,r[:,0],t,r[:,1]);\n```\n\n\n```python\nplt.plot(r[:,0],r[:,1]);\n```\n\n### Symbolic computation\n\n[sympy](http://www.sympy.org/en/index.html)\n\n\n```python\nimport sympy\n\nx = sympy.Symbol('x')\n```\n\n\n```python\nsympy.diff( sympy.sin(2*x**3) , x)\n```\n\n\n\n\n 6*x**2*cos(2*x**3)\n\n\n\n\n```python\nsympy.integrate(1/(1+x))\n```\n\n\n\n\n log(x + 1)\n\n\n\n## Miscellaneous\n\n### Videos\n\n\n```python\nfrom IPython.display import YouTubeVideo\n```\n\n\n```python\nYouTubeVideo('p7bzE1E5PMY')\n```\n\n\n\n\n\n\n\n\n\n\n### 
xkcd\n\n\n```python\nplt.xkcd()\nplt.plot(np.sin(np.linspace(0, 10)))\nplt.title('Whoo Hoo!!!');\n```\n\n### Notebook style\n\n\n```python\n# we can "tune" the notebook style\n#from IPython.display import HTML\n#HTML(open('../css/nb1.css').read())\n```\n\n\n\n# Week 43: Deep Learning: Recurrent Neural Networks and other Deep Learning Methods. 
Principal Component Analysis\n**Morten Hjorth-Jensen**, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University\n\nDate: **Nov 2, 2021**\n\nCopyright 1999-2021, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license\n\n## Plans for week 43\n\n* Thursday: Summary of Convolutional Neural Networks from week 42 and Recurrent Neural Networks\n\n * [Video of Lecture](https://www.uio.no/studier/emner/matnat/fys/FYS-STK3155/h21/forelesningsvideoer/LectureOctober28.mp4?vrtx=view-as-webpage)\n\n* Friday: Recurrent Neural Networks and other Deep Learning methods such as Generative Adversarial Networks. Start discussing Principal Component Analysis\n\n * [Video of Lecture](https://www.uio.no/studier/emner/matnat/fys/FYS-STK3155/h21/forelesningsvideoer/LectureOctober29.mp4?vrtx=view-as-webpage)\n\n**Excellent lectures on CNNs and RNNs.**\n\n* [Video on Convolutional Neural Networks from MIT](https://www.youtube.com/watch?v=iaSUYvmCekI&ab_channel=AlexanderAmini)\n\n* [Video on Recurrent Neural Networks from MIT](https://www.youtube.com/watch?v=SEnXr6v2ifU&ab_channel=AlexanderAmini)\n\n* [Video on Deep Learning](https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi)\n\n**More resources.**\n\n* [IN5400 at UiO Lecture](https://www.uio.no/studier/emner/matnat/ifi/IN5400/v20/material/week10/in5400_2020_week10_recurrent_neural_network.pdf)\n\n* [CS231 at Stanford Lecture](https://www.youtube.com/watch?v=6niqTuYFZLQ&list=PLzUTmXVwsnXod6WNdg57Yc3zFx_f-RYsq&index=10&ab_channel=StanfordUniversitySchoolofEngineering)\n\n## Reading Recommendations\n\n* Goodfellow et al., chapter 10 on recurrent neural networks; chapters 11 and 12 on various practicalities around deep learning are also recommended.\n\n* Aurelien Geron, chapter 14 on RNNs.\n\n## Summary on Deep Learning Methods\n\nWe have studied fully connected neural networks (also called artificial neural networks) and convolutional neural networks (CNNs).\n\nThe first type of network works very well on homogeneous and structured input data, while CNNs are normally tailored to recognizing images.\n\n## CNNs in brief\n\nIn summary:\n\n* A CNN architecture is in the simplest case a list of Layers that transform the image volume into an output volume (e.g. holding the class scores)\n\n* There are a few distinct types of Layers (e.g. CONV/FC/RELU/POOL are by far the most popular)\n\n* Each Layer accepts an input 3D volume and transforms it to an output 3D volume through a differentiable function\n\n* Each Layer may or may not have parameters (e.g. CONV/FC do, RELU/POOL don’t)\n\n* Each Layer may or may not have additional hyperparameters (e.g. CONV/FC/POOL do, RELU doesn’t)\n\nFor more material on convolutional networks, we strongly recommend\nthe course\n[IN5400 – Machine Learning for Image Analysis](https://www.uio.no/studier/emner/matnat/ifi/IN5400/index-eng.html)\nand the slides of [CS231](http://cs231n.github.io/convolutional-networks/), which is taught at Stanford University (consistently ranked as one of the top computer science programs in the world). [Michael Nielsen's book is a must read, in particular chapter 6 which deals with CNNs](http://neuralnetworksanddeeplearning.com/chap6.html).\n\nHowever, neither standard feedforward networks nor CNNs handle input data of unknown or varying length well.\n\nThis is where recurrent neural networks (RNNs) come to our rescue.\n\n## Recurrent neural networks: Overarching view\n\nUntil now our focus, including the convolutional neural networks, has been on feedforward neural networks: the output or the activations flow only in one direction, from the input layer to the output layer.\n\nA recurrent neural network (RNN) looks very much like a feedforward\nneural network, except that it also has connections pointing\nbackward. 
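The backward connection can be made concrete in a few lines of NumPy. This is a minimal sketch of a single tanh recurrent cell with hypothetical weight names (`Wxh`, `Whh`), not the Keras implementation used in the examples of this lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (all hypothetical): 3 input features, 5 hidden units, 7 time steps
n_in, n_hidden, T = 3, 5, 7

# Weights: input-to-hidden, hidden-to-hidden (the backward connection), and a bias
Wxh = 0.1 * rng.standard_normal((n_hidden, n_in))
Whh = 0.1 * rng.standard_normal((n_hidden, n_hidden))
b = np.zeros(n_hidden)

x = rng.standard_normal((T, n_in))  # one input vector per time step
h = np.zeros(n_hidden)              # initial hidden state

states = []
for t in range(T):
    # The same weights are reused at every step; h feeds back into itself
    h = np.tanh(Wxh @ x[t] + Whh @ h + b)
    states.append(h)

states = np.array(states)
print(states.shape)  # (7, 5): one hidden-state vector per time step
```

The key point is that the sequence length `T` can be anything: the loop simply runs longer, with the same small set of weights, which is why RNNs can handle inputs of arbitrary length.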
\n\nRNNs are used to analyze time series data such as stock prices, and\ntell you when to buy or sell. In autonomous driving systems, they can\nanticipate car trajectories and help avoid accidents. More generally,\nthey can work on sequences of arbitrary lengths, rather than on\nfixed-sized inputs like all the nets we have discussed so far. For\nexample, they can take sentences, documents, or audio samples as\ninput, making them extremely useful for natural language processing\nsystems such as automatic translation and speech-to-text.\n\n## Set up of an RNN\n\nAt each time step $t$ an RNN updates a hidden state $\\mathbf{h}_t$ from the current input $\\mathbf{x}_t$ and the hidden state of the previous step,\n\n$$\n\\mathbf{h}_t = f\\left(\\mathbf{W}_{hh}\\mathbf{h}_{t-1}+\\mathbf{W}_{xh}\\mathbf{x}_t+\\mathbf{b}\\right),\n$$\n\nwhere $f$ is an activation function (often $\\tanh$) and the same weight matrices $\\mathbf{W}_{hh}$ and $\\mathbf{W}_{xh}$ are reused at every time step. The output at step $t$ is then computed from $\\mathbf{h}_t$.\n\n## A simple example\n\n\n```python\n%matplotlib inline\n\n# Start importing packages\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nfrom tensorflow.keras import datasets, layers, models\nfrom tensorflow.keras.layers import Input\nfrom tensorflow.keras.models import Model, Sequential \nfrom tensorflow.keras.layers import Dense, SimpleRNN, LSTM, GRU\nfrom tensorflow.keras import optimizers \nfrom tensorflow.keras import regularizers \nfrom tensorflow.keras.utils import to_categorical \n\n\n\n# convert into dataset matrix\ndef convertToMatrix(data, step):\n X, Y =[], []\n for i in range(len(data)-step):\n d=i+step \n X.append(data[i:d,])\n Y.append(data[d,])\n return np.array(X), np.array(Y)\n\nstep = 4\nN = 1000 \nTp = 800 \n\nt=np.arange(0,N)\nx=np.sin(0.02*t)+2*np.random.rand(N)\ndf = pd.DataFrame(x)\ndf.head()\n\nplt.plot(df)\nplt.show()\n\nvalues=df.values\ntrain,test = values[0:Tp,:], values[Tp:N,:]\n\n# add step elements into train and test\ntest = np.append(test,np.repeat(test[-1,],step))\ntrain = np.append(train,np.repeat(train[-1,],step))\n \ntrainX,trainY =convertToMatrix(train,step)\ntestX,testY =convertToMatrix(test,step)\ntrainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))\ntestX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))\n\nmodel = Sequential()\nmodel.add(SimpleRNN(units=32, 
input_shape=(1,step), activation=\"relu\"))\nmodel.add(Dense(8, activation=\"relu\")) \nmodel.add(Dense(1))\nmodel.compile(loss='mean_squared_error', optimizer='rmsprop')\nmodel.summary()\n\nmodel.fit(trainX,trainY, epochs=100, batch_size=16, verbose=2)\ntrainPredict = model.predict(trainX)\ntestPredict= model.predict(testX)\npredicted=np.concatenate((trainPredict,testPredict),axis=0)\n\ntrainScore = model.evaluate(trainX, trainY, verbose=0)\nprint(trainScore)\n\nindex = df.index.values\nplt.plot(index,df)\nplt.plot(index,predicted)\nplt.axvline(df.index[Tp], c=\"r\")\nplt.show()\n```\n\n## An extrapolation example\n\nThe following code provides an example of how recurrent neural\nnetworks can be used to extrapolate to unknown values of physics data\nsets. Specifically, the data sets used in this program come from\na quantum mechanical many-body calculation of energies as functions of the number of particles.\n\n\n```python\n\n# For matrices and calculations\nimport numpy as np\n# For machine learning (backend for keras)\nimport tensorflow as tf\n# User-friendly machine learning library\n# Front end for TensorFlow\nimport tensorflow.keras\n# Different methods from Keras needed to create an RNN\n# This is not necessary but it shortened function calls \n# that need to be used in the code.\nfrom tensorflow.keras import datasets, layers, models\nfrom tensorflow.keras.layers import Input\nfrom tensorflow.keras import regularizers\nfrom tensorflow.keras.models import Model, Sequential\nfrom tensorflow.keras.layers import Dense, SimpleRNN, LSTM, GRU\n# For timing the code\nfrom timeit import default_timer as timer\n# For plotting\nimport matplotlib.pyplot as plt\n\n\n# The data set\ndatatype='VaryDimension'\nX_tot = np.arange(2, 42, 2)\ny_tot = np.array([-0.03077640549, -0.08336233266, -0.1446729567, -0.2116753732, -0.2830637392, -0.3581341341, -0.436462435, -0.5177783846,\n\t-0.6019067271, -0.6887363571, -0.7782028952, -0.8702784034, -0.9649652536, -1.062292565, 
-1.16231451, \n\t-1.265109911, -1.370782966, -1.479465113, -1.591317992, -1.70653767])\n```\n\n## Formatting the Data\n\nThe way the recurrent neural networks are trained in this program\ndiffers from how machine learning algorithms are usually trained.\nTypically a machine learning algorithm is trained by learning the\nrelationship between the x data and the y data. In this program, the\nrecurrent neural network will be trained to recognize the relationship\nin a sequence of y values. This type of data formatting is\ntypically used in time series forecasting, but it can also be used in any\nextrapolation (time series forecasting is just a specific type of\nextrapolation along the time axis). This method of data formatting\ndoes not use the x data and assumes that the y data are evenly spaced.\n\nFor a standard machine learning algorithm, the training data has the\nform of (x,y) so the machine learning algorithm learns to associate a\ny value with a given x value. This is useful when the test data has x\nvalues within the same range as the training data. However, for this\napplication, the x values of the test data are outside of the x values\nof the training data and the traditional method of training a machine\nlearning algorithm does not work as well. For this reason, the\nrecurrent neural network is trained on sequences of y values of the\nform ((y1, y2), y3), so that the network is concerned with learning\nthe pattern of the y data and not the relation between the x and y\ndata. 
As long as the pattern of y data outside of the training region\nstays relatively stable compared to what was inside the training\nregion, this method of training can produce accurate extrapolations to\ny values far removed from the training data set.\n\n\n\n\n\n\n\n\n\n```python\n# FORMAT_DATA\ndef format_data(data, length_of_sequence = 2): \n \"\"\"\n Inputs:\n data(a numpy array): the data that will be the inputs to the recurrent neural\n network\n length_of_sequence (an int): the number of elements in one iteration of the\n sequence pattern. For a function approximator use length_of_sequence = 2.\n Returns:\n rnn_input (a 3D numpy array): the input data for the recurrent neural network. Its\n dimensions are length of data - length of sequence, length of sequence, \n dimension of data\n rnn_output (a numpy array): the training data for the neural network\n Formats data to be used in a recurrent neural network.\n \"\"\"\n\n X, Y = [], []\n for i in range(len(data)-length_of_sequence):\n # Get the next length_of_sequence elements\n a = data[i:i+length_of_sequence]\n # Get the element that immediately follows that\n b = data[i+length_of_sequence]\n # Reshape so that each data point is contained in its own array\n a = np.reshape(a, (len(a), 1))\n X.append(a)\n Y.append(b)\n rnn_input = np.array(X)\n rnn_output = np.array(Y)\n\n return rnn_input, rnn_output\n\n\n# ## Defining the Recurrent Neural Network Using Keras\n# \n# The following method defines a simple recurrent neural network in keras consisting of one input layer, one hidden layer, and one output layer.\n\ndef rnn(length_of_sequences, batch_size = None, stateful = False):\n \"\"\"\n Inputs:\n length_of_sequences (an int): the number of y values in \"x data\". This is determined\n when the data is formatted\n batch_size (an int): Default value is None. See Keras documentation of SimpleRNN.\n stateful (a boolean): Default value is False. 
See Keras documentation of SimpleRNN.\n Returns:\n model (a Keras model): The recurrent neural network that is built and compiled by this\n method\n Builds and compiles a recurrent neural network with one hidden layer and returns the model.\n \"\"\"\n # Number of neurons in the input and output layers\n in_out_neurons = 1\n # Number of neurons in the hidden layer\n hidden_neurons = 200\n # Define the input layer\n inp = Input(batch_shape=(batch_size, \n length_of_sequences, \n in_out_neurons)) \n # Define the hidden layer as a simple RNN layer with a set number of neurons and add it to \n # the network immediately after the input layer\n rnn = SimpleRNN(hidden_neurons, \n return_sequences=False,\n stateful = stateful,\n name=\"RNN\")(inp)\n # Define the output layer as a dense neural network layer (standard neural network layer)\n # and add it to the network immediately after the hidden layer.\n dens = Dense(in_out_neurons,name=\"dense\")(rnn)\n # Create the machine learning model starting with the input layer and ending with the \n # output layer\n model = Model(inputs=[inp],outputs=[dens])\n # Compile the machine learning model using the mean squared error function as the loss \n # function and the Adam optimizer.\n model.compile(loss=\"mean_squared_error\", optimizer=\"adam\") \n return model\n```\n\n## Predicting New Points With A Trained Recurrent Neural Network\n\n\n```python\ndef test_rnn (x1, y_test, plot_min, plot_max):\n \"\"\"\n Inputs:\n x1 (a list or numpy array): The complete x component of the data set\n y_test (a list or numpy array): The complete y component of the data set\n plot_min (an int or float): the smallest x value used in the training data\n plot_max (an int or float): the largest x value used in the training data\n Returns:\n None.\n Uses a trained recurrent neural network model to predict future points in the \n series. 
Computes the MSE of the predicted data set from the true data set, saves\n the predicted data set to a csv file, and plots the predicted and true data sets\n while also displaying the data range used for training.\n \"\"\"\n # Add the training data as the first dim points in the predicted data array as these\n # are known values.\n y_pred = y_test[:dim].tolist()\n # Generate the first input to the trained recurrent neural network using the last two \n # points of the training data. Based on how the network was trained this means that it\n # will predict the first point in the data set after the training data. All of the \n # brackets are necessary for Tensorflow.\n next_input = np.array([[[y_test[dim-2]], [y_test[dim-1]]]])\n # Save the very last point in the training data set. This will be used later.\n last = [y_test[dim-1]]\n\n # Iterate until the complete data set is created.\n for i in range (dim, len(y_test)):\n # Predict the next point in the data set using the previous two points.\n next = model.predict(next_input)\n # Append just the predicted number to the predicted data set\n y_pred.append(next[0][0])\n # Create the input that will be used to predict the next data point in the data set.\n next_input = np.array([[last, next[0]]], dtype=np.float64)\n last = next\n\n # Print the mean squared error between the known data set and the predicted data set.\n print('MSE: ', np.square(np.subtract(y_test, y_pred)).mean())\n # Save the predicted data set as a csv file for later use\n name = datatype + 'Predicted'+str(dim)+'.csv'\n np.savetxt(name, y_pred, delimiter=',')\n # Plot the known data set and the predicted data set. 
The red box represents the region that was used\n # for the training data.\n fig, ax = plt.subplots()\n ax.plot(x1, y_test, label=\"true\", linewidth=3)\n ax.plot(x1, y_pred, 'g-.',label=\"predicted\", linewidth=4)\n ax.legend()\n # Create a red region to represent the points used in the training data.\n ax.axvspan(plot_min, plot_max, alpha=0.25, color='red')\n plt.show()\n\n# Check to make sure the data set is complete\nassert len(X_tot) == len(y_tot)\n\n# This is the number of points that will be used as the training data\ndim=12\n\n# Separate the training data from the whole data set\nX_train = X_tot[:dim]\ny_train = y_tot[:dim]\n\n\n# Generate the training data for the RNN, using a sequence of 2\nrnn_input, rnn_training = format_data(y_train, 2)\n\n\n# Create a recurrent neural network in Keras and produce a summary of the \n# machine learning model\nmodel = rnn(length_of_sequences = rnn_input.shape[1])\nmodel.summary()\n\n# Start the timer. Want to time training+testing\nstart = timer()\n# Fit the model using the training data generated above using 150 training iterations and a 5%\n# validation split. Setting verbose to True prints information about each training iteration.\nhist = model.fit(rnn_input, rnn_training, batch_size=None, epochs=150, \n verbose=True,validation_split=0.05)\n\nfor label in [\"loss\",\"val_loss\"]:\n plt.plot(hist.history[label],label=label)\n\nplt.ylabel(\"loss\")\nplt.xlabel(\"epoch\")\nplt.title(\"The final validation loss: {}\".format(hist.history[\"val_loss\"][-1]))\nplt.legend()\nplt.show()\n\n# Use the trained neural network to predict more points of the data set\ntest_rnn(X_tot, y_tot, X_tot[0], X_tot[dim-1])\n# Stop the timer and calculate the total time needed.\nend = timer()\nprint('Time: ', end-start)\n```\n\n## Other Things to Try\n\nChanging the size of the recurrent neural network and its parameters\ncan drastically change the results you get from the model.
The code below\ntakes the simple recurrent neural network from above and adds a\nsecond hidden layer, changes the number of neurons in the hidden\nlayer, and explicitly declares the activation function of the hidden\nlayers to be a sigmoid function. The loss function and optimizer can\nalso be changed but are kept the same as in the network above. These\nparameters can be tuned to provide the optimal result from the\nnetwork. For some ideas on how to improve the performance of recurrent\nneural networks, see [these tips for training recurrent neural networks](https://danijar.com/tips-for-training-recurrent-neural-networks).\n\n\n```python\ndef rnn_2layers(length_of_sequences, batch_size = None, stateful = False):\n \"\"\"\n Inputs:\n length_of_sequences (an int): the number of y values in \"x data\". This is determined\n when the data is formatted\n batch_size (an int): Default value is None. See Keras documentation of SimpleRNN.\n stateful (a boolean): Default value is False. See Keras documentation of SimpleRNN.\n Returns:\n model (a Keras model): The recurrent neural network that is built and compiled by this\n method\n Builds and compiles a recurrent neural network with two hidden layers and returns the model.\n \"\"\"\n # Number of neurons in the input and output layers\n in_out_neurons = 1\n # Number of neurons in the hidden layer, increased from the first network\n hidden_neurons = 500\n # Define the input layer\n inp = Input(batch_shape=(batch_size, \n length_of_sequences, \n in_out_neurons)) \n # Create two hidden layers instead of one hidden layer.
Explicitly set the activation\n # function to be the sigmoid function (the default value is hyperbolic tangent)\n rnn1 = SimpleRNN(hidden_neurons, \n return_sequences=True, # This needs to be True if another hidden layer is to follow\n stateful = stateful, activation = 'sigmoid',\n name=\"RNN1\")(inp)\n rnn2 = SimpleRNN(hidden_neurons, \n return_sequences=False, activation = 'sigmoid',\n stateful = stateful,\n name=\"RNN2\")(rnn1)\n # Define the output layer as a dense neural network layer (standard neural network layer)\n # and add it to the network immediately after the hidden layer.\n dens = Dense(in_out_neurons,name=\"dense\")(rnn2)\n # Create the machine learning model starting with the input layer and ending with the \n # output layer\n model = Model(inputs=[inp],outputs=[dens])\n # Compile the machine learning model using the mean squared error function as the loss \n # function and the Adam optimizer.\n model.compile(loss=\"mean_squared_error\", optimizer=\"adam\") \n return model\n\n# Check to make sure the data set is complete\nassert len(X_tot) == len(y_tot)\n\n# This is the number of points that will be used as the training data\ndim=12\n\n# Separate the training data from the whole data set\nX_train = X_tot[:dim]\ny_train = y_tot[:dim]\n\n\n# Generate the training data for the RNN, using a sequence of 2\nrnn_input, rnn_training = format_data(y_train, 2)\n\n\n# Create a recurrent neural network in Keras and produce a summary of the \n# machine learning model\nmodel = rnn_2layers(length_of_sequences = 2)\nmodel.summary()\n\n# Start the timer. Want to time training+testing\nstart = timer()\n# Fit the model using the training data generated above using 150 training iterations and a 5%\n# validation split.
Setting verbose to True prints information about each training iteration.\nhist = model.fit(rnn_input, rnn_training, batch_size=None, epochs=150, \n verbose=True,validation_split=0.05)\n\n\n# This section plots the training loss and the validation loss as a function of training iteration.\n# This is not required for analyzing the coupled cluster data but can help determine if the network is\n# being overtrained.\nfor label in [\"loss\",\"val_loss\"]:\n plt.plot(hist.history[label],label=label)\n\nplt.ylabel(\"loss\")\nplt.xlabel(\"epoch\")\nplt.title(\"The final validation loss: {}\".format(hist.history[\"val_loss\"][-1]))\nplt.legend()\nplt.show()\n\n# Use the trained neural network to predict more points of the data set\ntest_rnn(X_tot, y_tot, X_tot[0], X_tot[dim-1])\n# Stop the timer and calculate the total time needed.\nend = timer()\nprint('Time: ', end-start)\n```\n\n## Other Types of Recurrent Neural Networks\n\nBesides a simple recurrent neural network layer, there are two other\ncommonly used types of recurrent neural network layers: Long Short-Term\nMemory (LSTM) and Gated Recurrent Unit (GRU). For a short\nintroduction to these layers, see the Keras documentation for LSTM and GRU.\n\nThe first network created below is similar to the previous network,\nbut it replaces the SimpleRNN layers with LSTM layers. The second\nnetwork below has two hidden layers made up of GRUs, which are\npreceded by two dense (feedforward) neural network layers. These\ndense layers \"preprocess\" the data before it reaches the recurrent\nlayers. This architecture has been shown to improve the performance\nof recurrent neural networks (see the link above).\n\n\n```python\ndef lstm_2layers(length_of_sequences, batch_size = None, stateful = False):\n \"\"\"\n Inputs:\n length_of_sequences (an int): the number of y values in \"x data\". This is determined\n when the data is formatted\n batch_size (an int): Default value is None.
See Keras documentation of SimpleRNN.\n stateful (a boolean): Default value is False. See Keras documentation of SimpleRNN.\n Returns:\n model (a Keras model): The recurrent neural network that is built and compiled by this\n method\n Builds and compiles a recurrent neural network with two LSTM hidden layers and returns the model.\n \"\"\"\n # Number of neurons on the input/output layer and the number of neurons in the hidden layer\n in_out_neurons = 1\n hidden_neurons = 250\n # Input Layer\n inp = Input(batch_shape=(batch_size, \n length_of_sequences, \n in_out_neurons)) \n # Hidden layers (in this case they are LSTM layers instead of SimpleRNN layers)\n rnn = LSTM(hidden_neurons, \n return_sequences=True,\n stateful = stateful,\n name=\"RNN\", use_bias=True, activation='tanh')(inp)\n rnn1 = LSTM(hidden_neurons, \n return_sequences=False,\n stateful = stateful,\n name=\"RNN1\", use_bias=True, activation='tanh')(rnn)\n # Output layer\n dens = Dense(in_out_neurons,name=\"dense\")(rnn1)\n # Define the model\n model = Model(inputs=[inp],outputs=[dens])\n # Compile the model\n model.compile(loss='mean_squared_error', optimizer='adam') \n # Return the model\n return model\n\ndef dnn2_gru2(length_of_sequences, batch_size = None, stateful = False):\n \"\"\"\n Inputs:\n length_of_sequences (an int): the number of y values in \"x data\". This is determined\n when the data is formatted\n batch_size (an int): Default value is None. See Keras documentation of SimpleRNN.\n stateful (a boolean): Default value is False.
See Keras documentation of SimpleRNN.\n Returns:\n model (a Keras model): The recurrent neural network that is built and compiled by this\n method\n Builds and compiles a recurrent neural network with four hidden layers (two dense followed by\n two GRU layers) and returns the model.\n \"\"\" \n # Number of neurons on the input/output layers and hidden layers\n in_out_neurons = 1\n hidden_neurons = 250\n # Input layer\n inp = Input(batch_shape=(batch_size, \n length_of_sequences, \n in_out_neurons)) \n # Hidden Dense (feedforward) layers (integer division so the layer width is an int)\n dnn = Dense(hidden_neurons // 2, activation='relu', name='dnn')(inp)\n dnn1 = Dense(hidden_neurons // 2, activation='relu', name='dnn1')(dnn)\n # Hidden GRU layers\n rnn1 = GRU(hidden_neurons, \n return_sequences=True,\n stateful = stateful,\n name=\"RNN1\", use_bias=True)(dnn1)\n rnn = GRU(hidden_neurons, \n return_sequences=False,\n stateful = stateful,\n name=\"RNN\", use_bias=True)(rnn1)\n # Output layer\n dens = Dense(in_out_neurons,name=\"dense\")(rnn)\n # Define the model\n model = Model(inputs=[inp],outputs=[dens])\n # Compile the model\n model.compile(loss='mean_squared_error', optimizer='adam') \n # Return the model\n return model\n\n# Check to make sure the data set is complete\nassert len(X_tot) == len(y_tot)\n\n# This is the number of points that will be used as the training data\ndim=12\n\n# Separate the training data from the whole data set\nX_train = X_tot[:dim]\ny_train = y_tot[:dim]\n\n\n# Generate the training data for the RNN, using a sequence of 2\nrnn_input, rnn_training = format_data(y_train, 2)\n\n\n# Create a recurrent neural network in Keras and produce a summary of the \n# machine learning model\n# Change the method name to reflect which network you want to use\nmodel = dnn2_gru2(length_of_sequences = 2)\nmodel.summary()\n\n# Start the timer. Want to time training+testing\nstart = timer()\n# Fit the model using the training data generated above using 150 training iterations and a 5%\n# validation split.
Setting verbose to True prints information about each training iteration.\nhist = model.fit(rnn_input, rnn_training, batch_size=None, epochs=150, \n verbose=True,validation_split=0.05)\n\n\n# This section plots the training loss and the validation loss as a function of training iteration.\n# This is not required for analyzing the coupled cluster data but can help determine if the network is\n# being overtrained.\nfor label in [\"loss\",\"val_loss\"]:\n plt.plot(hist.history[label],label=label)\n\nplt.ylabel(\"loss\")\nplt.xlabel(\"epoch\")\nplt.title(\"The final validation loss: {}\".format(hist.history[\"val_loss\"][-1]))\nplt.legend()\nplt.show()\n\n# Use the trained neural network to predict more points of the data set\ntest_rnn(X_tot, y_tot, X_tot[0], X_tot[dim-1])\n# Stop the timer and calculate the total time needed.\nend = timer()\nprint('Time: ', end-start)\n\n\n# ### Training Recurrent Neural Networks in the Standard Way (i.e. learning the relationship between the X and Y data)\n# \n# Finally, comparing the performance of a recurrent neural network using the standard data formatting to the performance of the network with time sequence data formatting shows the benefit of this type of data formatting with extrapolation.\n\n# Check to make sure the data set is complete\nassert len(X_tot) == len(y_tot)\n\n# This is the number of points that will be used as the training data\ndim=12\n\n# Separate the training data from the whole data set\nX_train = X_tot[:dim]\ny_train = y_tot[:dim]\n\n# Reshape the data for Keras specifications\nX_train = X_train.reshape((dim, 1))\ny_train = y_train.reshape((dim, 1))\n\n\n# Create a recurrent neural network in Keras and produce a summary of the \n# machine learning model\n# Set the sequence length to 1 for regular data formatting \nmodel = rnn(length_of_sequences = 1)\nmodel.summary()\n\n# Start the timer.
Want to time training+testing\nstart = timer()\n# Fit the model using the training data generated above using 150 training iterations and a 5%\n# validation split. Setting verbose to True prints information about each training iteration.\nhist = model.fit(X_train, y_train, batch_size=None, epochs=150, \n verbose=True,validation_split=0.05)\n\n\n# This section plots the training loss and the validation loss as a function of training iteration.\n# This is not required for analyzing the coupled cluster data but can help determine if the network is\n# being overtrained.\nfor label in [\"loss\",\"val_loss\"]:\n plt.plot(hist.history[label],label=label)\n\nplt.ylabel(\"loss\")\nplt.xlabel(\"epoch\")\nplt.title(\"The final validation loss: {}\".format(hist.history[\"val_loss\"][-1]))\nplt.legend()\nplt.show()\n\n# Use the trained neural network to predict the remaining data points\nX_pred = X_tot[dim:]\nX_pred = X_pred.reshape((len(X_pred), 1))\ny_model = model.predict(X_pred)\ny_pred = np.concatenate((y_tot[:dim], y_model.flatten()))\n\n# Plot the known data set and the predicted data set. The red box represents the region that was used\n# for the training data.\nfig, ax = plt.subplots()\nax.plot(X_tot, y_tot, label=\"true\", linewidth=3)\nax.plot(X_tot, y_pred, 'g-.',label=\"predicted\", linewidth=4)\nax.legend()\n# Create a red region to represent the points used in the training data.\nax.axvspan(X_tot[0], X_tot[dim], alpha=0.25, color='red')\nplt.show()\n\n# Stop the timer and calculate the total time needed.\nend = timer()\nprint('Time: ', end-start)\n```\n\n## Generative Models\n\n**Generative models** describe a class of statistical models that stand in contrast\nto **discriminative models**. Informally we say that generative models can\ngenerate new data instances while discriminative models discriminate between\ndifferent kinds of data instances.
A generative model could generate new photos\nof animals that look like 'real' animals, while a discriminative model could tell\na dog from a cat. More formally, given a data set $x$ and a set of labels/targets\n$y$, generative models capture the joint probability $p(x, y)$, or\njust $p(x)$ if there are no labels, while discriminative models capture the\nconditional probability $p(y | x)$. Discriminative models generally try to draw\nboundaries in the (often high-dimensional) data space, while generative models\ntry to model how data is placed throughout the space.\n\n**Note**: this material is thanks to Linus Ekstrøm.\n\n## Generative Adversarial Networks\n\n**Generative Adversarial Networks** are a type of unsupervised machine learning\nalgorithm proposed by [Goodfellow et al.](https://arxiv.org/pdf/1406.2661.pdf)\nin 2014 (a short and readable article).\n\nThe simplest formulation of\nthe model is based on a game-theoretic approach, a *zero-sum game*, where we pit\ntwo neural networks against one another. We define two rival networks, one\ngenerator $g$, and one discriminator $d$. The generator directly produces\nsamples\n\n\n
\n\n$$\n\\begin{equation}\n x = g(z; \\theta^{(g)})\n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\n## Discriminator\nThe discriminator attempts to distinguish between samples drawn from the\ntraining data and samples drawn from the generator. In other words, it tries to\ntell the difference between the fake data produced by $g$ and the actual data\nsamples we want to do prediction on. The discriminator outputs a probability\nvalue given by\n\n\n
\n\n$$\n\\begin{equation}\n d(x; \\theta^{(d)})\n\\label{_auto2} \\tag{2}\n\\end{equation}\n$$\n\nindicating the probability that $x$ is a real training example rather than a\nfake sample the generator has generated. The simplest way to formulate the\nlearning process in a generative adversarial network is a zero-sum game, in\nwhich a function\n\n\n
\n\n$$\n\\begin{equation}\n v(\\theta^{(g)}, \\theta^{(d)})\n\\label{_auto3} \\tag{3}\n\\end{equation}\n$$\n\ndetermines the reward for the discriminator, while the generator gets the\nconjugate reward\n\n\n
\n\n$$\n\\begin{equation}\n -v(\\theta^{(g)}, \\theta^{(d)})\n\\label{_auto4} \\tag{4}\n\\end{equation}\n$$\n\n## Learning Process\n\nDuring learning both of the networks maximize their own reward function, so that\nthe generator gets better and better at tricking the discriminator, while the\ndiscriminator gets better and better at telling the difference between the fake\nand real data. The generator and discriminator alternate on which one trains at\none time (i.e. for one epoch). In other words, we keep the generator constant\nand train the discriminator, then we keep the discriminator constant to train\nthe generator, and repeat. It is this back and forth dynamic which lets GANs\ntackle otherwise intractable generative problems. As the generator improves with\ntraining, the discriminator's performance gets worse because it cannot easily\ntell the difference between real and fake. If the generator ends up succeeding\nperfectly, the discriminator will do no better than random guessing, i.e.\n50\\%. This progression in the training poses a problem for the convergence\ncriteria for GANs. The discriminator feedback gets less meaningful over time;\nif we continue training after this point, the generator is effectively\ntraining on junk data, which can undo the learning up to that point. Therefore,\nwe stop training when the discriminator starts outputting $1/2$ everywhere.\n\n## More about the Learning Process\n\nAt convergence we have\n\n\n
\n\n$$\n\\begin{equation}\n g^* = \\underset{g}{\\mathrm{argmin}}\\hspace{2pt}\n \\underset{d}{\\mathrm{max}}v(\\theta^{(g)}, \\theta^{(d)})\n\\label{_auto5} \\tag{5}\n\\end{equation}\n$$\n\nThe default choice for $v$ is\n\n\n
\n\n$$\n\\begin{equation}\n v(\\theta^{(g)}, \\theta^{(d)}) = \\mathbb{E}_{x\\sim p_\\mathrm{data}}\\log d(x)\n + \\mathbb{E}_{x\\sim p_\\mathrm{model}}\n \\log (1 - d(x))\n\\label{_auto6} \\tag{6}\n\\end{equation}\n$$\n\nThe main motivation for the design of GANs is that the learning process requires\nneither approximate inference (variational autoencoders for example) nor\napproximation of a partition function. In the case where\n\n\n
\n\n$$\n\\begin{equation}\n \\underset{d}{\\mathrm{max}}v(\\theta^{(g)}, \\theta^{(d)})\n\\label{_auto7} \\tag{7}\n\\end{equation}\n$$\n\nis convex in $\\theta^{(g)}$, then the procedure is guaranteed to converge and is\nasymptotically consistent\n([Seth Lloyd on QuGANs](https://arxiv.org/pdf/1804.09139.pdf)).\n\n## Additional References\nThis is in\ngeneral not the case, and it is possible to get situations where the training\nprocess never converges because the generator and discriminator chase one\nanother around in the parameter space indefinitely. A much deeper discussion of\nthe currently open research problem of GAN convergence is available\n[here](https://www.deeplearningbook.org/contents/generative_models.html). To\nanyone interested in learning more about GANs it is a highly recommended read.\nDirect quote: \"In this best-performing formulation, the generator aims to\nincrease the log probability that the discriminator makes a mistake, rather than\naiming to decrease the log probability that the discriminator makes the correct\nprediction.\" [Another interesting read](https://arxiv.org/abs/1701.00160).\n\n## Writing Our First Generative Adversarial Network\nLet us now move on to actually implementing a GAN in TensorFlow. We will study\nthe performance of our GAN on the MNIST dataset.
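Before turning to the implementation, the value function in Eq. (6) can be checked numerically. The sketch below is a toy illustration with made-up discriminator outputs (not part of the tutorial code); it confirms that a discriminator outputting $1/2$ everywhere yields $v = -\log 4$, the value at the GAN equilibrium.

```python
import numpy as np

def value_function(d_real, d_fake):
    # v = E_data[log d(x)] + E_model[log(1 - d(x))], estimated by sample means
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A confident, accurate discriminator: d -> 1 on real data, d -> 0 on fakes
v_good = value_function(np.full(4, 0.99), np.full(4, 0.01))

# At the GAN equilibrium the discriminator outputs 1/2 everywhere
v_eq = value_function(np.full(4, 0.5), np.full(4, 0.5))

print(v_good, v_eq)  # v_eq equals -log(4)
```

The discriminator maximizes $v$ while the generator minimizes it, so training pushes the system toward this $-\log 4$ saddle point.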
This code is based on and\nadapted from the\n[Google tutorial](https://www.tensorflow.org/tutorials/generative/dcgan).\n\nFirst we import our libraries\n\n\n```python\nimport os\nimport time\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\nfrom tensorflow.keras import layers\nfrom tensorflow.keras.utils import plot_model\n```\n\nNext we define our hyperparameters and import our data the usual way\n\n\n```python\nBUFFER_SIZE = 60000\nBATCH_SIZE = 256\nEPOCHS = 30\n\ndata = tf.keras.datasets.mnist.load_data()\n(train_images, train_labels), (test_images, test_labels) = data\ntrain_images = np.reshape(train_images, (train_images.shape[0],\n 28,\n 28,\n 1)).astype('float32')\n\n# we normalize between -1 and 1\ntrain_images = (train_images - 127.5) / 127.5\ntraining_dataset = tf.data.Dataset.from_tensor_slices(\n train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)\n```\n\n## MNIST and GANs\n\nLet's have a quick look\n\n\n```python\nplt.imshow(train_images[0], cmap='Greys')\nplt.show()\n```\n\nNow we define our two models. This is where the 'magic' happens. There is a\nhuge number of possible formulations for both models. A lot of engineering and\ntrial and error can be done here to try to produce better performing models. For\nmore advanced GANs this is by far the step where you can 'make or break' a\nmodel.\n\nWe start with the generator. As stated in the introductory text, the generator\n$g$ upsamples from a random sample to the shape of what we want to predict. In\nour case we are trying to predict MNIST images ($28\\times 28$ pixels).\n\n\n```python\ndef generator_model():\n \"\"\"\n The generator uses upsampling layers tf.keras.layers.Conv2DTranspose() to\n produce an image from a random seed. We start with a Dense layer taking this\n random sample as an input and subsequently upsample through multiple\n convolutional layers.\n \"\"\"\n\n # we define our model\n model = tf.keras.Sequential()\n\n\n # adding our input layer.
Dense means that every neuron is connected and\n # the input shape is the shape of our random noise. The units need to match\n # the upsampling strides so that we reach our desired output shape.\n # we are using 100 random numbers as our seed\n model.add(layers.Dense(units=7*7*256,\n use_bias=False,\n input_shape=(100, )))\n # we normalize the output from the Dense layer\n model.add(layers.BatchNormalization())\n # and add an activation function to our 'layer'. LeakyReLU avoids the vanishing\n # gradient problem\n model.add(layers.LeakyReLU())\n # reshape to a 7x7 feature map with 256 channels (256 is the channel count,\n # which only coincidentally equals BATCH_SIZE)\n model.add(layers.Reshape((7, 7, 256)))\n assert model.output_shape == (None, 7, 7, 256)\n # even though we just added four keras layers we think of everything above\n # as 'one' layer\n\n # next we add our upscaling convolutional layers\n model.add(layers.Conv2DTranspose(filters=128,\n kernel_size=(5, 5),\n strides=(1, 1),\n padding='same',\n use_bias=False))\n model.add(layers.BatchNormalization())\n model.add(layers.LeakyReLU())\n assert model.output_shape == (None, 7, 7, 128)\n\n model.add(layers.Conv2DTranspose(filters=64,\n kernel_size=(5, 5),\n strides=(2, 2),\n padding='same',\n use_bias=False))\n model.add(layers.BatchNormalization())\n model.add(layers.LeakyReLU())\n assert model.output_shape == (None, 14, 14, 64)\n\n model.add(layers.Conv2DTranspose(filters=1,\n kernel_size=(5, 5),\n strides=(2, 2),\n padding='same',\n use_bias=False,\n activation='tanh'))\n assert model.output_shape == (None, 28, 28, 1)\n\n return model\n```\n\nAnd there we have our 'simple' generator model.
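The shape assertions in the generator follow from the output-size rule for strided transposed convolutions with `padding='same'`: the spatial size is multiplied by the stride, independent of the kernel size. A quick pure-Python check of the three upsampling stages (the helper function below is written only for illustration, it is not part of Keras):

```python
def conv2d_transpose_same_size(in_size, stride):
    # With padding='same', Conv2DTranspose maps spatial size n -> n * stride
    return in_size * stride

size = 7  # spatial size after the initial Dense + Reshape
for stride in (1, 2, 2):  # strides of the three Conv2DTranspose layers
    size = conv2d_transpose_same_size(size, stride)

print(size)  # 7 -> 7 -> 14 -> 28, matching the 28x28 MNIST images
```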
Now we move on to defining our\ndiscriminator model $d$, which is a convolutional neural network based image\nclassifier.\n\n\n```python\ndef discriminator_model():\n \"\"\"\n The discriminator is a convolutional neural network based image classifier\n \"\"\"\n\n # we define our model\n model = tf.keras.Sequential()\n model.add(layers.Conv2D(filters=64,\n kernel_size=(5, 5),\n strides=(2, 2),\n padding='same',\n input_shape=[28, 28, 1]))\n model.add(layers.LeakyReLU())\n # adding a dropout layer as you do in conv-nets\n model.add(layers.Dropout(0.3))\n\n\n model.add(layers.Conv2D(filters=128,\n kernel_size=(5, 5),\n strides=(2, 2),\n padding='same'))\n model.add(layers.LeakyReLU())\n # adding a dropout layer as you do in conv-nets\n model.add(layers.Dropout(0.3))\n\n model.add(layers.Flatten())\n model.add(layers.Dense(1))\n\n return model\n```\n\n## Other Models\nLet us take a look at our models. **Note**: double click images for bigger view.\n\n\n```python\ngenerator = generator_model()\nplot_model(generator, show_shapes=True, rankdir='LR')\n```\n\n\n```python\ndiscriminator = discriminator_model()\nplot_model(discriminator, show_shapes=True, rankdir='LR')\n```\n\nNext we need a few helper objects we will use in training\n\n\n```python\ncross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)\ngenerator_optimizer = tf.keras.optimizers.Adam(1e-4)\ndiscriminator_optimizer = tf.keras.optimizers.Adam(1e-4)\n```\n\nThe first object, *cross_entropy* is our loss function and the two others are\nour optimizers. Notice we use the same learning rate for both $g$ and $d$. This\nis because they need to improve their accuracy at approximately equal speeds to\nget convergence (not necessarily exactly equal). 
Now we define our loss\nfunctions\n\n\n```python\ndef generator_loss(fake_output):\n loss = cross_entropy(tf.ones_like(fake_output), fake_output)\n\n return loss\n```\n\n\n```python\ndef discriminator_loss(real_output, fake_output):\n real_loss = cross_entropy(tf.ones_like(real_output), real_output)\n fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)\n total_loss = real_loss + fake_loss\n\n return total_loss\n```\n\nNext we define a kind of seed to help us compare the learning process over\nmultiple training epochs.\n\n\n```python\nnoise_dimension = 100\nn_examples_to_generate = 16\nseed_images = tf.random.normal([n_examples_to_generate, noise_dimension])\n```\n\n## Training Step\n\nNow we have everything we need to define our training step, which we will apply\nfor every step in our training loop. Notice the @tf.function flag signifying\nthat the function is TensorFlow 'compiled'. Removing this flag doubles the\ncomputation time.\n\n\n```python\n@tf.function\ndef train_step(images):\n noise = tf.random.normal([BATCH_SIZE, noise_dimension])\n\n with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:\n generated_images = generator(noise, training=True)\n\n real_output = discriminator(images, training=True)\n fake_output = discriminator(generated_images, training=True)\n\n gen_loss = generator_loss(fake_output)\n disc_loss = discriminator_loss(real_output, fake_output)\n\n gradients_of_generator = gen_tape.gradient(gen_loss,\n generator.trainable_variables)\n gradients_of_discriminator = disc_tape.gradient(disc_loss,\n discriminator.trainable_variables)\n generator_optimizer.apply_gradients(zip(gradients_of_generator,\n generator.trainable_variables))\n discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator,\n discriminator.trainable_variables))\n\n return gen_loss, disc_loss\n```\n\nNext we define a helper function to produce an output over our training epochs\nto see the predictive progression of our generator model.
**Note**: I am including\nthis code here, but comment it out in the training loop.\n\n\n```python\ndef generate_and_save_images(model, epoch, test_input):\n # we're making inferences here\n predictions = model(test_input, training=False)\n\n fig = plt.figure(figsize=(4, 4))\n\n for i in range(predictions.shape[0]):\n plt.subplot(4, 4, i+1)\n plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')\n plt.axis('off')\n\n plt.savefig(f'./images_from_seed_images/image_at_epoch_{str(epoch).zfill(3)}.png')\n plt.close()\n #plt.show()\n```\n\n## Checkpoints\nSetting up checkpoints to periodically save our model during training so that\neverything is not lost even if the program were to somehow terminate while\ntraining.\n\n\n```python\n# Setting up checkpoints to save model during training\ncheckpoint_dir = './training_checkpoints'\ncheckpoint_prefix = os.path.join(checkpoint_dir, 'ckpt')\ncheckpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,\n discriminator_optimizer=discriminator_optimizer,\n generator=generator,\n discriminator=discriminator)\n```\n\nNow we define our training loop\n\n\n```python\ndef train(dataset, epochs):\n generator_loss_list = []\n discriminator_loss_list = []\n\n for epoch in range(epochs):\n start = time.time()\n\n for image_batch in dataset:\n gen_loss, disc_loss = train_step(image_batch)\n generator_loss_list.append(gen_loss.numpy())\n discriminator_loss_list.append(disc_loss.numpy())\n\n #generate_and_save_images(generator, epoch + 1, seed_images)\n\n if (epoch + 1) % 15 == 0:\n checkpoint.save(file_prefix=checkpoint_prefix)\n\n print(f'Time for epoch {epoch} is {time.time() - start}')\n\n #generate_and_save_images(generator, epochs, seed_images)\n\n loss_file = './data/lossfile.txt'\n with open(loss_file, 'w') as outfile:\n outfile.write(str(generator_loss_list))\n outfile.write('\\n')\n outfile.write('\\n')\n outfile.write(str(discriminator_loss_list))\n outfile.write('\\n')\n outfile.write('\\n')\n```\n\nTo 
train simply call this function. **Warning**: this might take a long time, so\nthere is a folder of a pretrained network already included in the repository.\n\n\n```python\ntrain(training_dataset, EPOCHS)\n```\n\nAnd here is the result of training our model for 100 epochs\n\n\n\n\n\n```python\nfrom IPython.display import HTML\n_s = \"\"\"\n\n

\n\"\"\"\nHTML(_s)\n```\n\n\n\nTo avoid having to retrain, which will take a while depending on your computer\nsetup, we now load in the model which produced the above gif.\n\n\n```python\ncheckpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))\nrestored_generator = checkpoint.generator\nrestored_discriminator = checkpoint.discriminator\n\nprint(restored_generator)\nprint(restored_discriminator)\n```\n\n## Exploring the Latent Space\n\nWe have successfully loaded in our latest model. Let us now play around a bit\nand see what kind of things we can learn about this model. Our generator takes\nan array of 100 numbers. One idea can be to try to systematically change our\ninput. Let us try and see what we get\n\n\n```python\ndef generate_latent_points(number=100, scale_means=1, scale_stds=1):\n latent_dim = 100\n means = scale_means * tf.linspace(-1, 1, num=latent_dim)\n stds = scale_stds * tf.linspace(-1, 1, num=latent_dim)\n latent_space_value_range = tf.random.normal([number, latent_dim],\n means,\n stds,\n dtype=tf.float64)\n\n return latent_space_value_range\n\ndef generate_images(latent_points):\n # notice we set training to false because we are making inferences\n generated_images = restored_generator.predict(latent_points)\n\n return generated_images\n```\n\n\n```python\ndef plot_result(generated_images, number=100):\n # this assumes that the square root of number is an integer\n n = int(np.sqrt(number))\n fig, axs = plt.subplots(n, n,\n figsize=(10, 10))\n\n for i in range(n):\n for j in range(n):\n axs[i, j].imshow(generated_images[i*n + j], cmap='Greys')\n axs[i, j].axis('off')\n\n plt.show()\n```\n\n\n```python\ngenerated_images = generate_images(generate_latent_points())\nplot_result(generated_images)\n```\n\n## Getting Results\nWe see that the generator generates images that look like MNIST\nnumbers: $1, 4, 7, 9$.
Let's try to tweak it a bit more to see if we are able\nto generate a similar plot where we generate every MNIST number. Let us now try\nto 'move' a bit around in the latent space. **Note**: decrease the plot number if\nthe following cells take too long to run on your computer.\n\n\n```python\nplot_number = 225\n\ngenerated_images = generate_images(generate_latent_points(number=plot_number,\n scale_means=5,\n scale_stds=1))\nplot_result(generated_images, number=plot_number)\n\ngenerated_images = generate_images(generate_latent_points(number=plot_number,\n scale_means=-5,\n scale_stds=1))\nplot_result(generated_images, number=plot_number)\n\ngenerated_images = generate_images(generate_latent_points(number=plot_number,\n scale_means=1,\n scale_stds=5))\nplot_result(generated_images, number=plot_number)\n```\n\nAgain, we have found something interesting. *Moving* around using our means\ntakes us from digit to digit, while *moving* around using our standard\ndeviations seems to increase the number of different digits! In the last image\nabove, we can barely make out every MNIST digit. Let us make one last plot using\nthis information by upping the standard deviation of our Gaussian noise.\n\n\n```python\nplot_number = 400\ngenerated_images = generate_images(generate_latent_points(number=plot_number,\n scale_means=1,\n scale_stds=10))\nplot_result(generated_images, number=plot_number)\n```\n\nA pretty cool result! We see that our generator indeed has learned a\ndistribution which qualitatively looks a whole lot like the MNIST dataset.\n\n## Interpolating Between MNIST Digits\nAnother interesting way to explore the latent space of our generator model is by\ninterpolating between the MNIST digits.
This section is largely based on\n[this excellent blogpost](https://machinelearningmastery.com/how-to-interpolate-and-perform-vector-arithmetic-with-faces-using-a-generative-adversarial-network/)\nby Jason Brownlee.\n\nSo let us start by defining a function to interpolate between two points in the\nlatent space.\n\n\n```python\ndef interpolation(point_1, point_2, n_steps=10):\n    ratios = np.linspace(0, 1, num=n_steps)\n    vectors = []\n    for ratio in ratios:\n        vectors.append((1.0 - ratio) * point_1 + ratio * point_2)\n\n    return tf.stack(vectors)\n```\n\nNow we have all we need to do our interpolation analysis.\n\n\n```python\nplot_number = 100\nlatent_points = generate_latent_points(number=plot_number)\nresults = None\n# interpolate between consecutive pairs of latent points\nfor i in range(0, int(2*np.sqrt(plot_number)), 2):\n    interpolated = interpolation(latent_points[i], latent_points[i+1])\n    generated_images = generate_images(interpolated)\n\n    if results is None:\n        results = generated_images\n    else:\n        # concatenate along the batch axis instead of stacking\n        results = tf.concat((results, generated_images), axis=0)\n\nplot_result(results, plot_number)\n```\n\n## Basic ideas of the Principal Component Analysis (PCA)\n\nThe principal component analysis deals with the problem of fitting a\nlow-dimensional affine subspace $S$ of dimension $d$, much smaller than\nthe total dimension $D$ of the problem at hand (our data\nset). Mathematically it can be formulated as a statistical problem or\na geometric problem. In our discussion of the theorem for the\nclassical PCA, we will stay with a statistical approach. 
\nHistorically, the PCA was first formulated in a statistical setting in order to estimate the principal component of a multivariate random variable.\n\nWe have a data set defined by a design/feature matrix $\boldsymbol{X}$ (see below for its definition):\n* Each data point is determined by $p$ extrinsic (measurement) variables\n\n* We may want to ask the following question: Are there fewer intrinsic variables (say $d << p$) that still approximately describe the data?\n\n* If so, these intrinsic variables may tell us something important and finding these intrinsic variables is what dimension reduction methods do. \n\nA good read is for example [Vidal, Ma and Sastry](https://www.springer.com/gp/book/9780387878102).\n\n## Introducing the Covariance and Correlation functions\n\nBefore we discuss the PCA theorem, we need to remind ourselves about\nthe definition of the covariance and the correlation function. These are quantities\nwhich measure how strongly two variables vary together.\n\nSuppose we have defined two vectors\n$\boldsymbol{x}$ and $\boldsymbol{y}$ with $n$ elements each. 
The covariance matrix $\boldsymbol{C}$ is defined as\n\n$$\n\boldsymbol{C}[\boldsymbol{x},\boldsymbol{y}] = \begin{bmatrix} \mathrm{cov}[\boldsymbol{x},\boldsymbol{x}] & \mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] \\\n                              \mathrm{cov}[\boldsymbol{y},\boldsymbol{x}] & \mathrm{cov}[\boldsymbol{y},\boldsymbol{y}] \\\n             \end{bmatrix},\n$$\n\nwhere for example\n\n$$\n\mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] =\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})(y_i- \overline{y}).\n$$\n\nWith this definition and recalling that the variance is defined as\n\n$$\n\mathrm{var}[\boldsymbol{x}]=\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})^2,\n$$\n\nwe can rewrite the covariance matrix as\n\n$$\n\boldsymbol{C}[\boldsymbol{x},\boldsymbol{y}] = \begin{bmatrix} \mathrm{var}[\boldsymbol{x}] & \mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] \\\n                              \mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] & \mathrm{var}[\boldsymbol{y}] \\\n             \end{bmatrix}.\n$$\n\n## More on the covariance\nThe covariance can take any real value, and entries of large magnitude\nmay lead to problems with loss of numerical precision. It is common to\nscale the covariance matrix by\nintroducing instead the correlation matrix defined via the so-called\ncorrelation function\n\n$$\n\mathrm{corr}[\boldsymbol{x},\boldsymbol{y}]=\frac{\mathrm{cov}[\boldsymbol{x},\boldsymbol{y}]}{\sqrt{\mathrm{var}[\boldsymbol{x}] \mathrm{var}[\boldsymbol{y}]}}.\n$$\n\nThe correlation function then takes values $\mathrm{corr}[\boldsymbol{x},\boldsymbol{y}]\n\in [-1,1]$. This avoids potential problems with very large values. 
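As a quick numerical sanity check, the correlation function above can be evaluated directly from the definitions and compared with NumPy's built-in **np.corrcoef** (the vectors below are just synthetic examples):

```python
import numpy as np

# synthetic example vectors
n = 1000
x = np.random.normal(size=n)
y = 2*x + np.random.normal(size=n)

# covariance and variances with the 1/n convention used above
cov_xy = np.mean((x - np.mean(x))*(y - np.mean(y)))
var_x = np.mean((x - np.mean(x))**2)
var_y = np.mean((y - np.mean(y))**2)

corr_xy = cov_xy/np.sqrt(var_x*var_y)
# np.corrcoef uses 1/(n-1), but the factors cancel in the ratio
print(corr_xy, np.corrcoef(x, y)[0, 1])
```

Note that the $1/n$ versus $1/(n-1)$ convention does not matter for the correlation, since the factors cancel in the ratio.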
We\ncan then define the correlation matrix for the two vectors $\boldsymbol{x}$\nand $\boldsymbol{y}$ as\n\n$$\n\boldsymbol{K}[\boldsymbol{x},\boldsymbol{y}] = \begin{bmatrix} 1 & \mathrm{corr}[\boldsymbol{x},\boldsymbol{y}] \\\n                              \mathrm{corr}[\boldsymbol{y},\boldsymbol{x}] & 1 \\\n             \end{bmatrix}.\n$$\n\nIn the above example this is the function we constructed using **pandas**.\n\n## Reminding ourselves about Linear Regression\nIn our derivation of the various regression algorithms like **Ordinary Least Squares** or **Ridge regression**\nwe defined the design/feature matrix $\boldsymbol{X}$ as\n\n$$\n\boldsymbol{X}=\begin{bmatrix}\nx_{0,0} & x_{0,1} & x_{0,2}& \dots & \dots x_{0,p-1}\\\nx_{1,0} & x_{1,1} & x_{1,2}& \dots & \dots x_{1,p-1}\\\nx_{2,0} & x_{2,1} & x_{2,2}& \dots & \dots x_{2,p-1}\\\n\dots & \dots & \dots & \dots \dots & \dots \\\nx_{n-2,0} & x_{n-2,1} & x_{n-2,2}& \dots & \dots x_{n-2,p-1}\\\nx_{n-1,0} & x_{n-1,1} & x_{n-1,2}& \dots & \dots x_{n-1,p-1}\\\n\end{bmatrix},\n$$\n\nwith $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, with the predictors/features $p$ referring to the column numbers and the\nentries $n$ being the row elements.\nWe can rewrite the design/feature matrix in terms of its column vectors as\n\n$$\n\boldsymbol{X}=\begin{bmatrix} \boldsymbol{x}_0 & \boldsymbol{x}_1 & \boldsymbol{x}_2 & \dots & \dots & \boldsymbol{x}_{p-1}\end{bmatrix},\n$$\n\nwith a given vector\n\n$$\n\boldsymbol{x}_i^T = \begin{bmatrix}x_{0,i} & x_{1,i} & x_{2,i}& \dots & \dots x_{n-1,i}\end{bmatrix}.\n$$\n\n## Simple Example\nWith these definitions, we can now rewrite our $2\times 2$\ncorrelation/covariance matrix in terms of a more general design/feature\nmatrix $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$. 
This leads to a $p\\times p$\ncovariance matrix for the vectors $\\boldsymbol{x}_i$ with $i=0,1,\\dots,p-1$\n\n$$\n\\boldsymbol{C}[\\boldsymbol{x}] = \\begin{bmatrix}\n\\mathrm{var}[\\boldsymbol{x}_0] & \\mathrm{cov}[\\boldsymbol{x}_0,\\boldsymbol{x}_1] & \\mathrm{cov}[\\boldsymbol{x}_0,\\boldsymbol{x}_2] & \\dots & \\dots & \\mathrm{cov}[\\boldsymbol{x}_0,\\boldsymbol{x}_{p-1}]\\\\\n\\mathrm{cov}[\\boldsymbol{x}_1,\\boldsymbol{x}_0] & \\mathrm{var}[\\boldsymbol{x}_1] & \\mathrm{cov}[\\boldsymbol{x}_1,\\boldsymbol{x}_2] & \\dots & \\dots & \\mathrm{cov}[\\boldsymbol{x}_1,\\boldsymbol{x}_{p-1}]\\\\\n\\mathrm{cov}[\\boldsymbol{x}_2,\\boldsymbol{x}_0] & \\mathrm{cov}[\\boldsymbol{x}_2,\\boldsymbol{x}_1] & \\mathrm{var}[\\boldsymbol{x}_2] & \\dots & \\dots & \\mathrm{cov}[\\boldsymbol{x}_2,\\boldsymbol{x}_{p-1}]\\\\\n\\dots & \\dots & \\dots & \\dots & \\dots & \\dots \\\\\n\\dots & \\dots & \\dots & \\dots & \\dots & \\dots \\\\\n\\mathrm{cov}[\\boldsymbol{x}_{p-1},\\boldsymbol{x}_0] & \\mathrm{cov}[\\boldsymbol{x}_{p-1},\\boldsymbol{x}_1] & \\mathrm{cov}[\\boldsymbol{x}_{p-1},\\boldsymbol{x}_{2}] & \\dots & \\dots & \\mathrm{var}[\\boldsymbol{x}_{p-1}]\\\\\n\\end{bmatrix},\n$$\n\n## The Correlation Matrix\n\nand the correlation matrix\n\n$$\n\\boldsymbol{K}[\\boldsymbol{x}] = \\begin{bmatrix}\n1 & \\mathrm{corr}[\\boldsymbol{x}_0,\\boldsymbol{x}_1] & \\mathrm{corr}[\\boldsymbol{x}_0,\\boldsymbol{x}_2] & \\dots & \\dots & \\mathrm{corr}[\\boldsymbol{x}_0,\\boldsymbol{x}_{p-1}]\\\\\n\\mathrm{corr}[\\boldsymbol{x}_1,\\boldsymbol{x}_0] & 1 & \\mathrm{corr}[\\boldsymbol{x}_1,\\boldsymbol{x}_2] & \\dots & \\dots & \\mathrm{corr}[\\boldsymbol{x}_1,\\boldsymbol{x}_{p-1}]\\\\\n\\mathrm{corr}[\\boldsymbol{x}_2,\\boldsymbol{x}_0] & \\mathrm{corr}[\\boldsymbol{x}_2,\\boldsymbol{x}_1] & 1 & \\dots & \\dots & \\mathrm{corr}[\\boldsymbol{x}_2,\\boldsymbol{x}_{p-1}]\\\\\n\\dots & \\dots & \\dots & \\dots & \\dots & \\dots \\\\\n\\dots & \\dots & \\dots & \\dots & \\dots & \\dots 
\\\\\n\\mathrm{corr}[\\boldsymbol{x}_{p-1},\\boldsymbol{x}_0] & \\mathrm{corr}[\\boldsymbol{x}_{p-1},\\boldsymbol{x}_1] & \\mathrm{corr}[\\boldsymbol{x}_{p-1},\\boldsymbol{x}_{2}] & \\dots & \\dots & 1\\\\\n\\end{bmatrix},\n$$\n\n## Numpy Functionality\n\nThe Numpy function **np.cov** calculates the covariance elements using\nthe factor $1/(n-1)$ instead of $1/n$ since it assumes we do not have\nthe exact mean values. The following simple function uses the\n**np.vstack** function which takes each vector of dimension $1\\times n$\nand produces a $2\\times n$ matrix $\\boldsymbol{W}$\n\n$$\n\\boldsymbol{W}^T = \\begin{bmatrix} x_0 & y_0 \\\\\n x_1 & y_1 \\\\\n x_2 & y_2\\\\\n \\dots & \\dots \\\\\n x_{n-2} & y_{n-2}\\\\\n x_{n-1} & y_{n-1} & \n \\end{bmatrix},\n$$\n\nwhich in turn is converted into into the $2\\times 2$ covariance matrix\n$\\boldsymbol{C}$ via the Numpy function **np.cov()**. We note that we can also calculate\nthe mean value of each set of samples $\\boldsymbol{x}$ etc using the Numpy\nfunction **np.mean(x)**. We can also extract the eigenvalues of the\ncovariance matrix through the **np.linalg.eig()** function.\n\n\n```python\n# Importing various packages\nimport numpy as np\nn = 100\nx = np.random.normal(size=n)\nprint(np.mean(x))\ny = 4+3*x+np.random.normal(size=n)\nprint(np.mean(y))\nW = np.vstack((x, y))\nC = np.cov(W)\nprint(C)\n```\n\n## Correlation Matrix again\n\nThe previous example can be converted into the correlation matrix by\nsimply scaling the matrix elements with the variances. We should also\nsubtract the mean values for each column. This leads to the following\ncode which sets up the correlations matrix for the previous example in\na more brute force way. 
Here we subtract the mean value of each column of the design matrix, calculate the relevant variances and covariances and then finally set up the $2\times 2$ correlation matrix (since we have only two vectors).\n\n\n```python\nimport numpy as np\nn = 100\n# define two vectors \nx = np.random.random(size=n)\ny = 4+3*x+np.random.normal(size=n)\n# center the x and y vectors by subtracting their means\nx = x - np.mean(x)\ny = y - np.mean(y)\n# x@x is the sum of the squared centered elements\nvariance_x = (x@x)/n\nvariance_y = (y@y)/n\nprint(variance_x)\nprint(variance_y)\ncov_xy = (x@y)/n\ncov_xx = (x@x)/n\ncov_yy = (y@y)/n\nC = np.zeros((2,2))\nC[0,0]= cov_xx/variance_x\nC[1,1]= cov_yy/variance_y\nC[0,1]= cov_xy/np.sqrt(variance_y*variance_x)\nC[1,0]= C[0,1]\nprint(C)\n```\n\nWe see that the matrix elements along the diagonal are one, as they\nshould be, and that the matrix is symmetric. Furthermore, diagonalizing\nthis matrix we easily see that it is a positive definite matrix.\n\nThe above procedure with **numpy** can be made more compact if we use **pandas**.\n\n## Using Pandas\n\nWe show here how we can set up the correlation matrix using **pandas**, as done in this simple code\n\n\n```python\nimport numpy as np\nimport pandas as pd\nn = 10\nx = np.random.normal(size=n)\nx = x - np.mean(x)\ny = 4+3*x+np.random.normal(size=n)\ny = y - np.mean(y)\nX = (np.vstack((x, y))).T\nprint(X)\nXpd = pd.DataFrame(X)\nprint(Xpd)\ncorrelation_matrix = Xpd.corr()\nprint(correlation_matrix)\n```\n\n## And then the Franke Function\n\nWe expand this model to the Franke function discussed above.\n\n\n```python\n# Common imports\nimport numpy as np\nimport pandas as pd\n\n\ndef FrankeFunction(x,y):\n\tterm1 = 0.75*np.exp(-(0.25*(9*x-2)**2) - 0.25*((9*y-2)**2))\n\tterm2 = 0.75*np.exp(-((9*x+1)**2)/49.0 - 0.1*(9*y+1))\n\tterm3 = 0.5*np.exp(-(9*x-7)**2/4.0 - 0.25*((9*y-3)**2))\n\tterm4 = -0.2*np.exp(-(9*x-4)**2 - (9*y-7)**2)\n\treturn term1 + term2 + term3 + term4\n\n\ndef create_X(x, y, n ):\n\tif len(x.shape) > 1:\n\t\tx = 
np.ravel(x)\n\t\ty = np.ravel(y)\n\n\tN = len(x)\n\tl = int((n+1)*(n+2)/2)\t\t# Number of elements in beta\n\tX = np.ones((N,l))\n\n\tfor i in range(1,n+1):\n\t\tq = int((i)*(i+1)/2)\n\t\tfor k in range(i+1):\n\t\t\tX[:,q+k] = (x**(i-k))*(y**k)\n\n\treturn X\n\n\n# Making meshgrid of datapoints and compute Franke's function\nn = 4\nN = 100\nx = np.sort(np.random.uniform(0, 1, N))\ny = np.sort(np.random.uniform(0, 1, N))\nz = FrankeFunction(x, y)\nX = create_X(x, y, n=n) \n\nXpd = pd.DataFrame(X)\n# subtract the mean values and set up the covariance matrix\nXpd = Xpd - Xpd.mean()\ncovariance_matrix = Xpd.cov()\nprint(covariance_matrix)\n```\n\n 0 1 2 3 4 5 6 7 \\\n 0 0.0 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 \n 1 0.0 0.080122 0.084563 0.083357 0.083273 0.082845 0.076316 0.075428 \n 2 0.0 0.084563 0.089732 0.088853 0.088990 0.088723 0.081699 0.080870 \n 3 0.0 0.083357 0.088853 0.091631 0.092058 0.092059 0.087014 0.086309 \n 4 0.0 0.083273 0.088990 0.092058 0.092622 0.092744 0.087692 0.087067 \n 5 0.0 0.082845 0.088723 0.092059 0.092744 0.092976 0.087958 0.087411 \n 6 0.0 0.076316 0.081699 0.087014 0.087692 0.087958 0.084819 0.084319 \n 7 0.0 0.075428 0.080870 0.086309 0.087067 0.087411 0.084319 0.083882 \n 8 0.0 0.074485 0.079966 0.085511 0.086341 0.086754 0.083715 0.083338 \n 9 0.0 0.073513 0.079017 0.084652 0.085545 0.086023 0.083040 0.082719 \n 10 0.0 0.068473 0.073439 0.080118 0.080888 0.081283 0.079628 0.079273 \n 11 0.0 0.067511 0.072480 0.079180 0.080000 0.080445 0.078823 0.078516 \n 12 0.0 0.066557 0.071524 0.078234 0.079099 0.079591 0.078001 0.077740 \n 13 0.0 0.065618 0.070576 0.077290 0.078195 0.078731 0.077171 0.076953 \n 14 0.0 0.064697 0.069643 0.076353 0.077295 0.077870 0.076340 0.076163 \n \n 8 9 10 11 12 13 14 \n 0 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 \n 1 0.074485 0.073513 0.068473 0.067511 0.066557 0.065618 0.064697 \n 2 0.079966 0.079017 0.073439 0.072480 0.071524 0.070576 0.069643 \n 3 0.085511 
0.084652 0.080118 0.079180 0.078234 0.077290 0.076353 \n 4 0.086341 0.085545 0.080888 0.080000 0.079099 0.078195 0.077295 \n 5 0.086754 0.086023 0.081283 0.080445 0.079591 0.078731 0.077870 \n 6 0.083715 0.083040 0.079628 0.078823 0.078001 0.077171 0.076340 \n 7 0.083338 0.082719 0.079273 0.078516 0.077740 0.076953 0.076163 \n 8 0.082852 0.082287 0.078817 0.078107 0.077375 0.076632 0.075882 \n 9 0.082287 0.081774 0.078288 0.077624 0.076936 0.076233 0.075524 \n 10 0.078817 0.078288 0.075877 0.075195 0.074494 0.073780 0.073062 \n 11 0.078107 0.077624 0.075195 0.074556 0.073894 0.073219 0.072537 \n 12 0.077375 0.076936 0.074494 0.073894 0.073271 0.072633 0.071988 \n 13 0.076632 0.076233 0.073780 0.073219 0.072633 0.072032 0.071421 \n 14 0.075882 0.075524 0.073062 0.072537 0.071988 0.071421 0.070843 \n\n\nWe note here that the covariance is zero in the first row and\ncolumn since the first column of the design matrix is constant (all its\nelements equal one, as we are fitting the function in terms of a polynomial of degree $n$ with an intercept). 
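The effect of such a constant intercept column can be reproduced in a minimal synthetic example (a sketch with an arbitrary small design matrix, not the Franke data): after centering, the intercept column is identically zero, so its whole row and column of the covariance matrix vanish.

```python
import numpy as np
import pandas as pd

# small synthetic design matrix with an intercept column of ones
n = 50
x = np.random.uniform(0, 1, n)
X = np.column_stack((np.ones(n), x, x**2))

Xpd = pd.DataFrame(X)
Xpd = Xpd - Xpd.mean()   # centering zeroes out the constant column
C = Xpd.cov()
print(C)                 # first row and column are zero
```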
We would however not include the intercept in a correlation analysis; we can simply\ndrop the first column and construct the correlation\nmatrix from the remaining columns, after centering them by subtracting the mean of each column.\n\n## Links with the Design Matrix\n\nWe can rewrite the covariance matrix in a more compact form in terms of the design/feature matrix $\boldsymbol{X}$ as\n\n$$\n\boldsymbol{C}[\boldsymbol{x}] = \frac{1}{n}\boldsymbol{X}^T\boldsymbol{X}= \mathbb{E}[\boldsymbol{X}^T\boldsymbol{X}].\n$$\n\nTo see this, let us simply look at a design matrix $\boldsymbol{X}\in {\mathbb{R}}^{2\times 2}$\n\n$$\n\boldsymbol{X}=\begin{bmatrix}\nx_{00} & x_{01}\\\nx_{10} & x_{11}\\\n\end{bmatrix}=\begin{bmatrix}\n\boldsymbol{x}_{0} & \boldsymbol{x}_{1}\\\n\end{bmatrix}.\n$$\n\n## Computing the Expectation Values\n\nIf we then compute the expectation value (note the factor $1/n$, with $n=2$ rows here)\n\n$$\n\mathbb{E}[\boldsymbol{X}^T\boldsymbol{X}] = \frac{1}{n}\boldsymbol{X}^T\boldsymbol{X}=\frac{1}{2}\begin{bmatrix}\nx_{00}^2+x_{10}^2 & x_{00}x_{01}+x_{10}x_{11}\\\nx_{01}x_{00}+x_{11}x_{10} & x_{01}^2+x_{11}^2\\\n\end{bmatrix},\n$$\n\nwhich is just\n\n$$\n\boldsymbol{C}[\boldsymbol{x}_0,\boldsymbol{x}_1] = \boldsymbol{C}[\boldsymbol{x}]=\begin{bmatrix} \mathrm{var}[\boldsymbol{x}_0] & \mathrm{cov}[\boldsymbol{x}_0,\boldsymbol{x}_1] \\\n                              \mathrm{cov}[\boldsymbol{x}_1,\boldsymbol{x}_0] & \mathrm{var}[\boldsymbol{x}_1] \\\n             \end{bmatrix},\n$$\n\nwhere we wrote $$\boldsymbol{C}[\boldsymbol{x}_0,\boldsymbol{x}_1] = \boldsymbol{C}[\boldsymbol{x}]$$ to indicate that this is the covariance of the column vectors $\boldsymbol{x}_i$ of the design/feature matrix $\boldsymbol{X}$.\n\nIt is easy to generalize this to a matrix $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$.\n\n## Towards the PCA theorem\n\nWe have that the covariance matrix (the correlation matrix involves a simple rescaling) is given as\n\n$$\n\boldsymbol{C}[\boldsymbol{x}] = \frac{1}{n}\boldsymbol{X}^T\boldsymbol{X}= 
\\mathbb{E}[\\boldsymbol{X}^T\\boldsymbol{X}].\n$$\n\nLet us now assume that we can perform a series of orthogonal transformations where we employ some orthogonal matrices $\\boldsymbol{S}$.\nThese matrices are defined as $\\boldsymbol{S}\\in {\\mathbb{R}}^{p\\times p}$ and obey the orthogonality requirements $\\boldsymbol{S}\\boldsymbol{S}^T=\\boldsymbol{S}^T\\boldsymbol{S}=\\boldsymbol{I}$. The matrix can be written out in terms of the column vectors $\\boldsymbol{s}_i$ as $\\boldsymbol{S}=[\\boldsymbol{s}_0,\\boldsymbol{s}_1,\\dots,\\boldsymbol{s}_{p-1}]$ and $\\boldsymbol{s}_i \\in {\\mathbb{R}}^{p}$.\n\nAssume also that there is a transformation $\\boldsymbol{S}^T\\boldsymbol{C}[\\boldsymbol{x}]\\boldsymbol{S}=\\boldsymbol{C}[\\boldsymbol{y}]$ such that the new matrix $\\boldsymbol{C}[\\boldsymbol{y}]$ is diagonal with elements $[\\lambda_0,\\lambda_1,\\lambda_2,\\dots,\\lambda_{p-1}]$. \n\nThat is we have\n\n$$\n\\boldsymbol{C}[\\boldsymbol{y}] = \\mathbb{E}[\\boldsymbol{S}^T\\boldsymbol{X}^T\\boldsymbol{X}T\\boldsymbol{S}]=\\boldsymbol{S}^T\\boldsymbol{C}[\\boldsymbol{x}]\\boldsymbol{S},\n$$\n\nsince the matrix $\\boldsymbol{S}$ is not a data dependent matrix. Multiplying with $\\boldsymbol{S}$ from the left we have\n\n$$\n\\boldsymbol{S}\\boldsymbol{C}[\\boldsymbol{y}] = \\boldsymbol{C}[\\boldsymbol{x}]\\boldsymbol{S},\n$$\n\nand since $\\boldsymbol{C}[\\boldsymbol{y}]$ is diagonal we have for a given eigenvalue $i$ of the covariance matrix that\n\n$$\n\\boldsymbol{S}_i\\lambda_i = \\boldsymbol{C}[\\boldsymbol{x}]\\boldsymbol{S}_i.\n$$\n\n## More on the PCA Theorem\n\nIn the derivation of the PCA theorem we will assume that the eigenvalues are ordered in descending order, that is\n$\\lambda_0 > \\lambda_1 > \\dots > \\lambda_{p-1}$. \n\nThe eigenvalues tell us then how much we need to stretch the\ncorresponding eigenvectors. Dimensions with large eigenvalues have\nthus large variations (large variance) and define therefore useful\ndimensions. 
The data points are more spread out in the direction of\nthese eigenvectors. Smaller eigenvalues mean, on the other hand, that\nthe corresponding eigenvectors are shrunk accordingly: the data\npoints are tightly bunched together and there is not much variation in\nthese specific directions. Hopefully we could then leave out dimensions\nwhere the eigenvalues are very small. If $p$ is very large,\nwe could then aim at reducing $p$ to $l << p$ and handle only $l$\nfeatures/predictors.\n\n## The Algorithm before the theorem\n\nHere is how we would proceed in setting up the algorithm for the PCA; see also the discussion below. \n* Set up the datapoints for the design/feature matrix $\boldsymbol{X}$ with $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, with the predictors/features $p$ referring to the column numbers and the entries $n$ being the row elements.\n\n$$\n\boldsymbol{X}=\begin{bmatrix}\nx_{0,0} & x_{0,1} & x_{0,2}& \dots & \dots x_{0,p-1}\\\nx_{1,0} & x_{1,1} & x_{1,2}& \dots & \dots x_{1,p-1}\\\nx_{2,0} & x_{2,1} & x_{2,2}& \dots & \dots x_{2,p-1}\\\n\dots & \dots & \dots & \dots \dots & \dots \\\nx_{n-2,0} & x_{n-2,1} & x_{n-2,2}& \dots & \dots x_{n-2,p-1}\\\nx_{n-1,0} & x_{n-1,1} & x_{n-1,2}& \dots & \dots x_{n-1,p-1}\\\n\end{bmatrix},\n$$\n\n* Center the data by subtracting the mean value for each column. 
This leads to a new matrix $\boldsymbol{X}\rightarrow \overline{\boldsymbol{X}}$.\n\n* Compute then the covariance/correlation matrix $\mathbb{E}[\overline{\boldsymbol{X}}^T\overline{\boldsymbol{X}}]$.\n\n* Find the eigenpairs of $\boldsymbol{C}$ with eigenvalues $[\lambda_0,\lambda_1,\dots,\lambda_{p-1}]$ and eigenvectors $[\boldsymbol{s}_0,\boldsymbol{s}_1,\dots,\boldsymbol{s}_{p-1}]$.\n\n* Order the eigenvalues (and the eigenvectors accordingly) in decreasing order.\n\n* Keep only those $l$ eigenvalues larger than a selected threshold value, discarding thus $p-l$ features since we expect small variations in the data here.\n\n## Writing our own PCA code\n\nWe will use a simple example first with two-dimensional data\ndrawn from a multivariate normal distribution with the following mean and covariance matrix (we have fixed these quantities, matching the code below, but you can play around with them):\n\n$$\n\mu = (-1,2) \qquad \Sigma = \begin{bmatrix} 10 & 0.02 \\\n0.02 & 0.05\n\end{bmatrix}\n$$\n\nNote that the mean refers to each column of data. \nWe will generate $n = 10000$ points $X = \{ x_1, \ldots, x_n \}$ from\nthis distribution, and store them in the $10000 \times 2$ matrix $\boldsymbol{X}$. This is our design matrix where we have forced the covariance and mean values to take specific values.\n\n## Implementing it\nThe following Python code aids in setting up the data and writing out the design matrix.\nNote that the function **np.random.multivariate_normal** draws the samples with the specified mean and covariance; the sample covariance matrix we compute below is defined by dividing by $n-1$ instead of $n$.\n\n\n```python\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom IPython.display import display\nn = 10000\nmean = (-1, 2)\ncov = [[10, 0.02], [0.02, 0.05]]\nX = np.random.multivariate_normal(mean, cov, n)\n```\n\nNow we are going to implement the PCA algorithm. 
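As a compact preview, the whole recipe from the algorithm above fits in a few lines of NumPy (a sketch; the function name `pca` and the choice of simply keeping the first `l` components instead of an eigenvalue threshold are our own):

```python
import numpy as np

def pca(X, l=2):
    # center the data column by column
    X_centered = X - X.mean(axis=0)
    # covariance matrix with the 1/(n-1) convention of np.cov
    C = np.cov(X_centered.T)
    # eigenpairs of the symmetric covariance matrix (ascending order)
    eigvals, eigvecs = np.linalg.eigh(C)
    # reorder to descending eigenvalues
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # keep the l leading components and project the data
    return X_centered @ eigvecs[:, :l], eigvals

X = np.random.multivariate_normal((-1, 2), [[10, 0.02], [0.02, 0.05]], 1000)
Z, spectrum = pca(X, l=1)
print(Z.shape, spectrum)
```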
We will break it down into various substeps.\n\n## First Step\n\nThe first step of PCA is to compute the sample mean of the data and use it to center the data. Recall that the sample mean is\n\n$$\n\mu_n = \frac{1}{n} \sum_{i=1}^n x_i\n$$\n\nand the mean-centered data $\bar{X} = \{ \bar{x}_1, \ldots, \bar{x}_n \}$ takes the form\n\n$$\n\bar{x}_i = x_i - \mu_n.\n$$\n\nWhen you are done with these steps, print out $\mu_n$ to verify it is\nclose to $\mu$ and plot your mean-centered data to verify it is\ncentered at the origin! \nThe following code performs these operations using either **pandas** or our own **numpy** functionality; the latter is straightforward through the **mean()** function.\n\n\n```python\ndf = pd.DataFrame(X)\n# Pandas does the centering for us\ndf = df - df.mean()\n# we center it ourselves\nX_centered = X - X.mean(axis=0)\n```\n\n## Scaling\nAlternatively, we could use the functions we discussed\nearlier for scaling the data set. That is, we could have used the\n**StandardScaler** function in **Scikit-Learn**, a function which ensures\nthat for each feature/predictor we study the mean value is zero and\nthe variance is one (every column in the design/feature matrix). You\nwould then not get the same results, since each column is then also\ndivided by its standard deviation. 
The diagonal covariance matrix elements will then be one, while the\nnon-diagonal ones are divided by $\sqrt{10\times 0.05}=\sqrt{0.5}$ for the\ncovariance matrix used in the code above.\n\n## Centered Data\n\nNow we are going to use the mean-centered data to compute the sample covariance of the data by using the following equation\n\n$$\n\Sigma_n = \frac{1}{n-1} \sum_{i=1}^n \bar{x}_i \bar{x}_i^T = \frac{1}{n-1} \sum_{i=1}^n (x_i - \mu_n) (x_i - \mu_n)^T,\n$$\n\nwhere the data points $x_i \in \mathbb{R}^p$ (here in this example $p = 2$) are column vectors and $x^T$ is the transpose of $x$.\nWe can write our own code or simply use either the functionality of **numpy** or that of **pandas**, as follows\n\n\n```python\nprint(df.cov())\nprint(np.cov(X_centered.T))\n```\n\n 0 1\n 0 9.876569 0.022155\n 1 0.022155 0.048984\n [[9.87656905 0.0221551 ]\n [0.0221551 0.04898395]]\n\n\nNote that the way we define the covariance matrix here has a factor $n-1$ instead of $n$. This convention is used by the **cov()** functions of both **numpy** and **pandas**. \nOur own code here is not very elegant and asks for obvious improvements. It is tailored to this specific $2\times 2$ covariance matrix.\n\n\n```python\n# extract the relevant columns from the centered design matrix of dim n x 2\nx = X_centered[:,0]\ny = X_centered[:,1]\nCov = np.zeros((2,2))\n# x@y is the sum over products of the centered elements\nCov[0,1] = (x@y)/(n-1.0)\nCov[0,0] = (x@x)/(n-1.0)\nCov[1,1] = (y@y)/(n-1.0)\nCov[1,0] = Cov[0,1]\nprint(\"Centered covariance using own code\")\nprint(Cov)\nplt.plot(x, y, 'x')\nplt.axis('equal')\nplt.show()\n```\n\n## Exploring\n\nDepending on the number of points $n$, we will get results that are close to the covariance values defined above.\nThe plot shows how the data points are spread out. Is the shape what you would expect from the covariance matrix we chose? Try to change the covariance and the mean values. For example, try to make the variance of the first element much larger than that of the second diagonal element. 
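One such experiment could look as follows (the covariance values below are arbitrary choices for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# make the first variance much larger than the second
n = 10000
X = np.random.multivariate_normal((-1, 2), [[25.0, 0.5], [0.5, 0.1]], n)
X_centered = X - X.mean(axis=0)
Cov = np.cov(X_centered.T)
print(Cov)   # the diagonal reflects the chosen variances

# the cloud is now strongly stretched along the first axis
plt.plot(X_centered[:, 0], X_centered[:, 1], 'x')
plt.axis('equal')
plt.show()
```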
Try also to shrink the covariance (the non-diagonal elements) and see how the data points are distributed.\n\n## Diagonalize the sample covariance matrix to obtain the principal components\n\nNow we are ready to solve for the principal components! To do so we\ndiagonalize the sample covariance matrix $\\Sigma$. We can use the\nfunction **np.linalg.eig** to do so. It will return the eigenvalues and\neigenvectors of $\\Sigma$. Once we have these we can perform the \nfollowing tasks:\n\n* We compute the percentage of the total variance captured by the first principal component\n\n* We plot the mean centered data and lines along the first and second principal components\n\n* Then we project the mean centered data onto the first and second principal components, and plot the projected data. \n\n* Finally, we approximate the data as\n\n$$\nx_i \\approx \\tilde{x}_i = \\mu_n + \\langle x_i, v_0 \\rangle v_0\n$$\n\nwhere $v_0$ is the first principal component.\n\n## Collecting all Steps\n\nCollecting all these steps we can write our own PCA function and\ncompare this with the functionality included in **Scikit-Learn**. \n\nThe code here outlines some of the elements we could include in the\nanalysis. 
Feel free to extend upon this in order to address the above\nquestions.\n\n\n```python\n# diagonalize and obtain eigenvalues, not necessarily sorted\nEigValues, EigVectors = np.linalg.eig(Cov)\n# sort eigenvectors and eigenvalues\n#permute = EigValues.argsort()\n#EigValues = EigValues[permute]\n#EigVectors = EigVectors[:,permute]\nprint(\"Eigenvalues of Covariance matrix\")\nfor i in range(2):\n print(EigValues[i])\nFirstEigvector = EigVectors[:,0]\nSecondEigvector = EigVectors[:,1]\nprint(\"First eigenvector\")\nprint(FirstEigvector)\nprint(\"Second eigenvector\")\nprint(SecondEigvector)\n#thereafter we do a PCA with Scikit-learn\nfrom sklearn.decomposition import PCA\npca = PCA(n_components = 2)\nX2Dsl = pca.fit_transform(X)\nprint(\"Eigenvector of largest eigenvalue\")\nprint(pca.components_.T[:, 0])\n```\n\n Eigenvalues of Covariance matrix\n 9.876618994986675\n 0.0489340085461999\n First eigenvector\n [0.99999746 0.00225436]\n Second eigenvector\n [-0.00225436 0.99999746]\n Eigenvector of largest eigenvalue\n [-0.99999746 -0.00225436]\n\n\nThis code does not contain all the above elements, but it shows how we can use **Scikit-Learn** to extract the eigenvector which corresponds to the largest eigenvalue. Try to address the questions we pose before the above code. Try also to change the values of the covariance matrix by making one of the diagonal elements much larger than the other. What do you observe then?\n\n## Classical PCA Theorem\n\nWe assume now that we have a design matrix $\\boldsymbol{X}$ which has been\ncentered as discussed above. For the sake of simplicity we skip the\noverline symbol. 
The matrix is defined in terms of the various column\nvectors $[\boldsymbol{x}_0,\boldsymbol{x}_1,\dots, \boldsymbol{x}_{p-1}]$, each with dimension\n$\boldsymbol{x}\in {\mathbb{R}}^{n}$.\n\nThe PCA theorem states that minimizing the above reconstruction error\ncorresponds to setting $\boldsymbol{W}=\boldsymbol{S}$, the orthogonal matrix which\ndiagonalizes the empirical covariance (correlation) matrix. The optimal\nlow-dimensional encoding of the data is then given by a set of vectors\n$\boldsymbol{z}_i$, each with at most $l$ components, $l << p$, defined by the\northogonal projection of the data onto the subspace spanned by the\neigenvectors of the covariance (correlation) matrix.\n\n## The PCA Theorem\n\nTo show the PCA theorem, let us start with the assumption that there is one unit vector $\boldsymbol{w}_0$ which corresponds to a solution which minimizes the reconstruction error $J$. We can then express the reconstruction error in terms of $\boldsymbol{w}_0$ and the projected coordinates $\boldsymbol{z}_0$; minimizing with respect to $\boldsymbol{z}_0$ and inserting the result back into $J$ (we omit the intermediate algebra here) leaves a term $-\boldsymbol{w}_0^T\boldsymbol{C}[\boldsymbol{x}]\boldsymbol{w}_0$ plus constants.\n\nWe are almost there: we have obtained a relation between minimizing\nthe reconstruction error and the variance and the covariance\nmatrix. Minimizing the error is equivalent to maximizing the variance\nof the projected data.\n\nWe could trivially maximize the variance of the projection (and\nthereby minimize the error in the reconstruction function) by letting\nthe norm-2 of $\boldsymbol{w}_0$ go to infinity. However, since we want the\nmatrix $\boldsymbol{W}$ to be an orthogonal matrix, this norm is constrained by\n$\vert\vert \boldsymbol{w}_0 \vert\vert_2^2=1$. 
Imposing this condition via a\nLagrange multiplier we can then in turn maximize\n\n$$\nJ(\boldsymbol{w}_0)= \boldsymbol{w}_0^T\boldsymbol{C}[\boldsymbol{x}]\boldsymbol{w}_0+\lambda_0(1-\boldsymbol{w}_0^T\boldsymbol{w}_0).\n$$\n\nTaking the derivative with respect to $\boldsymbol{w}_0$ we obtain\n\n$$\n\frac{\partial J(\boldsymbol{w}_0)}{\partial \boldsymbol{w}_0}= 2\boldsymbol{C}[\boldsymbol{x}]\boldsymbol{w}_0-2\lambda_0\boldsymbol{w}_0=0,\n$$\n\nmeaning that\n\n$$\n\boldsymbol{C}[\boldsymbol{x}]\boldsymbol{w}_0=\lambda_0\boldsymbol{w}_0.\n$$\n\n**The direction that maximizes the variance (or minimizes the reconstruction error) is an eigenvector of the covariance matrix**! If we multiply from the left with $\boldsymbol{w}_0^T$, we find that the variance of the projected data is\n\n$$\n\boldsymbol{w}_0^T\boldsymbol{C}[\boldsymbol{x}]\boldsymbol{w}_0=\lambda_0.\n$$\n\nIf we want to maximize the variance (minimize the reconstruction error)\nwe simply pick the eigenvector of the covariance matrix with the\nlargest eigenvalue. This establishes the link between the minimization\nof the reconstruction function $J$ in terms of an orthogonal matrix\nand the maximization of the variance and thereby the covariance of our\nobservations encoded in the design/feature matrix $\boldsymbol{X}$.\n\nThe proof\nfor the other eigenvectors $\boldsymbol{w}_1,\boldsymbol{w}_2,\dots$ can be\nestablished by applying the above arguments and using the fact that\nour basis of eigenvectors is orthogonal, see [Murphy chapter\n12.2](https://mitpress.mit.edu/books/machine-learning-1). The\ndiscussion in chapter 12.2 of Murphy's text has also a nice link with\nthe Singular Value Decomposition theorem. 
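The eigenvector property above is also easy to verify numerically: among unit vectors, no random direction yields a larger projected variance than the eigenvector of the largest eigenvalue (a small sketch with a random sample covariance matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
# sample covariance matrix of some random data
A = rng.normal(size=(500, 4))
C = np.cov(A.T)

# np.linalg.eigh returns eigenvalues of a symmetric matrix in ascending order
eigvals, eigvecs = np.linalg.eigh(C)
w0 = eigvecs[:, -1]              # eigenvector of the largest eigenvalue
best = w0 @ C @ w0               # equals the largest eigenvalue

# no random unit vector beats the leading eigenvector
for _ in range(1000):
    w = rng.normal(size=4)
    w /= np.linalg.norm(w)
    assert w @ C @ w <= best + 1e-12

print(best, eigvals[-1])
```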
For categorical data, see\nchapter 12.4 and discussion therein.\n\nFor more details, see for example [Vidal, Ma and Sastry, chapter 2](https://www.springer.com/gp/book/9780387878102).\n\n## Geometric Interpretation and link with Singular Value Decomposition\n\nFor a detailed demonstration of the geometric interpretation, see [Vidal, Ma and Sastry, section 2.1.2](https://www.springer.com/gp/book/9780387878102).\n\nPrincipal Component Analysis (PCA) is by far the most popular dimensionality reduction algorithm.\nFirst it identifies the hyperplane that lies closest to the data, and then it projects the data onto it.\n\nThe following Python code uses NumPy\u2019s **svd()** function to obtain all the principal components of the\ntraining set, then extracts the first two principal components. First we center the data using either **pandas** or our own code\n\n\n```python\nimport numpy as np\nimport pandas as pd\nfrom IPython.display import display\nnp.random.seed(100)\n# setting up a 10 x 5 vanilla matrix \nrows = 10\ncols = 5\nX = np.random.randn(rows,cols)\ndf = pd.DataFrame(X)\n# Pandas does the centering for us\ndf = df -df.mean()\ndisplay(df)\n\n# we center it ourselves\nX_centered = X - X.mean(axis=0)\n# Then check the difference between pandas and our own set up\nprint(X_centered-df)\n#Now we do an SVD\nU, s, V = np.linalg.svd(X_centered)\nc1 = V.T[:, 0]\nc2 = V.T[:, 1]\nW2 = V.T[:, :2]\nX2D = X_centered.dot(W2)\nprint(X2D)\n```\n\n\n
              0         1         2         3         4
    0 -1.574465  0.259153  1.197370  0.147400  0.649382
    1  0.689519  0.137652 -1.025709  0.210340 -0.076938
    2 -0.282727  0.351636 -0.539261  1.216683  0.340782
    3  0.070889 -0.614808  1.074067 -0.038300 -1.450257
    4  1.794282  1.458078 -0.207545 -0.442600 -0.147420
    5  1.112383  0.647473  1.405890  0.073598 -0.276263
    6  0.397700 -1.526744 -0.712018  1.216290  0.418506
    7 -0.280647  1.106095 -1.646283 -0.956563 -1.564374
    8 -0.369139 -0.751699  0.051649 -0.213103  0.967809
    9 -1.557795 -1.066837  0.401842 -1.213743  1.138775
\n\n\n 0 1 2 3 4\n 0 0.0 0.0 0.0 0.0 0.0\n 1 0.0 0.0 0.0 0.0 0.0\n 2 0.0 0.0 0.0 0.0 0.0\n 3 0.0 0.0 0.0 0.0 0.0\n 4 0.0 0.0 0.0 0.0 0.0\n 5 0.0 0.0 0.0 0.0 0.0\n 6 0.0 0.0 0.0 0.0 0.0\n 7 0.0 0.0 0.0 0.0 0.0\n 8 0.0 0.0 0.0 0.0 0.0\n 9 0.0 0.0 0.0 0.0 0.0\n [[-1.5378811 -0.94639099]\n [ 0.86145244 0.89288636]\n [-0.00445655 0.81633628]\n [ 0.07145103 -1.00433417]\n [ 2.03707133 -0.48476997]\n [ 0.72174172 -1.4557763 ]\n [-0.55854694 1.60673226]\n [ 1.6999536 0.43766686]\n [-1.10405456 0.31718909]\n [-2.18673098 -0.17953942]]\n\n\nPCA assumes that the dataset is centered around the origin. Scikit-Learn\u2019s PCA classes take care of centering\nthe data for you. However, if you implement PCA yourself (as in the preceding example), or if you use other libraries, don\u2019t\nforget to center the data first.\n\nOnce you have identified all the principal components, you can reduce the dimensionality of the dataset\ndown to $d$ dimensions by projecting it onto the hyperplane defined by the first $d$ principal components.\nSelecting this hyperplane ensures that the projection will preserve as much variance as possible.\n\n\n```python\nW2 = V.T[:, :2]\nX2D = X_centered.dot(W2)\n```\n\n## PCA and scikit-learn\n\nScikit-Learn\u2019s PCA class implements PCA using SVD decomposition just like we did before. 
The following code applies PCA to reduce the dimensionality of the dataset down to two dimensions (note that it automatically takes care of centering the data):


```python
#thereafter we do a PCA with Scikit-learn
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
X2D = pca.fit_transform(X)
print(X2D)
```

    [[ 1.5378811  -0.94639099]
     [-0.86145244  0.89288636]
     [ 0.00445655  0.81633628]
     [-0.07145103 -1.00433417]
     [-2.03707133 -0.48476997]
     [-0.72174172 -1.4557763 ]
     [ 0.55854694  1.60673226]
     [-1.6999536   0.43766686]
     [ 1.10405456  0.31718909]
     [ 2.18673098 -0.17953942]]


After fitting the PCA transformer to the dataset, you can access the principal components via the $components\_$ attribute (note that it contains the principal components as horizontal vectors, so, for example, the first principal component is obtained as follows):


```python
pca.components_.T[:, 0]
```




    array([-0.62373464, -0.5303329 ,  0.317367  ,  0.01873344,  0.47815203])



Another very useful piece of information is the explained variance ratio of each principal component, available via the $explained\_variance\_ratio\_$ attribute.
It indicates the proportion of the dataset\u2019s\nvariance that lies along the axis of each principal component.\n\n## Back to the Cancer Data\nWe can now repeat the above but applied to real data, in this case our breast cancer data.\nHere we compute performance scores on the training data using logistic regression.\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.model_selection import train_test_split \nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.linear_model import LogisticRegression\ncancer = load_breast_cancer()\n\nX_train, X_test, y_train, y_test = train_test_split(cancer.data,cancer.target,random_state=0)\n\nlogreg = LogisticRegression()\nlogreg.fit(X_train, y_train)\nprint(\"Train set accuracy from Logistic Regression: {:.2f}\".format(logreg.score(X_train,y_train)))\n# We scale the data\nfrom sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\nscaler.fit(X_train)\nX_train_scaled = scaler.transform(X_train)\nX_test_scaled = scaler.transform(X_test)\n# Then perform again a log reg fit\nlogreg.fit(X_train_scaled, y_train)\nprint(\"Train set accuracy scaled data: {:.2f}\".format(logreg.score(X_train_scaled,y_train)))\n#thereafter we do a PCA with Scikit-learn\nfrom sklearn.decomposition import PCA\npca = PCA(n_components = 2)\nX2D_train = pca.fit_transform(X_train_scaled)\n# and finally compute the log reg fit and the score on the training data\t\nlogreg.fit(X2D_train,y_train)\nprint(\"Train set accuracy scaled and PCA data: {:.2f}\".format(logreg.score(X2D_train,y_train)))\n```\n\n Train set accuracy from Logistic Regression: 0.95\n Train set accuracy scaled data: 0.99\n Train set accuracy scaled and PCA data: 0.96\n\n\n /Users/mhjensen/Software/anaconda3/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:763: ConvergenceWarning: lbfgs failed to converge (status=1):\n STOP: TOTAL NO. 
of ITERATIONS REACHED LIMIT.\n \n Increase the number of iterations (max_iter) or scale the data as shown in:\n https://scikit-learn.org/stable/modules/preprocessing.html\n Please also refer to the documentation for alternative solver options:\n https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n n_iter_i = _check_optimize_result(\n\n\nWe see that our training data after the PCA decomposition has a performance similar to the non-scaled data. \n\nInstead of arbitrarily choosing the number of dimensions to reduce down to, it is generally preferable to\nchoose the number of dimensions that add up to a sufficiently large portion of the variance (e.g., 95%).\nUnless, of course, you are reducing dimensionality for data visualization \u2014 in that case you will\ngenerally want to reduce the dimensionality down to 2 or 3.\nThe following code computes PCA without reducing dimensionality, then computes the minimum number\nof dimensions required to preserve 95% of the training set\u2019s variance:\n\n\n```python\npca = PCA()\npca.fit(X)\ncumsum = np.cumsum(pca.explained_variance_ratio_)\nd = np.argmax(cumsum >= 0.95) + 1\n```\n\nYou could then set $n\\_components=d$ and run PCA again. However, there is a much better option: instead\nof specifying the number of principal components you want to preserve, you can set $n\\_components$ to be\na float between 0.0 and 1.0, indicating the ratio of variance you wish to preserve:\n\n\n```python\npca = PCA(n_components=0.95)\nX_reduced = pca.fit_transform(X)\n```\n\n## Incremental PCA\n\nOne problem with the preceding implementation of PCA is that it requires the whole training set to fit in\nmemory in order for the SVD algorithm to run. Fortunately, Incremental PCA (IPCA) algorithms have\nbeen developed: you can split the training set into mini-batches and feed an IPCA algorithm one minibatch\nat a time. 
This is useful for large training sets, and also to apply PCA online (i.e., on the fly, as new\ninstances arrive).\n\n### Randomized PCA\n\nScikit-Learn offers yet another option to perform PCA, called Randomized PCA. This is a stochastic\nalgorithm that quickly finds an approximation of the first d principal components. Its computational\ncomplexity is $O(m \\times d^2)+O(d^3)$, instead of $O(m \\times n^2) + O(n^3)$, so it is dramatically faster than the\nprevious algorithms when $d$ is much smaller than $n$.\n\n### Kernel PCA\n\nThe kernel trick is a mathematical technique that implicitly maps instances into a\nvery high-dimensional space (called the feature space), enabling nonlinear classification and regression\nwith Support Vector Machines. Recall that a linear decision boundary in the high-dimensional feature\nspace corresponds to a complex nonlinear decision boundary in the original space.\nIt turns out that the same trick can be applied to PCA, making it possible to perform complex nonlinear\nprojections for dimensionality reduction. This is called Kernel PCA (kPCA). 
It is often good at preserving clusters of instances after projection, or sometimes even unrolling datasets that lie close to a twisted manifold.

For example, the following code uses Scikit-Learn's KernelPCA class to perform kPCA with an RBF kernel:


```python
from sklearn.decomposition import KernelPCA
rbf_pca = KernelPCA(n_components = 2, kernel="rbf", gamma=0.04)
X_reduced = rbf_pca.fit_transform(X)
```

## Other techniques

There are many other dimensionality reduction techniques, several of which are available in Scikit-Learn.

Here are some of the most popular:
* **Multidimensional Scaling (MDS)** reduces dimensionality while trying to preserve the distances between the instances.

* **Isomap** creates a graph by connecting each instance to its nearest neighbors, then reduces dimensionality while trying to preserve the geodesic distances between the instances.

* **t-Distributed Stochastic Neighbor Embedding** (t-SNE) reduces dimensionality while trying to keep similar instances close and dissimilar instances apart. It is mostly used for visualization, in particular to visualize clusters of instances in high-dimensional space (e.g., to visualize the MNIST images in 2D).

* **Linear Discriminant Analysis (LDA)** is actually a classification algorithm, but during training it learns the most discriminative axes between the classes, and these axes can then be used to define a hyperplane onto which to project the data.
The benefit is that the projection will keep classes as far apart as possible, so LDA is a good technique to reduce dimensionality before running another classification algorithm such as a Support Vector Machine (SVM) classifier discussed in the SVM lectures.

\n
\n
\n

Natural Language Processing From Scratch

\n

Text Representation

\n

Bruno Gonçalves
\n www.data4sci.com
\n @bgoncalves, @data4sci

\n
\n\nIn this lesson we will see in some details how we can best represent text in our application. Let's start by importing the modules we will be using:\n\n\n```python\nimport string\nfrom collections import Counter\nfrom pprint import pprint\nimport gzip\nimport matplotlib.pyplot as plt \nimport numpy as np\n\n%matplotlib inline\n%load_ext watermark\n```\n\nList out the versions of all loaded libraries\n\n\n```python\n%watermark -n -v -m -g -iv\n```\n\n matplotlib 3.1.0\n numpy 1.16.2\n Mon Nov 11 2019 \n \n CPython 3.7.3\n IPython 6.2.1\n \n compiler : Clang 4.0.1 (tags/RELEASE_401/final)\n system : Darwin\n release : 18.7.0\n machine : x86_64\n processor : i386\n CPU cores : 8\n interpreter: 64bit\n Git hash : b9e7a934ea44d9018471e17a5808b247be788f1f\n\n\nSet the default style\n\n\n```python\nplt.style.use('./d4sci.mplstyle')\n```\n\nWe choose a well known nursery rhyme, that has the added distinction of having been the first audio ever recorded, to be the short snippet of text that we will use in our examples:\n\n\n```python\ntext = \"\"\"Mary had a little lamb, little lamb,\n little lamb. Mary had a little lamb\n whose fleece was white as snow.\n And everywhere that Mary went\n Mary went, Mary went. Everywhere\n that Mary went,\n The lamb was sure to go\"\"\"\n```\n\n## Tokenization\n\nThe first step in any analysis is to tokenize the text. What this means is that we will extract all the individual words in the text. 
For the sake of simplicity, we will assume that our text is well formed and that our words are delimited either by white space or punctuation characters.\n\n\n```python\nprint(string.punctuation)\n```\n\n !\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~\n\n\n\n```python\ndef extract_words(text):\n temp = text.split() # Split the text on whitespace\n text_words = []\n\n for word in temp:\n # Remove any punctuation characters present in the beginning of the word\n while word[0] in string.punctuation:\n word = word[1:]\n\n # Remove any punctuation characters present in the end of the word\n while word[-1] in string.punctuation:\n word = word[:-1]\n\n # Append this word into our list of words.\n text_words.append(word.lower())\n \n return text_words\n```\n\nAfter this step we now have our text represented as an array of individual, lowercase, words:\n\n\n```python\ntext_words = extract_words(text)\nprint(text_words)\n```\n\n ['mary', 'had', 'a', 'little', 'lamb', 'little', 'lamb', 'little', 'lamb', 'mary', 'had', 'a', 'little', 'lamb', 'whose', 'fleece', 'was', 'white', 'as', 'snow', 'and', 'everywhere', 'that', 'mary', 'went', 'mary', 'went', 'mary', 'went', 'everywhere', 'that', 'mary', 'went', 'the', 'lamb', 'was', 'sure', 'to', 'go']\n\n\nAs we saw during the video, this is a wasteful way to represent text. 
We can be much more efficient by representing each word by a number\n\n\n```python\nword_dict = {}\nword_list = []\nvocabulary_size = 0\ntext_tokens = []\n\nfor word in text_words:\n # If we are seeing this word for the first time, create an id for it and added it to our word dictionary\n if word not in word_dict:\n word_dict[word] = vocabulary_size\n word_list.append(word)\n vocabulary_size += 1\n \n # add the token corresponding to the current word to the tokenized text.\n text_tokens.append(word_dict[word])\n```\n\nWhen we were tokenizing our text, we also generated a dictionary **word_dict** that maps words to integers and a **word_list** that maps each integer to the corresponding word.\n\n\n```python\nprint(\"Word list:\", word_list, \"\\n\\n Word dictionary:\")\npprint(word_dict)\n```\n\n Word list: ['mary', 'had', 'a', 'little', 'lamb', 'whose', 'fleece', 'was', 'white', 'as', 'snow', 'and', 'everywhere', 'that', 'went', 'the', 'sure', 'to', 'go'] \n \n Word dictionary:\n {'a': 2,\n 'and': 11,\n 'as': 9,\n 'everywhere': 12,\n 'fleece': 6,\n 'go': 18,\n 'had': 1,\n 'lamb': 4,\n 'little': 3,\n 'mary': 0,\n 'snow': 10,\n 'sure': 16,\n 'that': 13,\n 'the': 15,\n 'to': 17,\n 'was': 7,\n 'went': 14,\n 'white': 8,\n 'whose': 5}\n\n\nThese two datastructures already proved their usefulness when we converted our text to a list of tokens.\n\n\n```python\nprint(text_tokens)\n```\n\n [0, 1, 2, 3, 4, 3, 4, 3, 4, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 0, 14, 0, 14, 0, 14, 12, 13, 0, 14, 15, 4, 7, 16, 17, 18]\n\n\nUnfortunately, while this representation is convenient for memory reasons it has some severe limitations. 
Perhaps the most important of which is the fact that computers naturally assume that numbers can be operated on mathematically (by addition, subtraction, etc) in a way that doesn't match our understanding of words.\n\n## One-hot encoding\n\nOne typical way of overcoming this difficulty is to represent each word by a one-hot encoded vector where every element is zero except the one corresponding to a specific word.\n\n\n```python\ndef one_hot(word, word_dict):\n \"\"\"\n Generate a one-hot encoded vector corresponding to *word*\n \"\"\"\n \n vector = np.zeros(len(word_dict))\n vector[word_dict[word]] = 1\n \n return vector\n```\n\nSo, for example, the word \"fleece\" would be represented by:\n\n\n```python\nfleece_hot = one_hot(\"fleece\", word_dict)\nprint(fleece_hot)\n```\n\n [0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n\n\nThis vector has every element set to zero, except element 6, since:\n\n\n```python\nprint(word_dict[\"fleece\"])\nfleece_hot[6] == 1\n```\n\n 6\n\n\n\n\n\n True\n\n\n\n\n```python\nprint(fleece_hot.sum())\n```\n\n 1.0\n\n\n## Bag of words\n\nWe can now use the one-hot encoded vector for each word to produce a vector representation of our original text, by simply adding up all the one-hot encoded vectors:\n\n\n```python\ntext_vector1 = np.zeros(vocabulary_size)\n\nfor word in text_words:\n hot_word = one_hot(word, word_dict)\n text_vector1 += hot_word\n \nprint(text_vector1)\n```\n\n [6. 2. 2. 4. 5. 1. 1. 2. 1. 1. 1. 1. 2. 2. 4. 1. 1. 1. 1.]\n\n\nIn practice, we can also easily skip the encoding step at the word level by using the *word_dict* defined above:\n\n\n```python\ntext_vector = np.zeros(vocabulary_size)\n\nfor word in text_words:\n text_vector[word_dict[word]] += 1\n \nprint(text_vector)\n```\n\n [6. 2. 2. 4. 5. 1. 1. 2. 1. 1. 1. 1. 2. 2. 4. 1. 1. 1. 
1.]\n\n\nNaturally, this approach is completely equivalent to the previous one and has the added advantage of being more efficient in terms of both speed and memory requirements.\n\nThis is known as the __bag of words__ representation of the text. It should be noted that these vectors simply contains the number of times each word appears in our document, so we can easily tell that the word *mary* appears exactly 6 times in our little nursery rhyme.\n\n\n```python\ntext_vector[word_dict[\"mary\"]]\n```\n\n\n\n\n 6.0\n\n\n\nA more pythonic (and efficient) way of producing the same result is to use the standard __Counter__ module:\n\n\n```python\ntext_words\n```\n\n\n\n\n ['mary',\n 'had',\n 'a',\n 'little',\n 'lamb',\n 'little',\n 'lamb',\n 'little',\n 'lamb',\n 'mary',\n 'had',\n 'a',\n 'little',\n 'lamb',\n 'whose',\n 'fleece',\n 'was',\n 'white',\n 'as',\n 'snow',\n 'and',\n 'everywhere',\n 'that',\n 'mary',\n 'went',\n 'mary',\n 'went',\n 'mary',\n 'went',\n 'everywhere',\n 'that',\n 'mary',\n 'went',\n 'the',\n 'lamb',\n 'was',\n 'sure',\n 'to',\n 'go']\n\n\n\n\n```python\nword_counts = Counter(text_words)\npprint(word_counts)\n```\n\n Counter({'mary': 6,\n 'lamb': 5,\n 'little': 4,\n 'went': 4,\n 'had': 2,\n 'a': 2,\n 'was': 2,\n 'everywhere': 2,\n 'that': 2,\n 'whose': 1,\n 'fleece': 1,\n 'white': 1,\n 'as': 1,\n 'snow': 1,\n 'and': 1,\n 'the': 1,\n 'sure': 1,\n 'to': 1,\n 'go': 1})\n\n\nFrom which we can easily generate the __text_vector__ and __word_dict__ data structures:\n\n\n```python\nitems = list(word_counts.items())\n\n# Extract word dictionary and vector representation\nword_dict2 = dict([[items[i][0], i] for i in range(len(items))])\ntext_vector2 = [items[i][1] for i in range(len(items))]\n```\n\n\n```python\nword_counts['mary']\n```\n\n\n\n\n 6\n\n\n\nAnd let's take a look at them:\n\n\n```python\ntext_vector\n```\n\n\n\n\n array([6., 2., 2., 4., 5., 1., 1., 2., 1., 1., 1., 1., 2., 2., 4., 1., 1.,\n 1., 1.])\n\n\n\n\n```python\nprint(\"Text 
vector:\", text_vector2, \"\\n\\nWord dictionary:\")\npprint(word_dict2)\n```\n\n Text vector: [6, 2, 2, 4, 5, 1, 1, 2, 1, 1, 1, 1, 2, 2, 4, 1, 1, 1, 1] \n \n Word dictionary:\n {'a': 2,\n 'and': 11,\n 'as': 9,\n 'everywhere': 12,\n 'fleece': 6,\n 'go': 18,\n 'had': 1,\n 'lamb': 4,\n 'little': 3,\n 'mary': 0,\n 'snow': 10,\n 'sure': 16,\n 'that': 13,\n 'the': 15,\n 'to': 17,\n 'was': 7,\n 'went': 14,\n 'white': 8,\n 'whose': 5}\n\n\nThe results using this approach are slightly different than the previous ones, because the words are mapped to different integer ids but the corresponding values are the same:\n\n\n```python\nfor word in word_dict.keys():\n if text_vector[word_dict[word]] != text_vector2[word_dict2[word]]:\n print(\"Error!\")\n```\n\nAs expected, there are no differences!\n\n## Term Frequency\n\nThe bag of words vector representation introduced above relies simply on the frequency of occurence of each word. Following a long tradition of giving fancy names to simple ideas, this is known as __Term Frequency__.\n\nIntuitively, we expect the the frequency with which a given word is mentioned should correspond to the relevance of that word for the piece of text we are considering. For example, **Mary** is a pretty important word in our little nursery rhyme and indeed it is the one that occurs the most often:\n\n\n```python\nsorted(items, key=lambda x:x[1], reverse=True)\n```\n\n\n\n\n [('mary', 6),\n ('lamb', 5),\n ('little', 4),\n ('went', 4),\n ('had', 2),\n ('a', 2),\n ('was', 2),\n ('everywhere', 2),\n ('that', 2),\n ('whose', 1),\n ('fleece', 1),\n ('white', 1),\n ('as', 1),\n ('snow', 1),\n ('and', 1),\n ('the', 1),\n ('sure', 1),\n ('to', 1),\n ('go', 1)]\n\n\n\nHowever, it's hard to draw conclusions from such a small piece of text. Let us consider a significantly larger piece of text, the first 100 MB of the english Wikipedia from: http://mattmahoney.net/dc/textdata. 
For the sake of convenience, text8.gz has been included in this repository in the **data/** directory. We start by loading it's contents into memory as an array of words:\n\n\n```python\ndata = []\n\nfor line in gzip.open(\"data/text8.gz\", 'rt'):\n data.extend(line.strip().split())\n```\n\nNow let's take a look at the first 50 words in this large corpus:\n\n\n```python\ndata[:50]\n```\n\n\n\n\n ['anarchism',\n 'originated',\n 'as',\n 'a',\n 'term',\n 'of',\n 'abuse',\n 'first',\n 'used',\n 'against',\n 'early',\n 'working',\n 'class',\n 'radicals',\n 'including',\n 'the',\n 'diggers',\n 'of',\n 'the',\n 'english',\n 'revolution',\n 'and',\n 'the',\n 'sans',\n 'culottes',\n 'of',\n 'the',\n 'french',\n 'revolution',\n 'whilst',\n 'the',\n 'term',\n 'is',\n 'still',\n 'used',\n 'in',\n 'a',\n 'pejorative',\n 'way',\n 'to',\n 'describe',\n 'any',\n 'act',\n 'that',\n 'used',\n 'violent',\n 'means',\n 'to',\n 'destroy',\n 'the']\n\n\n\nAnd the top 10 most common words\n\n\n```python\ncounts = Counter(data)\n\nsorted_counts = sorted(list(counts.items()), key=lambda x:x[1], reverse=True)\n\nfor word, count in sorted_counts[:10]:\n print(word, count)\n```\n\n the 1061396\n of 593677\n and 416629\n one 411764\n in 372201\n a 325873\n to 316376\n zero 264975\n nine 250430\n two 192644\n\n\nSurprisingly, we find that the most common words are not particularly meaningful. Indeed, this is a common occurence in Natural Language Processing. 
The most frequent words are typically auxiliaries required due to gramatical rules.\n\nOn the other hand, there is also a large number of words that occur very infrequently as can be easily seen by glancing at the word freqency distribution.\n\n\n```python\ndist = Counter(counts.values())\ndist = list(dist.items())\ndist.sort(key=lambda x:x[0])\ndist = np.array(dist)\n\nnorm = np.dot(dist.T[0], dist.T[1])\n\nplt.loglog(dist.T[0], dist.T[1]/norm)\nplt.xlabel(\"count\")\nplt.ylabel(\"P(count)\")\nplt.title(\"Word frequency distribution\")\nplt.gcf().set_size_inches(11, 8)\n```\n\n## Stopwords\n\nOne common technique to simplify NLP tasks is to remove what are known as Stopwords, words that are very frequent but not meaningful. If we simply remove the most common 100 words, we significantly reduce the amount of data we have to consider while losing little information.\n\n\n```python\nstopwords = set([word for word, count in sorted_counts[:100]])\n\nclean_data = []\n\nfor word in data:\n if word not in stopwords:\n clean_data.append(word)\n\nprint(\"Original size:\", len(data))\nprint(\"Clean size:\", len(clean_data))\nprint(\"Reduction:\", 1-len(clean_data)/len(data))\n```\n\n Original size: 17005207\n Clean size: 9006229\n Reduction: 0.470384041782026\n\n\n\n```python\nclean_data[:50]\n```\n\n\n\n\n ['anarchism',\n 'originated',\n 'term',\n 'abuse',\n 'against',\n 'early',\n 'working',\n 'class',\n 'radicals',\n 'including',\n 'diggers',\n 'english',\n 'revolution',\n 'sans',\n 'culottes',\n 'french',\n 'revolution',\n 'whilst',\n 'term',\n 'still',\n 'pejorative',\n 'way',\n 'describe',\n 'any',\n 'act',\n 'violent',\n 'means',\n 'destroy',\n 'organization',\n 'society',\n 'taken',\n 'positive',\n 'label',\n 'self',\n 'defined',\n 'anarchists',\n 'word',\n 'anarchism',\n 'derived',\n 'greek',\n 'without',\n 'archons',\n 'ruler',\n 'chief',\n 'king',\n 'anarchism',\n 'political',\n 'philosophy',\n 'belief',\n 'rulers']\n\n\n\nWow, our dataset size was reduced almost 
in half!\n\nIn practice, we don't simply remove the most common words in our corpus but rather a manually curate list of stopwords. Lists for dozens of languages and applications can easily be found online.\n\n## Term Frequency/Inverse Document Frequency\n\nOne way of determining of the relative importance of a word is to see how often it appears across multiple documents. Words that are relevant to a specific topic are more likely to appear in documents about that topic and much less in documents about other topics. On the other hand, less meaningful words (like **the**) will be common across documents about any subject.\n\nTo measure the document frequency of a word we will need to have multiple documents. For the sake of simplicity, we will treat each sentence of our nursery rhyme as an individual document:\n\n\n```python\nprint(text)\n```\n\n Mary had a little lamb, little lamb,\n little lamb. Mary had a little lamb\n whose fleece was white as snow.\n And everywhere that Mary went\n Mary went, Mary went. 
Everywhere\n that Mary went,\n The lamb was sure to go\n\n\n\n```python\ncorpus_text = text.split('.')\ncorpus_words = []\n\nfor document in corpus_text:\n doc_words = extract_words(document)\n corpus_words.append(doc_words)\n```\n\nNow our corpus is represented as a list of word lists, where each list is just the word representation of the corresponding sentence:\n\n\n```python\npprint(corpus_words)\n```\n\n [['mary', 'had', 'a', 'little', 'lamb', 'little', 'lamb', 'little', 'lamb'],\n ['mary',\n 'had',\n 'a',\n 'little',\n 'lamb',\n 'whose',\n 'fleece',\n 'was',\n 'white',\n 'as',\n 'snow'],\n ['and', 'everywhere', 'that', 'mary', 'went', 'mary', 'went', 'mary', 'went'],\n ['everywhere',\n 'that',\n 'mary',\n 'went',\n 'the',\n 'lamb',\n 'was',\n 'sure',\n 'to',\n 'go']]\n\n\nLet us now calculate the number of documents in which each word appears:\n\n\n```python\ndocument_count = {}\n\nfor document in corpus_words:\n word_set = set(document)\n \n for word in word_set:\n document_count[word] = document_count.get(word, 0) + 1\n\npprint(document_count)\n```\n\n {'a': 2,\n 'and': 1,\n 'as': 1,\n 'everywhere': 2,\n 'fleece': 1,\n 'go': 1,\n 'had': 2,\n 'lamb': 3,\n 'little': 2,\n 'mary': 4,\n 'snow': 1,\n 'sure': 1,\n 'that': 2,\n 'the': 1,\n 'to': 1,\n 'was': 2,\n 'went': 2,\n 'white': 1,\n 'whose': 1}\n\n\nAs we can see, the word __Mary__ appears in all 4 of our documents, making it useless when it comes to distinguish between the different sentences. On the other hand, words like __white__ which appear in only one document are very discriminative. 
Using this approach we can define a new quantity, the ___Inverse Document Frequency__ that tells us how frequent a word is across the documents in a specific corpus:\n\n\n```python\ndef inv_doc_freq(corpus_words):\n number_docs = len(corpus_words)\n \n document_count = {}\n\n for document in corpus_words:\n word_set = set(document)\n\n for word in word_set:\n document_count[word] = document_count.get(word, 0) + 1\n \n IDF = {}\n \n for word in document_count:\n IDF[word] = np.log(number_docs/document_count[word])\n \n return IDF\n```\n\nWhere we followed the convention of using the logarithm of the inverse document frequency. This has the numerical advantage of avoiding to have to handle small fractional numbers. \n\nWe can easily see that the IDF gives a smaller weight to the most common words and a higher weight to the less frequent:\n\n\n```python\ncorpus_words\n```\n\n\n\n\n [['mary', 'had', 'a', 'little', 'lamb', 'little', 'lamb', 'little', 'lamb'],\n ['mary',\n 'had',\n 'a',\n 'little',\n 'lamb',\n 'whose',\n 'fleece',\n 'was',\n 'white',\n 'as',\n 'snow'],\n ['and', 'everywhere', 'that', 'mary', 'went', 'mary', 'went', 'mary', 'went'],\n ['everywhere',\n 'that',\n 'mary',\n 'went',\n 'the',\n 'lamb',\n 'was',\n 'sure',\n 'to',\n 'go']]\n\n\n\n\n```python\nIDF = inv_doc_freq(corpus_words)\n\npprint(IDF)\n```\n\n {'a': 0.6931471805599453,\n 'and': 1.3862943611198906,\n 'as': 1.3862943611198906,\n 'everywhere': 0.6931471805599453,\n 'fleece': 1.3862943611198906,\n 'go': 1.3862943611198906,\n 'had': 0.6931471805599453,\n 'lamb': 0.28768207245178085,\n 'little': 0.6931471805599453,\n 'mary': 0.0,\n 'snow': 1.3862943611198906,\n 'sure': 1.3862943611198906,\n 'that': 0.6931471805599453,\n 'the': 1.3862943611198906,\n 'to': 1.3862943611198906,\n 'was': 0.6931471805599453,\n 'went': 0.6931471805599453,\n 'white': 1.3862943611198906,\n 'whose': 1.3862943611198906}\n\n\nAs expected **Mary** has the smallest weight of all words 0, meaning that it is effectively removed 
from the dataset. You can consider this as a way of implicitly identify and remove stopwords. In case you do want to keep even the words that appear in every document, you can just add a 1. to the argument of the logarithm above:\n\n\\begin{equation}\n\\log\\left[1+\\frac{N_d}{N_d\\left(w\\right)}\\right]\n\\end{equation}\n\nWhen we multiply the term frequency of each word by it's inverse document frequency, we have a good way of quantifying how relevant a word is to understand the meaning of a specific document.\n\n\n```python\ndef tf_idf(corpus_words):\n IDF = inv_doc_freq(corpus_words)\n \n TFIDF = []\n \n for document in corpus_words:\n TFIDF.append(Counter(document))\n \n for document in TFIDF:\n for word in document:\n document[word] = document[word]*IDF[word]\n \n return TFIDF\n```\n\n\n```python\ntf_idf(corpus_words)\n```\n\n\n\n\n [Counter({'a': 0.6931471805599453,\n 'had': 0.6931471805599453,\n 'lamb': 0.8630462173553426,\n 'little': 2.0794415416798357,\n 'mary': 0.0}),\n Counter({'a': 0.6931471805599453,\n 'as': 1.3862943611198906,\n 'fleece': 1.3862943611198906,\n 'had': 0.6931471805599453,\n 'lamb': 0.28768207245178085,\n 'little': 0.6931471805599453,\n 'mary': 0.0,\n 'snow': 1.3862943611198906,\n 'was': 0.6931471805599453,\n 'white': 1.3862943611198906,\n 'whose': 1.3862943611198906}),\n Counter({'and': 1.3862943611198906,\n 'everywhere': 0.6931471805599453,\n 'mary': 0.0,\n 'that': 0.6931471805599453,\n 'went': 2.0794415416798357}),\n Counter({'everywhere': 0.6931471805599453,\n 'go': 1.3862943611198906,\n 'lamb': 0.28768207245178085,\n 'mary': 0.0,\n 'sure': 1.3862943611198906,\n 'that': 0.6931471805599453,\n 'the': 1.3862943611198906,\n 'to': 1.3862943611198906,\n 'was': 0.6931471805599453,\n 'went': 0.6931471805599453})]\n\n\n\nNow we finally have a vector representation of each of our documents that takes the informational contributions of each word into account. 
Each of these vectors provides us with a unique representation of each document, in the context (corpus) in which it occurs, making it posssible to define the similarity of two documents, etc.\n\n## Porter Stemmer\n\nThere is still, however, one issue with our approach to representing text. Since we treat each word as a unique token and completely independently from all others, for large documents we will end up with many variations of the same word such as verb conjugations, the corresponding adverbs and nouns, etc. \n\nOne way around this difficulty is to use stemming algorithm to reduce words to their root (or stem) version. The most famous Stemming algorithm is known as the **Porter Stemmer** and was introduced by Martin Porter in 1980 [Program 14, 130 (1980)](https://dl.acm.org/citation.cfm?id=275705)\n\nThe algorithm starts by defining consonants (C) and vowels (V):\n\n\n```python\nV = set('aeiouy')\nC = set('bcdfghjklmnpqrstvwxz')\n```\n\nThe stem of a word is what is left of that word after a speficic ending has been removed. A function to do this is easy to implement:\n\n\n```python\ndef get_stem(suffix, word):\n \"\"\"\n Extract the stem of a word\n \"\"\"\n \n if word.lower().endswith(suffix.lower()): # Case insensitive comparison\n return word[:-len(suffix)]\n\n return None\n```\n\nIt also defines words (or stems) to be sequences of vowels and consonants of the form:\n\n\\begin{equation}\n[C](VC)^m[V]\n\\end{equation}\n\nwhere $m$ is called the **measure** of the word and [] represent optional sections. 
\n\n\n```python\ndef measure(orig_word):\n    \"\"\"\n    Calculate the \"measure\" m of a word or stem, according to the Porter Stemmer algorithm\n    \"\"\"\n\n    word = orig_word.lower()\n\n    optV = False\n    optC = False\n    VC = False\n    m = 0\n\n    pos = 0\n\n    # We can think of this implementation as a simple finite state machine:\n    # it looks for sequences of vowels or consonants depending on the state\n    # it is in, while keeping track of how many VC sequences it\n    # has encountered.\n    # The presence of the optional V and C portions is recorded in the\n    # optV and optC booleans.\n\n    # We're at the initial state:\n    # gobble up all the optional consonants at the beginning of the word\n    while pos < len(word) and word[pos] in C:\n        pos += 1\n        optC = True\n\n    while pos < len(word):\n        # Now we know that the next state must be a vowel\n        while pos < len(word) and word[pos] in V:\n            pos += 1\n            optV = True\n\n        # Followed by a consonant\n        while pos < len(word) and word[pos] in C:\n            pos += 1\n            optV = False\n\n        # If a consonant was found, then we matched VC,\n        # so we should increment m by one. Otherwise,\n        # optV remained true and we simply had a dangling\n        # V sequence.\n        if not optV:\n            m += 1\n\n    return m\n```\n\nLet's consider a simple example. The word __crepusculars__ should have measure 4:\n\n[cr] (ep) (usc) (ul) (ars)\n\nand indeed it does.\n\n\n```python\nword = \"crepusculars\"\nprint(measure(word))\n```\n\n    4\n\n\nThe Porter algorithm sequentially applies a series of transformation rules over a series of 5 steps (step 1 is divided into 3 substeps and step 5 into 2). The rules are only applied if a certain condition is true. 
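As a quick sanity check, the measure values produced by the function above can be compared against a few of the worked examples from Porter's 1980 paper (the function is re-stated here in condensed form so the cell is self-contained; the expected values come from the paper, not from this notebook):

```python
# Condensed re-statement of the measure() logic above, used to
# sanity-check a few example words from Porter's 1980 paper.
V = set('aeiouy')
C = set('bcdfghjklmnpqrstvwxz')

def measure(orig_word):
    word = orig_word.lower()
    pos = 0
    m = 0
    # skip the optional leading consonants
    while pos < len(word) and word[pos] in C:
        pos += 1
    while pos < len(word):
        optV = False
        # a run of vowels...
        while pos < len(word) and word[pos] in V:
            pos += 1
            optV = True
        # ...closed by a run of consonants completes one (VC) group
        while pos < len(word) and word[pos] in C:
            pos += 1
            optV = False
        if not optV:
            m += 1
    return m

# m=0: TREE; m=1: TROUBLE, OATS; m=2: PRIVATE (examples from Porter, 1980)
for word, expected in [('tree', 0), ('trouble', 1), ('oats', 1), ('private', 2)]:
    print(word, measure(word))
    assert measure(word) == expected
```

Note that this notebook's simplified vowel set always treats `y` as a vowel, whereas Porter's original definition treats `y` as a vowel only in certain contexts; the examples above avoid that edge case.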
\n\nIn addition to possibly specifying a requirement on the measure of a word, conditions can make use of different boolean functions as well: \n\n\n```python\ndef ends_with(char, stem):\n    \"\"\"\n    Checks the ending of the word\n    \"\"\"\n    return stem[-1] == char\n\ndef double_consonant(stem):\n    \"\"\"\n    Checks the ending of a word for a double consonant\n    \"\"\"\n    if len(stem) < 2:\n        return False\n\n    if stem[-1] in C and stem[-2] == stem[-1]:\n        return True\n\n    return False\n\ndef contains_vowel(stem):\n    \"\"\"\n    Checks if a word contains a vowel or not\n    \"\"\"\n    return len(set(stem) & V) > 0\n```\n\nFinally, we define a function to apply a specific rule to a word or stem:\n\n\n```python\ndef apply_rule(condition, suffix, replacement, word):\n    \"\"\"\n    Apply a Porter Stemmer rule:\n    if \"condition\" is True, replace \"suffix\" by \"replacement\" in \"word\"\n    \"\"\"\n\n    stem = get_stem(suffix, word)\n\n    if stem is not None and condition is True:\n        # Remove the suffix\n        word = stem\n\n        # Add the replacement suffix, if any\n        if replacement is not None:\n            word += replacement\n\n    return word\n```\n\nNow we can see how rules can be applied. 
For example, this rule, from step 1b, is successfully applied to __plastered__:\n\n\n```python\nword = \"plastered\"\nsuffix = \"ed\"\nstem = get_stem(suffix, word)\napply_rule(contains_vowel(stem), suffix, None, word)\n```\n\n\n\n\n    'plaster'\n\n\n\n\n```python\nstem\n```\n\n\n\n\n    'plaster'\n\n\n\n\n```python\ncontains_vowel(stem)\n```\n\n\n\n\n    True\n\n\n\nTrying to apply the same rule to **bled**, on the other hand, fails to pass the condition, resulting in no change.\n\n\n```python\nword = \"bled\"\nsuffix = \"ed\"\nstem = get_stem(suffix, word)\napply_rule(contains_vowel(stem), suffix, None, word)\n```\n\n\n\n\n    'bled'\n\n\n\n\n```python\nstem\n```\n\n\n\n\n    'bl'\n\n\n\n\n```python\ncontains_vowel(stem)\n```\n\n\n\n\n    False\n\n\n\nFor a more complex example, we have, in Step 4:\n\n\n```python\nword = \"adoption\"\nsuffix = \"ion\"\nstem = get_stem(suffix, word)\napply_rule(measure(stem) > 1 and (ends_with(\"s\", stem) or ends_with(\"t\", stem)), suffix, None, word)\n```\n\n\n\n\n    'adopt'\n\n\n\n\n```python\nends_with(\"t\", stem)\n```\n\n\n\n\n    True\n\n\n\n\n```python\nmeasure(stem)\n```\n\n\n\n\n    2\n\n\n\nIn total, the Porter Stemmer algorithm (for the English language) applies several dozen rules (see https://tartarus.org/martin/PorterStemmer/def.txt for a complete list). Implementing all of them is both tedious and error-prone, so we abstain from providing a full implementation of the algorithm here. High-quality implementations can be found in all major NLP libraries such as [NLTK](http://www.nltk.org/howto/stem.html).\n\nThe difficulties of defining matching rules for arbitrary text cannot be fully resolved without the use of Regular Expressions (typically implemented as Finite State Machines, like our __measure__ implementation above), a more advanced topic that is beyond the scope of this course.\n\n
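To see how a few of these rules chain together in practice, here is a minimal, hypothetical mini-stemmer. It is NOT the full Porter algorithm: it applies only the step-1b-style "(*v*) ED -> null" and "(*v*) ING -> null" rules, with the helpers from above re-stated so the cell runs on its own.

```python
# Hypothetical mini-stemmer (not the full Porter algorithm): it applies
# only the step-1b-style ed/ing removal rules.
V = set('aeiouy')

def get_stem(suffix, word):
    if word.lower().endswith(suffix.lower()):
        return word[:-len(suffix)]
    return None

def contains_vowel(stem):
    return len(set(stem) & V) > 0

def apply_rule(condition, suffix, replacement, word):
    stem = get_stem(suffix, word)
    if stem is not None and condition is True:
        word = stem
        if replacement is not None:
            word += replacement
    return word

def stem_ed_ing(word):
    for suffix in ('ed', 'ing'):
        stem = get_stem(suffix, word)
        if stem is not None:
            # the rule fires only if the stem still contains a vowel
            return apply_rule(contains_vowel(stem), suffix, None, word)
    return word

print(stem_ed_ing('plastered'))  # -> plaster
print(stem_ed_ing('bled'))       # -> bled  (condition fails: 'bl' has no vowel)
print(stem_ed_ing('motoring'))   # -> motor
```

A full implementation would then apply the remaining steps in sequence, each operating on the output of the previous one.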
\n \n
\n", "meta": {"hexsha": "b154e73c16bd590dc044e723c01bea98b968f208", "size": 247301, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "1. Text Representation.ipynb", "max_stars_repo_name": "sajag1986/NLP", "max_stars_repo_head_hexsha": "8c3b24b7cfb0371a17c86c11dece1a06155ce164", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "1. Text Representation.ipynb", "max_issues_repo_name": "sajag1986/NLP", "max_issues_repo_head_hexsha": "8c3b24b7cfb0371a17c86c11dece1a06155ce164", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "1. Text Representation.ipynb", "max_forks_repo_name": "sajag1986/NLP", "max_forks_repo_head_hexsha": "8c3b24b7cfb0371a17c86c11dece1a06155ce164", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 128.268153527, "max_line_length": 197868, "alphanum_fraction": 0.8747154278, "converted": true, "num_tokens": 7837, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.43014736319616964, "lm_q2_score": 0.3073580232098525, "lm_q1q2_score": 0.13220924324090516}} {"text": "# Homework 1\n(c) 2017 Justin Bois. This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/). All code contained herein is licensed under an [MIT license](https://opensource.org/licenses/MIT).\n\n\n```python\nimport numpy as np\nimport pandas as pd\n\nimport bokeh.io\nimport bokeh.plotting\n\nfrom bokeh.models import Legend\nfrom bokeh.plotting import figure, show, output_file\n\nbokeh.io.output_notebook()\n```\n\n\n\n
\n \n Loading BokehJS ...\n
\n\n\n\n\n## Problem 1.3 (Microtubule catastrophes)\nFor this exercise, we will analyze the paper by Gardner, Zanic, et al., Depolymerizing kinesins Kip3 and MCAK shape cellular microtubule architecture by differential control of catastrophe, *Cell*, 147, 1092-1103, 2011\n
\n
\nIn this paper, the authors investigated the dynamics of microtubule catastrophe, the switching of a microtubule from a growing to a shrinking state. In particular, they were interested in the time between the start of growth of a microtubule and the catastrophe event. They monitored microtubules by using tubulin (the monomer that comprises a microtubule) that was labeled with a fluorescent marker. As a control to make sure that fluorescent labels and exposure to laser light did not affect the microtubule dynamics, they performed a similar experiment using differential interference contrast (DIC) microscopy. They measured the time until catastrophe with labeled and unlabeled tubulin.\n
\n
\nWe will use their data to generate the plot which is similar to the Fig. 2a of their paper.\n
\n
\nFirst, let's load the data into a Pandas `DataFrame`\n\n\n```python\n# Read the data from the data file into a DataFrame\ndf = pd.read_csv('data/gardner_et_al_2011_time_to_catastrophe_dic.csv', comment='#')\n\n# Let's take a look\ndf\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
time to catastrophe with labeled tubulin (s)time to catastrophe with unlabeled tubulin (s)
0470355.0
11415425.0
2130540.0
3280265.0
45501815.0
565160.0
6330370.0
7325460.0
8340190.0
995130.0
1055460.0
11360140.0
12220295.0
13225445.0
14320210.0
1560360.0
16210575.0
17155320.0
18875180.0
191000180.0
20475615.0
21295145.0
22245870.0
23415420.0
24305460.0
25320265.0
26370150.0
27505770.0
28118075.0
2944040.0
.........
181350NaN
182155NaN
1831095NaN
184465NaN
185215NaN
186365NaN
187605NaN
188520NaN
189280NaN
190610NaN
191765NaN
192290NaN
193240NaN
194180NaN
195225NaN
1961030NaN
197155NaN
198215NaN
199705NaN
200285NaN
201135NaN
202465NaN
20380NaN
204370NaN
205480NaN
206195NaN
207705NaN
208300NaN
209605NaN
210600NaN
\n

211 rows \u00d7 2 columns

\n
\n\n\n\nThe data above are not tidy. Let's remember the three rules for the tidy data:\n* Each variable forms a column.\n* Each observation forms a separate row.\n* Each type of observational unit forms a separate table.\n\nTo tidy these data, we should drop `NaN`s. We could not use these values for analysis anyways, as we need to match control measurements (unlabeled) with the measurements of interest (labeled)\n\n\n```python\n# Drop NaN values\ndf = df.dropna()\n\n# Let's look at it\ndf.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
time to catastrophe with labeled tubulin (s)time to catastrophe with unlabeled tubulin (s)
0470355.0
11415425.0
2130540.0
3280265.0
45501815.0
\n
\n\n\n\nApparently, the `NaN` datapoints have been removed.\n
\n
\nIn the Fig. 2a of their paper, Gardner, Zanic, et al. have the empirical cumulative distribution function (ECDF). We will try to reconstruct this plot. First, we need to write a function `ecdf_vals(data)` which takes a one-dimensional Numpy array (or Pandas `Series`; same construction will work) of data anad returns the `x` and `y` values for plotting the ECDF. The definition of ECDF is\n\\begin{align}\nECDF(x) = fraction \\ of \\ data \\ points \\leq x\n\\end{align}\n\n\n```python\ndef ecdf_vals(data):\n \"\"\"Function returns the x and y values for the plotting of ECDF.\n Input: data (Numpy array or Pandas Series)\n Output: a pair of Numpy arrays (xaxis data and yaxis data).\"\"\"\n x = np.sort(data)\n y = np.arange(1, len(data)+1)/len(data)\n \n return x, y\n```\n\n\n```python\n# First let's rename the columns to have shorter names\nrename_dict = {'time to catastrophe with labeled tubulin (s)' : 'tc_lab',\n 'time to catastrophe with unlabeled tubulin (s)' : 'tc_unlab'}\n\ndf = df.rename(columns=rename_dict)\n\n# Let's look at it\ndf.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
tc_labtc_unlab
0470355.0
11415425.0
2130540.0
3280265.0
45501815.0
\n
\n\n\n\n\n```python\n# Select labeled tubulin data\nd_lab = df['tc_lab']\n\n# Select unlabeled tubulin data\nd_unlab = df['tc_unlab']\n\n# Apply the ECDF functions to both labeled and unlabeled data\nxlab, ylab = ecdf_vals(d_lab)\nxunlab, yunlab = ecdf_vals(d_unlab)\n\n# Now use bokeh to the ECDFs on the single graph\n# Set up the plot\nf = bokeh.plotting.figure(plot_height=300,\n plot_width=500,\n x_axis_label='time to catastrophe (s)',\n y_axis_label='ECDF')\n\n# Add a scatter plot\nf1 = f.circle(xlab, ylab, color='red')\nf2 = f.circle(xunlab, yunlab, color='green')\n\n# Make a legend object\nlegend = Legend(items=[\n ('labeled tubulin', [f1]),\n ('unlabeled tubulin', [f2])\n ])\n# Add legend\nf.add_layout(legend)\nf.legend.location = 'bottom_right'\n\n# Add axis labels\nf.xaxis.axis_label = 'time to catastrophe (s)'\nf.yaxis.axis_label = 'ECDF'\n\n\n\nbokeh.io.show(f)\n```\n\n\n\n
\n
\n
\n\n\n\n\nAlthough the ECDFs are often plotted as the scatter graphs, it is not a typical convention. Given the definition of the ECDF, it is defined for *all* values of *x* along the real *x-axis*. So, formally, the ECDF should be plotted as a line.\n
\n
\nNow, we will write a function `plot_ecdf_formal(data)` that takes a one-dimensional Numpy array of dat and returns a Bokeh figure with the ECDF plotted as a line.\n\n\n```python\ndef plot_ecdf_formal(data, data_legend='data', plot_height=300, plot_width=500, \n x_axis_label='data', y_axis_label='ECDF'):\n \"\"\"Returns the figure with the line plot of the ECDF data.\"\"\"\n # Compute the ECDF\n x, y = ecdf_vals(data)\n \n # Make the Bokeh figure\n f = bokeh.plotting.figure(plot_height=plot_height,\n plot_width=plot_width,\n x_axis_label=x_axis_label,\n y_axis_label=y_axis_label)\n \n # Make a line plot\n f1 = f.line(x,y, color='red')\n\n # Make a legend object\n legend = Legend(items=[\n (data_legend, [f1])\n ])\n # Add legend\n f.add_layout(legend)\n f.legend.location = 'bottom_right'\n\n # Add axis labels\n f.xaxis.axis_label = x_axis_label\n f.yaxis.axis_label = y_axis_label\n\n bokeh.io.show(f)\n \n return None\n```\n\nLet's try to reproduce our ECDF plot for the data for labeled tubulin.\n\n\n```python\nplot_ecdf_formal(d_lab, data_legend='labeled tubulin', \n x_axis_label='time to catastrophe (s)')\n```\n\n\n\n
\n
\n
\n\n\n\n\nThe plot looks good!\n\n## Bayes's Theorem and statistical inference\nLet's do a quick recap of the *Bayes's theorem* and apply it to the data before we will talk about the marginalization. We start from the product rule, where we consider three events, $A$, $B$, and $C$ and their probabilities of occuring. Product rule states that\n\\begin{align}\nP(A, B \\mid C) = P(A \\mid B,C) \\, P(B \\mid C) \\quad \\textbf{(product rule)}\n\\end{align}\nThis says that the probability of $A$ *and* $B$ occuring, given that $C$ happened is equal to probability of $A$ occuring given that $B$ and $C$ happened, multiplied by the probability of $B$ occuring, given that $C$ happened.\n
\n
\nNow we can add some meaning to the events $A$, $B$, and $C$. How about this:\n* $ A = H_{i} $ is the hypothesis (or parameter value) that we are testing.\n* $ B = D $ is the measured data.\n* $ C = I $ is all the other information we know. \n

\nNow we can rewrite the product rule as\n\\begin{align}\nP(H_{i}, D \\mid I) = P(H_{i} \\mid D,I) \\, P(D \\mid I)\n\\end{align}\nSince the *and* operation is commutative, $P(H_{i}, D \\mid I) = P(D, H_{i} \\mid I)$, which means that we can express these equations as\n\\begin{align}\nP(H_{i} \\mid D,I) \\, P(D \\mid I) = P(D \\mid H_{i},I) \\, P(H_{i} \\mid I)\n\\end{align}\nAfter rearranging, we get\n\\begin{align}\nP(H_{i} \\mid D,I) = \\frac{P(D \\mid H_{i},I) \\, P(H_{i} \\mid I)}{P(D \\mid I)} \n\\quad \\textbf{(Bayes's theorem)}\n\\end{align}\nThis is exactly what we wanted - all the quantities on the right-hand side have meaning, and together they describe the probability that our hypothesis is true, given the data and the other information we have. Let's assign names to the terms in the equation and rewrite it as follows:\n\\begin{align}\n\\text{posterior} = \\frac{\\text{likelihood} \\times \\text{prior}}{\\text{evidence}}\n\\end{align}\n\n*Note: Bayes's theorem is a statement about probability, which is valid for both the Bayesian and the frequentist interpretation of probability.*\n

\n
\nLet's go through each term:
\n* **The prior probability.** This represents the plausibility of the hypothesis $H_{i}$ given everything we know *before* we did the experiment to get the data.
\n
\n* **The likelihood.** The likelihood describes how likely it is to obtain the data, *given that the hypothesis $H_{i}$ is true.* It also contains information about our expectations from data, given our measurement method. For instance, instrument noise, its model, and the other external circumstances.\n
\n
\n* **The evidence.** It can be computed from the likelihood and prior and is also called the *marginal likelihood*.\n
\n
\n* **The posterior probability.** And this is what we want - how plausible is the hypothesis, given that we have measured some new data? It is calculated directly from the likelihood and prior.\n\n## Marginalization\nWe said that the evidence can be computed from the likelihood and the prior. To see this, let's apply the sum rule to the posterior probability:\n
\n\\begin{align}\nP(H_{j} \\mid D,I) + P(\\bar{H_{j}} \\mid D, I) = 1 \\\\[1em]\nP(H_{j} \\mid D,I) + \\sum_{i \\neq j}{P(H_{i} \\mid D,I)} = 1 \\\\\n\\sum_{i}{P(H_{i} \\mid D,I)} = 1\n\\end{align}\nfor some hypothesis $H_{j}$. We can now apply the Bayes's Theorem as \n
\n\\begin{align}\n\\sum_{i}{P(H_{i} \\mid D,I)} = \\sum_{i}{\\frac{P(D \\mid H_{i}, I) \\, P(H_{i} \\mid I)}{P(D \\mid I)}} = 1 \\\\[1em]\n\\sum_{i}{P(H_{i} \\mid D,I)} = \\frac{1}{{P(D \\mid I)}} \\sum_{i}{P(D \\mid H_{i}, I) \\, P(H_{i} \\mid I)} = 1 \\\\[1em]\n\\end{align}\nTherefore, we can compute the *evidence* by summing over the products of likelihoods and priors:\n
\n\\begin{align}\nP(D \\mid I) = \\sum_{i}{P(D \\mid H_{i}, I) \\, P(H_{i} \\mid I)} \\\\[1em]\n\\end{align}\nThis process of elimination of a variable (in this case the hypotheses) from a probability by summing is called *marginalization*.
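As a quick numerical illustration (with made-up numbers, not taken from any dataset in this notebook), marginalization in a discrete, two-hypothesis case amounts to a single sum:

```python
import numpy as np

# Hypothetical numbers for two competing hypotheses H_0 and H_1
prior = np.array([0.3, 0.7])        # P(H_i | I)
likelihood = np.array([0.8, 0.1])   # P(D | H_i, I)

# evidence P(D | I): marginalize the hypotheses out
evidence = np.sum(likelihood * prior)

# posterior P(H_i | D, I) via Bayes's theorem
posterior = likelihood * prior / evidence

print(evidence)         # ~0.31
print(posterior)        # ~[0.774, 0.226]
print(posterior.sum())  # sums to 1, up to floating point
```

Note that the posterior is guaranteed to be normalized precisely because the evidence is the sum of the same likelihood-times-prior products that appear in the numerator.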
\nHow to grasp the marginalization? I think that we can imagine that we are trying to look through the plausible \"Hypothesis space\" on all the possible hypotheses, taking into account the measured data and the prior information to assess the \"influence\" and the implications of each hypothesis for the observed experimental data, given the prior information.\n\n## Probability distribution of a marginalized parameter\nLet's say we have a statistical model with two continuous parameters, that is, $\\theta = (\\theta_{1}, \\theta{2})$. A statement of Bayes's theorem in this case is\n\\begin{align}\nP(\\theta \\mid D,I) = \\frac{P(D \\mid \\theta,I) \\, P(\\theta \\mid I)}{P(D \\mid I)}\n\\end{align}\nThis can be explicitly written as \n\\begin{align}\nP(\\theta_{1}, \\theta_{2} \\mid D,I) = \\frac{P(D \\mid \\theta_{1}, \\theta_{2},I) \\, P(\\theta_{1}, \\theta_{2} \\mid I)}{P(D \\mid I)}\n\\end{align}\nLet's try to marginalize the statistical model by $\\theta_{2}$ to obtain $P(\\theta_{1} \\mid D,I)$:\nConsidering that we have continuous hypothesis space, we can write\n\\begin{align}\nP(\\theta_{1} \\mid D,I) = \\int{ \\frac{P(D \\mid \\theta_{1}, \\theta_{2},I) \\, P(\\theta_{1}, \\theta_{2} \\mid I)}{P(D \\mid I)} d\\theta_{2}}\n\\end{align}\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "e6e5d237d1a333aebf7ee46c51f55bc4cb0c9574", "size": 81208, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Homework 1.ipynb", "max_stars_repo_name": "MiroGasparek/DataAnalysis_intro", "max_stars_repo_head_hexsha": "585757815d28661a5e3ccea8f03f9b7c12bc0888", "max_stars_repo_licenses": ["CC-BY-4.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Homework 1.ipynb", "max_issues_repo_name": "MiroGasparek/DataAnalysis_intro", "max_issues_repo_head_hexsha": "585757815d28661a5e3ccea8f03f9b7c12bc0888", "max_issues_repo_licenses": ["CC-BY-4.0", "MIT"], 
"max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework 1.ipynb", "max_forks_repo_name": "MiroGasparek/DataAnalysis_intro", "max_forks_repo_head_hexsha": "585757815d28661a5e3ccea8f03f9b7c12bc0888", "max_forks_repo_licenses": ["CC-BY-4.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.2326768782, "max_line_length": 13450, "alphanum_fraction": 0.4911461925, "converted": true, "num_tokens": 5984, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4804786780479071, "lm_q2_score": 0.2751297297667525, "lm_q1q2_score": 0.13219396885000714}} {"text": "# **Save this file as studentid1_studentid2_lab#.ipynb**\n(Your student-id is the number shown on your student card.)\n\nE.g. if you work with 3 people, the notebook should be named:\n12301230_3434343_1238938934_lab1.ipynb.\n\n**This will be parsed by a regexp, so please double check your filename.**\n\nBefore you turn this problem in, please make sure everything runs correctly. First, **restart the kernel** (in the menubar, select Kernel$\\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\\rightarrow$Run All).\n\n**Make sure you fill in any place that says `YOUR CODE HERE` or \"YOUR ANSWER HERE\", as well as your names and email adresses below.**\n\n\n\n\n```python\nNAME = \"Pascal Esser\"\nNAME2 = \"Jana Leible\"\nNAME3 = \"Tom de Bruijn\"\nEMAIL = \"pascal.esser@student.uva.nl\"\nEMAIL2 = \"jana.leible@web.de\"\nEMAIL3 = \"tommdebruijn@gmail.com\"\n```\n\n---\n\n# Lab 2: Classification\n\n### Machine Learning 1, September 2017\n\nNotes on implementation:\n\n* You should write your code and answers in this IPython Notebook: http://ipython.org/notebook.html. 
If you have problems, please contact your teaching assistant.\n* Please write your answers right below the questions.\n* Among the first lines of your notebook should be \"%pylab inline\". This imports all required modules, and your plots will appear inline.\n* Use the provided test cells to check if your answers are correct\n* **Make sure your output and plots are correct before handing in your assignment with Kernel -> Restart & Run All**\n\n$\\newcommand{\\bx}{\\mathbf{x}}$\n$\\newcommand{\\bw}{\\mathbf{w}}$\n$\\newcommand{\\bt}{\\mathbf{t}}$\n$\\newcommand{\\by}{\\mathbf{y}}$\n$\\newcommand{\\bm}{\\mathbf{m}}$\n$\\newcommand{\\bb}{\\mathbf{b}}$\n$\\newcommand{\\bS}{\\mathbf{S}}$\n$\\newcommand{\\ba}{\\mathbf{a}}$\n$\\newcommand{\\bz}{\\mathbf{z}}$\n$\\newcommand{\\bv}{\\mathbf{v}}$\n$\\newcommand{\\bq}{\\mathbf{q}}$\n$\\newcommand{\\bp}{\\mathbf{p}}$\n$\\newcommand{\\bh}{\\mathbf{h}}$\n$\\newcommand{\\bI}{\\mathbf{I}}$\n$\\newcommand{\\bX}{\\mathbf{X}}$\n$\\newcommand{\\bT}{\\mathbf{T}}$\n$\\newcommand{\\bPhi}{\\mathbf{\\Phi}}$\n$\\newcommand{\\bW}{\\mathbf{W}}$\n$\\newcommand{\\bV}{\\mathbf{V}}$\n\n\n```python\n%pylab inline\nplt.rcParams[\"figure.figsize\"] = [9,5]\n\nimport numpy as np\n\nimport matplotlib.pyplot as plt\n```\n\n Populating the interactive namespace from numpy and matplotlib\n\n\n# Part 1. Multiclass logistic regression\n\nScenario: you have a friend with one big problem: she's completely blind. You decided to help her: she has a special smartphone for blind people, and you are going to develop a mobile phone app that can do _machine vision_ using the mobile camera: converting a picture (from the camera) to the meaning of the image. You decide to start with an app that can read handwritten digits, i.e. convert an image of handwritten digits to text (e.g. 
it would enable her to read precious handwritten phone numbers).\n\nA key building block for such an app would be a function `predict_digit(x)` that returns the digit class of an image patch $\\bx$. Since hand-coding this function is highly non-trivial, you decide to solve this problem using machine learning, such that the internal parameters of this function are automatically learned using machine learning techniques.\n\nThe dataset you're going to use for this is the MNIST handwritten digits dataset (`http://yann.lecun.com/exdb/mnist/`). You can download the data with scikit learn, and load it as follows:\n\n\n```python\nfrom sklearn.datasets import fetch_mldata\n# Fetch the data\nmnist = fetch_mldata('MNIST original')\ndata, target = mnist.data, mnist.target.astype('int')\n# Shuffle\nindices = np.arange(len(data))\nnp.random.seed(123)\nnp.random.shuffle(indices)\ndata, target = data[indices].astype('float32'), target[indices]\n\n# Normalize the data between 0.0 and 1.0:\ndata /= 255. \n\n# Split\nx_train, x_valid, x_test = data[:50000], data[50000:60000], data[60000: 70000]\nt_train, t_valid, t_test = target[:50000], target[50000:60000], target[60000: 70000]\n```\n\nMNIST consists of small 28 by 28 pixel images of written digits (0-9). We split the dataset into a training, validation and testing arrays. The variables `x_train`, `x_valid` and `x_test` are $N \\times M$ matrices, where $N$ is the number of datapoints in the respective set, and $M = 28^2 = 784$ is the dimensionality of the data. 
The second set of variables `t_train`, `t_valid` and `t_test` contain the corresponding $N$-dimensional vector of integers, containing the true class labels.\n\nHere's a visualisation of the first 8 digits of the trainingset:\n\n\n```python\ndef plot_digits(data, num_cols, targets=None, shape=(28,28)):\n num_digits = data.shape[0]\n num_rows = int(num_digits/num_cols)\n for i in range(num_digits):\n plt.subplot(num_rows, num_cols, i+1)\n plt.imshow(data[i].reshape(shape), interpolation='none', cmap='Greys')\n if targets is not None:\n plt.title(int(targets[i]))\n plt.colorbar()\n plt.axis('off')\n plt.tight_layout()\n plt.show()\n \nplot_digits(x_train[0:40000:5000], num_cols=4, targets=t_train[0:40000:5000])\n```\n\nIn _multiclass_ logistic regression, the conditional probability of class label $j$ given the image $\\bx$ for some datapoint is given by:\n\n$ \\log p(t = j \\;|\\; \\bx, \\bb, \\bW) = \\log q_j - \\log Z$\n\nwhere $\\log q_j = \\bw_j^T \\bx + b_j$ (the log of the unnormalized probability of the class $j$), and $Z = \\sum_k q_k$ is the normalizing factor. $\\bw_j$ is the $j$-th column of $\\bW$ (a matrix of size $784 \\times 10$) corresponding to the class label, $b_j$ is the $j$-th element of $\\bb$.\n\nGiven an input image, the multiclass logistic regression model first computes the intermediate vector $\\log \\bq$ (of size $10 \\times 1$), using $\\log q_j = \\bw_j^T \\bx + b_j$, containing the unnormalized log-probabilities per class. \n\nThe unnormalized probabilities are then normalized by $Z$ such that $\\sum_j p_j = \\sum_j \\exp(\\log p_j) = 1$. This is done by $\\log p_j = \\log q_j - \\log Z$ where $Z = \\sum_i \\exp(\\log q_i)$. 
This is known as the _softmax_ transformation, and is also used as the last layer of many classification neural network models, to ensure that the output of the network is a normalized distribution, regardless of the values of the second-to-last layer ($\\log \\bq$).\n\n**Warning**: when computing $\\log Z$, you are likely to encounter numerical problems. Save yourself countless hours of debugging and learn the [log-sum-exp trick](https://hips.seas.harvard.edu/blog/2013/01/09/computing-log-sum-exp/ \"Title\").\n\nThe network's output $\\log \\bp$ of size $10 \\times 1$ then contains the conditional log-probabilities $\\log p(t = j \\;|\\; \\bx, \\bb, \\bW)$ for each digit class $j$. In summary, the computations are done in this order:\n\n$\\bx \\rightarrow \\log \\bq \\rightarrow Z \\rightarrow \\log \\bp$\n\nGiven some dataset with $N$ independent, identically distributed datapoints, the log-likelihood is given by:\n\n$ \\mathcal{L}(\\bb, \\bW) = \\sum_{n=1}^N \\mathcal{L}^{(n)}$\n\nwhere we use $\\mathcal{L}^{(n)}$ to denote the partial log-likelihood evaluated over a single datapoint. It is important to see that the log-probability of the class label $t^{(n)}$ given the image is given by the $t^{(n)}$-th element of the network's output $\\log \\bp$, denoted by $\\log p_{t^{(n)}}$:\n\n$\\mathcal{L}^{(n)} = \\log p(t = t^{(n)} \\;|\\; \\bx = \\bx^{(n)}, \\bb, \\bW) = \\log p_{t^{(n)}} = \\log q_{t^{(n)}} - \\log Z^{(n)}$\n\nwhere $\\bx^{(n)}$ and $t^{(n)}$ are the input (image) and class label (integer) of the $n$-th datapoint, and $Z^{(n)}$ is the normalizing constant for the distribution over $t^{(n)}$.\n\n\n## 1.1 Gradient-based stochastic optimization\n### 1.1.1 Derive gradient equations (20 points)\n\nDerive the equations for computing the (first) partial derivatives of the log-likelihood w.r.t. 
all the parameters, evaluated at a _single_ datapoint $n$.\n\nYou should start deriving the equations for $\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j}$ for each $j$. For clarity, we'll use the shorthand $\\delta^q_j = \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j}$.\n\nFor $j = t^{(n)}$:\n$\n\\delta^q_j\n= \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log p_j}\n\\frac{\\partial \\log p_j}{\\partial \\log q_j}\n+ \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log Z}\n\\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j} \n= 1 \\cdot 1 - \\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j}\n= 1 - \\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j}\n$\n\nFor $j \\neq t^{(n)}$:\n$\n\\delta^q_j\n= \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log Z}\n\\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j} \n= - \\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j}\n$\n\nComplete the above derivations for $\\delta^q_j$ by further developing $\\frac{\\partial \\log Z}{\\partial Z}$ and $\\frac{\\partial Z}{\\partial \\log q_j}$. Both are quite simple. 
For these it doesn't matter whether $j = t^{(n)}$ or not.\n\n\n\n$\n\\frac{\\partial\\log Z}{\\partial Z} = \\frac{1}{Z}\n$\n$\n\\frac{\\partial Z}{\\partial \\log q_j} = \\frac{\\partial\\sum_k q_k}{\\partial \\log q_j} = \\frac{\\partial q_j}{\\partial \\log q_j} = \\frac{\\partial\\exp(\\log q_j)}{\\partial \\log q_j} = \\exp(\\log q_j)\n$\n\nFor $j = t^{(n)}$:\n\\begin{align}\n\\delta^q_j\n&= 1-\\frac{1}{Z}\\exp(\\log q_j)\n\\end{align}\nFor $j \\neq t^{(n)}$:\n\\begin{align}\n\\delta^q_j\n&= -\\frac{1}{Z}\\exp(\\log q_j)\n\\end{align}\n\n\n\nGiven your equations for computing the gradients $\\delta^q_j$, it should be quite straightforward to derive the equations for the gradients of the parameters of the model, $\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial W_{ij}}$ and $\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial b_j}$. The gradients for the biases $\\bb$ are given by:\n\n$\n\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial b_j}\n= \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j}\n\\frac{\\partial \\log q_j}{\\partial b_j}\n= \\delta^q_j\n\\cdot 1\n= \\delta^q_j\n$\n\nThe equation above gives the derivative of $\\mathcal{L}^{(n)}$ w.r.t. a single element of $\\bb$, so the vector $\\nabla_\\bb \\mathcal{L}^{(n)}$ with all derivatives of $\\mathcal{L}^{(n)}$ w.r.t. the bias parameters $\\bb$ is: \n\n$\n\\nabla_\\bb \\mathcal{L}^{(n)} = \\mathbf{\\delta}^q\n$\n\nwhere $\\mathbf{\\delta}^q$ denotes the vector of size $10 \\times 1$ with elements $\\mathbf{\\delta}_j^q$.\n\nThe (not fully developed) equation for computing the derivative of $\\mathcal{L}^{(n)}$ w.r.t. a single element $W_{ij}$ of $\\bW$ is:\n\n$\n\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial W_{ij}} =\n\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j}\n\\frac{\\partial \\log q_j}{\\partial W_{ij}}\n= \\mathbf{\\delta}_j^q\n\\frac{\\partial \\log q_j}{\\partial W_{ij}}\n$\n\nWhat is $\\frac{\\partial \\log q_j}{\\partial W_{ij}}$? 
Complete the equation above.\n\nIf you want, you can give the resulting equation in vector format ($\\nabla_{\\bw_j} \\mathcal{L}^{(n)} = ...$), like we did for $\\nabla_\\bb \\mathcal{L}^{(n)}$.\n\n\\begin{equation}\n\\log q_j = \\bw_j^T \\bx + b_j = b_j + \\sum_i W_{ij} x_i\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial\\log q_j}{\\partial W_{ij}} = x_i\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\partial\\mathcal{L}^{(n)}}{\\partial W_{ij}} = \\delta^q_j x_i\n\\end{equation}\n\n### 1.1.2 Implement gradient computations (10 points)\n\nImplement the gradient calculations you derived in the previous question. Write a function `logreg_gradient(x, t, w, b)` that returns the gradients $\\nabla_{\\bw_j} \\mathcal{L}^{(n)}$ (for each $j$) and $\\nabla_{\\bb} \\mathcal{L}^{(n)}$, i.e. the first partial derivatives of the log-likelihood w.r.t. the parameters $\\bW$ and $\\bb$, evaluated at a single datapoint (`x`, `t`).\nThe computation will contain roughly the following intermediate variables:\n\n$\n\\log \\bq \\rightarrow Z \\rightarrow \\log \\bp\\,,\\, \\mathbf{\\delta}^q\n$\n\nfollowed by computation of the gradient vectors $\\nabla_{\\bw_j} \\mathcal{L}^{(n)}$ (contained in a $784 \\times 10$ matrix) and $\\nabla_{\\bb} \\mathcal{L}^{(n)}$ (a $10 \\times 1$ vector).\n\nFor maximum points, ensure the function is numerically stable.\n\n\n\n```python\n# 1.1.2 Compute gradient of log p(t|x;w,b) wrt w and b\ndef logreg_gradient(x, t, w, b):\n    # unnormalized log-probabilities per class\n    log_q = w.T.dot(x.T).squeeze() + b\n\n    # log-sum-exp trick: subtract the maximum 'a' before exponentiating\n    a = np.max(log_q)\n    log_Z = a + np.log(np.sum(np.exp(log_q - a)))\n\n    logp = log_q - log_Z\n\n    # compute derivatives; exp(log_q - log_Z) is the normalized class\n    # probability, computed in log-space for numerical stability\n    dL_db = -np.exp(log_q - log_Z)\n    dL_db[t] += 1\n    dL_dw = dL_db[:, np.newaxis].dot(x).squeeze().T\n\n    return logp[t].squeeze(), dL_dw, dL_db.squeeze()\n```\n\n\n```python\nnp.random.seed(123)\n# scalar, 10 X 768 matrix, 10 X 1 vector\nw = 
np.random.normal(size=(28*28,10), scale=0.001)\n# w = np.zeros((784,10))\nb = np.zeros((10,))\n\n# test gradients, train on 1 sample\nlogpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)\n\n\n\nprint(\"Test gradient on one point\")\nprint(\"Likelihood:\\t\", logpt)\nprint(\"\\nGrad_W_ij\\t\",grad_w.shape,\"matrix\")\nprint(\"Grad_W_ij[0,152:158]=\\t\", grad_w[152:158,0])\nprint(\"\\nGrad_B_i shape\\t\",grad_b.shape,\"vector\")\nprint(\"Grad_B_i=\\t\", grad_b.T)\nprint(\"i in {0,...,9}; j in M\")\n\nassert logpt.shape == (), logpt.shape\nassert grad_w.shape == (784, 10), grad_w.shape\nassert grad_b.shape == (10,), grad_b.shape\n\n\n\n```\n\n    Test gradient on one point\n    Likelihood:\t -2.2959726720744777\n    \n    Grad_W_ij\t (784, 10) matrix\n    Grad_W_ij[0,152:158]=\t [-0.04518971 -0.06758809 -0.07819784 -0.09077237 -0.07584012 -0.06365855]\n    \n    Grad_B_i shape\t (10,) vector\n    Grad_B_i=\t [-0.10020327 -0.09977827 -0.1003198   0.89933657 -0.10037941 -0.10072863\n     -0.09982729 -0.09928672 -0.09949324 -0.09931994]\n    i in {0,...,9}; j in M\n\n\n\n```python\n# It's always good to check your gradient implementations with finite difference checking:\n# Scipy provides the check_grad function, which requires flat input variables.\n# So we write two helper functions that compute the gradient and output with 'flat' weights:\nfrom scipy.optimize import check_grad\n\nnp.random.seed(123)\n# 784 x 10 weight matrix, 10 x 1 bias vector\nw = np.random.normal(size=(28*28,10), scale=0.001)\n# w = np.zeros((784,10))\nb = np.zeros((10,))\n\ndef func(w):\n    logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w.reshape(784,10), b)\n    return logpt\ndef grad(w):\n    logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w.reshape(784,10), b)\n    return grad_w.flatten()\nfinite_diff_error = check_grad(func, grad, w.flatten())\nprint('Finite difference error grad_w:', finite_diff_error)\nassert finite_diff_error < 1e-3, 'Your gradient 
computation for w seems off'\n\ndef func(b):\n    logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)\n    return logpt\ndef grad(b):\n    logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)\n    return grad_b.flatten()\nfinite_diff_error = check_grad(func, grad, b)\nprint('Finite difference error grad_b:', finite_diff_error)\nassert finite_diff_error < 1e-3, 'Your gradient computation for b seems off'\n\n\n```\n\n    Finite difference error grad_w: 6.3612946893e-07\n    Finite difference error grad_b: 5.23511748609e-08\n\n\n\n```python\n\n```\n\n\n### 1.1.3 Stochastic gradient descent (10 points)\n\nWrite a function `sgd_iter(x_train, t_train, w, b)` that performs one iteration of stochastic gradient descent (SGD), and returns the new weights. It should go through the training set once in randomized order, call `logreg_gradient(x, t, w, b)` for each datapoint to get the gradients, and update the parameters **using a small learning rate of `1E-6`**. Note that in this case we're maximizing the likelihood function, so we should actually be performing gradient ___ascent___... For more information about SGD, see Bishop 5.2.4 or an online source (e.g. 
https://en.wikipedia.org/wiki/Stochastic_gradient_descent)\n\n\n```python\ndef sgd_iter(x_train, t_train, W, b):\n    # create a shuffled list of indices of the same\n    # length as the training set\n    index_list = np.arange(len(x_train))\n    np.random.shuffle(index_list)\n    # define learning rate:\n    learning_rate = 10 ** -6\n    \n    # initialize the sum\n    logp_train_sum = 0\n\n    for i in index_list:\n        logp_train, grad_w, grad_b = logreg_gradient(x_train[i:i + 1, :],\n                                                     t_train[i:i + 1], W, b)\n        # add logp to sum\n        logp_train_sum += logp_train\n        # update w and b with SGD\n        W += learning_rate * grad_w\n        b += learning_rate * grad_b\n\n    return logp_train_sum, W, b\n\n```\n\n\n```python\n# Sanity check:\nnp.random.seed(1243)\nw = np.zeros((28*28, 10))\nb = np.zeros(10)\n \nlogp_train, W, b = sgd_iter(x_train[:5], t_train[:5], w, b)\n\n```\n\n## 1.2. Train\n\n### 1.2.1 Train (10 points)\nPerform 10 SGD iterations through the training set. Plot (in one graph) the conditional log-probability of the training set and validation set after each iteration.\n\n\n\n```python\ndef test_sgd(x_train, t_train, w, b):\n    # lists for log probabilities for each iteration\n    log_prob_train = []\n    log_prob_valid = []\n    # run over 10 iterations\n    for _ in range(10):\n        logp_valid_sum = 0\n        # call SGD\n        logp_train, w, b = sgd_iter(x_train, t_train, w, b)\n        log_prob_train.append(logp_train)\n        # apply new w and b on the validation set\n        for i in range(10000):\n            logpt, _, _ = logreg_gradient(x_valid[i:i + 1, :],\n                                          t_valid[i:i + 1], w, b)\n            logp_valid_sum += logpt\n        log_prob_valid.append(logp_valid_sum)\n\n    return w, b, log_prob_train, log_prob_valid\n\nnp.random.seed(1243)\nw = np.zeros((28 * 28, 10))\nb = np.zeros(10)\nw, b, log_prob_train_out, log_prob_valid_out = test_sgd(x_train, t_train, w, b)\n\n```\n\n\n```python\n# put plotting in an extra cell, so you don't have to run the \n# SGD when fine-tuning the plots\n\n\nx_axis = range(len(log_prob_train_out))\n\n# Normalize the training and validation set by 
dividing \n# it by the number of elements.\nlog_prob_train_out_scale = np.array(log_prob_train_out) / 50000\nlog_prob_valid_out_scale = np.array(log_prob_valid_out) / 10000\n\nplt.plot(x_axis, log_prob_train_out_scale, label='Training Set')\nplt.plot(x_axis, log_prob_valid_out_scale, label='Validation Set')\n\nplt.scatter(x_axis, log_prob_train_out_scale)\nplt.scatter(x_axis, log_prob_valid_out_scale)\n\nplt.legend(loc='upper left')\n\nplt.show()\n\n```\n\n### 1.2.2 Visualize weights (10 points)\nVisualize the resulting parameters $\bW$ after a few iterations through the training set, by treating each column of $\bW$ as an image. If you want, you can use or edit the `plot_digits(...)` above.\n\n\n\n```python\n# plot weights\nplot_digits(w.T, num_cols=5)\n\n```\n\n**Describe in less than 100 words why these weights minimize the loss**\n\nWe can divide the weights into three kinds: a medium-gray one (0 values), a black one (high positive values) and a white one (high negative values).\n* Gray: 0-weights are assigned to the parts of the input images that do not add any information, namely those that are the same in all input images\n* Black: For weight vector $w_j$, high positive weights are assigned to those pixels that, if black, increase the probability of the image being labeled $j$\n* White: With the same reasoning, high negative weights are assigned to those pixels that, if black, decrease the probability of the image being labeled $j$\n\n\n\n### 1.2.3. 
Visualize the 8 hardest and 8 easiest digits (10 points)\nVisualize the 8 digits in the validation set with the highest probability of the true class label under the model.\nAlso plot the 8 digits that were assigned the lowest probability.\nAsk yourself if these results make sense.\n\n\n```python\n# recompute because the values of w and b have changed\n\n# calculate log_q for the validation set\nlog_q = x_valid.dot(w) + b\nindex_list = []\n\n# collect log_q of the true class for each validation example\nfor (index,), number in np.ndenumerate(t_valid):\n    index_list.append(log_q[index, number])\n\n# sort by log_q of the true class\nvalid_sorted = np.argsort(np.array(index_list))\n\n# easiest are the last 8 in the sorted list (highest log_q)\neasy = valid_sorted[-8:]\nprint('Easiest 8 digits')\nplot_digits(x_valid[easy], num_cols=4)\n\n# hardest are the first 8 in the sorted list (lowest log_q)\nprint('Hardest 8 digits')\nhard = valid_sorted[:8]\nplot_digits(x_valid[hard], num_cols=4)\n\n```\n\nIntuitively, easy digits are the ones that are not easily confused with other digits. As the zero is very different from the other numbers - and everyone tends to write zeros the same way - it makes sense that it is a digit that is easy to classify.\nOn the other hand, hard digits are the ones that are ambiguous or come in a number of different shapes (7 with and without a horizontal bar, 1 with and without a serif) and are thus easily confused with others. To the human eye this might not be the case for all hard digits (for example the 5 and the 9 are easy to identify), but, for example, the last digit could easily be a 4 or a 6.\n\n# Part 2. Multilayer perceptron\n\n\nYou discover that the predictions by the logistic regression classifier are not good enough for your application: the model is too simple. You want to increase the accuracy of your predictions by using a better model. For this purpose, you're going to use a multilayer perceptron (MLP), a simple kind of neural network. 
The perceptron will have a single hidden layer $\bh$ with $L$ elements. The parameters of the model are $\bV$ (connections between input $\bx$ and hidden layer $\bh$), $\ba$ (the biases/intercepts of $\bh$), $\bW$ (connections between $\bh$ and $\log q$) and $\bb$ (the biases/intercepts of $\log q$).\n\nThe conditional probability of the class label $j$ is given by:\n\n$\log p(t = j \;|\; \bx, \bb, \bW) = \log q_j - \log Z$\n\nwhere $q_j$ are again the unnormalized probabilities per class, and $Z = \sum_j q_j$ is again the probability normalizing factor. Each $q_j$ is computed using:\n\n$\log q_j = \bw_j^T \bh + b_j$\n\nwhere $\bh$ is an $L \times 1$ vector with the hidden layer activations (of a hidden layer with size $L$), and $\bw_j$ is the $j$-th column of $\bW$ (an $L \times 10$ matrix). Each element of the hidden layer is computed from the input vector $\bx$ using:\n\n$h_j = \sigma(\bv_j^T \bx + a_j)$\n\nwhere $\bv_j$ is the $j$-th column of $\bV$ (a $784 \times L$ matrix), $a_j$ is the $j$-th element of $\ba$, and $\sigma(.)$ is the so-called sigmoid activation function, defined by:\n\n$\sigma(x) = \frac{1}{1 + \exp(-x)}$\n\nNote that this model is almost equal to the multiclass logistic regression model, but with an extra 'hidden layer' $\bh$. The activations of this hidden layer can be viewed as features computed from the input, where the feature transformation ($\bV$ and $\ba$) is learned.\n\n## 2.1 Derive gradient equations (20 points)\n\nState (shortly) why $\nabla_{\bb} \mathcal{L}^{(n)}$ is equal to the earlier (multiclass logistic regression) case, and why $\nabla_{\bw_j} \mathcal{L}^{(n)}$ is almost equal to the earlier case.\n\nLike in multiclass logistic regression, you should use intermediate variables $\mathbf{\delta}_j^q$. 
In addition, you should use intermediate variables $\mathbf{\delta}_j^h = \frac{\partial \mathcal{L}^{(n)}}{\partial h_j}$.\n\nGiven an input image, roughly the following intermediate variables should be computed:\n\n$\n\log \bq \rightarrow Z \rightarrow \log \bp \rightarrow \mathbf{\delta}^q \rightarrow \mathbf{\delta}^h\n$\n\nwhere $\mathbf{\delta}_j^h = \frac{\partial \mathcal{L}^{(n)}}{\partial \bh_j}$.\n\nGive the equations for computing $\mathbf{\delta}^h$, and for computing the derivatives of $\mathcal{L}^{(n)}$ w.r.t. $\bW$, $\bb$, $\bV$ and $\ba$. \n\nYou can use the convenient fact that $\frac{\partial}{\partial x} \sigma(x) = \sigma(x) (1 - \sigma(x))$.\n\n\begin{align*}\n\t\delta_j^q &= \frac{\partial\mathcal{L}^{(n)}}{\partial\ln(q_j)} \\\n\t&= \begin{cases}\n\t\tj = t^{(n)}: 1 - \frac{1}{Z} \exp{} (\ln{} (q_j)) \\\n\t\tj \neq t^{(n)}: - \frac{1}{Z} \exp{} (\ln{} (q_j))\n\t\end{cases}\n\end{align*}\n\begin{align*}\n\t\delta_i^h &= \n\t\t\frac{\partial\mathcal{L}^{(n)}}{\partial{} h_i}\n\t\t= \sum_j \frac{\partial\mathcal{L}^{(n)}}{\partial\ln{}(q_j)} \n\t\t\frac{\partial\ln{}(q_j)}{\partial{} h_i} \n\t\t= \sum_j \delta_j^q W_{ij}\n\end{align*}\nNote the sum over $j$: every hidden unit influences all ten class scores, so all $\delta_j^q$ contribute to $\delta_i^h$.\n\begin{align*}\n\t\frac{\partial\mathcal{L}^{(n)}}{\partial{} b_j}\n\t\t&= \frac{\partial\mathcal{L}^{(n)}}{\partial\ln{}(q_j)} \n\t\t\frac{\partial\ln{}(q_j)}{\partial{} b_j} = \delta_j^q\n\end{align*}\n\begin{align*}\n\t\frac{\partial\mathcal{L}^{(n)}}{\partial{} W_{ij}}\n\t\t&= \frac{\partial\mathcal{L}^{(n)}}{\partial\ln{}(q_j)} \n\t\t\frac{\partial\ln{}(q_j)}{\partial{} W_{ij}} = \delta_j^q h_i\n\end{align*}\n\begin{align*}\n\t\frac{\partial\mathcal{L}^{(n)}}{\partial{} a_i}\n\t\t&= \frac{\partial\mathcal{L}^{(n)}}{\partial{} h_i} \n\t\t\frac{\partial{} h_i}{\partial{} (\bv_i^T\bx+a_i)} \frac{\partial{} (\bv_i^T\bx+a_i)}{\partial{} a_i}\\\ \n\t\t&=\delta_i^h \sigma (\bv_i^T\bx + a_i)(1 - \sigma (\bv_i^T\bx + a_i)) = \delta_i^h h_i(1-h_i)\n\end{align*}\n\begin{align*}\n\t\frac{\partial\mathcal{L}^{(n)}}{\partial V_{hi}}\n\t&= \frac{\partial\mathcal{L}^{(n)}}{\partial{} h_i} \n\t\t\frac{\partial{} h_i}{\partial{} (\bv_i^T\bx+a_i)} \frac{\partial{} (\bv_i^T\bx+a_i)}{\partial{} V_{hi}}\\\ \n\t\t&= \delta_i^h h_i(1-h_i)x_h\n\end{align*}\n\n\n\n## 2.2 MAP optimization (10 points)\n\nYou derived equations for finding the _maximum likelihood_ solution of the parameters. Explain, in a few sentences, how you could extend this approach so that it optimizes towards a _maximum a posteriori_ (MAP) solution of the parameters, with a Gaussian prior on the parameters. \n\nThe maximum likelihood solution found above results from differentiating the log-likelihood with respect to the parameters. In order to obtain the MAP solution we would have to multiply the likelihood by the prior before differentiating. Since we differentiate the log-likelihood, multiplying by the prior is equivalent to adding the log-prior, and differentiating the sum simply adds the derivative of the log-prior to the ML gradients. With zero-mean Gaussian priors on $\bV$ and $\bW$ this could be implemented as:\n\n\begin{align*}\n\t\mathcal{L}^{(n)} &= \ln(q_j) - \ln(Z) - \frac{\alpha_1}{2}\sum^{784}_{h=1} \sum_{i=1}^{L} V_{hi}^2 - \frac{\alpha_2}{2}\sum_{i=1}^{L} \sum_{j=1}^{10}W_{ij}^2\n\end{align*}\n\nLeading to:\n\n\begin{align*}\n\t\frac{\partial\mathcal{L}^{(n)}}{\partial{} W_{ij}}\n\t\t&= \delta_j^q h_i - \alpha_2 W_{ij}\n\end{align*}\n\begin{align*}\n\t\frac{\partial\mathcal{L}^{(n)}}{\partial V_{hi}}\n\t&= \delta_i^h h_i(1-h_i)x_h -\alpha_1 V_{hi}\n\end{align*}\n\nwith all other derivatives staying the same.\n\n## 2.3. Implement and train a MLP (15 points)\n\nImplement a MLP model with a single hidden layer of **20 neurons**. 
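Before implementing the full training loop, it can help to sanity-check the forward pass of this architecture in isolation. The sketch below is illustrative only (the `mlp_forward` helper and the random initialization are assumptions for this check, not part of the notebook's required code); it verifies that the log-sum-exp-normalized output is a valid log-distribution over the 10 classes.

```python
import numpy as np

def mlp_forward(x, v, a, w, b):
    # hidden layer: sigmoid of an affine map of the input
    h = 1.0 / (1.0 + np.exp(-(x.dot(v) + a)))
    # unnormalized class log-probabilities
    log_q = h.dot(w) + b
    # normalize with the log-sum-exp trick for numerical stability
    m = np.max(log_q)
    log_Z = m + np.log(np.sum(np.exp(log_q - m)))
    return log_q - log_Z  # log p(t | x)

rng = np.random.RandomState(0)
x = rng.rand(784)                         # one fake 'image'
v = rng.normal(scale=0.001, size=(784, 20))
a = np.zeros(20)
w = rng.normal(scale=0.001, size=(20, 10))
b = np.zeros(10)

log_p = mlp_forward(x, v, a, w, b)
print(log_p.shape, np.exp(log_p).sum())
```

With near-zero weights the output should be close to the uniform distribution, i.e. each $\log p_j \approx -\log 10$, and `np.exp(log_p)` should sum to (numerically) one.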
\nTrain the model for **10 epochs**.\nPlot (in one graph) the conditional log-probability of the training set and validation set after each two iterations, as well as the weights.\n\n- 10 points: Working MLP that learns with plots\n- +5 points: Fast, numerically stable, vectorized implementation\n\n\n```python\nimport numpy as np\n\nnp.random.seed(123)\n\n# initialize weights, so that we don't get a\n# bias when testing different learning rates\ndef init_weights():\n    # initialize variables\n    w = (np.random.random_sample((20, 10)) - 0.5) * 0.001\n    b = (np.random.random_sample(10) - 0.5) * 0.001\n    v = (np.random.random_sample((28 * 28, 20)) - 0.5) * 0.001\n    a = (np.random.random_sample(20) - 0.5) * 0.001\n    return w, b, v, a\n\n\n# Calculate all gradients\ndef MLP_gradient(x, t, w, b, v, a):\n    h = 1 / (1 + np.exp(-x.dot(v).squeeze() - a))\n    \n    log_q = h.dot(w) + b\n    # calculate 'alpha' for the log-sum-exp trick\n    alpha = np.max(log_q)\n    # log-sum-exp trick\n    log_Z = alpha + np.log(np.sum(np.exp(log_q - alpha)))\n    \n    logp = log_q - log_Z\n\n    # calculate derivatives; exp(logp) equals exp(log_q) / Z, computed stably\n    dL_db = -np.exp(logp)\n    dL_db[t] += 1\n    dL_dw = h[np.newaxis].T.dot(dL_db[np.newaxis])\n    # backpropagate through all ten class scores: delta^h = W delta^q\n    dL_dh = w.dot(dL_db)\n    dL_da = dL_dh * h * (1 - h)\n    dL_dv = x.T.dot(dL_da[np.newaxis])\n\n    return logp[t].squeeze(), dL_dw, dL_db, dL_dv, dL_da\n\n\n# update w, b, v, a with SGD\ndef sgd_iter_MLP(x_train, t_train, w, b, v, a, learning_rate):\n    # create a shuffled list of indices of the same\n    # length as the training set\n    index_list = np.arange(len(x_train))\n    np.random.shuffle(index_list)\n\n    logp_train_sum = 0\n\n    for i in index_list:\n        logp_train, grad_w, grad_b, grad_v, grad_a = MLP_gradient(\n            x_train[i:i + 1, :],\n            t_train[i:i + 1], w, b, v, a)\n        logp_train_sum += logp_train\n        w += learning_rate * grad_w\n        b += learning_rate * grad_b\n        v += learning_rate * grad_v\n        a += learning_rate * grad_a\n\n    return 
logp_train_sum, w, b, v, a\n\n\n# optimize over 10 iterations\ndef test_sgd_MLP(x_train, t_train, w, b, v, a, learning_rate):\n    log_prob_train = []\n    log_prob_valid = []\n    for _ in range(10):\n        logp_valid_sum = 0\n        logp_train, w, b, v, a = sgd_iter_MLP(x_train, t_train, w, b, v, a,\n                                              learning_rate)\n        log_prob_train.append(logp_train)\n        for i in range(10000):\n            logp_temp, _, _, _, _ = MLP_gradient(x_valid[i:i + 1, :],\n                                                 t_valid[i:i + 1], w, b, v, a)\n            logp_valid_sum += logp_temp\n        log_prob_valid.append(logp_valid_sum)\n\n    return w, b, v, a, log_prob_train, log_prob_valid\n\n\n# function for testing the model\ndef test_model(w, b, v, a, x_valid):\n    h = 1 / (1 + np.exp(-x_valid.dot(v) - a))\n    log_q = h.dot(w) + b\n    predict = np.argmax(log_q, axis=1)\n    correct = t_valid == predict\n    accuracy = correct.mean()\n\n    return accuracy\n\n```\n\n\n```python\n\n# Learn and test the model\nlog_prob_train_out_, log_prob_valid_out_ = [], []\naccuracy_ = 0\nv_ = 0\n\n# run over different learning rates\nfor learning_rate in [10 ** -1, 10 ** -2, 10 ** -3, 10 ** -4]:\n    # initialize weights\n    w, b, v, a = init_weights()\n    # calculate MLP with SGD\n    w, b, v, a, log_prob_train_out, log_prob_valid_out = test_sgd_MLP(x_train,\n                                                                      t_train,\n                                                                      w, b, v,\n                                                                      a,\n                                                                      learning_rate)\n    # calculate accuracy \n    accuracy = test_model(w, b, v, a, x_valid)\n    print('Accuracy: {} | for learning rate: {}'.format(accuracy,\n                                                        learning_rate))\n    # update optimal values\n    if accuracy > accuracy_:\n        accuracy_ = accuracy\n        log_prob_train_out_, log_prob_valid_out_ = log_prob_train_out, log_prob_valid_out\n        v_ = v\n\n```\n\n    Accuracy: 0.8995 | for learning rate: 0.1\n    Accuracy: 0.9269 | for learning rate: 0.01\n    Accuracy: 0.8951 | for learning rate: 0.001\n    Accuracy: 0.2235 | for learning rate: 0.0001\n\n\n\n```python\n# put plotting in an extra cell, so you don't have to run the \n# SGD when fine-tuning the plots\n#print(w.shape, b.shape, v.shape, a.shape)\n\nx_axis = 
range(len(log_prob_train_out_))\n\nlog_prob_train_out_scale = np.array(log_prob_train_out_) / 50000\nlog_prob_valid_out_scale = np.array(log_prob_valid_out_) / 10000\n\nplt.plot(log_prob_train_out_scale)\nplt.plot(log_prob_valid_out_scale)\n#print(log_prob_train_out)\nplt.scatter(x_axis, log_prob_train_out_scale)\nplt.scatter(x_axis, log_prob_valid_out_scale)\n\nplt.show()\n\nplot_digits(v_.T, num_cols=5, targets=None, shape=(28, 28))\n\n```\n\n### 2.3.1. Explain the weights (5 points)\nIn less than 80 words, explain how and why the weights of the hidden layer of the MLP differ from the logistic regression model, and relate this to the stronger performance of the MLP.\n\nIn this model, each digit is modelled by a combination of different weight vectors and no longer by one single weight vector. The weights are now combinations of areas with higher and lower probabilities. This means that the weights no longer model a whole digit, but specific edges (dark areas) or, explicitly, areas where it is unlikely that something is written (white/gray). This is more powerful because it allows for a more exact and more flexible definition of each digit. \n\n### 2.3.2. Less than 250 misclassifications on the test set (10 bonus points)\n\nYou receive an additional 10 bonus points if you manage to train a model with very high accuracy: at most 2.5% misclassified digits on the test set. Note that the test set contains 10000 digits, so your model should misclassify at most 250 digits. This should be achievable with a MLP model with one hidden layer. See results of various models at: `http://yann.lecun.com/exdb/mnist/index.html`. To reach such a low error rate, you probably need a very high $L$ (many hidden units), probably $L > 200$, and apply a strong Gaussian prior on the weights. 
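The Gaussian prior mentioned above corresponds to adding an L2 penalty to the log-likelihood, so each SGD step becomes a weight-decay update. A minimal sketch of one such step (the decay strength `alpha` and the helper name are illustrative assumptions, not values from this notebook):

```python
import numpy as np

def sgd_step_with_decay(w, grad_w, learning_rate=1e-4, alpha=1e-3):
    # gradient ascent on log-likelihood plus Gaussian log-prior:
    # the gradient of the log-prior w.r.t. w is -alpha * w
    return w + learning_rate * (grad_w - alpha * w)

w = np.ones((4, 3))
w_new = sgd_step_with_decay(w, np.zeros((4, 3)))
# with a zero likelihood gradient, the prior shrinks the weights toward 0
print(w_new[0, 0])
```

Each step shrinks every weight by roughly a factor `(1 - learning_rate * alpha)` on top of the ordinary gradient step, which is what keeps a large-$L$ network from overfitting.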
In this case you are allowed to use the validation set for training.\nYou are allowed to add additional layers, and use convolutional networks, although that is probably not required to reach 2.5% misclassifications.\n\n\n```python\npredict_test = np.zeros(len(t_test))\n# Fill predict_test with the predicted targets from your model, don't cheat :-).\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\nassert predict_test.shape == t_test.shape\nn_errors = np.sum(predict_test != t_test)\nprint('Test errors: %d' % n_errors)\n```\n\n\n## Unemployment in different countries and age groups\n\n**Team:** Aristochats\n\n**Members:** Theresa Berz (txj188), Adrian Moise (bln333), Nam Anh Nguyen (xgw631), Karen Thule (lnc394)\n\nIn this project we investigate the unemployment rate in six different OECD countries over the period 2007 to 2019. Using data from OECD statistics we first examine how the unemployment rate has developed over this period across countries and different age groups, in order to see if there are any patterns or significant differences across country or age. \n\n**Data**\n\nThe data we use is the unemployment rate in different age groups, taken from the OECD database 'Labour market statistics'; later in the project we add data for GDP, using GDP per capita in current US dollars, which is also taken from the OECD. \n\nThe variables are:\n- **U_15_24** : Unemployment rate age 15-24\n- **U_15_64** : Unemployment rate age 15-64\n- **U_25_54** : Unemployment rate age 25-54\n- **U_55_64** : Unemployment rate age 55-64\n- **GDP_USD** : GDP per capita in US dollars\n\n## Load and clean data\n\n\n```python\n#Import packages \nimport numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\n%matplotlib inline\nplt.style.use('seaborn-whitegrid')\n\nfrom IPython.display import Markdown, display\nfrom numpy import array\nimport sympy as sm\n```\n\nFirst we load the data, which can be found in the dataproject folder:\n\n\n```python\n#Load unemployment dataset from folder \nfilename = 'data_u.xlsx' # open the file and have a look at it\ndata=pd.read_excel(filename)\ndata\n```\n\n\n\n\n<div>
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
CountryTimeU_15_24U_15_64U_25_54U_55_64
0DenmarkQ1-20077.24.03.63.6
1DenmarkQ2-20077.33.82.93.9
2DenmarkQ3-20078.33.93.13.4
3DenmarkQ4-20077.33.62.92.8
4DenmarkQ1-20088.73.52.62.8
.....................
373NetherlandsQ2-20196.53.32.53.3
374NetherlandsQ3-20196.73.32.53.2
375NetherlandsQ4-20197.03.42.62.7
376NetherlandsQ1-20206.63.22.42.6
377NetherlandsQ2-20209.73.92.82.5
\n

378 rows \u00d7 6 columns

\n
\n\n\n\nWe then want to investgate if there is any missing values. This is done by using the `isnull()` function that seaches for any missing values in the data-set. \n\n\n```python\n# Search for any missing values \nis_NaN = data.isnull()\nrow_has_NaN = is_NaN.any(axis=1)\n\n#define a dataset with the missing values \nrows_with_NaN = data[row_has_NaN] \n\n# print the observations with missing values \nrows_with_NaN\n\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
CountryTimeU_15_24U_15_64U_25_54U_55_64
160GermanyQ1-2020NaNNaNNaNNaN
161GermanyQ2-2020NaNNaNNaNNaN
214OECD - AverageQ1-202011.75.54.8NaN
215OECD - AverageQ2-202017.78.67.5NaN
\n
\n\n\n\nWe want to remove the observations with missing values, this is done by using the `notna()`which does the oposite of `isnull()` i.e it finds all the observations with values which we then keep.\n\n\n```python\n#only keep observations with values (no data for 2020 unemployment in Germany and OECD-Average)\nI = data['U_55_64'].notna() #use the U_55_64 since we have seen data is missing here \ndata = data[I] #overwrite data with data containing values \n\n\n#rename time to quarter \nrename_dict = {} \nrename_dict['Time'] = 'Quarter'\ndata.rename(columns=rename_dict,inplace=True)\ndata\n\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
CountryQuarterU_15_24U_15_64U_25_54U_55_64
0DenmarkQ1-20077.24.03.63.6
1DenmarkQ2-20077.33.82.93.9
2DenmarkQ3-20078.33.93.13.4
3DenmarkQ4-20077.33.62.92.8
4DenmarkQ1-20088.73.52.62.8
.....................
373NetherlandsQ2-20196.53.32.53.3
374NetherlandsQ3-20196.73.32.53.2
375NetherlandsQ4-20197.03.42.62.7
376NetherlandsQ1-20206.63.22.42.6
377NetherlandsQ2-20209.73.92.82.5
\n

374 rows \u00d7 6 columns

\n
\n\n\n\n# Descriptive statistic \n\n\nWe are now ready to look at the data. First we want to examine the data. In order to do so we use `groupby` and `describe`\n\n\n```python\nvar = ['U_15_24', 'U_15_64'] \ndata.groupby('Country')[var ].describe()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
U_15_24U_15_64
countmeanstdmin25%50%75%maxcountmeanstdmin25%50%75%max
Country
Austria54.09.7296301.0548417.69.1259.6010.40011.854.05.1611110.5887043.84.8005.105.6006.3
Belgium54.019.6537043.34600412.217.32520.2521.57526.254.07.4277781.0823485.16.9257.658.3758.7
Denmark54.012.6000002.6846887.210.52512.5014.97517.054.06.2462961.4370723.45.2256.307.6008.3
France54.022.0814812.23438617.020.32522.5023.85025.554.09.0796301.0209846.98.6259.109.97510.6
Germany52.08.3596151.9205045.46.9007.859.72512.452.05.5846151.7368423.24.1505.357.1509.2
Netherlands54.010.0518521.9673516.58.62510.1511.37513.654.05.1629631.3811803.23.9255.006.2757.7
OECD - Average52.014.4480772.10726911.412.40014.3516.52517.652.07.1173081.1339195.45.9757.108.2008.8
\n
\n\n\n\n\n```python\nvar2 = ['U_25_54', 'U_55_64']\ndata.groupby('Country')[var2].describe()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
U_25_54U_55_64
countmeanstdmin25%50%75%maxcountmeanstdmin25%50%75%max
Country
Austria54.04.5888890.5758233.44.2004.605.1005.554.03.5981480.8320221.93.2003.604.0005.2
Belgium54.06.5388890.9521414.46.1256.657.3758.054.03.5981480.8320221.93.2003.604.0005.2
Denmark54.05.2722221.2850982.54.5005.456.2757.054.04.4203701.0986062.33.6004.155.5006.3
France54.07.9111110.9787686.07.4008.008.7009.454.06.1629631.0965293.85.7006.357.0757.7
Germany52.05.1384621.5810343.03.8504.856.6508.552.05.7634622.2807832.63.7505.707.72511.1
Netherlands54.03.9962961.2791452.32.8003.805.0256.454.05.2240741.6460282.54.2004.706.6758.2
OECD - Average52.06.2519231.0133414.85.1006.307.2007.752.04.9153850.8115313.74.0755.005.7006.1
\n
\n\n\n\nFrom the above descriptive statistics we see that generally the unemployment rate is highest for the youngest group age 15-24 in all countries as well as for the OECD average, where France has the higest mean of 22.08, and a max of the period of 25.5 pct, i.e a some point during the period a quarter of the persons in the age 15-24 were unemployed in France. The lowest means are found in the oldest age group 55-64, except for Germany, where the lowest mean is found in the age group 25-54. \n\nWe now want to plot the unemployment rate of each country in order to see, how they have developed over the time period. This is done via. a loop over the four age groups. \n\n\n```python\n#plot the first age group by looping over the four unemployment groups\n\ndef plot_unempl(column,name,title=''):\n fig = plt.figure()\n ax = plt.subplot(111)\n data.set_index('Quarter').groupby('Country')[column].plot(kind='line', legend=True, ax=ax)\n ax.set_ylabel(name)\n ax.set_title(title)\n box = ax.get_position()\n ax.set_title(f'Development in the {name} age group',fontweight=\"bold\")\n ax.set_position([box.x0, box.y0 + box.height * 0.1,box.width, box.height * 0.9]) # shrink height by 10% at bottom\n ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.15),ncol=5); # Put a legend below current axis\ncolumns = ['U_15_24','U_15_64','U_25_54','U_55_64']\nnames = ['Unemployment 15-24','Unemployment 15-64','Unemployment 25-64','Unemployment 55-64']\nfor column,name in zip(columns,names):\n plot_unempl(column,name)\n\n```\n\nWhen looking at the figures, one sees a pattern of falling unemployment rates in all countries in the quarters leading up to the financial crisis in 2008, whereafter the general pattern is rising unemployment rates. In the three first age group (15-24, 15-64 and 25-54) France and Belgium has had the highest rates of unemployment throughout most of the investigated period, while Austria seems to have had a lower and a bit more steady unemployment rate. 
\nWhen looking at Germany we see quite a different pattern compared to the rest of the countries and the OECD average. Germany has (except for small jumps around 2008) had a clear falling trend throughout the entire period, strongest for the oldest age group and weakest for the youngest. This shows that even though we are comparing six OECD countries, which have all been exposed to the financial crisis and what followed, we are able to see very different patterns. This leads us to ask what might explain this difference. However, before doing so, we will have a look at how each country compares to the OECD average. \n\n## Deviation from OECD \n\nFirst we create a list of the unemployment columns, dropping 'Country' and 'Quarter', for the loop that follows:\n\n\n```python\nlist__ = data.columns.drop(pd.Index(['Country','Quarter']))\nlist__\n\n```\n\n\n\n\n    Index(['U_15_24', 'U_15_64', 'U_25_54', 'U_55_64'], dtype='object')\n\n\n\nWe create a loop that takes the OECD - Average data for all unemployment groups and stores it as separate columns. \n\n\n```python\nfor name in list__:\n    B = (data['Country']=='OECD - Average')\n    New2 = data.loc[B,['Quarter',name]].rename(columns={name:f'{name}_OECD'})\n    data = data.merge(New2,on='Quarter', how='left')\n    \ndata\n```\n\n\n\n\n<div>
\n\n\n    [Output omitted: DataFrame with 374 rows \u00d7 10 columns: Country, Quarter, U_15_24, U_15_64, U_25_54, U_55_64, U_15_24_OECD, U_15_64_OECD, U_25_54_OECD, U_55_64_OECD]\n
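The merge loop above broadcasts the OECD - Average rows onto every country's rows, matched by quarter. A minimal sketch of the same pattern on toy data (all values invented):

```python
import pandas as pd

# Toy data: one benchmark "country" ('OECD - Average') and one ordinary country
data = pd.DataFrame({
    "Country": ["Denmark", "Denmark", "OECD - Average", "OECD - Average"],
    "Quarter": ["Q1-2007", "Q2-2007", "Q1-2007", "Q2-2007"],
    "U_15_24": [7.2, 7.3, 12.5, 12.3],
})

# Extract the benchmark rows and merge them back onto every row by Quarter
oecd = (data.loc[data["Country"] == "OECD - Average", ["Quarter", "U_15_24"]]
            .rename(columns={"U_15_24": "U_15_24_OECD"}))
data = data.merge(oecd, on="Quarter", how="left")

print(data.loc[0, "U_15_24_OECD"])  # Denmark's Q1-2007 row now carries the OECD value 12.5
```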
\n\n\n\nNow that we have the OECD - Average for all groups in separate columns, we are able to create new variables. We loop over the unemployment groups and use a `lambda` inside `apply` to store the differences as columns.\n\n\n```python\nage_groups= ['15_24','15_64','25_54','55_64']\nfor age_group in age_groups:\n data[f'diff_oecd_{age_group}'] = data.apply(lambda x: x[f'U_{age_group}_OECD'] - x[f'U_{age_group}'], axis=1)\ndata\n```\n\n\n\n\n
\n\n\n    [Output omitted: DataFrame with 374 rows \u00d7 14 columns; the four new columns diff_oecd_15_24, diff_oecd_15_64, diff_oecd_25_54 and diff_oecd_55_64 hold the OECD-average rate minus the country rate]\n
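The row-wise `apply` with `axis=1` above re-evaluates the lambda for every row; since the operation is just a column subtraction, plain column arithmetic gives the same result and is much faster. A sketch on invented numbers:

```python
import pandas as pd

data = pd.DataFrame({
    "U_15_24": [7.2, 7.3],
    "U_15_24_OECD": [12.5, 12.3],
})

# Row-wise version (the pattern used in the notebook)
slow = data.apply(lambda x: x["U_15_24_OECD"] - x["U_15_24"], axis=1)

# Vectorized equivalent: subtract whole columns at once
fast = data["U_15_24_OECD"] - data["U_15_24"]

print(slow.equals(fast))  # True
```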
\n\n\n\nWe see that there are some missing values, so we remove them as done previously using `notna()`. First we locate the affected rows:\n\n\n```python\n# Search for any missing values \nis_NaN = data.isnull()\nrow_has_NaN = is_NaN.any(axis=1)\n\n#define a dataset with the missing values \nrows_with_NaN = data[row_has_NaN] \n\n# print the observations with missing values \nrows_with_NaN\n```\n\n\n\n\n
\n\n\n    [Output omitted: 10 rows \u00d7 14 columns; the Q1-2020 and Q2-2020 rows of the countries shown (Denmark, France, Austria, Belgium, Netherlands) have NaN in all OECD and diff_oecd columns]\n
\n\n\n\n\n```python\n#only keep observations with values \nI = data['U_55_64_OECD'].notna()\ndata = data[I]\n\n#round to 1 decimal \ndata = data.round(decimals=1)\n\n#rename time to quarter \nrename_dict = {} \nrename_dict['Time'] = 'Quarter'\ndata.rename(columns=rename_dict,inplace=True)\ndata\n```\n\n\n\n\n
\n\n\n    [Output omitted: DataFrame with 364 rows \u00d7 14 columns; the 2020 quarters have been dropped, so the sample now runs to Q4-2019]\n
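As an aside, the `notna()` mask used above is equivalent to dropping every row with a missing value in that column via `dropna`; a sketch on toy data (values invented):

```python
import numpy as np
import pandas as pd

# Toy frame mimicking the structure: one column with a missing OECD value
toy = pd.DataFrame({
    "Quarter": ["Q3-2019", "Q4-2019", "Q1-2020"],
    "U_55_64_OECD": [3.7, 3.7, np.nan],
})

# Boolean-mask approach used in the notebook
masked = toy[toy["U_55_64_OECD"].notna()]

# Equivalent: drop any row with a missing value in that column
dropped = toy.dropna(subset=["U_55_64_OECD"])

print(masked.equals(dropped))  # True
```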
\n\n\n\nWe are now able to plot the results. y=0 is the OECD average. Thus, for positive values (y>0) the country's unemployment rate is lower than the OECD average, and vice versa. \n\n\n```python\n#create a loop that plots the deviation from OECD average \ndef plot_unempl_deviation(column,name,title=''):\n fig = plt.figure()\n ax = plt.subplot(111)\n data.set_index('Quarter').groupby('Country')[column].plot(kind='line', legend=True, ax=ax)\n ax.set_ylabel(name)\n ax.set_title(title)\n box = ax.get_position()\n ax.set_title(f'Deviation in unemployment from OECD average for {name}',fontweight=\"bold\")\n ax.set_position([box.x0, box.y0 + box.height * 0.1,box.width, box.height * 0.9]) # shrink height by 10% at bottom\n ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.15),ncol=5); # Put a legend below current axis\ncolumns = ['diff_oecd_15_24','diff_oecd_15_64','diff_oecd_25_54','diff_oecd_55_64']\nnames = ['Unemployment, 15-24','Unemployment, 15-64','Unemployment, 25-54','Unemployment, 55-64']\nfor column,name in zip(columns,names):\n plot_unempl_deviation(column,name)\n\n```\n\nThe plots show that the deviation from the OECD average varies a lot between the six chosen countries, where especially France stands out with a negative deviation in all age groups except the age group 55-64. Notice that, except for the youngest age group, Germany has the largest negative deviation of all the countries at the beginning of the time period, i.e. the highest unemployment rate compared to the OECD - Average. However, in the following years it \"catches up\" and moves towards a positive deviation, i.e. a lower unemployment rate than the OECD average. \n\n## Adding GDP\n\nIn the previous tables and figures we have seen how six different OECD countries have had very different unemployment rates - both in terms of levels and developments. This naturally leads to asking what might have caused this difference. 
As mentioned before, one factor that might explain some of the differences could be the financial crisis. We therefore try to add GDP to the data in order to see if there might be any connection between the developments in the two variables. \n\nWe load another data file containing GDP per capita for the six chosen countries. \n\n\n```python\n#load GDP dataset from folder \nfilename = 'data_gdp.xlsx' # open the file and have a look at it\ndataGDP=pd.read_excel(filename)\n\n\n#rename time to quarter \nrename_dict = {} \nrename_dict['Time'] = 'Quarter'\ndataGDP.rename(columns=rename_dict,inplace=True)\ndataGDP\n```\n\n\n\n\n
\n\n\n             Country  Quarter  GDP_USD
    0        Denmark  Q1-2007  38222.3
    1        Denmark  Q2-2007  38342.6
    2        Denmark  Q3-2007  39136.4
    3        Denmark  Q4-2007  40178.8
    4        Denmark  Q1-2008  40983.6
    ..           ...      ...      ...
    373  Netherlands  Q2-2019  59387.3
    374  Netherlands  Q3-2019  59627.0
    375  Netherlands  Q4-2019  59985.6
    376  Netherlands  Q1-2020  59243.9
    377  Netherlands  Q2-2020  53934.0

    378 rows \u00d7 3 columns
\n\n\n\nWe then merge it with the unemployment data\n\n\n```python\n#merge gdp data and unemployment data\ndata2 = pd.merge(data,dataGDP,on=['Country','Quarter'],how='outer')\ndata2\n```\n\n\n\n\n
\n\n\n    [Output omitted: DataFrame with 378 rows \u00d7 15 columns; GDP_USD is added as the last column, and the Q1/Q2-2020 rows have GDP values but NaN in all unemployment columns]\n
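When merging with `how='outer'` as above, it can be useful to ask pandas which rows found a partner in both frames; `indicator=True` adds a `_merge` column for exactly this. A sketch with invented values (not the real dataset):

```python
import pandas as pd

unemp = pd.DataFrame({"Country": ["Denmark"], "Quarter": ["Q4-2019"], "U_15_64": [3.4]})
gdp = pd.DataFrame({"Country": ["Denmark", "Denmark"],
                    "Quarter": ["Q4-2019", "Q1-2020"],
                    "GDP_USD": [59985.6, 60786.6]})

# Outer merge keeps unmatched rows; _merge records where each row came from
merged = pd.merge(unemp, gdp, on=["Country", "Quarter"], how="outer", indicator=True)

# Q1-2020 appears in the GDP frame only, so its row is tagged 'right_only'
print(str(merged.loc[merged["Quarter"] == "Q1-2020", "_merge"].iloc[0]))  # right_only
```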
\n\n\n\nAs earlier, we search for any missing values and remove them\n\n\n```python\n# Search for any missing values \nis_NaN = data2.isnull()\nrow_has_NaN = is_NaN.any(axis=1)\nrows_with_NaN = data2[row_has_NaN]\n\nrows_with_NaN\n```\n\n\n\n\n
\n\n\n    [Output omitted: 14 rows \u00d7 15 columns; the Q1-2020 and Q2-2020 rows for all six countries and the OECD - Average have GDP_USD values but NaN in every unemployment column]\n
\n\n\n\n\n```python\n#remove missing values \nI = data2['U_15_64_OECD'].notna()\ndata2 = data2[I]\ndata2\n\n```\n\n\n\n\n
\n\n\n    [Output omitted: DataFrame with 364 rows \u00d7 15 columns; after dropping the 2020 quarters the sample again runs to Q4-2019]\n
\n\n\n\nBefore looking at unemployment and GDP together, we want to have a look at the development in GDP per capita in the six countries and the OECD average over the chosen period. We therefore plot the GDP data.\n\n\n```python\n# plot GDP \nimport matplotlib as mpl # needed for mpl.ticker below (pyplot is already imported)\nfig = plt.figure()\nax = plt.subplot(111)\ndata2.set_index(\"Quarter\").groupby(\"Country\")[\"GDP_USD\"].plot(kind=\"line\", legend=True, ax=ax)\nax.yaxis.set_major_formatter(mpl.ticker.StrMethodFormatter('{x:,.0f}')) #allows for a thousands separator\nax.set_ylabel(\"GDP per capita\")\nax.set_title(\"Development in quarterly GDP\", fontweight=\"bold\")\nbox = ax.get_position()\nax.set_position([box.x0, box.y0 + box.height * 0.1,box.width, box.height * 1]) \n\n\n```\n\nPreviously we showed that the countries had very different patterns of unemployment development. We have also mentioned the financial crisis as a factor that led to a decrease in employment. Thus, one could think that unemployment and the economic development of a country would be closely related. However, the above figure shows that the relationship between employment and the economy, approximated by GDP, might not be as strong as one might have thought. We can see that despite significant differences in the change in unemployment rate, all countries had a very similar development in GDP over the chosen time period.\n\n## Correlation between unemployment and GDP\n\nIn order to further investigate whether the unemployment rate and the economic development of a country could be related, we plot the log difference of the unemployment rate against the log difference of GDP per capita, i.e. 
checking if the variation of the unemployment rate is related to the variation of GDP per capita.\n\nFirst we drop the OECD - Average since we only want to look at the six countries:\n\n\n```python\npd.options.mode.chained_assignment = None \n\n#Dropping the OECD - Average\ndata2.drop(data2[data2[\"Country\"] == \"OECD - Average\"].index, inplace=True)\n```\n\nWe then apply logs to every variable using a loop, and use `lambda` to store the new calculations as columns:\n\n\n```python\n#Applying logs on the different age groups\nnew_groups= [\"U_15_24\",\"U_15_64\",\"U_25_54\",\"U_55_64\",\"GDP_USD\"]\nfor new_group in new_groups:\n data2[f\"log_{new_group}\"] = data2[new_group].apply(lambda x: np.log(x))\ndata2\n```\n\n\n\n\n
\n\n\n    [Output omitted: DataFrame with 312 rows \u00d7 20 columns; five new columns log_U_15_24, log_U_15_64, log_U_25_54, log_U_55_64 and log_GDP_USD are added]\n
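`np.log` is already vectorized, so the element-wise `apply(lambda x: np.log(x))` above can be written as a direct call on whole columns; a sketch on toy numbers (invented, not the dataset):

```python
import numpy as np
import pandas as pd

# Toy columns (invented numbers) standing in for the real data2 frame
toy = pd.DataFrame({
    "U_15_24": [7.2, 8.3, 10.0],
    "GDP_USD": [38222.3, 39136.4, 40178.8],
})

for col in ["U_15_24", "GDP_USD"]:
    toy[f"log_{col}"] = np.log(toy[col])  # vectorized: no per-element lambda needed

# Same Series.corr machinery as used for the correlation coefficients
corr = toy["log_U_15_24"].corr(toy["log_GDP_USD"])
print(round(corr, 3))
```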
\n\n\n\nWe now want to plot the correlation between the unemployment rate and GDP per capita. First we create a dictionary in order to have proper names for our variables\n\n\n```python\n#We create a dictionary to attach a proper name to each variable\nvar = {}\nvar[\"log_U_15_24\"] = \"log unemployment 15-24 age group\"\nvar[\"log_U_15_64\"] = \"log unemployment 15-64 age group\"\nvar[\"log_U_25_54\"] = \"log unemployment 25-54 age group\"\nvar[\"log_U_55_64\"] = \"log unemployment 55-64 age group\"\nvar[\"log_GDP_USD\"] = \"log GDP per capita\"\n\n#Defining the axes\nx_col = \"log_GDP_USD\"\ny_columns = [\"log_U_15_24\",\"log_U_15_64\",\"log_U_25_54\",\"log_U_55_64\"]\n\n\nfor y_col in y_columns:\n\n figure = plt.figure()\n ax = plt.gca()\n ax.scatter(data2[x_col].diff(1), data2[y_col].diff(1),color = \"#FFC0CB\")\n ax.set_xlim([-0.04,0.03])\n ax.set_ylim([-0.4,0.3])\n ax.set_xlabel(f\"{var[x_col]}\")\n ax.set_ylabel(f\"{var[y_col]}\")\n ax.set_title(f\"Correlation plot for {var[y_col]}\", fontweight=\"bold\")\n plt.show()\n```\n\nThe correlation coefficient between the unemployment rate and GDP per capita is then found as:\n\n\n```python\nfrom IPython.display import display, Markdown\n\ndef fancy(string): \n \"\"\" Render a string as Markdown (and LaTeX) output\n args:\n string : a string\n returns : the string displayed as Markdown\n \"\"\"\n display(Markdown(string))\n \nfor y_col in y_columns:\n # calculate the correlation coefficient between each log unemployment\n # series and log GDP per capita\n corr = data2[y_col].corr(data2[x_col])\n fancy(f\"$\\hat\\sigma_{{YX}}=${corr:.3} for {var[y_col]} and {var[x_col]}.\")\n\n\n```\n\n\n$\\hat\\sigma_{YX}=$-0.484 for log unemployment 15-24 age group and log GDP per capita.\n\n\n\n$\\hat\\sigma_{YX}=$-0.503 for log unemployment 15-64 age group and log GDP per capita.\n\n\n\n$\\hat\\sigma_{YX}=$-0.472 for log unemployment 25-54 age group and log GDP per 
capita.\n\n\n\n$\\hat\\sigma_{YX}=$-0.156 for log unemployment 55-64 age group and log GDP per capita.\n\n\nFrom the above plots we see that the observations are widely dispersed, and there does not seem to be any clear connection between the two variables. However, the correlation coefficients show that there is some correlation between the two variables, and as expected the correlation is negative. Looking at the correlation coefficients, especially the oldest age group stands out, with a correlation of only -0.156. Given the plots of the development in the unemployment rates this is not so surprising, since they already gave an indication that the oldest age group reacted less to the 2008 financial crisis. \n\n## Conclusion \n\nIn this project we wanted to investigate the unemployment rate in six different OECD countries. Using two datasets, one with the unemployment rate and one with GDP, we have shown how the countries have developed over the period from 2007 Q1 to 2019 Q4. \n\nUsing descriptive statistics and plotting we have seen very different patterns in unemployment. However, when adding the information on the development in GDP, we have seen that these differences cannot be explained by differences in GDP alone, as the relationship does not seem to be as strong as one might have expected. 
\n", "meta": {"hexsha": "87f300eae9e8cd37fb9404b2659697ebcac549de", "size": 800376, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "dataproject/Dataproject_corrected.ipynb", "max_stars_repo_name": "AskerNC/projects-2021-aristochats", "max_stars_repo_head_hexsha": "cade4c02de648f4cd1220216598dc24b67bb8559", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "dataproject/Dataproject_corrected.ipynb", "max_issues_repo_name": "AskerNC/projects-2021-aristochats", "max_issues_repo_head_hexsha": "cade4c02de648f4cd1220216598dc24b67bb8559", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dataproject/Dataproject_corrected.ipynb", "max_forks_repo_name": "AskerNC/projects-2021-aristochats", "max_forks_repo_head_hexsha": "cade4c02de648f4cd1220216598dc24b67bb8559", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 190.2938659058, "max_line_length": 69596, "alphanum_fraction": 0.85104251, "converted": true, "num_tokens": 26787, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.48047867804790706, "lm_q2_score": 0.27512971787959795, "lm_q1q2_score": 0.13219396313848283}} {"text": "# Pycheat - Python Cheatsheet\n\nWhen not specified: Python3 \nLink in markdown: \\[blue_text](url_here) \n[Help for markdown](https://commonmark.org/help/tutorial/index.html) \n[Built-in magic Jupyter commands](https://ipython.readthedocs.io/en/stable/interactive/magics.html)\n\n\n```latex\n%%latex\n\\begin{equation}\nH\u2190 \u200b\u200b\u200b60 \u200b+\u200b \\frac{\u200b\u200b30(B\u2212R)\u200b\u200b}{Vmax\u2212Vmin} \u200b\u200b, if V\u200bmax\u200b\u200b = G\n\\end{equation}\n```\n\n\n\\begin{equation}\nH\u2190 \u200b\u200b\u200b60 \u200b+\u200b \\frac{\u200b\u200b30(B\u2212R)\u200b\u200b}{Vmax\u2212Vmin} \u200b\u200b, if V\u200bmax\u200b\u200b = G\n\\end{equation}\n\n\n\n```python\nTo run a bash command simple use the exclamation mark at the beginning of a line\n!echo test\n```\n\n## Naming convention\n[difference between module/class/package](https://softwareengineering.stackexchange.com/a/111882/195918)\n\n[PEP 0008](https://www.python.org/dev/peps/pep-0008/#package-and-module-names) tells that:\n\n- **modules (filenames)**: should have short, all-lowercase names, and they can contain underscores;\n- **packages (directories)**: should have short, all-lowercase names, preferably without underscores;\n- **classes**: should use the CapWords convention.\n\n \n\n## Misc\n\n\n```python\n# Computing time for a line\n%timeit [i for i in range(100000)]\n```\n\n 12.9 ms \u00b1 1.49 ms per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each)\n\n\n\n```python\n%%timeit # must be at the top of the cell => -r 1 to run only one time\n\n# Computing time for a cell\nfor i in range(1000):\n i\n```\n\n 36.3 \u00b5s \u00b1 0 ns per loop (mean \u00b1 std. dev. 
of 1 run, 10000 loops each)\n\n\n## Lists\n\n\n```python\nxs = [1,2,3,4,5]\nys = [6,7,8,9,10]\n```\n\n\n```python\nxs + ys # List concatenation\n```\n\n\n\n\n    [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\n\n\n\n## Dictionaries\n\n\n```python\nfrom collections import Counter\n# Counter\nc = Counter(['eggs', 'ham', 'eggs'])\nc.update(['eggs', 'cheese'])\nprint(c)\nprint(c.most_common(2))\n```\n\n Counter({'eggs': 3, 'ham': 1, 'cheese': 1})\n [('eggs', 3), ('ham', 1)]\n\n\n## String formatting\n\n[pyformat.info](https://pyformat.info/)\n\n\n```python\n'{1} {0}'.format('one', 'two')\n```\n\n\n\n\n    'two one'\n\n\n\n\n```python\n'{{ {} {} }}'.format('one', 2)\n```\n\n\n\n\n    '{ one 2 }'\n\n\n\n\n```python\nmultiline_string = \"a first line\" \\\n \"a second line\"\n```\n\n\n```python\n# To keep the zeroes at the end\nprint('{:.2f}'.format(round(2606.89579999999, 2)))\nprint('{:.2f}'.format(21))\n```\n\n 2606.90\n 21.00\n\n\n## Loops\n\n\n```python\nsome_list = [\"bananas\", \"apples\", \"mangos\"]\n```\n\n\n```python\nfor index, value in enumerate(some_list):\n print(value + \" is at index \" + str(index))\n```\n\n bananas is at index 0\n apples is at index 1\n mangos is at index 2\n\n\n\n```python\nsome_dict = {'three': 3, 'one': 1, 'two': 2}\n```\n\n\n```python\nfor key, value in some_dict.items(): # Python 2.7 : iteritems()\n print(key + \" is \" + str(value))\n```\n\n three is 3\n one is 1\n two is 2\n\n\n\n```python\n# Filter elements in a comprehension list\n[x for x in some_list if x != 'bananas']\n```\n\n\n\n\n    ['apples', 'mangos']\n\n\n\n## Files\n\n\n```python\nimport os\n# Check file existence\nos.path.exists('./file_or_link_or_dir_or_sym')\nos.path.isdir('./folder/')\nos.path.isfile('./file')\n```\n\n\n\n\n    False\n\n\n\n\n```python\nos.listdir('.')\n#os.remove(\"dir_or_file_or_etc\")\n```\n\n\n\n\n    ['python_cheatsheet.ipynb', '__main__.log', '.ipynb_checkpoints']\n\n\n\n\n```python\nimport shutil\nimport os\ndef copytree(src, dst, symlinks=False, ignore=None):\n for item in 
os.listdir(src):\n s = os.path.join(src, item)\n d = os.path.join(dst, item)\n if os.path.isdir(s):\n shutil.copytree(s, d, symlinks, ignore)\n else:\n shutil.copy2(s, d)\n\n```\n\n\n```python\n# source: https://gist.github.com/seanh/93666\nimport string\n\ndef format_filename(s):\n \"\"\"Take a string and return a valid filename constructed from the string.\nUses a whitelist approach: any characters not present in valid_chars are\nremoved. Also spaces are replaced with underscores.\n \nNote: this method may produce invalid filenames such as ``, `.` or `..`\nWhen I use this method I prepend a date string like '2009_01_15_19_46_32_'\nand append a file extension like '.txt', so I avoid the potential of using\nan invalid filename.\n \n\"\"\"\n valid_chars = \"-_.() %s%s\" % (string.ascii_letters, string.digits)\n filename = ''.join(c for c in s if c in valid_chars)\n filename = filename.replace(' ','_') # I don't like spaces in filenames.\n return filename\n```\n\n## Functions\n\n\n```python\ndef a_function():\n print(\"myfunction\")\n \nif __name__ == '__main__': # If the module (file) is imported, this part of the code will not be executed\n a_function()\n```\n\n myfunction\n\n\n\n```python\ndef multi_parameters_func(*params):\n print(\"params : {}\".format(', '.join(params)))\nmulti_parameters_func(\"first_param\", \"second_param\")\n# other example: see benchmark function below ;)\n```\n\n params : first_param, second_param\n\n\n## Functional\n\n[reduce_fold_left_python_haskell](https://eli.thegreenplace.net/2017/right-and-left-folds-primitive-recursion-patterns-in-python-and-haskell/)\n\n\n```python\nxs = [1,2,3,4,5]\nys = [6,7,8,9,10]\n```\n\n\n```python\n list(map(lambda a: a*10, xs))\n```\n\n\n\n\n    [10, 20, 30, 40, 50]\n\n\n\n\n```python\n# map with 2 arguments\nlist(map(lambda x,y: x+y, xs,ys))\n```\n\n\n\n\n    [7, 9, 11, 13, 15]\n\n\n\n\n```python\nlist(filter(lambda x: x%2==0,xs))\n```\n\n\n\n\n    [2, 4]\n\n\n\n\n```python\nimport functools\nfunctools.reduce(lambda acc, x: acc+x, xs, 
0) # ((((1+2)+3)+4)+5) == Fold left in Haskell \n```\n\n\n\n\n 15\n\n\n\n\n```python\n# Partial functions (e.g. useful to use with map when needing to pass a param)\nimport functools\ndef multiply(x,y):\n return x * y\n\ndbl = functools.partial(multiply,2) # create a new function that multiplies by 2\nprint(dbl(4))\n```\n\n 8\n\n\n## Logs\n\n\n```python\nimport logging\n\ndef create_logger(loglevel):\n numeric_level = getattr(logging, loglevel.upper(), logging.INFO)\n if not isinstance(numeric_level, int):\n raise ValueError('Invalid log level: %s' % loglevel)\n\n #import sys\n #import os\n #module_name = str(os.path.basename(sys.modules['__main__'].__file__)).split('.')[0]\n module_name = __name__\n \n logger = logging.getLogger(module_name)\n logger.setLevel(numeric_level)\n # create file handler which logs even debug messages\n fh = logging.FileHandler(module_name + '.log')\n fh.setLevel(logging.DEBUG)\n # create console handler with a higher log level\n ch = logging.StreamHandler()\n ch.setLevel(logging.INFO)\n # create formatter and add it to the handlers\n formatter = logging.Formatter('%(asctime)s\\t%(name)s\\t%(levelname)s\\t\\t%(message)s')\n fh.setFormatter(formatter)\n ch.setFormatter(formatter)\n # add the handlers to the logger\n logger.addHandler(fh)\n logger.addHandler(ch)\n logger.info(\"Logger created!\")\n return logger\n\nlogger = create_logger(\"info\")\nlogger.info(\"This is a log\")\n```\n\n 2018-07-23 15:30:33,168\t__main__\tINFO\t\tLogger created!\n 2018-07-23 15:30:33,172\t__main__\tINFO\t\tThis is a log\n\n\n## Various tools\n\n\n```python\ndef benchmark(func, *params):\n import datetime\n import time\n start_time = time.time()\n return_value = func(*params) if params else func()\n total_time = datetime.timedelta(seconds=time.time() - start_time)\n print(\"Function \" + func.__name__ + \" - execution time : \" + str(total_time))#.strftime('%H:%M:%S'))\n return return_value\n\ndef test():\n total = 0\n for i in range(0, 10000):\n total 
+=i\n return total\n\ndef sum(param1, param2):\n return param1 + param2\n\nresult = benchmark(sum, 1, 2)\nprint(\"Result : \" + str(result))\n\nresult = benchmark(test)\nprint(\"Result : \" + str(result))\n```\n\n Function sum - execution time : 0:00:00.000002\n Result : 3\n Function test - execution time : 0:00:00.000820\n Result : 49995000\n\n\n\n```python\nimport math\ndef entropy(string):\n \"Calculates the Shannon entropy of a string\"\n\n # get probability of chars in string\n prob = [float(string.count(c)) / len(string) for c in dict.fromkeys(list(string))]\n\n # calculate the entropy\n entropy = - sum([p * math.log(p) / math.log(2.0) for p in prob])\n\n return entropy\n\nprint(entropy(\"www.google.com\"))\n```\n\n 2.8423709931771093\n\n\n\n```python\ndef is_ipv4(ipv4_string):\n l = ipv4_string.split('.')\n if len(l) != 4:\n return False\n try:\n ip = list(map(int, l))\n except ValueError:\n return False\n if len(list(filter(lambda x: 0 <= x <= 255, ip))) == 4:\n return True\n return False\n\n# True\nprint(is_ipv4(\"192.168.1.1\"))\nprint(is_ipv4(\"0.0.0.0\"))\nprint(is_ipv4(\"255.255.255.255\"))\n\n# False\nprint(is_ipv4(\"255.255.255\"))\nprint(is_ipv4(\"255.255.255.255.3\"))\nprint(is_ipv4(\"255.255.255.erzr\"))\n\n```\n\n True\n True\n True\n False\n False\n False\n\n\n# Pandas dataframes\n\n[Pandas tips and tricks](https://towardsdatascience.com/pandas-tips-and-tricks-33bcc8a40bb9)\n\n\n```python\nimport pandas as pd\n\ndf = pd.DataFrame(data, columns=features_name)\n \nimport csv\ndf.to_csv(c.model_folder + filename, sep=',', encoding='utf-8', index=False, quoting=csv.QUOTE_NONNUMERIC)\n\n#df = pd.read_csv(c.model_folder + \"features.csv\")\n\n# Print a summary of 5 first rows\ndf.head(5)\n\n# check the data frame info\ndf.info()\n\n# Get unique values of a column\ndf['continent'].unique().tolist()\n\n# To add a new column and set all rows to a specific value\ndf['Name'] = 'abc'\n\n# Using DataFrame.drop\ndf.drop(df.columns[[1, 2]], axis=1, 
inplace=True)

# drop by name
df1 = df1.drop(['B', 'C'], axis=1)

# Select the ones you want
df1 = df[['a','d']]

column_names = df.columns
data = df.values

# Where condition
df.loc[df['label'] == 'NORMAL']

# Select column(s)
df[f_name]

# Count the number of distinct values
df[f_name].value_counts()

# Put a column at the end
df_label = df.pop('label')  # remove column 'label' and store it in df_label
df['label'] = df_label      # add label back as the last column

# To modify a specific cell
df.loc[df['key'] == 'mykey', "column_name"] = 1


# To iterate over rows
for index, row in df.iterrows():  # row is a copy of the row from the dataframe
    print(row['c1'], row['c2'])

    # To modify the row: use the index to access the row in the dataframe
    df.loc[index, 'wgs1984_latitude'] = dict_temp['lat']

# To sum each row (across columns)
df = df.sum(axis=1)

# To sum a single column
normal_counts = df["col_name"].sum()
```

## OS


```python
import subprocess
# Run a command
subprocess.Popen(["bro", "-C", "-r", "../"+filename], cwd=working_dir).wait()
```

## Web requests (requests & BeautifulSoup)
- [requests doc](http://docs.python-requests.org/en/master/)
- [BeautifulSoup doc](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)


```python
import requests
from bs4 import BeautifulSoup
```


```python
url = "http://www.test.com"
cookies = {
    'Cookie': 'some_value',
}
r = requests.get(url, cookies=cookies)

#print(r.text)

soup = BeautifulSoup(r.content, "html5lib")
```


```python
# Scan the URLs present on the page
for link in soup.find_all('a'):
    href = link.get('href')
    if str(href).startswith("http"):  # to exclude refs that are links to paragraphs on the page (like #maincontent)
        print(href)
```


```python
def is_downloadable(url):
    """
    Does the url contain a downloadable resource
    """
    h = requests.head(url, allow_redirects=True, cookies=cookies)
    header = h.headers
    content_type = 
header.get('content-type')
    if 'text' in content_type.lower():
        return False
    if 'html' in content_type.lower():
        return False
    return True
```


```python
# Decode URL
from urllib.parse import unquote
url = unquote(url)
```

*Godfrey Beddard 'Applying Maths in the Chemical & Biomolecular Sciences an example-based approach' Chapter 9*


```python
# import all python add-ons etc that will be needed later on
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sympy import *
from scipy.integrate import quad
init_printing()                         # allows printing of SymPy results in typeset maths format
plt.rcParams.update({'font.size': 14})  # set font size for plots
```

# 5 Fourier Transforms

## 5.1 Motivation and concept

Fourier transforms are of fundamental importance in the analysis of signals from many types of instruments; these range from infra-red spectroscopy to x-ray crystallography, to MRI and CT scan imaging, and to seismology. Even in everyday life, Fourier transforms are important because they are used to produce the images observed in a digital television and in most other forms of digital information processing. Every scientist is familiar with the interference pattern produced by light passing through a pair of slits; this is the spatial Fourier transform of the two slits.

Usually, the data, which might be a string of values taken at many sequential times, is transformed to allow the frequencies present to be displayed and analysed. More fundamentally, the instruments used to measure infrared and NMR spectra produce data that is itself the Fourier transform of the spectrum, and similarly, in X-ray crystallography, the image of spots produced on the detector is the Fourier transform of the gaps between the planes of atoms in a crystal.

Although we concentrate on Fourier transforms, they are only one in a class of _integral_ transforms. The Abel transform is an integral transform that is used to recover the three-dimensional information from its two-dimensional image. 
It is used in such diverse areas as astronomy and the study of the photo-dissociation pathways of molecules. In photo-dissociation experiments, the fragments (atoms, ions, electrons) are spatially dispersed depending on where the breaking bond is pointing at the instant of dissociation. Their image is captured on a camera as 2D information and, by transforming this, the geometry of the dissociation process can be determined (Whittaker 2007). Other transforms are the Hilbert, used in signal processing, and the Laplace, used to solve differential equations.

Folklore has it that Fourier transforms are formidably difficult and abstruse things. We know that they form the basis of the FTIR and NMR instruments, but secretly hope that nobody asks us how or why. In fact, Fourier transforms are quite straightforward but must be treated with respect. We are used to seeing the NMR or IR spectrum as a set of lines at different fixed frequencies and feel comfortable with this, but the raw data produced is a wiggly signal in which the information needed is almost totally obscured. This makes the process of unravelling it by Fourier transform seem mysterious: 'I cannot understand the data, so where does the spectrum come from?' Contrariwise, we are used to interpreting speech and music, which are oscillating signals in time, and would not easily understand either of them if Fourier transformed and viewed or heard as a continuously changing spectrum of frequencies.

We shall come back to this shortly but, briefly, a Fourier transform is an integral and therefore it can be evaluated by any of the methods used to solve integrals. The Fourier transform integral is one of several types of integral transforms that have the general form

$$\displaystyle g(k)=\int f(x)G(k,x)dx \tag{23}$$

The 'transformed' function is $g$ and the function being transformed is $f$. 
The algebraic expression $G$ is called the _kernel_ and this changes depending on the type of transform, Fourier, Abel, etc. The exact form of the kernel is also described later. However, whatever form the transform takes, it always occurs between pairs of conjugate variables, which are $x$ and $k$ in equation (23). Often these conjugate pairs are time (seconds) and frequency (1/seconds), or distance and 1/distance. The reciprocal relationship between variables is why the transform converts time into frequency, changing, for example, an oscillating time profile into a spectrum.

A second property is that these integral transforms are reversible, also called _invertible_, which means that $f$ can be changed into $g$ and $g$ can be changed into $f$, depending on which one we start with.

Solving the Fourier transform integral both algebraically and numerically will be described starting in Section 5.5, but first the role of the Fourier transform in FTIR and NMR experiments, and in X-ray crystallography, is outlined.

## 5.2 The FTIR instrument

The Fourier transform infra-red (FTIR) spectrometer directly generates the Fourier transform of the spectrum by mechanically moving one mirror of a Michelson interferometer and measuring the signal generated by the interference of the two beams on the detector. Fig. 11 shows a (simulated) example of the raw data from the instrument and the IR spectrum produced after this is transformed. After transformation, the displacement from the centre of the interference signal is changed into reciprocal distance or wavenumbers, cm$^{-1}$, which is proportional to the IR transition frequency.

The FTIR spectrometer is an interferometer; therefore, the waves that have travelled down each of its arms are combined on the detector, and this measures the intensity or the square of the wave's amplitude, Fig. 12. 
Constructive interference occurs when the path length in both arms differ by zero or a whole number of wavelengths; destructive interference occurs when they are exactly out of phase and the difference in length is an odd multiple of half a wavelength. If only one wavelength is present, changing the path-length $\\Delta$ would make the signal on the detector change sinusoidally. The 'coherent' broadband infrared 'light' from the source contains many wavelengths, and at a given path-length, some constructively and some destructively interfere, but the signal is greatest when both paths are the same. The relative path-length of the two arms of the interferometer can be changed by mechanically moving one mirror; the full interference pattern is mapped out as a function of path-length and this pattern decreases in an oscillatory manner to some constant, but not zero value, as the difference in path length increases. This is shown in the left of Fig. 11. Because changing either path's length has the same effect, the signal is symmetrical about zero path difference.\n\nWhen the sample is placed in the beam, it absorbs only some frequencies depending on the particular nature of the sample, which results in a change in the signal size on the detector. When this interference signal is subtracted from that obtained without the sample and is transformed, the infrared absorption spectrum is produced. The distance the mirrors are moved is accurately determined by using a visible laser that follows the same path in the interferometer, but does not pass through the sample. 
This laser produces an interference pattern on a second (photodiode) detector; the number of fringes passed as the arm of the interferometer moves is counted, and this is used to determine how far one mirror has moved relative to the other.\n\nThe FTIR spectrometer has the multiplex (Fellgett) advantage over a wavelength scanning instrument, because all wavelengths are simultaneously measured on the detector, which also receives a large and virtually noise free signal. Both of these factors improve the signal to noise ratio. In comparison, in a scanning instrument, the radiation is detected through a narrow slit and the wavelength is changed by rotating a diffraction grating. In such an instrument the narrow slit, necessary for high resolution, is responsible for a poor signal to noise ratio because only a little light can reach the detector at any given wavelength. Scanning the wavelength also makes the experiment lengthy.\n\n\nFigure 11. Left: A simulated Fourier transform as might be produced directly by an FTIR spectrometer. Right: The IR spectrum after Fourier transforming and converting into transmittance.\n\n____\n\n\n\nFigure 12. Schematic of an FTIR spectrometer as an interferometer. The laser is used to measure the relative distance of the two arms and does not pass through the sample.\n\n_____\n\n\n## 5.3 NMR\n\nPossibly the most important analytical technique for the synthetic chemist is NMR spectroscopy. In an NMR experiment, the nuclear magnetization, which is the vector sum of the individual nuclear spins, is tipped from its equilibrium direction, which is along the direction of the huge permanent magnetic field $B$, by a relatively weak RF pulse of short duration. By applying this short pulse along the $x$- or $y$-axis, and therefore at $90^\\text{o}$ to the permanent field, the magnetization is tipped away from the z-direction and experiences a torque and starts to precess. 
After the RF pulse has ended, the nuclear magnetization, and hence the individual nuclear spins, undergoes a free induction decay (FID) by continuing to precess about the permanent magnetic field $B$. The rotating magnetization, Fig. 13, is measured by the detecting coil in the x-y plane as an oscillating and decaying signal, which, when Fourier transformed, produces the NMR spectrum.

In this experiment, the oscillating and decaying signal is converted into reciprocal time or frequency, which is ultimately displayed as a frequency shift $\delta$ in ppm from a standard compound, such as tetramethylsilane. In a classical sense, it is possible to imagine the rotating nuclear magnetization repeatedly passing in front of the detection coil, inducing a current to flow in it as it does so, which causes the output signal to rise and fall. Many such magnetizations from the many groups of nuclear spins in different chemical environments produce many signals, resulting in a complicated oscillating FID. Figure 14 shows a synthesized NMR free induction decay of two spins with frequencies of $10$ and $11$ MHz, and the corresponding real and imaginary parts of the transform, which we will suppose is the NMR spectrum of two lines separated by $1$ MHz. The RF pulse used to tip the magnetization contains many frequencies, as may be seen from the Fourier series of a square pulse, and simultaneously excites the nuclear spins in different magnetic environments in the molecule. The analysis of the spectrum provides information about the structure of the molecule, but not bond distances or angles unless sophisticated multiple pulse methods are used (Sanders & Hunter 1987; Levitt 2001).

In an NMR experiment, the data is obtained as an FID rather than directly as a spectrum because this increases the speed of data acquisition and, more importantly, increases the signal to noise ratio over an instrument where the magnetic field is continuously changing in strength. 
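
The synthesized FID of Figure 14 can be reproduced in a few lines. Only the two frequencies, 10 and 11 MHz, come from the text; the sampling rate, decay constant, and equal amplitudes below are assumptions made for illustration, not the values used in the book's figure.

```python
# Build an FID from two decaying cosines at 10 and 11 MHz, then Fourier
# transform it to recover the two-line 'spectrum'.
import numpy as np

f1, f2 = 10e6, 11e6        # the two transition frequencies, Hz (from the text)
T2 = 2e-6                  # decay time constant, s (assumed)
fs = 100e6                 # sampling rate, Hz (assumed)
n = 4096
t = np.arange(n) / fs
fid = (np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)) * np.exp(-t/T2)

spectrum = np.fft.rfft(fid)            # real part ~ absorption, imaginary part ~ dispersion
freqs = np.fft.rfftfreq(n, d=1/fs)

# locate the two absorption peaks, one either side of 10.5 MHz
mag = np.abs(spectrum)
lower = freqs < 10.5e6
peak1 = freqs[lower][np.argmax(mag[lower])]
peak2 = freqs[~lower][np.argmax(mag[~lower])]
print(peak1/1e6, peak2/1e6)            # two lines, separated by about 1 MHz
```

Shortening the decay constant broadens both lines, which is the reciprocal $\Delta t\Delta \nu$ behaviour discussed later in this chapter.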
In the FID, all frequencies are measured simultaneously, as in the FTIR instrument, giving the measurement a multiplex or Fellgett advantage. There is another reason for measuring the FID: the instrument now operates in real time, which allows multiple RF pulses to be applied to the sample, and these allow the magnetization to be manipulated via multi-quantum processes.


Figure 13. The sequence of the magnetization and the FID produced during a basic NMR experiment.

____


Figure 14. A simulated FID of two NMR transitions showing its real and 'imaginary' parts and the phase. The real part is the absorption spectrum or the normal NMR spectrum, the imaginary part the dispersion. The vertical dashed lines show the frequencies used in the calculation of the FID.

## 5.4 X-ray diffraction

In FTIR and NMR, a conscious choice is made to perform a transform type of experiment. This is not so in X-ray diffraction, for the very nature of the experiment removes any choice. In X-ray crystallography, the three-dimensional diffraction pattern produced by the X-rays that scatter off the electrons in the many different planes of atoms in the crystal is projected onto the two-dimensional detector surface and is measured as a pattern of bright spots. This image is Fourier transformed and produces the distances between lattice planes from which the molecule's structure can be determined.

Scattering of the X-rays occurs because they interact with electrons and cause them to re-radiate, which they do in all directions. Only when waves originate from planes of atoms that satisfy the Bragg law, $n\lambda = 2d\sin(\theta)$, is there constructive interference, and an X-ray is detected on the detector, usually a CCD. Everywhere else, there is destructive interference and no waves exist. 
The CCD detector is similar in nature to the one in a digital camera or mobile phone, and the brightness of a spot is proportional to the amplitude squared (intensity) of the X-ray waves arriving at that point.

The atoms in a crystal form repeating unit cells and each set of planes of atoms will, in principle, produce one spot on the detector, at a position proportional to the reciprocal of the lattice spacing between planes. Sometimes a crystal's symmetry may cause extra interference between X-rays from different planes, which produces systematic absences in the X-ray image, and these can be used to distinguish one particular type of crystal lattice from another.

It is important to note that it is not the positions of the spots on the detector that ultimately produce the molecular structure (these positions are determined by the reciprocal planes) but the _intensity_ of the spots.


Figure 14a. The phase of scattered x-rays is given by $2\pi$ times the ratio of the perpendicular distance from the origin $R_{hkl}$ to an atom at $(x,y,z)$ to the separation of the hkl planes $d_{hkl}$, i.e. $\phi=2\pi R_{hkl}/d_{hkl}$. The hkl plane is perpendicular to the plane of the diagram.

_____

The two-dimensional image on the detector has to be Fourier transformed into a representation of the crystal structure but, because only the absolute value of the transform, and not its complex amplitude, is recorded on the detector, phase information is lost, and this makes the interpretation of the image very much more difficult than it would otherwise be. This is the origin of the 'phase' problem, and ingenious methods have had to be devised to overcome it (McKie & McKie 1992; Giacovazzo et al. 1992). 
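
The loss of phase information can be illustrated with a small hypothetical one-dimensional example (arbitrary numbers, not crystallographic code): two different sets of phases attached to the same amplitudes give exactly the same intensities on the detector, yet invert to quite different 'structures'.

```python
import numpy as np

rng = np.random.default_rng(1)
amplitudes = rng.random(8) + 0.1            # common amplitudes |F|
phases_a = rng.uniform(0, 2*np.pi, 8)       # one possible set of phases
phases_b = rng.uniform(0, 2*np.pi, 8)       # a different set of phases

F_a = amplitudes * np.exp(1j*phases_a)
F_b = amplitudes * np.exp(1j*phases_b)

I_a, I_b = np.abs(F_a)**2, np.abs(F_b)**2   # the detector records only intensities
print(np.allclose(I_a, I_b))                # True: the two measurements are identical

rho_a = np.fft.ifft(F_a).real               # ...but the inverse transforms,
rho_b = np.fft.ifft(F_b).real               # the 'structures', are different
print(np.allclose(rho_a, rho_b))            # False
```

This is why the intensities alone cannot be inverted directly to a structure without extra phase information.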
The summation of several waves particular to this problem is described in chapter 1.

Fourier transforms are widely used in other areas, such as image processing, for example from star fields, MRI images, X-ray CT scans, information processing, and in solving many types of differential equations such as those describing molecular diffusion or heat flow. These technologies show that it is essential to be familiar with Fourier transforms whether you are a chemist, physicist, biologist, or clinician.

## 5.5 Linear transforms

The next few sections describe the Fourier transform in detail, but first some jargon has to be explained. Formally, a Fourier transform is defined as a linear integral transform of one function or set of data into another; see equation (23). The transform is reversible, or invertible, enabling the original function or data to be retrieved after an inverse transform. These last two sentences are in 'math-speak', so what do they really mean?

Integral simply means that the transform involves an integration, as shown in equation (23). The word 'linear', in 'linear transform', means that the transform $T$ has the property, when operating on two regular functions $f_1$ and $f_2$, that $T(f_1 + f_2) = T(f_1) + T(f_2)$. This means that the transform of the sum of $f_1$ and $f_2$ is the same as transforming $f_1$, then transforming $f_2$, and adding the results. In addition, the linear transform has the property $T(cf_1) = cT(f_1)$ if $c$ is a constant.

Reversible, or invertible, means that a reverse transform exists that reforms the initial function from the transform; formally this can be written as $f = T^{-1}[T[f]]$ if $T^{-1}$ is the inverse transform. Put another way, if a function $f$ is transformed to form a new function $g$, as $T[f] = g$, then the inverse transform takes $g$ and reforms $f$ as $f = T^{-1}[g]$. This might seem to be rather abstract, but is, in fact, very common. 
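
Both properties are easy to check numerically; in this sketch numpy's discrete Fourier transform stands in for the transform $T$.

```python
import numpy as np

rng = np.random.default_rng(0)
f1, f2 = rng.standard_normal(256), rng.standard_normal(256)
c = 3.7

T, Tinv = np.fft.fft, np.fft.ifft   # the transform and its inverse

print(np.allclose(T(f1 + f2), T(f1) + T(f2)))   # linearity: True
print(np.allclose(T(c*f1), c*T(f1)))            # scaling by a constant: True
print(np.allclose(Tinv(T(f1)), f1))             # invertibility: True
```

The same checks would pass for any linear invertible transform; only the kernel changes from one transform to another.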


A straightforward example is the pair of log and exponential functions, as they are convertible into one another as an operator pair: if $T$ is the exponential operator $e^{( )}$, and $x^2$ is the 'function', then $\displaystyle T[x^2] = e^{x^2}$. The inverse operator $T^{-1}$ reproduces the original function: $T^{-1}[T[x^2]] = x^2$ or, by substitution, it is true that $\displaystyle T^{-1}[e^{x^2}] = x^2$ if $T^{-1}$ is the logarithmic operator $\ln( )$ because $\displaystyle \ln(e^{x^2}) = x^2$. The Fourier transform is just a more complicated operator than $\ln( )$ or $e^{( )}$.

The Fourier transform can be thought of as changing or 'mapping' the initial function $f$ to another function $g$, but in a systematic way. The new function may not look like the original, but however one might modify the transformed function $g$, when it is transformed back to $f$, it is as if $f$ itself had been modified. Although it is common to use the word 'transform', the word 'operator' could equally well be used, although this is not usual in this context. Conversely, a matrix acting on another matrix or a vector performs a linear transform; however, a matrix is usually called a linear operator.

## 5.6 The Transform

The Fourier transform is used either because a problem is most easily solved in 'transform space', or because, owing to the way an experiment is performed, the data is produced in transform space and has then to be transformed back into 'real space'. This 'real space' is usually either time or distance; the transform space is then frequency (as inverse time) or inverse distance. Time-to-frequency and distance-to-inverse-distance are both _conjugate pairs_ of variables between which the Fourier transform operates. In practice, there are two 'flavours' of Fourier transforms. 
The simpler is the mathematical transformation of a function, such as a sine wave or exponential decay, the other is, effectively, the same process, but performed on real experimental data presented as a list of numbers. The latter is called the Discrete Fourier Transform (DFT). Because the transform is in reciprocal space, values near to zero on its abscissa correspond either to large values of frequency or reciprocal distance depending on whether the conjugate variable is time or distance respectively. \n\nThe Fourier transform is always between pairs of conjugate variables, time $\\leftrightharpoons$ frequency, so that $\\Delta t\\Delta v = 1$ or distance $\\leftrightharpoons$ 1/distance. As the transform changes one variable into its conjugate, it is possible in simple cases to visualize what the spectrum will look like without actually doing the calculation. A sine wave that has a single frequency has a Fourier transform that is a single line at the frequency of the wave. If there are two waves of different frequencies superimposed on one another, two lines will appear after transforming. \n\nSo far, so good, but the length of the waves is not specified. Are they of finite length and so contain only a finite number of oscillations, or are they of infinite extent? If a sine wave is infinitely long, then only one line is observed in the transform, and will be of infinitesimal width and occur at the frequency of the sine wave. This line is a delta function. If the waves are turned on at some point and off again at another, then there are discontinuities at these points, and some additional frequencies must be associated with turning the signal on and off, which will appear in the transformed spectrum as _new_ frequencies. Think of how a waveform is made up of a sum of sine waves of different frequency, see Fig. 1. 
If a waveform is to be zero in some regions and not in others, then many waves have to be present to cancel one another out as necessary, and these are the new frequencies needed. A broadening of the lines also occurs because $\Delta t\Delta \nu = 1$: if $\Delta t$, the length of the whole sine wave, is finite, then $\Delta \nu$ has a width associated with it. This is observed in FTIR and NMR spectra, but the software provided with many instruments can be set to remove as much of this broadening as possible by apodizing the lines (Sanders & Hunter 1987). This means multiplying the function by a decreasing function such as $\displaystyle e^{-x}$ before transforming.

The effect of Fourier transforming a short and a long rectangular pulse is shown below. The right-hand plots show the real part of the transform, which is a _sinc_ (or Cardinal Sine) function, $\mathrm{sinc}(ax) \equiv \sin(ax)/ax$. The result of transforming is mathematically the same for both long and short pulses, of course, but in a fixed frequency range the effect appears to be different. The short pulse has a wide central band centred at zero and widely spaced side bands, which decay rapidly at frequencies away from zero and extend to infinity. The longer pulse has a narrower central band, also centred at zero, and more closely spaced side bands than in the short pulse case; the results conform to $\Delta t\Delta \nu = 1$, i.e. short $\Delta t$ with wide $\Delta \nu$ and _vice versa_.

If a pulse is turned on and off, as shown in Fig. 15, the transform must have frequencies associated with these changes. Again, think of the pulse being made of many terms in a Fourier series. Fig. 1 shows a few of the terms, but the more of these there are, each with a different frequency, the better a sharp edge or pulse is defined. The oscillations in the transform of Fig. 15 arise from the many terms needed to describe the rectangular pulse. 
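
The rectangular-pulse transform can be verified numerically with `quad`, which the notebook imports. The real (cosine) part of the transform of a unit-height pulse of total width $a$ is $a\,\mathrm{sinc}(\pi a\nu)$, with zeros at $\nu = \pm n/a$; the value $a=2$ matches the top plots of Fig. 15. (Note that numpy's `sinc(t)` is $\sin(\pi t)/(\pi t)$, a different convention from the $\sin(ax)/ax$ used in the text.)

```python
import numpy as np
from scipy.integrate import quad

a = 2.0   # total pulse width, as in the top plots of Fig. 15

def g(nu):
    # real part of the transform integral of the unit-height pulse over (-a/2, a/2)
    val, _ = quad(lambda x: np.cos(2*np.pi*nu*x), -a/2, a/2)
    return val

for nu in (0.0, 0.25, 0.5, 1.0):
    print(nu, g(nu), a*np.sinc(a*nu))   # numerical and analytic values agree
```

The zero crossings at $\nu = 0.5$ and $\nu = 1.0$ are exactly the $\pm n/a$ spacing quoted in the caption of Fig. 15.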
In fact, to reproduce the original pulse exactly by reverse transforming, an infinite frequency range is needed. If the transform of Fig. 15, _exactly_ as shown on the right-hand side, were reverse transformed, the rectangular pulse shown on the left of the figure would not be produced, because on the plot the transform has a limited frequency range.

The reciprocal nature of the function and its transform is also clear in these plots. The wider the function, the narrower the transform, and vice versa; this leads to an 'uncertainty principle' in which it is not possible to measure, at the same time, both the function and its transform with unlimited precision. This is described in detail later on. In quantum mechanics this leads to the Heisenberg Uncertainty Principle.


Figure 15. Example of the Fourier transform of a short and a long rectangular pulse, each centred about zero and of total width $a$. Only the real part of the transform is shown, and is the sinc function, $\sin(ax)/ax$. The transform extends to $\pm \infty$; $a=2$ in the top plots and $4$ in the lower ones. The transform crosses zero at equally spaced points which are $\pm n/a$ where $n=1,2\cdots$.
_____

What is the transform of a cosine wave of finite length? The result is shown in Fig. 16 and is somewhat similar to that of the square pulse, except that the transform frequency cannot be centred at zero because the cosine has a finite frequency. The main peak is almost at the cosine's frequency, and the many other sidebands are needed to account for the fact that the wave is suddenly turned off. Now suppose that the cosine is damped by an exponential function and smoothly decreases in amplitude; then these extra frequencies disappear because at the end of the cosine wave there is no discontinuity; the exponential makes the cosine gently approach zero. The result is a widening of the feature at the frequency of the cosine wave, Fig. 17. 
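
The truncated and apodised cosines of Figs 16 and 17 can be compared numerically. The frequency ($1/2$) and length ($3.5$ cycles) are taken from the figure captions; the off-resonance frequency probed below is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import quad

f0, tmax = 0.5, 7.0   # cosine of frequency 1/2 lasting 3.5 cycles (from the captions)

def transform(nu, apodise=False):
    # real (cosine) part of the transform of the truncated, optionally apodised, wave
    def wave(t):
        w = np.cos(2*np.pi*f0*t)
        if apodise:
            w = w * np.exp(-t/2)
        return w * np.cos(2*np.pi*nu*t)
    val, _ = quad(wave, 0.0, tmax, limit=200)
    return val

side_raw = abs(transform(1.3))                # sideband of the abruptly truncated cosine
side_apo = abs(transform(1.3, apodise=True))  # same sideband after apodising with exp(-t/2)
print(side_raw, side_apo)                     # the apodised sideband is much smaller
```

The peak near $\nu = 1/2$ survives apodisation, only broadened, while the truncation sidebands are strongly suppressed.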
The effect of the exponential decay is to _apodise_ the transform.


Figure 16. Left: A truncated cosine wave of frequency 1/2, starting at zero and of length 3.5 cycles. Right: The real part of its Fourier transform. The value of the wave's frequency is marked with a vertical line.


Figure 17. The same cosine wave but now apodised by multiplying by $\displaystyle e^{-t/2}$, which makes the cosine diminish at long times. In the transform (right), one peak is found at the frequency of the wave (small vertical line). All the frequencies associated with suddenly ending the cosine are effectively removed.

_____

## 5.7 Fourier series and transforms

The connection between the Fourier series and the Fourier transform is important, and it should not be ignored. To produce the Fourier series such as that which describes a rectangular pulse, infinitely many terms in the Fourier series will be needed, and of ever increasing frequency. The Fourier transform allows us to see these frequencies by transforming to frequency space, so that each frequency in the Fourier series appears as a feature.

In an NMR experiment, a square pulse of RF radiation is used to excite the nuclear spin states in the sample and, as has been seen, the Fourier transform of such a pulse illustrates that it has many frequencies contained within it. In an experiment, the pulse is made of sufficient duration to contain all those frequencies needed to excite the nuclear spins. Of course, these frequencies are not made by the transform, but are there all the time, because to form the pulse in the first place many different sine or cosine waves each of different frequency are added together in the electronic circuitry.

To illustrate this further, consider a laser pulse with the duration of a few femtoseconds. Such pulses are made by the process of mode-locking. 
For a laser to work, the light waves in the cavity must fit exactly into its length no matter what the colour of the light, and a node must occur at each of the mirrors; the restriction is that $n$ half wavelengths must equal the cavity length, $n\lambda/2 = L$. If these waves, which have different frequencies for each $n$, can be forced to be in phase with one another, a pulse results; mode-locking is the process by which this is achieved. Making the phase the same means ensuring that each of the waves has a maximum in the same place, no matter what its frequency is. A pulse results because waves of different frequency must eventually fall out of step with one another away from zero or $\pm n\pi$, where they are in phase. Figure 18 shows that a pulse can only result from the addition of many different frequencies if they are in phase. The pulse is normalized to a maximum of $\pm 1$ in the figure and shows the amplitude; a photodiode or CCD detector measures the intensity, which is the square of this signal and is always positive. In a mode-locked laser, $\approx 10^6$ waves may be added together rather than the few shown; consequently, the laser pulse is far better defined.


Figure 18. Left: Eleven cosine waves and their sum show that pulses can only be made by adding waves of different frequency together, and only if they have the same phase. Right: One possible sum when the waves are added with random phases. The waves are $\cos(nx/2)$ where $n$ is an odd integer. The effect is more pronounced if more waves are used; the pulse becomes shorter and the random noise (right) becomes smaller in amplitude. The lower two plots show the square of the signals in the upper ones, plotted on the same scale. The square is important because if the waves correspond to the photon's electric field, the intensity measured is the square of this. 
The pulses and random noise are both clear.

____

To realize mode-locking, a laser must have a broad emission spectrum, and nowadays titanium sapphire is often used as the gain medium to produce femtosecond duration pulses; dye lasers are sometimes still used to produce picosecond pulses. The Ti$^{3+}$ ions occupy many different sites in the sapphire ($\mathrm{Al_2O_3}$) crystal lattice and therefore have a broad emission spectrum, which is in the far-red part of the visible spectrum and centred around $850$ nm. The molecules or ions used to produce the fluorescence/luminescence that gives rise to lasing have a certain wavelength range caused by the nature of their potential energy surfaces and by the inhomogeneity of the host material (a glass or liquid, for example), which shifts energy levels up and down. The coating on the mirrors, and perhaps added optical elements such as gratings, interference or birefringent (Lyot) filters, restrict the wavelengths over which the laser can operate, and this is done to enable the wavelength to be changed. However, if a short pulse is to be produced, no such filters are wanted; quite the opposite, as little restriction as possible on the wavelength range is desirable, because the product $\Delta \nu\Delta t$ has a constant value. This means that a wide frequency (or wavelength) range is necessary if $\Delta t$ is to be small. This is entirely consistent with the observation that many waves of different frequencies are needed to make a pulse. (In practice it is possible to produce femtosecond pulses centred at different wavelengths, as the spread in wavelength needed to produce the pulse is less than the possible wavelength range of the emission.)

## 6 The Fourier Transform equations

The derivation of the transform equations is now sketched out by starting with the Fourier series. Butkov (1968) gives the full derivation.
The Fourier series, considered in Section 1, are all formed from periodic functions, but suppose that the function is thought of as having an infinite period, or, to put it another way, if the limits are $-L \to L$ then what happens when $L \to \infty $? It is easier here to use the complex exponential form of the series, equations (7), and write

$$\displaystyle f(x) = \sum_{n=-\infty}^{\infty}c_ne^{+in\pi x/L} \tag{24} $$

with coefficients

$$\displaystyle c_n= \frac{1}{2L}\int_{-L}^L f(x)e^{-in\pi x/L}dx \tag{25} $$

where $n$ is an integer specifying the position in the series; $c_n$ is therefore one of a series of numbers that could be plotted on a graph of $c_n$ vs $n$. To simplify (24), we define $k = n\pi /L$, which gives $\Delta k = (\pi/L)\Delta n$ for a small change in $k$, and clearly, as $L$ gets larger, $\Delta k$ gets smaller. However, there is a problem here, for when $L\to \infty$ it looks as though all values of $c_n$, equation (25), will go to zero, because $L$ is in the denominator. 

Instead of immediately taking the limit, suppose that the values of $n$ describe adjacent points on a graph of $c_n$ vs $n$; because adjacent points are the smallest differences that $n$ can have, $\Delta n = 1$ and so $\Delta k = \pi /L$, or $(L/\pi )\Delta k = 1$. Equation (24) can now be multiplied by this factor without difficulty because it is $1$, giving

$$\displaystyle f(x)=\sum_{n=-\infty}^{\infty}\frac{L}{\pi}c_ne^{+ikx} \Delta k \tag{26} $$

and $c_n$ is given by equation (25). The limit $L\to \infty$ also means that $\Delta k \to 0$, which makes $k$ into a continuous variable, and the coefficients $c_n$ can now be written as a function of $k$, i.e. as $c(k)$ instead of the discrete values $c_n$.
Taking this limit also changes $f(x)$ into an integral, because $\Delta k \to 0$,

$$\displaystyle f(x)=\lim_{L \rightarrow \infty}\sum_{n=-\infty}^{\infty}\frac{L}{\pi}c_ne^{in\pi x/L}\Delta k =\int_{-\infty}^\infty c(k)e^{ikx} dk$$

and $c(k) = Lc_n/\pi $, but from eqn. 25 $c(k)$ is

$$c(k)= \frac{1}{2\pi}\int_{-\infty}^\infty f(x)e^{-ikx}dx $$

This equation is conventionally rewritten by defining a new function $g(k)$, where $g(k) = c(k)\sqrt{2\pi}$. This function is the _forward transform_ and is defined as 

$$\displaystyle g(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)e^{-ikx}dx \qquad\text{ forward transform} \tag{27}$$

and the reverse transform is 

$$\displaystyle f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty g(k)e^{+ikx}dk \qquad\text{ reverse transform} \tag{28}$$

Notice how the $x$ and $k$ and the signs in the exponential change.

The two functions form a Fourier transform pair; the function $f(x)$ with a positive exponential is the 'reverse' or 'inverse' transform, and $g(k)$, equation (27), with a negative exponential, is the 'forward' transform, because it converts the measured or known function $f(x)$, where $x$ might be distance, into the transformed space $k$, which is inverse distance. Alternatively, if $x$ represents time then $k$ represents frequency.

There are some other points to note.

**(i)** These equations give the value of the transform at one point only. To obtain the full transform, $k$ has to be varied in principle from $-\infty$ to $+\infty$, but, in practice, a value of $k$ far less than infinity can be used because the transform often has an infinitesimal amplitude at large $k$; see Fig.
19 for an example.\n\n**(ii)** Because the integration involves a complex number, the result might be complex or it might be real; this just depends on what the function is and it might therefore be necessary to plot the real, imaginary, and absolute value of the transform.\n\n**(iii)** There are different forms of Fourier transform pairs that differ from one another by normalization constants, $1/ 2\\pi$ in our notation. This can lead to confusion when comparing one calculation with another.\n\n**(iv)** Finally, note that some authors, engineers in particular, often define the forward transform with a positive sign in the exponential and negative in the reverse, which is a change of phase with respect to our notation. They also often use $j$ instead of $i$ to mean $\\sqrt{-1}$.\n\n### 6.1 Plotting transforms\n\nBecause the transform is normally a complex quantity, it has a real and imaginary part. In plotting the transform three graphs can be produced; one for each of the real and the imaginary components of the whole transform and one of the square of the absolute value, which is usually called the power or transform spectrum and is $g(k)^*g(k) = |g(k)|^2$, the asterisk indicating the complex conjugate.\n\n### 6.2 What functions can be transformed?\n\nTo perform the transform, $f(x)$ must be integrable and must converge when the integration limits are infinity; this generally means that $f(x) \\to$ 0 as $x \\to \\pm \\infty$: a sufficient condition is that $\\int_{-\\infty}^{\\infty} |f(x)|dx$ exists.\n\n\n### 6.3 How to calculate and plot a Fourier transform\n\nAs an illustration, the Fourier transform of a sine wave $f(x) = \\sin(\\omega x)$ which has an angular frequency $\\omega = 2\\pi/L$ will be calculated over the range $-L$ to +$L$; this supposes also that the function $f(x)$ is zero everywhere else, see Fig. 19. Choosing the sine function to have the argument $2\\pi x/L$ means that it is zero, i.e. 
has a node, at $x = \pm L$; note that the frequency need not be a multiple of the range of the transform, but the resulting equations are simpler if it is. Because the function is zero outside $\pm L$, so is the integral, and the integration limits become $\pm L$ rather than $\pm \infty$. The forward transform uses eqn. 27

$$\displaystyle g(k) = \frac{1}{\sqrt{2\pi}}\int_{-L}^L \sin(2\pi x/L)e^{-ikx}dx $$

which is easily integrated using the exponential form of the sine. The result (ignoring the constant $1/\sqrt{2\pi}$) is 

$$\displaystyle g(k)= -\frac{4\pi i L\sin(Lk)}{k^2L^2-4\pi^2}$$


```python
# check using SymPy; 1j is Python's imaginary unit (SymPy itself uses I)
from sympy import symbols, sin, pi, exp, integrate, simplify

L, x, k = symbols('L x k', positive=True)

f01 = sin(2*pi*x/L)*exp(-1j*k*x)

g = simplify(integrate(f01, (x, -L, L), conds='none'))
g
```

which is the same result after converting the exponentials to the sine; this can also be checked with SymPy using the instruction `simplify(g.rewrite(sin))`.

The Fourier transform is, in this particular example, wholly the imaginary part of a complex number. When $k = 0$ and when $Lk = \pm n \pi$, the transform is zero, except when $Lk = \pm 2\pi$, where the maximum or minimum occurs. When $Lk = +2\pi$, the transform has the nominal value of $0/0$, which can be evaluated using l'Hôpital's rule (see Chapter 3). Remember to stop differentiating when either the top or bottom of the fraction is not zero; the result is

$$\displaystyle \lim_{k \to 2\pi /L} \frac{-4\pi iL\sin(Lk)}{k^2L^2-4\pi^2} \to \frac{-4\pi i L^2\cos(Lk)}{2kL^2} = -\frac{2\pi i}{k} = -iL$$

which is the minimum value of the transform. The maximum occurs when $kL = -2\pi$ (see Fig. 19), which corresponds to the frequency $k = 2\pi /L \equiv \omega$ in radians, or $1/L$ in Hz, if $L$ measures time.
If $L$ is distance, cm for example, as in an FTIR spectrometer, then $1/L$ is in wavenumbers or cm$^{-1}$.

To plot the transform, it is necessary to plot either the imaginary part (Fig. 19) or its absolute value; there is no real part in this particular example. Notice that there appear to be two frequencies, one at about $0.5$ and one at about $-0.5$; negative frequencies do not make any sense if the sine wave is a signal from an experiment, and for real experimental data the negative frequencies are ignored. If the range $\pm L$ is kept the same and, instead of a sine, a cosine wave of the same frequency is transformed, the real part of the Fourier transform would now look like the imaginary part of Fig. 19.

As a sine wave of infinite extent has a single frequency, the extra frequencies seen in Fig. 19 must arise from the fact that this wave exists only between $\pm L$. The sudden change in value of the function at $\pm L$ corresponds to having several different frequencies present, although they are not apparently there. Put another way, if a Fourier series of this truncated sine wave had to be formed, very many sines or cosines of different frequencies would have to be included. Why so many terms? A single sine wave normally extends to infinity; many waves of different frequency are needed to reinforce the values near $k = 0$ and simultaneously to cancel out the part where the amplitude is zero, between $-\infty$ and $-L$ and between $L$ and $\infty$. Although, for practical purposes, these regions in the integration were ignored, this was only because the sine wave is zero there, but it does not mean that waves do not exist to make the amplitude zero. These terms produce the extra frequencies seen in the transform.
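The closed-form result can be checked numerically by evaluating the integral $\int_{-L}^{L}\sin(2\pi x/L)\,e^{-ikx}dx$ directly on a grid and comparing it with $-4\pi iL\sin(Lk)/(k^2L^2-4\pi^2)$, i.e. the bare integral without the $1/\sqrt{2\pi}$ factor. This is a minimal sketch; the grid size and test frequency are arbitrary choices.

```python
import numpy as np

# Numerically evaluate the truncated-sine transform integral and compare
# with the closed form -4*pi*i*L*sin(L*k)/(k^2*L^2 - 4*pi^2).
# (This is the bare integral, without the 1/sqrt(2*pi) prefactor.)
L = 10.0
x = np.linspace(-L, L, 200001)
dx = x[1] - x[0]
f = np.sin(2*np.pi*x/L)

def numeric_g(k):
    # rectangle-rule approximation to the integral of f(x)*exp(-i*k*x)
    return np.sum(f*np.exp(-1j*k*x))*dx

def analytic_g(k):
    return -4*np.pi*1j*L*np.sin(L*k)/(k**2*L**2 - 4*np.pi**2)

k = 0.5   # arbitrary test frequency, away from the points k*L = ±2*pi
print(abs(numeric_g(k) - analytic_g(k)))
```

The difference is tiny for any $k$ away from the removable singularities at $kL=\pm 2\pi$.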
Put another way, to understand the transform it is necessary to consider all the terms needed to describe the initial truncated function $f(x)$ as a Fourier series, because it is exactly these terms that appear as frequencies in the transform.


Figure 19. Graphs of $\sin(\omega x)$ from $-L$ to $+L$ when $L = 10$; its Fourier transform, the imaginary part (top right), and its spectrum, the square of its absolute value (bottom right). The real part of the transform is zero because $\displaystyle g(k)= -\frac{4\pi i L\sin(Lk)}{k^2L^2-4\pi^2}$ has no real part. (The vertical scales are not the same, but the maximum value of the transform and of its absolute value is $L$.)


### 6.4 How the Fourier transform works

The transform appears to have the effect of seeking out any repetitive features in a signal $f(x)$. This is true whether it is the discrete transform acting on real data, or the mathematical transform of a sine wave or other function. To understand what the transform does, we must look at eqn 27,

$$\displaystyle g(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)e^{-ikx}dx $$

and recall that this only gives the value at one point $k$. To obtain the transform, $k$ has to vary from $-\infty \to \infty$, although in practice only a limited range is needed to observe the major features of the transform. In this exponential form, the oscillating nature of the argument is not so apparent, but writing it as

$$\displaystyle g(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)(\cos(kx)-i\sin(kx))dx $$

shows that the function $f$ is multiplied by a sine and cosine and integrated. Because $k$ can take any value, the sine and cosine of all possible frequencies multiply $f$. Most of the time, this multiplication results in a highly oscillatory function, with as many positive parts as negative ones, and the integral evaluates to zero or something very close to it.
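This cancellation, and the survival of a matching frequency, can be illustrated numerically: multiply a truncated sine wave by $\sin(kx)$ and integrate, once with $k$ matching the wave's frequency and once not. A sketch; the wave, grid, and test frequencies are arbitrary choices.

```python
import numpy as np

# The product f(x)*sin(k*x) integrates to ~0 unless k matches the
# frequency hidden in f(x); then the product is positive everywhere.
L = 10.0
w0 = 2*np.pi/L                   # frequency of the wave in f(x)
x = np.linspace(-L, L, 20001)
dx = x[1] - x[0]
f = np.sin(w0*x)

def overlap(k):
    return np.sum(f*np.sin(k*x))*dx

print(overlap(w0))     # frequencies match: integral ~ L
print(overlap(5*w0))   # oscillatory product: integral ~ 0
```

The matched case gives $\int_{-L}^{L}\sin^2(\omega_0 x)\,dx = L$, while the mismatched product cancels almost exactly.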
When the period of $f$ is close to, or the same as, that of the sine or cosine, their product no longer integrates to give zero. Hence, the particular frequency determined by $k$ gives the transform its value. This effect is pictured in Fig. 20. The left-hand column shows a function $f$ (top image) with a long period compared to a particular frequency of the sine wave in the transform at some value of $k$ (middle image, left). The bottom left curve shows the product of these two curves. The integral of this product, the area under the curve, evaluates to almost zero, with the positive and negative parts cancelling. 

The middle graph of the right-hand column shows a different frequency of the sine wave, because $k$ now has a different value in $\sin(kx)$, and the sine wave's period now matches that of the function $f$; their product is now positive and its integral is not zero. The Fourier transform therefore selects this frequency from among all others. Naturally, if there are several frequencies present in the function, these are each picked out in a similar manner as $k$ changes.


Figure 20. Left column: The function $f(x)$ has a period that is very different from that of the sine wave $\sin(4kx)$ (middle curve). Their product (lowest left-hand curve) oscillates about zero and integrates to zero or very close to it, and so appears as an insignificant feature in the transform.
Right column: The period of the sine wave (middle curve), which is determined by $k$, is now changed compared to the left-hand figure and matches the period of the function. The product $f(x)\sin(kx)$ is now only positive, and integrates to a finite number, and so appears as a peak in the transform.

### 6.5 Phase sensitive detection

In measuring signals buried in noise, the technique of phase sensitive detection is a very effective way of extracting the data and removing noise. In this method, the input to an experiment is modulated at a fixed frequency and the signal produced by the experiment is measured at this same frequency by a device known as a _lock-in amplifier_. This device illustrates the principle underlying the Fourier transform, although it is not a transform method.

In using a lock-in amplifier to measure fluorescence, the light used to excite the molecules, and so stimulate the fluorescence, is modulated by rotating a slotted disc (chopper) in the exciting light's path. The photomultiplier or photodiode detects the modulated (on-off) fluorescence signal together with any noise, and this signal is passed to the lock-in amplifier. The lock-in also receives a reference signal directly from the chopper and it electronically multiplies this with the fluorescence signal (see Fig. 20). The schematic of an instrument is shown in Fig. 21. 

Multiples (higher harmonics) of the fundamental reference frequency are filtered away, the resulting signal is integrated over many periods of the fundamental frequency, and a DC output signal is produced. As shown in Fig.
20, when the product of reference and signal is integrated, frequencies dissimilar to the reference, $f(x)$ in the figure, will average to something approaching zero.

If the reference signal is $r = r_0\sin(\omega t)$ and the noise-free signal is $s = s_0\sin(\omega t + \varphi)$, then the output of the lock-in is

$$\displaystyle V_s= \frac{r_0s_0}{T}\int_0^T \sin(\omega t + \varphi)\sin(\omega t)dt$$

where $T = 2\pi n/\omega$ and $n \gg 1$ is an integer, and $\varphi$ is the phase (time) delay between the reference and the signal; it is due to the detectors, amplifiers, and other components in the experiment, but can be changed by the user. Expanding the product of sines and integrating gives

$$\displaystyle V_s= \frac{r_0s_0}{2T}\int_0^T \left[\cos(\varphi)-\cos(2\omega t+\varphi)\right]dt\\
=\frac{r_0s_0}{4\omega T}[\sin(\varphi)+2T\omega \cos(\varphi)-\sin(2\omega T+\varphi) ] $$

The sine at twice the reference frequency is electronically filtered away, leaving a signal that is constant because $T$ is the integration time set by the experimentalist and normally ranges from a few milliseconds to a few seconds. The measured signal is

$$\displaystyle V_s=\frac{r_0s_0}{4\omega T}[\sin(\varphi)+2T\omega \cos(\varphi) ] \tag{29}$$

and as the phase $\varphi$ can be adjusted by the user, this signal can be maximized.

Now consider the situation when noise is present, and assume that this has a wide range of frequencies $\omega_{1,2,\cdots}$ and amplitudes $n_{1,2,\cdots}$. The signal from an instrument is normally noisy and is represented as

$$\displaystyle s_0\sin(\omega t+\varphi)+n_1\sin(\omega _1t+\varphi_1)+n_2\sin(\omega _2t+\varphi_2)+\cdots$$

where $s_0, \omega$, and $\varphi$ are respectively the amplitude, frequency, and phase (relative to the reference) of the data. 

The first term of the signal arises from the information we wish to measure and produces $V_s$, equation (29).
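The integral leading to eqn (29) can be spot-checked symbolically with SymPy; the numerical test values in the check are arbitrary.

```python
import sympy as sp

# Symbolic check of the lock-in integral that leads to eqn (29)
r0, s0, w, phi, T, t = sp.symbols('r_0 s_0 omega varphi T t', positive=True)

Vs = r0*s0/T*sp.integrate(sp.sin(w*t + phi)*sp.sin(w*t), (t, 0, T))

# closed form quoted in the text (before the 2*omega term is filtered out)
closed = r0*s0/(4*w*T)*(sp.sin(phi) + 2*T*w*sp.cos(phi) - sp.sin(2*w*T + phi))

# the difference should vanish; spot-check at arbitrary numerical values
diff = (Vs - closed).subs({r0: 1, s0: 1, w: 3, phi: sp.Rational(7, 10), T: 5})
print(sp.Abs(diff.evalf()))
```

A fully symbolic `sp.simplify(Vs - closed)` should also reduce to zero, but the numerical substitution is a quicker check.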
We need only consider one noise term, for all the others behave similarly. Multiplying by the reference at frequency $\omega$, but ignoring the phase $\varphi$ as this adds nothing fundamental and makes the equations more complicated, gives the term

$$\displaystyle \sin(\omega_1 t)\sin(\omega t)=[\cos((\omega-\omega_1)t)-\cos((\omega+\omega_1)t)]/2 $$

Integrating produces

$$\displaystyle V_n= \frac{r_0n_1}{2T} \int_0^T \cos([\omega-\omega_1]t)-\cos([\omega+\omega_1]t)\, dt\\
=\frac{r_0n_1}{2T} \left[ \frac{ \sin([\omega_1-\omega]T)}{\omega_1-\omega} -\frac{\sin([\omega_1+\omega]T)}{\omega_1+\omega} \right] $$

The sum-frequency term is filtered by the instrument and is removed from the output, leaving the term $\displaystyle \frac{ \sin([\omega_1-\omega]T)}{\omega_1-\omega}$, which is the sinc function, see Fig. 15. Suppose that the frequency $\omega_1$ represents white noise that contains all frequencies more or less equally. As these frequencies differ from $\omega$, and as the absolute value $| \omega_1-\omega |$ becomes larger, the sinc function rapidly becomes very small. This means that the reference sine wave picks out just the frequency containing the signal and rejects almost all of the noise. The total signal is $V_s + V_n$, and although it still contains noise at the reference frequency $\omega$ it contains very little at other frequencies, so the signal-to-noise ratio is increased very considerably. Often signals can be extracted from what appears to be completely noisy data.

As a practical consideration, the reference frequency should always be chosen to be a prime number so that the chance of detecting one of the multiples of the electrical mains frequency is reduced.
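The noise rejection described above can be simulated directly: a small signal buried in much larger broadband noise is multiplied by the reference and averaged over many periods. This is a sketch with made-up numbers, not data from a real instrument.

```python
import numpy as np

rng = np.random.default_rng(1)

# A small signal s0*sin(w*t) buried in much larger broadband noise
w  = 2*np.pi*137.0            # reference (chopper) frequency, rad/s
s0 = 0.1                      # signal amplitude
T  = 2.0                      # integration time: many reference periods
t  = np.linspace(0.0, T, 200001)
dt = t[1] - t[0]

signal = s0*np.sin(w*t) + 2.0*rng.standard_normal(t.size)

# Lock-in: multiply by the reference and average; (2/T)*integral of
# s0*sin^2(w*t) over a whole number of periods recovers s0.
recovered = 2.0*np.sum(signal*np.sin(w*t))*dt/T
print(recovered)   # close to s0 = 0.1
```

Even though the noise amplitude is twenty times the signal amplitude, the recovered value sits within a few per cent of $s_0$; lengthening $T$ reduces the residual error further.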
Also, this frequency should be in a region where the inherent noise of the experiment is low and, if possible, be of a high enough frequency to allow a short time $T$ to be used in the integration step, allowing many separate measurements to be made in a reasonable time.


Figure 21. Schematic of phase sensitive detection and a lock-in amplifier.

### 6.6 Parseval or Plancherel theorem

This theorem is important because it proves that there is no loss of information when transforming between Fourier transform pairs. This matters because otherwise how would it be possible to tell whether information had been lost or added? Fortunately, it can be shown that

$$\displaystyle \int_{-\infty}^{\infty} g^*(k)g(k)dk = \int_{-\infty}^{\infty} f^*(x)f(x)dx \tag{30}$$

where the asterisk denotes the complex conjugate. The Fourier transform of $f(x)$ is $g(k)$, which is integrated over its variable $k$, and similarly $f(x)$ is integrated over its variable $x$. As the total integral taken over all coordinate space $x$ and that over its conjugate variable $k$ is the same, all the information in the original function is retained in the transformation; the transform looks like a different beast, but this is only a disguise as it contains exactly the same information. This, of course, means that if something is done to the transform then, in effect, the same is done to the function.

The Plancherel theorem (also called Rayleigh's theorem, as it was first used by him in the theory of black-body radiation) is effectively the same but usually written as 

$$\displaystyle \int_{-\infty}^{\infty} |g(k)|^2dk = \int_{-\infty}^{\infty} |f(x)|^2 dx $$

Graphically, it means that the shaded areas are the same. The figure shows the transform of a square wave as in Fig. 15. The transform is $\displaystyle g(k)=\frac{\sin(ak)}{ak}$ where $a = 4$. The function $f(x) = 1$ in the range $-2\to 2$, so $\int f(x)^2 dx= 4$. 


Figure 21a.
Illustrating the Parseval or Plancherel theorem. The shaded areas are the same size, although they do not appear to be. The absolute square of the transform $|g(k)|^2$ extends to $\pm \infty$, which makes up the area, since the function is always positive.

This theorem is very important in quantum mechanics. Should $f(x)$ represent a wavefunction that varies as a function of distance $x$, which could be the displacement from equilibrium of an harmonic oscillator, then the variable $k$ can be interpreted as the momentum (usually given the letter $p$), making $g(k)$ the wavefunction in 'momentum space'. This means that calculations can be performed either in spatial coordinates, i.e. distance, or in 'momentum space', depending upon which is the more convenient mathematically. The change in displacement, $\delta x$, and change in momentum, $\delta p$, are conjugate pairs of variables and are linked by the Heisenberg uncertainty principle $\delta x\delta p \ge \hbar/2$.

### 6.6.1 Uncertainty Principle

It is known from experiment that when an emission line from an atomic or molecular transition has a very broad frequency spread, the lifetime $\tau$ of the state involved is short, and vice versa. This is called the 'time-energy' uncertainty relationship, $\Delta E\,\tau \ge \hbar/2$ or, equivalently, $\Delta \nu\,\tau\ge 1/4\pi$ as $E=h\nu$. A similar effect is observed when a time-varying signal is measured, such as a voltage on an oscilloscope. The product of the signal's duration and its bandwidth (its spread in frequency) has a certain minimum value. This is a consequence of the two variables, time and frequency, being related via a Fourier transform. Time and frequency are called _conjugate_ variables. 

Showing this in general can be quite tricky, since the variance of the transform has to be calculated and the necessary integrals may not converge. The proof is given by Bracewell, 'The Fourier Transform and its Applications'.
Instead of giving this proof, some particular cases are described to illustrate the effect.

As a measure of the spread in a value, its standard deviation can be used, see Chapter 4 Integration eqn. 26. The square of the standard deviation is the variance and is defined as 

$$\displaystyle \sigma^2 =\langle x^2\rangle - \langle x\rangle^2$$

where the brackets $\langle \rangle$ indicate the average value. The average value of a function $p(x)$ is defined as 

$$\displaystyle \langle x_p\rangle= \int xp(x)dx\big/\int p(x)dx$$

and the average of the square as

$$\displaystyle \langle x_p^2\rangle= \int x^2p(x)dx\big/\int p(x)dx$$

The denominator is the normalisation. If the function is symmetrical about zero, the mean is also zero, $\langle x\rangle=0$, and can be ignored. All that is then necessary is to calculate $\langle x^2\rangle$ for the function $f$ and for its transform $g$, and to take the square root of their product to obtain the product of the standard deviations. 

Often the function and transform are not themselves good distributions, in which case their squares (also called the energy function) are used instead, so the average becomes

$$\displaystyle \langle x^2\rangle= \int x^2f^*f dx\big/\int f^*f dx$$

with a similar equation for the transform $g$. As the function and transform are usually complex, the square is obtained using the complex conjugate, i.e. $p^2 \to p^*p$.

This method will be demonstrated below, but sometimes the transform integrals become infinite; in this case we instead take the deviation to be the distance from the peak of the transform $g$ to its first zero. 

#### (a) A Gaussian shaped pulse 

A wavepacket comprising several waves of varying frequency can have an overall profile that is Gaussian in shape. This can apply, for example, to a laser pulse or to a summation of harmonic oscillator wavefunctions.
The normalised Gaussian function is $\displaystyle f(t)= \frac{e^{-t^2/2\sigma^2}}{\sigma \sqrt{2\pi} } $ where $\sigma$ is the standard deviation, or width of the Gaussian, see figure 4, Chapter 13 'Data Analysis'. The transform of a Gaussian is also a Gaussian, but with a different width: 

$$\displaystyle g(v)= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(t)e^{-ivt} dt = \frac{1}{\sqrt{2\pi}}e^{-\sigma^2v^2/2}$$

As the standard deviation of $f$ is by definition $\sigma$, by comparison the standard deviation of $g$ must be $1/\sigma$; hence $\sigma_t\sigma_v=\sigma\cdot(1/\sigma)=1$.

Calculating with the method outlined above to find $\langle t^2\rangle$ and $\langle v^2\rangle$, we expect a different result because we now use the square of the function and transform.

Taking all integrations between $\pm\infty$, and writing $a\equiv\sigma$, the function gives $\displaystyle \int f(t)^2dt= \frac{1}{2a\sqrt{\pi}}$ and $\displaystyle \int t^2f(t)^2dt = \frac{a}{4\sqrt{\pi}}$, and their ratio is $\displaystyle \langle t^2\rangle= \frac{a^2}{2}$.

The transform has similar integrals, $\displaystyle \int g(v)^2dv= \frac{1}{2a\sqrt{\pi}}$ and $\displaystyle \int v^2g(v)^2dv = \frac{1}{4a^3\sqrt{\pi}}$, and their ratio is $\displaystyle \langle v^2\rangle= \frac{1}{2a^2}$. 

The product of uncertainties, remembering that we have calculated the squares of the values, is 

$$\displaystyle \Delta t\Delta v = \sqrt{\langle t^2\rangle\langle v^2\rangle}= 1/2$$ 

which is the minimum possible value.

#### (b) Decaying excited state

An excited electronic state can decay by emitting a photon: fluorescence if the transition is allowed, or phosphorescence if from a triplet excited state to a singlet ground state. The time profile can be measured, as can the spectral width of the transition, and together they demonstrate the time-energy/frequency relationship. The excited state has a lifetime $\tau$ and transition frequency $\omega_0$.
A spatial analogue is momentum broadening as a result of collisions; in this case the lifetime is replaced by the mean free path.

The field of the emitted photon is $\displaystyle f(t)=e^{i\omega_0 t-t/2\tau}$ (for $t\ge 0$, with $f(t)=0$ before the excitation at $t=0$), which is that of a plane wave of frequency $\omega_0$ that decays away (or is damped) with a lifetime $\tau$. The detector measures the 'square' of this, which is $f(t)^*f(t) = e^{-t/\tau}$. 

The transform is 

$$\displaystyle g(\omega)= \frac{1}{\sqrt{2\pi}}\int_{0}^\infty f(t)e^{-i\omega t} dt = \sqrt{\frac{2}{\pi}}\left(\frac{1}{2i (\omega-\omega_0)+1/\tau}\right)$$

making the measured spectral profile, which has the shape of a Lorentzian curve,

$$\displaystyle g(\omega)^*g(\omega)=\frac{2}{\pi}\left( \frac{1}{4(\omega-\omega_0)^2+1/\tau^2} \right) $$

The full width at half maximum of this curve is $1/\tau$, which makes $\Delta t\,\Delta \omega = \tau\cdot(1/\tau)=1$. Figure 54, in the answer to question 7, shows the exponential decay and the Lorentzian curves. As the energy is related to the frequency by $\Delta E =\hbar \Delta \omega$, then $\tau\Delta E = \hbar$.

The calculation using $\displaystyle \int \omega^2 g(\omega)^*g(\omega)\, d\omega$ as in the previous example will not work here, because this integral is infinite.

#### (c) Finite wave-train

If a plane wave of frequency $\omega_0$ exists only for the short time $-a\le t \le a$ and is zero elsewhere, then over this range $\displaystyle f(t)=e^{i\omega_0 t};\quad -a\le t \le a$, see figure 19 where $a=10$. The spread of the wave is $\Delta x = a$.
The 'square' of $f$ is $f(x)^*f(x)=1$.

The transform has the form of a sinc function 

$$\displaystyle g(\omega)=\frac{1}{\sqrt{2\pi}}\int_{-a}^a f(t)e^{-i\omega t}dt= \sqrt{\frac{2}{\pi}}\frac{\sin\left( a (\omega-\omega_0)\right)}{\omega-\omega_0}$$

The function $g^2$ cannot be normalised easily, as it produces another integral (the Sine integral), but it is clear that the zeros of the function are at the same places in $g$ and in its square, and so we can take the spread $\Delta \omega$ to be the value from $\omega_0$ to the first zero. The zeros of $\mathrm{sinc}(x)\equiv \sin(x)/x$ occur at $x=n\pi$, where $n$ is a non-zero integer; thus the first zero occurs at $\pm \pi/a$ from the central frequency.

If this zero is associated with $\Delta \omega$, then $\Delta \omega =\pi/a$ and the product $\Delta x\Delta \omega= \pi$.

#### Heisenberg Uncertainty

The relationship $\Delta \omega\Delta x \ge 1$ as described in the last example above is not an inherent property of quantum mechanics but is a property of Fourier transforms. The last example shows that it is not possible to form a train of electromagnetic waves whose position and wavelength can both be measured, at the same time, with arbitrary accuracy. 

However, when considering quantum mechanics, a particle is given wavelike properties via the de Broglie relation. A material particle (such as an electron or a molecule) of energy $E$ and momentum $\pmb p$ is now associated with a wave of angular frequency $\omega=2\pi\nu$ and wavevector $\pmb k$, i.e. $E=\hbar\omega$, $\pmb p=\hbar \pmb k$, which leads to $\Delta x\Delta p \ge \hbar/2$. The fact that the Schrödinger equation is linear and homogeneous means that a superposition principle applies to particles, which gives them wavelike properties. The small value of Planck's constant makes the limitations of the uncertainty principle totally negligible for anything macroscopic, i.e.
greater than approximately a micron in size. 

### 6.7 Summary of some Fourier transform properties

$$\displaystyle 
\begin{array}{ll}
\hline
\text{The transform pair is} & f (x) \rightleftharpoons g(k)\\[0.15cm] 
\text{Shift or delay } & f (x - x_0) \rightleftharpoons e^{- ikx_0}g(k)\\[0.15cm] 
\text{Frequency shift} & f (x)e^{ik_0x} \rightleftharpoons g(k - k_0) \\[0.15cm]
\text{Reversal } & f (-x) \rightleftharpoons g(-k) \\[0.15cm]
\text{Complex conjugate } & f (x)^* \rightleftharpoons g(-k)^* \\[0.15cm]
\text{Scaling} & \displaystyle f(ax)\rightleftharpoons \frac{g(k/a)}{ |a|}\\[0.15cm]
\text{Derivative} & \displaystyle f'(x)\rightleftharpoons i\,k\,g(k)\\[0.15cm]
\hline
\end{array}
$$

(In the convention of eqns. 27 and 28, differentiation corresponds to multiplication by $ik$; a factor of $2\pi$ appears instead when the transform is defined with $e^{-2\pi ikx}$.)
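The shift (delay) property in the table can be verified by evaluating eqn. 27 numerically for a Gaussian, for which $g(k)=e^{-k^2/2}$ when $f(x)=e^{-x^2/2}$. A sketch; the grid, $k$, and $x_0$ are arbitrary choices.

```python
import numpy as np

# Forward transform g(k) = (1/sqrt(2*pi)) * integral f(x) e^{-ikx} dx,
# evaluated by direct quadrature on a grid (eqn. 27).
x  = np.linspace(-20.0, 20.0, 40001)
dx = x[1] - x[0]

def transform(f, k):
    return np.sum(f*np.exp(-1j*k*x))*dx/np.sqrt(2.0*np.pi)

k  = 1.3                           # arbitrary test frequency
x0 = 2.0                           # arbitrary shift
f_gauss = np.exp(-x**2/2)          # transforms to e^{-k^2/2}
f_shift = np.exp(-(x - x0)**2/2)   # the same Gaussian delayed by x0

g  = transform(f_gauss, k)
gs = transform(f_shift, k)

print(abs(g - np.exp(-k**2/2)))        # ~ 0
print(abs(gs - np.exp(-1j*k*x0)*g))    # ~ 0: shift theorem
```

Both differences are negligible because the Gaussian is smooth and has decayed to essentially zero at the grid edges.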
"alphanum_fraction": 0.7205680917, "converted": true, "num_tokens": 14102, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.46101677931231594, "lm_q2_score": 0.28457601028405616, "lm_q1q2_score": 0.13119431573070406}} {"text": "# High Resolution Shock Capturing Methods: Lab 2\n\n\n```\nfrom IPython.core.display import HTML\ncss_file = '../ipython_notebook_styles/ngcmstyle.css'\nHTML(open(css_file, \"r\").read())\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nDirectly relevant background material can be found in eg [David Ketcheson's *HyperPython* notebooks](https://github.com/ketch/HyperPython).\n\n\n```\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\n## Systems\n\nIn [lab 1](./HRSC Lab 1.ipynb) the MUSCL scheme to solve scalar conservation laws was introduced. These methods work for discontinuous solutions, capturing the behaviour without introducing spurious oscillations. However, the convergence rate is not high (but still, better slow convergence than no convergence!).\n\nIn real life, we mostly want to solve conservation laws in \n\n1. multiple dimensions, and\n2. for systems of equations.\n\nWorking in multiple dimensions is relatively straightforward - the solution method can simply be applied one dimension at a time (although there exist more specialized methods that may have greater accuracy than this approach).\n\nMoving from a scalar conservation law to a system of conservation laws is also relatively straightforward; the same reconstruction & Riemann problem solution approach will work. The main complication is the solution of the Riemann problem.\n\nTo check you can see how this would work for a simple problem, try the system\n\n$$\n\\begin{equation}\n \\partial_t {\\bf q} + \\partial_x \\left( A {\\bf q} \\right) = {\\bf 0}, \\qquad A = \\begin{pmatrix} 1 & 0 \\\\ 0 & -1 \\end{pmatrix},\n\\end{equation}\n$$\n\nwhich is just a system of (uncoupled) advection equations. 
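Before writing the full MUSCL scheme, a minimal first-order sketch can make the structure concrete. The snippet below is an illustration only (not the lab solution, and all names are my own): it advances the uncoupled system with simple upwind differencing, using the fact that the two components advect with speeds $+1$ and $-1$, and assumes periodic boundaries.

```python
import numpy as np

def upwind_step(q, dx, dt):
    """One first-order conservative update for q_t + (A q)_x = 0 with
    A = diag(1, -1) and periodic boundaries.

    q[0] advects to the right (speed +1): upwind uses the left neighbour.
    q[1] advects to the left (speed -1): upwind uses the right neighbour.
    """
    q_new = np.empty_like(q)
    q_new[0] = q[0] - dt / dx * (q[0] - np.roll(q[0], 1))
    q_new[1] = q[1] + dt / dx * (np.roll(q[1], -1) - q[1])
    return q_new

N = 200
x = np.linspace(-1, 1, N, endpoint=False)
q = np.array([np.sin(np.pi * x), np.sin(np.pi * x)])
dx = 2.0 / N
dt = 0.5 * dx       # CFL number 0.5

t = 0.0
while t < 2.0:
    q = upwind_step(q, dx, dt)
    t += dt
```

After $t=2$ both components should have returned close to the initial profile, blurred by some first-order numerical diffusion; the MUSCL reconstruction asked for below reduces exactly this diffusion.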
Implement a MUSCL scheme to solve this on the domain $x \in [-1, 1]$ with periodic boundaries, using initial data $(\sin(\pi x), \sin(\pi x))^T$, checking the solution at various times and the convergence at $t=2$ (where the exact solution is simply the initial profile). Note that the exact solution to the Riemann problem given left and right states ${\bf q}^{(L,R)} = (q_1^{(L,R)}, q_2^{(L,R)})^T$ is\n\n$$\n\begin{equation}\n    {\bf q}^{*} = \begin{pmatrix} q_1^{(L)} \\\\ q_2^{(R)} \end{pmatrix}\n\end{equation}\n$$\n\nas the advection velocity for $q_1$ is positive whilst that for $q_2$ is negative.\n\n\n```\n\n```\n\n## Approximate Riemann Solvers\n\nIt may not always be possible, and frequently is not practical, to solve the Riemann problem exactly. When this occurs, it is typical to approximate the solution of the Riemann problem, sometimes by directly approximating the intercell flux $f_{i-1/2} = f_{i-1/2}(q^{(L)}_{i-1/2}, q^{(R)}_{i-1/2})$.\n\nThe simplest (and least accurate, or most diffusive) approximate solver is given by the Lax-Friedrichs, or HLL, flux. Given the left and right states for the Riemann problem, and the time and space steps, this approximates the intercell flux as\n\n$$\n\begin{equation}\n    f_{i-1/2} = \frac{1}{2} \left( f \left( q^{(L)}_{i-1/2} \right) + f \left( q^{(R)}_{i-1/2} \right) + \frac{\Delta x}{\Delta t} \left( q^{(L)}_{i-1/2} - q^{(R)}_{i-1/2} \right) \right).\n\end{equation}\n$$\n\n[**Note**: This is not exactly the Lax-Friedrichs or HLL fluxes, which have parameters controlling the wavespeed. Here I have simplified by setting all parameters to \"safe\" values.]\n\nImplement this flux, and use it within your MUSCL scheme above. Compare the solution against the scheme using the exact solver.\n\n\n```\n\n```\n\n## Shallow water equations\n\nA simple nonlinear system, itself a simplification of the Navier-Stokes equations, is the shallow water equations. 
In non-dimensional form these are\n\n$$\n\\begin{equation}\n \\partial_t \\begin{pmatrix} \\phi \\\\ \\phi u \\end{pmatrix} + \\partial_x \\begin{pmatrix} \\phi u \\\\ \\phi u^2 + \\tfrac{1}{2} \\phi^2 \\end{pmatrix} = {\\bf 0}.\n\\end{equation}\n$$\n\nIt represents the flow of (typically incompressible) fluids in a shallow channel with a flat bottom topography; multi-dimensional versions that include topography as a source term are a standard way of simulating tsunamis, for example. Here $\\phi$ is the *geopotential* (essentially the depth of the fluid) and $u$ is the velocity.\n\nUse your MUSCL method with the Lax-Friedrichs solver to solve a *dam-break* problem. That is, solve the shallow water equations on the domain $x \\in [-1, 1]$ with fixed boundary conditions (the solutions in the ghost cells can be copied from the previous solution), using the initial data\n\n$$\n\\begin{equation}\n {\\bf q}(x) = \\begin{cases} {\\bf q}^{(L)} = \\begin{pmatrix} \\phi^{(L)} \\\\ \\phi^{(L)} u^{(L)} \\end{pmatrix} = \\begin{pmatrix} 3 \\\\ 0 \\end{pmatrix} & x < 0, \\\\\n {\\bf q}^{(R)} = \\begin{pmatrix} \\phi^{(R)} \\\\ \\phi^{(R)} u^{(R)} \\end{pmatrix} = \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix} & x > 0. \\end{cases}\n\\end{equation}\n$$\n\nThat is, $u^{(L,R)} = 0$, $\\phi^{(L)} = 3$, $\\phi^{(R)} = 1$. Plot the solution at $t = 0.4$, investigating how it depends on resolution.\n\nThe solution should consist of a continuous *rarefaction* wave propagating to the left and a discontinuous *shock* propagating to the right.\n\n\n```\n\n```\n\n## Characteristics\n\nThe solution of the Riemann problem is constant along characteristics. Analyzing the result requires calculating the characteristic structure. For this we need the Jacobian matrix $\\partial {\\bf f} / \\partial {\\bf q}$, and its eigenvectors and eigenvalues. 
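The Jacobian and its eigenvalues can also be obtained symbolically; below is a sketch of that calculation in the conserved variables (my notation: $q_1 = \phi$, $q_2 = \phi u$), which should reproduce the characteristic speeds $\lambda^{\pm} = u \pm \sqrt{\phi}$.

```python
import sympy

# Conserved variables: q1 = phi (geopotential), q2 = phi * u (momentum)
q1, q2 = sympy.symbols('q1 q2', positive=True)
phi, u = sympy.symbols('phi u', positive=True)

# Shallow water flux written in conserved variables
f = sympy.Matrix([q2, q2**2 / q1 + q1**2 / 2])

# Jacobian df/dq and its eigenvalues
J = f.jacobian(sympy.Matrix([q1, q2]))

# Rewrite the eigenvalues in primitive variables (q1 -> phi, q2 -> phi*u)
eigenvalues = [sympy.simplify(ev.subs({q2: phi * u, q1: phi}))
               for ev in J.eigenvals()]
```

The full exercise additionally asks for the left and right eigenvectors and their normalization, which `eigenvects` provides in the same way.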
In order for this information to be practically useful in a real algorithm it must be calculated analytically.\n\nCalculate the Jacobian matrix for the shallow water equations. Also calculate its eigenvalues $\lambda^{\pm}$, and the left (${\bf l}^{\pm}$) and right (${\bf r}^{\pm}$) eigenvectors. The eigenvectors should be normalized so that ${\bf l}^i \cdot {\bf r}^j = \delta^{ij}$; that is, the left and right eigenvector sets are biorthonormal. \n\nYou'll want to look at `sympy`, and the commands `symbols`, `Matrix`, `diff`, `inv` and `eigenvects`.\n\n\n```\nimport sympy\n```\n\n\n```\n\n```\n\nCalculate the characteristic speeds in the dam-break problem initial data. Use this to confirm that the wave structure is as expected.\n\n\n```\n\n```\n\nUse the equation across the rarefaction curve\n\n$$\n\begin{equation}\n    \frac{\partial}{\partial \xi} {\bf q} = \frac{{\bf r}^{\pm}}{{\bf r}^{\pm} \cdot \frac{\partial \lambda^{\pm}}{\partial {\bf q}}}\n\end{equation}\n$$\n\nto find the solution across the rarefaction wave.\n\n\n```\n\n```\n"} {"text": "```python\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_context('notebook')\n\nRANDOM_SEED = 20090425\n```\n\n---\n\n# Comparing Two Groups with a Continuous or Binary Outcome\n\nStatistical inference is a process of learning from incomplete or imperfect (error-contaminated) data. We can account for this \"imperfection\" using either a sampling model or a measurement error model.\n\n### Statistical hypothesis testing\n\nThe *de facto* standard for statistical inference is statistical hypothesis testing. The goal of hypothesis testing is to evaluate a **null hypothesis**. There are two possible outcomes:\n\n- reject the null hypothesis\n- fail to reject the null hypothesis\n\nRejection occurs when a chosen test statistic is higher than some pre-specified threshold value; non-rejection occurs otherwise.\n\n\n\nNotice that neither outcome says anything about the quantity of interest, the **research hypothesis**. 
\n\nSetting up a statistical test involves several subjective choices by the user that are rarely justified based on the problem or decision at hand:\n\n- statistical test to use\n- null hypothesis to test\n- significance level\n\nChoices are often based on arbitrary criteria, including \"statistical tradition\" (Johnson 1999). The resulting evidence is indirect, incomplete, and typically overstates the evidence against the null hypothesis (Goodman 1999).\n\nMost importantly to applied users, the results of statistical hypothesis tests are very easy to misinterpret. \n\n### Estimation \n\nInstead of testing, a more informative and effective approach for inference is based on **estimation** (be it frequentist or Bayesian). That is, rather than testing whether two groups are different, we instead pursue an estimate of *how different* they are, which is fundamentally more informative. \n\nAdditionally, we include an estimate of **uncertainty** associated with that difference, which includes uncertainty due to our lack of knowledge of the model parameters (*epistemic uncertainty*) and uncertainty due to the inherent stochasticity of the system (*aleatory uncertainty*).\n\n# An Introduction to Bayesian Statistical Analysis\n\nThough many of you will have taken a statistics course or two during your undergraduate (or graduate) education, most of those who have will likely not have had a course in *Bayesian* statistics. Most introductory courses, particularly for non-statisticians, still do not cover Bayesian methods at all. Even today, Bayesian courses (similarly to statistical computing courses!) 
are typically tacked onto the curriculum, rather than being integrated into the program.\n\nIn fact, Bayesian statistics is not just a particular method, or even a class of methods; it is an entirely **different paradigm** for doing statistical analysis.\n\n> Practical methods for making inferences from data using probability models for quantities we observe and about which we wish to learn.\n*-- Gelman et al. 2013*\n\nA Bayesian model is described by parameters; uncertainty in those parameters is described using probability distributions.\n\nAll conclusions from Bayesian statistical procedures are stated in terms of **probability statements**.\n\n\n\nThis confers several benefits to the analyst, including:\n\n- ease of interpretation, summarization of uncertainty\n- can incorporate uncertainty in parent parameters\n- easy to calculate summary statistics\n\n### Bayesian vs Frequentist Statistics: *What's the difference?*\n\nAny statistical inference paradigm, Bayesian or otherwise, involves at least the following: \n\n1. Some **unknown quantities** about which we are interested in learning or testing. We call these *parameters*.\n2. Some **data** which have been observed, and which hopefully contain information about those parameters.\n3. One or more **models** that relate the data to the parameters; these are the instruments used to learn.\n\n\n\n### The Frequentist World View\n\n\n\n- The **data** that have been observed are considered **random**, because they are realizations of random processes, and hence will vary each time one goes to observe the system.\n- Model **parameters** are considered **fixed**. A parameter's true value is unknown and fixed, and so we *condition* on it.\n\nIn mathematical notation, this implies a (very) general model of the following form:\n\n
\n\\\\[f(y | \\theta)\\\\]\n
\n\nHere, the model \\(f\\) accepts data values \\(y\\) as an argument, conditional on particular values of \\(\theta\\).\n\nFrequentist inference typically involves deriving **estimators** for the unknown parameters. Estimators are formulae that return estimates for particular estimands, as a function of data. They are selected based on some chosen optimality criterion, such as *unbiasedness*, *variance minimization*, or *efficiency*.\n\n> For example, let's say that we have collected some data on the prevalence of autism spectrum disorder (ASD) in some defined population. Our sample includes \\(n\\) sampled children, \\(y\\) of them having been diagnosed with autism. A frequentist estimator of the prevalence \\(p\\) is:\n\n>
\n> $$\\hat{p} = \\frac{y}{n}$$\n>
\n\n> Why this particular function? Because it can be shown to be unbiased and minimum-variance.\n\nIt is important to note that, in a frequentist world, new estimators need to be derived for every estimand that is introduced.\n\n### The Bayesian World View\n\n\n\n- Data are considered **fixed**. They used to be random, but once they were written into your lab notebook/spreadsheet/IPython notebook they do not change.\n- Model parameters themselves may not be random, but Bayesians use probability distributions to describe their uncertainty in parameter values, and the parameters are therefore treated as **random**. In some cases, it is useful to consider parameters as having been sampled from probability distributions.\n\nThis implies the following form:\n\n
\n\\\\[p(\\theta | y)\\\\]\n
\n\nThis formulation used to be referred to as ***inverse probability***, because it infers from observations to parameters, or from effects to causes.\n\nBayesians do not seek new estimators for every estimation problem they encounter. There is only one estimator for Bayesian inference: **Bayes' Formula**.\n\n## Bayes' Formula\n\nNow that we have some probability under our belt, we turn to Bayes' formula. While frequentist statistics uses different estimators for different problems, Bayes' formula is the **only estimator** that Bayesians need to obtain estimates of unknown quantities that we care about. \n\n\n\nThe equation expresses how our belief about the value of \\(\theta\\), as expressed by the **prior distribution** \\(P(\theta)\\), is reallocated following the observation of the data \\(y\\).\n\nThe innocuous denominator \\(P(y)\\) usually cannot be computed directly; it is in fact the expression in the numerator, integrated over all \\(\theta\\):\n\n
\n\\\\[Pr(\\theta|y) = \\frac{Pr(y|\\theta)Pr(\\theta)}{\\int Pr(y|\\theta)Pr(\\theta) d\\theta}\\\\]\n
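To make the formula concrete: for a model with a single parameter, the integral in the denominator can be approximated by a sum over a grid of parameter values. Below is a sketch for the ASD-prevalence example above, with a uniform prior and purely hypothetical counts (15 diagnoses out of 1000 children); with a uniform prior the exact posterior is Beta$(y+1, n-y+1)$, which the grid result should match closely.

```python
import math

def grid_posterior(y, n, grid_size=1000):
    """Posterior for a binomial proportion p, via Bayes' formula on a grid.

    posterior ~ likelihood * prior, normalised by the summed denominator,
    which plays the role of the integral P(y).
    """
    ps = [(i + 0.5) / grid_size for i in range(grid_size)]
    prior = [1.0 / grid_size] * grid_size                  # uniform prior on [0, 1]
    like = [math.comb(n, y) * p**y * (1 - p)**(n - y) for p in ps]
    evidence = sum(l * pr for l, pr in zip(like, prior))   # approximates P(y)
    return ps, [l * pr / evidence for l, pr in zip(like, prior)]

# Hypothetical data: 15 diagnosed out of n = 1000 sampled children
ps, posterior = grid_posterior(y=15, n=1000)
posterior_mean = sum(p * w for p, w in zip(ps, posterior))
```

This brute-force approach only works because there is a single bounded parameter; with more parameters the grid grows exponentially.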
\n\nThe intractability of this integral is one of the factors that has contributed to the under-utilization of Bayesian methods by statisticians.\n\n### Priors\n\nOnce considered a controversial aspect of Bayesian analysis, the prior distribution characterizes what is known about an unknown quantity before observing the data from the present study. Thus, it represents the information state of that parameter. It can be used to reflect the information obtained in previous studies, to constrain the parameter to plausible values, or to represent the population of possible parameter values, of which the current study's parameter value can be considered a sample.\n\n### Likelihood functions\n\nThe likelihood represents the information in the observed data, and is used to update prior distributions to posterior distributions. This updating of belief is justified because of the **likelihood principle**, which states:\n\n> Following observation of \\(y\\), the likelihood \\(L(\theta|y)\\) contains all experimental information from \\(y\\) about the unknown \\(\theta\\).\n\nBayesian analysis satisfies the likelihood principle because the posterior distribution's dependence on the data is **only through the likelihood**. In comparison, most frequentist inference procedures violate the likelihood principle, because inference will depend on the design of the trial or experiment.\n\nRemember from the density estimation section that the likelihood is closely related to the probability density (or mass) function. The difference is that the likelihood varies the parameter while holding the observations constant, rather than *vice versa*.\n\n## Bayesian Inference, in 3 Easy Steps\n\n\n\nGelman et al. (2013) describe the process of conducting Bayesian statistical analysis in 3 steps.\n\n### Step 1: Specify a probability model\n\nAs was noted above, Bayesian statistics involves using probability models to solve problems. 
So, the first task is to *completely specify* the model in terms of probability distributions. This includes everything: unknown parameters, data, covariates, missing data, predictions. All must be assigned some probability density.\n\nThis step involves making choices.\n\n- what is the form of the sampling distribution of the data?\n- what form best describes our uncertainty in the unknown parameters?\n\n### Discrete Random Variables\n\n$$X = \\{0,1\\}$$\n\n$$Y = \\{\\ldots,-2,-1,0,1,2,\\ldots\\}$$\n\n**Probability Mass Function**: \n\nFor discrete $X$,\n\n$$Pr(X=x) = f(x|\\theta)$$\n\n\n\n***e.g. Poisson distribution***\n\nThe Poisson distribution models unbounded counts:\n\n
\n$$Pr(X=x)=\\frac{e^{-\\lambda}\\lambda^x}{x!}$$\n
\n\n* $X=\\{0,1,2,\\ldots\\}$\n* $\\lambda > 0$\n\n$$E(X) = \\text{Var}(X) = \\lambda$$\n\n\n```python\nfrom pymc3 import Poisson\n\nx = Poisson.dist(mu=1)\nsamples = x.random(size=10000)\n```\n\n\n```python\nsamples.mean()\n```\n\n\n\n\n 1.0115\n\n\n\n\n```python\nplt.hist(samples, bins=len(set(samples)));\n```\n\n### Continuous Random Variables\n\n$$X \\in [0,1]$$\n\n$$Y \\in (-\\infty, \\infty)$$\n\n**Probability Density Function**: \n\nFor continuous $X$,\n\n$$Pr(x \\le X \\le x + dx) = f(x|\\theta)dx \\, \\text{ as } \\, dx \\rightarrow 0$$\n\n\n\n***e.g. normal distribution***\n\n
\n$$f(x) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp\\left[-\\frac{(x-\\mu)^2}{2\\sigma^2}\\right]$$\n
\n\n* $X \\in \\mathbf{R}$\n* $\\mu \\in \\mathbf{R}$\n* $\\sigma>0$\n\n$$\\begin{align}E(X) &= \\mu \\cr\n\\text{Var}(X) &= \\sigma^2 \\end{align}$$\n\n\n```python\nfrom pymc3 import Normal\n\ny = Normal.dist(mu=-2, sd=4)\nsamples = y.random(size=10000)\n```\n\n\n```python\nsamples.mean()\n```\n\n\n\n\n -2.027323774069744\n\n\n\n\n```python\nsamples.std()\n```\n\n\n\n\n 3.969166857847105\n\n\n\n\n```python\nplt.hist(samples);\n```\n\n### Step 2: Calculate a posterior distribution\n\nThe mathematical form \\\\(p(\\theta | y)\\\\) that we associated with the Bayesian approach is referred to as a **posterior distribution**.\n\n> posterior /pos\u00b7ter\u00b7i\u00b7or/ (pos-t\u0113r\u00b4e-er) later in time; subsequent.\n\nWhy posterior? Because it tells us what we know about the unknown \\\\(\\theta\\\\) *after* having observed \\\\(y\\\\).\n\nThis posterior distribution is formulated as a function of the probability model that was specified in Step 1. Usually, we can write it down but we cannot calculate it analytically. In fact, the difficulty inherent in calculating the posterior distribution for most models of interest is perhaps the major contributing factor for the lack of widespread adoption of Bayesian methods for data analysis. Various strategies for doing so comprise this tutorial.\n\n**But**, once the posterior distribution is calculated, you get a lot for free:\n\n- point estimates\n- credible intervals\n- quantiles\n- predictions\n\n### Step 3: Check your model\n\nThough frequently ignored in practice, it is critical that the model and its outputs be assessed before using the outputs for inference. 
Models are specified based on assumptions that are largely unverifiable, so the least we can do is examine the output in detail, relative to the specified model and the data that were used to fit the model.\n\nSpecifically, we must ask:\n\n- does the model fit the data?\n- are the conclusions reasonable?\n- are the outputs sensitive to changes in model structure?\n\n\n\n## Estimation for one group\n\nBefore we compare two groups using Bayesian analysis, let's start with an even simpler scenario: statistical inference for one group.\n\nFor this we will use Gelman et al.'s (2007) radon dataset. In this dataset the amount of the radioactive gas radon has been measured among different households in all counties of several states. Radon gas is known to be the leading cause of lung cancer in non-smokers. It is believed to be more strongly present in households containing a basement and to differ in amount present among types of soil.\n\n> the US EPA has set an action level of 4 pCi/L. At or above this level of radon, the EPA recommends you take corrective measures to reduce your exposure to radon gas.\n\n\n\nLet's import the dataset:\n\n\n```python\nradon = pd.read_csv('../data/radon.csv', index_col=0)\nradon.head()\n```\n\n\n\n\n
    (output: first five rows of the radon DataFrame, 5 rows x 29 columns, with fields including idnum, state, zip, floor, basement, county, Uppm and log_radon)\n
\n\n\n\nLet's focus on the (log) radon levels measured in a single county (Hennepin). \n\nSuppose we are interested in:\n\n- whether the mean log-radon value is greater than 4 pCi/L in Hennepin county\n- the probability that any randomly-chosen household in Hennepin county has a reading of greater than 4\n\n\n```python\nhennepin_radon = radon.query('county==\"HENNEPIN\"').log_radon\nsns.distplot(hennepin_radon)\n```\n\n\n```python\nhennepin_radon.shape\n```\n\n\n\n\n (105,)\n\n\n\n### The model\n\nRecall that the first step in Bayesian inference is specifying a **full probability model** for the problem.\n\nThis consists of:\n\n- a likelihood function(s) for the observations\n- priors for all unknown quantities\n\nThe measurements look approximately normal, so let's start by assuming a normal distribution as the sampling distribution (likelihood) for the data. \n\n$$y_i \\sim N(\\mu, \\sigma^2)$$\n\n(don't worry, we can evaluate this assumption)\n\nThis implies that we have 2 unknowns in the model; the mean and standard deviation of the distribution. \n\n#### Prior choice\n\nHow do we choose distributions to use as priors for these parameters? \n\nThere are several considerations:\n\n- discrete vs continuous values\n- the support of the variable\n- the available prior information\n\nWhile there may likely be prior information about the distribution of radon values, we will assume no prior knowledge, and specify a **diffuse** prior for each parameter.\n\nSince the mean can take any real value (since it is on the log scale), we will use another normal distribution here, and specify a large variance to allow the possibility of very large or very small values:\n\n$$\\mu \\sim N(0, 10^2)$$\n\nFor the standard deviation, we know that the true value must be positive (no negative variances!). 
I will choose a uniform prior bounded from below at zero and from above at a value that is sure to be higher than any plausible value the true standard deviation (on the log scale) could take.\n\n$$\\sigma \\sim U(0, 10)$$\n\nWe can encode these in a Python model, using the PyMC3 package, as follows:\n\n\n```python\nfrom pymc3 import Model, Uniform\n\nwith Model() as radon_model:\n \n \u03bc = Normal('\u03bc', mu=0, sd=10)\n \u03c3 = Uniform('\u03c3', 0, 10)\n```\n\n> ## Software\n> Today there is an array of software choices for Bayesians, including both open source software (*e.g.*, Stan, PyMC, JAGS, emcee) and commercial (*e.g.*, SAS, Stata). These examples can be replicated in any of these environments.\n\nAll that remains is to add the likelihood, which takes $\\mu$ and $\\sigma$ as parameters, and the log-radon values as the set of observations:\n\n\n```python\nwith radon_model:\n \n y = Normal('y', mu=\u03bc, sd=\u03c3, observed=hennepin_radon)\n```\n\nNow, we will fit the model using a numerical approach called **variational inference**. This will estimate the posterior distribution using an optimized approximation, and then draw samples from it.\n\n\n```python\nfrom pymc3 import fit\nwith radon_model:\n\n samples = fit(random_seed=RANDOM_SEED).sample(1000)\n```\n\n Average Loss = 136.48: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 10000/10000 [00:05<00:00, 1937.16it/s]\n Finished [100%]: Average Loss = 136.37\n\n\n\n```python\nfrom pymc3 import plot_posterior\n\nplot_posterior(samples, varnames=['\u03bc'], ref_val=np.log(4), color='LightSeaGreen');\n```\n\nThe plot shows the posterior distribution of $\\mu$, along with an estimate of the 95% posterior **credible interval**. 
\n\nThe output\n\n    83.1% < 1.38629 < 16.9%\n    \ninforms us that the probability of $\mu$ being less than $\log(4)$ is 83.1% and the corresponding probability of being greater than $\log(4)$ is 16.9%.\n\n> The posterior probability that the mean level of household radon in Hennepin County is greater than 4 pCi/L is 0.17.\n\n### Prediction\n\nWhat is the probability that a given household has a log-radon measurement greater than $\log(4)$ (that is, a radon level above 4 pCi/L)? To answer this, we make use of the **posterior predictive distribution**.\n\n$$p(z |y) = \int_{\theta} p(z |\theta) p(\theta | y) d\theta$$\n\nwhere here $z$ is the predicted value and $y$ is the data used to fit the model.\n\nWe can estimate this from the posterior samples of the parameters in the model.\n\n\n```python\nmus = samples['\u03bc']\nsigmas = samples['\u03c3']\n```\n\n\n```python\nradon_samples = Normal.dist(mus, sigmas).random()\n```\n\n\n```python\n(radon_samples > np.log(4)).mean()\n```\n\n\n\n\n    0.463\n\n\n\n> The posterior probability that a randomly-selected household in Hennepin County contains radon levels in excess of 4 pCi/L is 0.46.\n\n### Model checking\n\nBut, ***how do we know this model is any good?***\n\nIt's important to check the fit of the model, to see if its assumptions are reasonable. One way to do this is to perform **posterior predictive checks**. 
This involves generating simulated data using the model that you built, and comparing that data to the observed data.\n\nOne can choose a particular statistic to compare, such as tail probabilities or quartiles, but here it is useful to compare them graphically.\n\nWe already have these simulations from the previous exercise!\n\n\n```python\nsns.distplot(radon_samples, label='simulated')\nsns.distplot(hennepin_radon, label='observed')\nplt.legend()\n```\n\n### Prior sensitivity\n\nIt's also important to check the sensitivity of your choice of priors to the resulting inference.\n\nHere is the same model, but with drastically different (though still uninformative) priors specified:\n\n\n```python\nfrom pymc3 import Flat, HalfCauchy\n\nwith Model() as prior_sensitivity:\n    \n    \u03bc = Flat('\u03bc')\n    \u03c3 = HalfCauchy('\u03c3', 5)\n    \n    dist = Normal('dist', mu=\u03bc, sd=\u03c3, observed=hennepin_radon)\n    \n    sensitivity_samples = fit(random_seed=RANDOM_SEED).sample(1000)\n```\n\n    Average Loss = 123.98: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 10000/10000 [00:04<00:00, 2201.42it/s]\n    Finished [100%]: Average Loss = 123.94\n\n\n\n```python\nplot_posterior(sensitivity_samples, varnames=['\u03bc'], ref_val=np.log(4), color='LightSeaGreen');\n```\n\nHere is the original model for comparison:\n\n\n```python\nplot_posterior(samples, varnames=['\u03bc'], ref_val=np.log(4), color='LightSeaGreen');\n```\n\n## Two Groups with Continuous Outcome\n\nTo illustrate how this Bayesian estimation approach works in practice, we will use a fictitious example from Kruschke (2012) concerning a clinical trial for drug evaluation. The trial aims to evaluate the efficacy of a \"smart drug\" that is supposed to increase intelligence by comparing IQ scores of individuals in a treatment arm (those receiving the drug) to those in a control arm (those receiving a placebo). 
There are 47 individuals and 42 individuals in the treatment and control arms, respectively.\n\n\n```python\ndrug = pd.DataFrame(dict(iq=(101,100,102,104,102,97,105,105,98,101,100,123,105,103,100,95,102,106,\n 109,102,82,102,100,102,102,101,102,102,103,103,97,97,103,101,97,104,\n 96,103,124,101,101,100,101,101,104,100,101),\n group='drug'))\nplacebo = pd.DataFrame(dict(iq=(99,101,100,101,102,100,97,101,104,101,102,102,100,105,88,101,100,\n 104,100,100,100,101,102,103,97,101,101,100,101,99,101,100,100,\n 101,100,99,101,100,102,99,100,99),\n group='placebo'))\n\ntrial_data = pd.concat([drug, placebo], ignore_index=True)\nsns.set()\ntrial_data.hist('iq', by='group');\n```\n\nSince there appear to be extreme (\"outlier\") values in the data, we will choose a Student-t distribution to describe the distributions of the scores in each group. This sampling distribution adds **robustness** to the analysis, as a T distribution is less sensitive to outlier observations, relative to a normal distribution. \n\nThe three-parameter Student-t distribution allows for the specification of a mean $\\mu$, a precision (inverse-variance) $\\lambda$ and a degrees-of-freedom parameter $\\nu$:\n\n$$f(x|\\mu,\\lambda,\\nu) = \\frac{\\Gamma(\\frac{\\nu + 1}{2})}{\\Gamma(\\frac{\\nu}{2})} \\left(\\frac{\\lambda}{\\pi\\nu}\\right)^{\\frac{1}{2}} \\left[1+\\frac{\\lambda(x-\\mu)^2}{\\nu}\\right]^{-\\frac{\\nu+1}{2}}$$\n \nthe degrees-of-freedom parameter essentially specifies the \"normality\" of the data, since larger values of $\\nu$ make the distribution converge to a normal distribution, while small values (close to zero) result in heavier tails.\n\nThus, the likelihood functions of our model are specified as follows:\n\n$$\\begin{align}\ny^{(drug)}_i &\\sim T(\\nu, \\mu_1, \\sigma_1) \\\\\ny^{(placebo)}_i &\\sim T(\\nu, \\mu_2, \\sigma_2)\n\\end{align}$$\n\nAs a simplifying assumption, we will assume that the degree of normality $\\nu$ is the same for both groups. 
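The density above is easy to sanity-check numerically. The sketch below (standard library only, not from the original notebook) implements the formula as written and verifies the limiting behaviour: for large $\nu$ it approaches a normal density, while for small $\nu$ (such as $\nu=3$, used in the exercise below) the tails are heavier than normal.

```python
import math

def student_t_pdf(x, mu, lam, nu):
    """Three-parameter Student-t density as given above; lam is the precision."""
    log_c = math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
    return (math.exp(log_c) * math.sqrt(lam / (math.pi * nu))
            * (1 + lam * (x - mu)**2 / nu) ** (-(nu + 1) / 2))

def normal_pdf(x, mu, sigma):
    """Normal density, for comparison in the large-nu limit."""
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)
```

For example, with $\mu=0$, $\lambda=1$ and very large $\nu$ the two densities agree to many decimal places, while at $x=4$ the $\nu=3$ density is far larger than the normal one (the "robustness" referred to above).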
\n\n### Exercise\n\nDraw 10000 samples from a Student-T distribution (`StudentT` in PyMC3) with parameter `nu=3` and compare the distribution of these values to a similar number of draws from a Normal distribution with parameters `mu=0` and `sd=1`.\n\n\n```python\nfrom pymc3 import StudentT\n\nt = StudentT.dist(nu=3).random(size=1000)\nn = Normal.dist(0, 1).random(size=1000)\n```\n\n\n```python\nsns.distplot(t, label='Student-T')\nsns.distplot(n, label='Normal')\nplt.legend()\nplt.xlim(-10,10);\n```\n\n\n### Prior choice\n\nSince the means are real-valued, we will apply normal priors. Since we know something about the population distribution of IQ values, we will center the priors at 100, and use a standard deviation that is more than wide enough to account for plausible deviations from this population mean:\n\n$$\\mu_k \\sim N(100, 10^2)$$\n\n\n```python\nwith Model() as drug_model:\n \n \u03bc_0 = Normal('\u03bc_0', 100, sd=10)\n \u03bc_1 = Normal('\u03bc_1', 100, sd=10)\n```\n\nSimilarly, we will use a uniform prior for the standard deviations, with an upper bound of 20.\n\n\n```python\nwith drug_model:\n \u03c3_0 = Uniform('\u03c3_0', lower=0, upper=20)\n \u03c3_1 = Uniform('\u03c3_1', lower=0, upper=20)\n```\n\nFor the degrees-of-freedom parameter $\\nu$, we will use an **exponential** distribution with a mean of 30; this allocates high prior probability over the regions of the parameter that describe the range from normal to heavy-tailed data under the Student-T distribution.\n\n\n```python\nfrom pymc3 import Exponential\n\nwith drug_model:\n \u03bd = Exponential('\u03bd_minus_one', 1/29.) 
+ 1\n\n```\n\n\n```python\nsns.distplot(Exponential.dist(1/29).random(size=10000), kde=False);\n```\n\n\n```python\nfrom pymc3 import StudentT\n\nwith drug_model:\n\n drug_like = StudentT('drug_like', nu=\u03bd, mu=\u03bc_1, lam=\u03c3_1**-2, observed=drug.iq)\n placebo_like = StudentT('placebo_like', nu=\u03bd, mu=\u03bc_0, lam=\u03c3_0**-2, observed=placebo.iq)\n```\n\nNow that the model is fully specified, we can turn our attention to tracking the posterior quantities of interest. Namely, we can calculate the difference in means between the drug and placebo groups.\n\nAs a joint measure of the groups, we will also estimate the \"effect size\", which is the difference in means scaled by the pooled estimates of standard deviation. This quantity can be harder to interpret, since it is no longer in the same units as our data, but it is a function of all four estimated parameters.\n\n\n```python\nfrom pymc3 import Deterministic\n\nwith drug_model:\n \n diff_of_means = Deterministic('difference of means', \u03bc_1 - \u03bc_0)\n \n effect_size = Deterministic('effect size', \n diff_of_means / np.sqrt((\u03c3_1**2 + \u03c3_0**2) / 2))\n\n\n```\n\n\n```python\nwith drug_model:\n \n drug_trace = fit(random_seed=RANDOM_SEED).sample(1000)\n```\n\n Average Loss = 245.2: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 10000/10000 [00:05<00:00, 1863.59it/s]\n Finished [100%]: Average Loss = 245.13\n\n\n\n```python\nplot_posterior(drug_trace[100:], \n varnames=['\u03bc_0', '\u03bc_1', '\u03c3_0', '\u03c3_1', '\u03bd_minus_one'],\n color='#87ceeb');\n```\n\n\n```python\nplot_posterior(drug_trace[100:], \n varnames=['difference of means', 'effect size'],\n ref_val=0,\n color='#87ceeb');\n```\n\n> The posterior probability that the mean IQ of subjects in the treatment group is greater than that of the control group is 0.99.\n\n### Exercise\n\nLoad the `nashville_precip.txt` dataset. Build a model to compare rainfall in January and July. 
\n\n- What's the probability that the expected rainfall in January is larger than in July?\n- What's the probability that January rainfall exceeds July rainfall in a given year?\n\n\n```python\nnash_precip = pd.read_table('../data/nashville_precip.txt', \n delimiter='\\s+', na_values='NA', index_col=0)\nnash_precip.head()\n```\n\n\n\n\n
| Year | Jan  | Feb  | Mar  | Apr   | May  | Jun  | Jul  | Aug  | Sep  | Oct  | Nov  | Dec  |
|------|------|------|------|-------|------|------|------|------|------|------|------|------|
| 1871 | 2.76 | 4.58 | 5.01 | 4.13  | 3.30 | 2.98 | 1.58 | 2.36 | 0.95 | 1.31 | 2.13 | 1.65 |
| 1872 | 2.32 | 2.11 | 3.14 | 5.91  | 3.09 | 5.17 | 6.10 | 1.65 | 4.50 | 1.58 | 2.25 | 2.38 |
| 1873 | 2.96 | 7.14 | 4.11 | 3.59  | 6.31 | 4.20 | 4.63 | 2.36 | 1.81 | 4.28 | 4.36 | 5.94 |
| 1874 | 5.22 | 9.23 | 5.36 | 11.84 | 1.49 | 2.87 | 2.65 | 3.52 | 3.12 | 2.63 | 6.12 | 4.19 |
| 1875 | 6.15 | 3.06 | 8.14 | 4.22  | 1.73 | 5.63 | 8.12 | 1.60 | 3.79 | 1.25 | 5.46 | 4.30 |
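As a quick empirical warm-up for the second question, we can count how often January rainfall exceeds July rainfall in the five displayed years (the real exercise should use every year in the file, and a Bayesian model for both questions; these numbers are copied from the table head above):

```python
# January and July rainfall for the five years displayed above (1871-1875)
jan = [2.76, 2.32, 2.96, 5.22, 6.15]
jul = [1.58, 6.10, 4.63, 2.65, 8.12]

# Empirical frequency with which January rainfall exceeds July rainfall
exceed = sum(j > u for j, u in zip(jan, jul)) / len(jan)
print(exceed)  # 0.4 for these five years
```

This is only a point estimate from a tiny slice of the data; the posterior predictive comparison asked for in the exercise quantifies the same question with uncertainty attached.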
```python
# %load ../exercises/rainfall.py
```

## Two Groups with Binary Outcome

Now that we have seen how to generalize normally-distributed data to another distribution, we are equipped to analyze other data types. Binary outcomes are common in clinical research: 

- survival/death
- true/false
- presence/absence
- positive/negative

> *Never, ever dichotomize continuous or ordinal variables prior to statistical analysis*

In practice, binary outcomes are encoded as ones (for event occurrences) and zeros (for non-occurrence). A single binary variable is distributed as a **Bernoulli** random variable:

$$f(x \mid p) = p^{x} (1-p)^{1-x}$$

Such events are sometimes reported as sums of individual events, such as the number of individuals in a group who test positive for a condition of interest. Sums of Bernoulli events are distributed as **binomial** random variables.

$$f(x \mid n, p) = \binom{n}{x} p^x (1-p)^{n-x}$$

The parameter in both models is $p$, the probability of the occurrence of an event. In terms of inference, we are typically interested in whether $p$ is larger or smaller in one group relative to another.

To demonstrate the comparison of two groups with binary outcomes using Bayesian inference, we will use a sample pediatric dataset. Data on 671 infants with very low (<1600 grams) birth weight from 1981-87 were collected at Duke University Medical Center. Of interest is the relationship between the outcome intra-ventricular hemorrhage (IVH) and predictors such as birth weight, gestational age, presence of pneumothorax and mode of delivery.

```python
vlbw = pd.read_csv('../data/vlbw.csv', index_col=0).dropna(axis=0, subset=['ivh', 'pneumo'])
vlbw.head()
```
|    | birth     | exit      | hospstay | lowph    | pltct | race  | bwt    | gest | inout        | twn | ... | vent | pneumo | pda | cld | pvh      | ivh      | ipe    | year      | sex    | dead |
|----|-----------|-----------|----------|----------|-------|-------|--------|------|--------------|-----|-----|------|--------|-----|-----|----------|----------|--------|-----------|--------|------|
| 5  | 81.593002 | 81.598999 | 2.0      | 6.969997 | 54.0  | black | 925.0  | 28.0 | born at Duke | 0.0 | ... | 1.0  | 1.0    | 0.0 | 0.0 | definite | definite | NaN    | 81.594055 | female | 1    |
| 6  | 81.601997 | 81.771004 | 62.0     | 7.189999 | NaN   | white | 940.0  | 28.0 | born at Duke | 0.0 | ... | 1.0  | 0.0    | 0.0 | 0.0 | absent   | absent   | absent | 81.602295 | female | 0    |
| 13 | 81.683998 | 81.853996 | 62.0     | 7.179996 | 182.0 | black | 1110.0 | 28.0 | born at Duke | 0.0 | ... | 0.0  | 1.0    | 0.0 | 1.0 | absent   | absent   | absent | 81.684448 | male   | 0    |
| 14 | 81.689003 | 81.877998 | 69.0     | 7.419998 | 361.0 | white | 1180.0 | 28.0 | born at Duke | 0.0 | ... | 0.0  | 0.0    | 0.0 | 0.0 | absent   | absent   | absent | 81.689880 | male   | 0    |
| 16 | 81.696999 | 81.952004 | 93.0     | 7.239998 | 255.0 | black | 770.0  | 26.0 | born at Duke | 0.0 | ... | 1.0  | 0.0    | 0.0 | 1.0 | absent   | absent   | absent | 81.698120 | male   | 0    |

5 rows × 26 columns
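Before modeling, the binomial mass function introduced above can be checked numerically: it is just the distribution of a sum of independent Bernoulli events. A small standard-library verification (an added aside, not part of the original notebook):

```python
from math import comb

def binom_pmf(x, n, p):
    # f(x | n, p) = C(n, x) p^x (1-p)^(n-x)
    return comb(n, x) * p**x * (1 - p)**(n - x)

# With n = 1 the binomial reduces to the Bernoulli mass function
print(binom_pmf(1, 1, 0.3))  # 0.3
print(binom_pmf(0, 1, 0.3))  # 0.7

# Probabilities over all possible outcomes sum to one
total = sum(binom_pmf(x, 10, 0.3) for x in range(11))
print(round(total, 10))  # 1.0
```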
To demonstrate binary data analysis, we will try to estimate the difference between the probability of an intra-ventricular hemorrhage for infants with and without a pneumothorax. 

```python
pd.crosstab(vlbw.ivh, vlbw.pneumo)
```
| ivh \ pneumo | 0.0 | 1.0 |
|--------------|-----|-----|
| absent       | 359 | 73  |
| definite     | 45  | 30  |
| possible     | 6   | 4   |
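As a sanity check on what the model should recover, the raw IVH proportions in each pneumothorax group can be computed directly from the crosstab counts above (a small added check; the counts are copied by hand from the table):

```python
# Counts from the crosstab: (pneumo=0, pneumo=1) for each IVH category
absent   = (359, 73)
definite = (45, 30)
possible = (6, 4)

# Empirical IVH probability ("definite" or "possible") in each pneumo group
p_hat = []
for g in (0, 1):
    events = definite[g] + possible[g]
    n = absent[g] + events
    p_hat.append(events / n)

print([round(p, 3) for p in p_hat])  # [0.124, 0.318]
```

The Bernoulli model below should produce posteriors for `p` concentrated near these two raw proportions, with the prior adding only mild shrinkage.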
\n\n\n\nWe will create a binary outcome by combining `definite` and `possible` into a single outcome.\n\n\n```python\nivh = vlbw.ivh.isin(['definite', 'possible']).astype(int).values\nx = vlbw.pneumo.astype(int).values\n```\n\n### Prior choice\n\nWhat should we choose as a prior distribution for $p$?\n\nWe could stick with a normal distribution, but note that the value of $p$ is **constrained** by the laws of probability. Namely, we cannot have values smaller than zero nor larger than one. So, choosing a normal distribution will result in ascribing positive probability to unsupported values of the parameter. In many cases, this will still work in practice, but will be inefficient for calculating the posterior and will not accurately represent the prior information about the parameter.\n\nA common choice in this context is the **beta distribution**, a continuous distribution with 2 parameters and whose support is on the unit interval:\n\n$$ f(x \\mid \\alpha, \\beta) = \\frac{x^{\\alpha - 1} (1 - x)^{\\beta - 1}}{B(\\alpha, \\beta)}$$\n\n- Support: $x \\in (0, 1)$\n- Mean: $\\dfrac{\\alpha}{\\alpha + \\beta}$\n- Variance: $\\dfrac{\\alpha \\beta}{(\\alpha+\\beta)^2(\\alpha+\\beta+1)}$\n\n\n```python\nfrom pymc3 import Beta\n\nparams = (5, 1), (1, 3), (5, 5), (0.5, 0.5), (1, 1)\n\nfig, axes = plt.subplots(1, len(params), figsize=(14, 4), sharey=True)\nfor ax, (alpha, beta) in zip(axes, params):\n sns.distplot(Beta.dist(alpha, beta).random(size=10000), ax=ax, kde=False)\n ax.set_xlim(0, 1)\n ax.set_title(r'$\\alpha={0}, \\beta={1}$'.format(alpha, beta));\n```\n\nSo let's use a beta distribution to model our prior knowledge of the probabilities for both groups. Setting $\\alpha = \\beta = 1$ will result in a uniform distribution of prior mass:\n\n\n```python\nwith Model() as ivh_model:\n \n p = Beta('p', 1, 1, shape=2)\n```\n\nWe can now use `p` as the parameter of our Bernoulli likelihood. 
Here, `x` is a vector of zeros and ones, which will extract the appropriate group probability for each subject:

```python
from pymc3 import Bernoulli

with ivh_model:
    
    bb_like = Bernoulli('bb_like', p=p[x], observed=ivh)
```

Finally, since we are interested in the difference between the probabilities, we will keep track of this difference:

```python
with ivh_model:
    
    p_diff = Deterministic('p_diff', p[1] - p[0])
```

```python
with ivh_model:
    ivh_trace = fit(random_seed=RANDOM_SEED).sample(1000)
```

    Average Loss = 226.28: 100%|██████████| 10000/10000 [00:00<00:00, 13352.85it/s]
    Finished [100%]: Average Loss = 226.28

```python
plot_posterior(ivh_trace[100:], varnames=['p'], color='#87ceeb');
```

We can see that `p` is larger for the pneumothorax group, with posterior probability essentially equal to one.

```python
plot_posterior(ivh_trace[100:], varnames=['p_diff'], ref_val=0, color='#87ceeb');
```

## References and Resources

- Goodman, S. N. (1999). Toward evidence-based medical statistics. 1: The P value fallacy. Annals of Internal Medicine, 130(12), 995–1004. http://doi.org/10.7326/0003-4819-130-12-199906150-00008
- Johnson, D. (1999). The insignificance of statistical significance testing. Journal of Wildlife Management, 63(3), 763–772.
- Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian Data Analysis, Third Edition. CRC Press.
- Kruschke, J. K. (2015). Doing Bayesian Data Analysis, Second Edition: A Tutorial with R, JAGS, and Stan. Academic Press / Elsevier.
- O'Shea M., Savitz D. A., Hage M. L., Feinstein K. A. (1992). Prenatal events and the risk of subependymal / intraventricular haemorrhage in very low birth weight neonates. 
Paediatric and Perinatal Epidemiology, 6, 352–362.



```python
import sys
print("Echo system status -- reset environment if kernel NOT 3.8...")
print(sys.executable)
print(sys.version)
print(sys.version_info)
! 
hostname
```

    Echo system status -- reset environment if kernel NOT 3.8...
    /opt/jupyterhub/bin/python3
    3.8.2 (default, Jul 16 2020, 14:00:26) 
    [GCC 9.3.0]
    sys.version_info(major=3, minor=8, micro=2, releaselevel='final', serial=0)
    atomickitty


# Conditional Computations

I could not think of a good name for this section - decisions perhaps? 
Anyway, this section is all about using conditions - logical expressions that evaluate as TRUE or FALSE - and using these results to perform further operations based on these conditions.
All flow control in a program depends on evaluating conditions. The program will proceed differently based on the outcome of one or more conditions - really sophisticated "Artificial Intelligence" programs are a collection of conditions and correlations (albeit very complex). 
Amazon knowing roughly what you want is based on correlations of your past behavior compared to other people's similar, but more recent, behavior; it then uses conditional statements to decide what item to offer you in your recommendations. It's uncanny, but ultimately just a program.

## Comparisons
The most common conditional operation is comparison. If we wish to compare whether two variables are the same we use the ``==`` (double equal sign).
For example ``x == y`` means the program will ask whether ``x`` and ``y`` have the same value. If they do, the result is ``TRUE``; if not, the result is ``FALSE``.
Other comparison signs are ``!=`` does NOT equal, ``<`` smaller than, ``>`` larger than, ``<=`` less than or equal, and ``>=`` greater than or equal.
Like Excel, there are also three logical operators when we want to build multiple compares (multiple conditioning); these are ``and``, ``or``, and ``not``.
The ``and`` operator returns ``TRUE`` if and only if all conditions are ``TRUE``. 
For instance ``5 == 5 and 5 < 6`` will return ``TRUE`` because both conditions are true.
The ``or`` operator returns ``TRUE`` if at least one condition is true. If all conditions are ``FALSE``, then it will return ``FALSE``. For instance ``4 > 3 or 17 > 20 or 3 == 2`` will return ``TRUE`` because the first condition is true.
The ``not`` operator returns ``TRUE`` if the condition after the ``not`` keyword is false. Think of it as a way to do a logic reversal.

The script below is a few simple examples of compares.

```python
# compare
x = 7
y = 10
print("x= ",x,"y= ",y)
print("x is equal to y :",x==y)
print("x is not equal to y :",x!=y)
print("x is greater than y :",x>y)
print("x is less than y :",x<y)
```

    x=  7 y=  10
    x is equal to y : False
    x is not equal to y : True
    x is greater than y : False
    x is less than y : True

```python
print("5 == 5 and 5 < 6 :",5 == 5 and 5 < 6)
print("4 > 3 or 17 > 20 or 3 == 2 ? :",4 > 3 or 17 > 20 or 3 == 2)
print("not 5 == 5 :",not 5 == 5)
```

    5 == 5 and 5 < 6 : True
    4 > 3 or 17 > 20 or 3 == 2 ? : True
    not 5 == 5 : False


## Block `if` statement

The `if` statement is a common flow control statement. It allows the program to evaluate whether a certain condition is satisfied and to perform a designed action based on the result of the evaluation. The structure of an `if` statement is

    if condition 1 is met:
        do A
    elif condition 2 is met:
        do B
    elif condition 3 is met:
        do C
    else:
        do D

The `elif` means "else if". The `:` colon is an important part of the structure; it tells where the action begins. Also there are no scope delimiters like `{}` or `()` common in other programming tools. 
Instead Python uses indentation to isolate blocks of code. 
This convention is hugely important -- many other coding environments use delimiters (called scoping delimiters), but Python does not. 
The indentation itself is the scoping delimiter.
The intent is for the code to be humanly readable for maintenance - you can use the comment symbol ``#`` as a fake, searchable delimiter, but at times that itself gets cluttered.

The script below is an example that illustrates how `if` statements work. The program asks the user for input. The use of `input()` will let the program read any input as a string, so non-numeric results will not throw an error. The input is stored in the variable named `userInput`. Next the statement `if userInput == "1":` compares the value of `userInput` with the string "1". If the value in the variable is indeed "1", then the program will execute the block of code in the indentation after the colon. In this case it will execute

    print ("Hello World")
    print ("How do you do?")

Alternatively, if the value of ``userInput`` is the string "2", then the program will execute

    print ("Snakes on a plane")

For all other values the program will execute

    print ("You did not enter a valid value")

```python
# block if
userInput = input("Enter the number 1 or the number 2")
if userInput == "1":
    print ("Hello World")
    print ("How do you do?")
elif userInput == "2":
    print("Snakes on a plane")
else:
    print("You did not enter a valid value")
```

    Enter the number 1 or the number 2 1

    Hello World
    How do you do?


## Inline `if` statement
An inline if statement is a simpler form of an if statement and is more convenient if you only need to perform a simple conditional task. 
The syntax is

    do TaskA if condition is true else do TaskB

An example would be:

```python
myInt = 3
num1 = 12 if myInt == 0 else 13
num1
```

    13

An alternative way is to enclose the condition in parentheses for some clarity, like

```python
myInt = 3
num1 = 12 if (myInt == 0) else 13
num1
```

    13

In either case the result is that num1 will have the value 13 (unless you set myInt to 0).

## `for` loop

We have seen the for loop already, but we will formally introduce it here. 
The loop executes a block of code repeatedly until the condition in the ``for`` statement is no longer true.

### Looping through an iterable
An iterable is anything that can be looped over - typically a list, string, or tuple. 
The syntax for looping through an iterable is illustrated by an example.
First a generic syntax

    for a in iterable:
        print(a)

Notice our friends the colon `:` and the indentation.

Now a specific example

```python
# set a list
MyPets = ["dusty","aspen","merrimee"]
# loop thru the list
for AllStrings in MyPets:
    print(AllStrings)
```

    dusty
    aspen
    merrimee

We can also display the index of the list elements using the enumerate() function. Try the code below

```python
# set a list
MyPets = ["dusty","aspen","merrimee"]
# loop thru the list
for index, AllStrings in enumerate(MyPets):
    print(index,AllStrings)
```

    0 dusty
    1 aspen
    2 merrimee

`For` loops can also be used for count-controlled repetition; they work on a generic "increment, skip if greater" type of loop structure. 
The range function is used in place of the iterable, and the list is accessed directly by a `name[index]` structure

```python
# set a list
MyPets = ["dusty","aspen","merrimee"]
how_many = len(MyPets)
# loop thru the list
for index in range(0,how_many,1):
    print(index,MyPets[index])
```

    0 dusty
    1 aspen
    2 merrimee

### ``while`` loop
The while loop repeats a block of instructions inside the loop while a condition remains true. The structure is

    while condition is true:
        execute a
        ....
 
Notice our friends the colon : and the indentation again.

Try the code below to illustrate a while loop:

```python
# set a counter
counter = 5
# while loop
while counter > 0:
    print ("Counter = ",counter)
    counter = counter -1
```

    Counter =  5
    Counter =  4
    Counter =  3
    Counter =  2
    Counter =  1

The `while` loop structure depicted above is a structure that is referred to as "decrement, skip if equal" in lower-level languages. 
The next structure, also a while loop, is an "increment, skip if greater". Try this code:

```python
# set a counter
counter = 0
# while loop
while counter < 5:
    print ("Counter = ",counter)
    counter = counter +1
```

    Counter =  0
    Counter =  1
    Counter =  2
    Counter =  3
    Counter =  4

A few more variants include that same code, except the `+=` operator replaces portions of the code.

```python
# set a counter
counter = 0
# while loop
while counter < 5:
    print ("Counter = ",counter)
    counter += 1 #use the self+ operator
```

    Counter =  0
    Counter =  1
    Counter =  2
    Counter =  3
    Counter =  4

And here we vary the condition slightly and use `<=` to capture the value 5 itself.

```python
# set a counter
counter = 0
# while loop
while counter <= 5:
    print ("Counter = ",counter)
    counter += 1 #use the self+ operator
```

    Counter =  0
    Counter =  1
    Counter =  2
    Counter =  3
    Counter =  4
    Counter =  5

### The infinite 
loop (a cool street address, poor programming structure)

`while` loops need to be used with care; it is reasonably easy to create infinite loops, and then we have to interrupt the process (possibly have to externally kill it - easy if you are root and know how, a disaster if you have deployed the code to other people who are not programmers and system savvy).
Here is an example of an infinite loop. (Kill the process by halting the kernel when you run it.)

```python
# set a counter
counter = 5
counter = -4 # this line deactivates the infinite loop -- comment it out to see the loop run away
# while loop
while counter > 0:
    print ("Counter = ",counter)
    counter = counter +1 # pretend we accidentally incremented instead of decremented the counter!
```

Infinite loops can be frustrating when you are maintaining a large (long), complex code and you have no idea which code segment is causing the infinite loop. Often a system administrator has to kill the process at the OS level.

### The `break` instruction
Sometimes you may want to exit the loop when a certain condition different from the counting condition is met. Perhaps you are looping through a list and want to exit when you find the first element in the list that matches some criterion. The break keyword is useful for such an operation.
For example run the following program

```python
j = 0
for i in range(0,5,1):
    j += 2
    print ("i = ",i,"j = ",j)
    if j == 6:
        print("break from loop")
        break
print("value of j is: ",j)
```

    i =  0 j =  2
    i =  1 j =  4
    i =  2 j =  6
    break from loop
    value of j is:  6

```python
j = 0
for i in range(0,5,1):
    j += 2
    print ("i = ",i,"j = ",j)
    if j == 7:
        print("break from loop")
        break
print("value of j is: ",j)
```

    i =  0 j =  2
    i =  1 j =  4
    i =  2 j =  6
    i =  3 j =  8
    i =  4 j =  10
    value of j is:  10

Examine these two simple examples. 
In the first case, the for loop only executes 3 times before the condition `j == 6` is `TRUE` and the loop is exited. In the second case, `j == 7` never happens, so the loop completes all five of its passes.
In both cases an `if` statement was used within a `for` loop. Such "mixed" control structures are quite common (and pretty necessary). A `while` loop contained within a `for` loop, with several `if` statements, would be very common, and such a structure is called nested control.
There is typically an upper limit to nesting but the limit is pretty large -- easily in the hundreds. It depends on the language and the system architecture - suffice to say it is not a practical limit except possibly for AI applications.

### The `continue` instruction
The `continue` instruction skips the remainder of the loop body for that iteration. 
It is best illustrated by an example.

```python
j = 0
for i in range(0,5,1):
    j += 2
    print ("\ni = ", i , ", j = ", j) #here the \n is a newline command
    if j == 6:
        continue
    print (" this message will be skipped over if j = 6 ")
```

    
    i =  0 , j =  2
     this message will be skipped over if j = 6 
    
    i =  1 , j =  4
     this message will be skipped over if j = 6 
    
    i =  2 , j =  6
    
    i =  3 , j =  8
     this message will be skipped over if j = 6 
    
    i =  4 , j =  10
     this message will be skipped over if j = 6 

When `j == 6`, the line after the `continue` keyword is not printed. 
Other than that one difference, the rest of the script runs normally.

### The `try, except` instruction
The final control statement (and a pretty cool one for error trapping) is the `try ..., except` statement.
The statement controls how the program proceeds when an error (called an exception) occurs in an instruction.
The structure is really useful to trap likely errors (divide by zero, wrong kind of input) yet let the program keep running, or at least issue a meaningful message to the user.

The syntax is:

    try:
        do something
    except:
        do something else if "do something" returns an error
 
Here is a really simple, but hugely important example:

```python
#MyErrorTrap.py
x = 12.
y = 12.
while y >= -12.:
    try:
        print ("x = ", x, "y = ", y, "x/y = ", x/y)
    except:
        print ("error divide by zero")
    y -= 1
```

    x =  12.0 y =  12.0 x/y =  1.0
    x =  12.0 y =  11.0 x/y =  1.0909090909090908
    x =  12.0 y =  10.0 x/y =  1.2
    x =  12.0 y =  9.0 x/y =  1.3333333333333333
    x =  12.0 y =  8.0 x/y =  1.5
    x =  12.0 y =  7.0 x/y =  1.7142857142857142
    x =  12.0 y =  6.0 x/y =  2.0
    x =  12.0 y =  5.0 x/y =  2.4
    x =  12.0 y =  4.0 x/y =  3.0
    x =  12.0 y =  3.0 x/y =  4.0
    x =  12.0 y =  2.0 x/y =  6.0
    x =  12.0 y =  1.0 x/y =  12.0
    error divide by zero
    x =  12.0 y =  -1.0 x/y =  -12.0
    x =  12.0 y =  -2.0 x/y =  -6.0
    x =  12.0 y =  -3.0 x/y =  -4.0
    x =  12.0 y =  -4.0 x/y =  -3.0
    x =  12.0 y =  -5.0 x/y =  -2.4
    x =  12.0 y =  -6.0 x/y =  -2.0
    x =  12.0 y =  -7.0 x/y =  -1.7142857142857142
    x =  12.0 y =  -8.0 x/y =  -1.5
    x =  12.0 y =  -9.0 x/y =  -1.3333333333333333
    x =  12.0 y =  -10.0 x/y =  -1.2
    x =  12.0 y =  -11.0 x/y =  -1.0909090909090908
    x =  12.0 y =  -12.0 x/y =  -1.0

So this silly code starts with x fixed at a value of 12, and y starting at 12 and decreasing by 1 until y equals -12. The code returns the ratio of x to y, and at one point y is equal to zero and the division would be undefined. 
By trapping the error, the code can issue us a message and keep running.
Notice how the error is trapped when y is zero and reported as an attempted divide by zero, but the code keeps running.

# Exercises

### 1)
Write a Python script that takes a real input value (a float) for x and returns the y value according to the rules below 

$$
y = \begin{cases}
x & 0 \le x < 1 \\
x^2 & 1 \le x < 2 \\
x + 2 & 2 \le x < \infty
\end{cases}
$$

```python
x = float(input("Enter a value for x"))
print("input =: ",x)
if x < 1:
    y=x
elif x>= 1 and x<2:
    y=x**2
elif x >=2:
    y=x+2
print("x and y = :",x,y)
```

    Enter a value for x 2

    input =:  2.0
    x and y = : 2.0 4.0

### 2)
Modify the script to automatically print a table with x ranging from 0 to 5.0

```python
# getX function
def getX(x):
    if x < 1:
        y=x
    elif x>= 1 and x<2:
        y=x**2
    elif x >=2:
        y=x+2
    return(y)
xlist = [0.,1.,2.,3.,4.,5.]
for i in xlist:
    print("x= ",i,"y =",getX(i))
```

    x=  0.0 y = 0.0
    x=  1.0 y = 1.0
    x=  2.0 y = 4.0
    x=  3.0 y = 5.0
    x=  4.0 y = 6.0
    x=  5.0 y = 7.0

### 3)
Modify the script again to step x from 0 to 5.0 in increments of 0.5. 
```python
# getX function
def getX(x):
    if x < 1:
        y=x
    elif x>= 1 and x<2:
        y=x**2
    elif x >=2:
        y=x+2
    return(y)
xlist=[] # null list to take input
# build a list
xlist = [x/2+0.5 for x in range(-1,10,1)]
for i in xlist:
    if i <= 5.0:
        print("x= ",i,"y =",getX(i))
```

    x=  0.0 y = 0.0
    x=  0.5 y = 0.5
    x=  1.0 y = 1.0
    x=  1.5 y = 2.25
    x=  2.0 y = 4.0
    x=  2.5 y = 4.5
    x=  3.0 y = 5.0
    x=  3.5 y = 5.5
    x=  4.0 y = 6.0
    x=  4.5 y = 6.5
    x=  5.0 y = 7.0

### 4) 
Repeat Exercise 1 above, but include error trapping that:

(a) Takes any numeric input and forces it into a float.
(b) Takes any non-numeric input, issues a message that the input needs to be numeric, and makes the user try again.
(c) Once you have acceptable input, traps the condition x < 0 and issues a message; otherwise completes the requisite arithmetic.

```python

```

```python

```

```python

```

```python

```
"9-MyJupyterNotebooks/5-ConditionalComputations/.ipynb_checkpoints/ConditionalComputation-checkpoint.ipynb", "max_forks_repo_name": "dustykat/engr-1330-psuedo-course", "max_forks_repo_head_hexsha": "3e7e31a32a1896fcb1fd82b573daa5248e465a36", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.8446700508, "max_line_length": 184, "alphanum_fraction": 0.5144565574, "converted": true, "num_tokens": 5120, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3073580168652638, "lm_q2_score": 0.4225046348141882, "lm_q1q2_score": 0.12986018667287139}} {"text": "```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport matplotlib\nimport seaborn as sns\n%matplotlib inline\n```\n\n\n```python\nmatplotlib.rcParams['figure.figsize'] = (10, 8) # set default figure size, 10in by 8in\n```\n\nThis week, you will be learning about unsupervised learning. While supervised learning algorithms need labeled examples (x,y), unsupervised learning algorithms need only the input (x). You will learn about clustering\u2014which is used for market segmentation, text summarization, among many other applications.\n\n# Video W8 01: Unsupervised Learning\n\n[YouTube Video Link](https://www.youtube.com/watch?v=PK5JsJZd1Uk&index=77&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW)\n\nIn an unsupervised learning problem, we are given data that does not have any labels associated with it. So what we want\nfrom unsupervised learning algorithms is to discover some sort of structure or organization or pattern in our data. For example,\nthe easiest type of structure to understand is to try and find clusters in the data of items that appear related. 
Such clusters can be useful in many applications to identify and process the members of a cluster in some specific way, such as clusters of different types of customers and their buying habits.

Up to this point we have been studying supervised learning methods. In supervised learning, for example to perform a classification task, we are given a training set of data, and all of the $m$ samples in the training set are labeled:

\begin{equation}
\text{Training set:} \{ (x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), (x^{(3)}, y^{(3)}), \ldots, (x^{(m)}, y^{(m)}) \} 
\end{equation}

Here the $y^{(i)}$ are the labels for the data. For example if we have the data:

```python
x = np.array([[0.5, 0.5],
              [1.0, 0.5],
              [0.75, 1.0],
              [1.9, 0.25],
              [1.6, 0.75],
              [1.25, 1.25],
              [0.5, 1.6],
              [0.5, 2.25],
              [3.1, 1.1],
              [2.9, 1.5],
              [2.1, 2.1],
              [2.1, 2.75],
              [1.5, 3.1],
              [3.5, 1.9],
              [3.0, 2.1],
              [3.0, 3.0],
              [2.0, 3.5],
              [2.5, 3.5]])
y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])

neg_idx = np.where(y == 0)
pos_idx = np.where(y == 1)

# plot the example figure
plt.figure(figsize=(10,10))

# plot the points in our two categories, y=0 and y=1, using markers to indicate
# the category or output
neg_handle = plt.plot(x[neg_idx,0], x[neg_idx,1], 'bo', markersize=8, fillstyle='none', markeredgewidth=1, label='0 negative class') 
pos_handle = plt.plot(x[pos_idx,0], x[pos_idx,1], 'rx', markersize=8, markeredgewidth=1, label='1 positive class') 

# add some labels and titles
plt.axis([0, 4, 0, 4])
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.title('Supervised Learning: Classification of labeled data');
plt.legend([neg_handle[0], pos_handle[0]], ['0 negative class', '1 positive class']);
```

Here the $y$ vector holds the binary classification labels, and the data we are given to train with is in one of two classes, $0$ negative class or $1$ positive class.

For unsupervised learning we are given $m$ unlabeled 
samples of data to use:\n\n\\begin{equation}\n\\text{Training set:} \\{ x^{(1)}, x^{(2)}, x^{(3)}, \\ldots, x^{(m)} \\} \n\\end{equation}\n\n\n```python\n# plot the example figure\nplt.figure(figsize=(10,10))\n\n# plot the points in our unlabeled data\nplt.plot(x[:,0], x[:,1], 'ko', markersize=8, fillstyle='full')\n\n# add some labels and titles\nplt.axis([0, 4, 0, 4])\nplt.xlabel('$x_1$')\nplt.ylabel('$x_2$')\nplt.title('Unsupervised Learning');\n```\n\nIn unsupervised learning, we give the algorithm some data and we ask the algorithm to \"find some structure\"\nin the data.\n\nFor example, given this data set, we might want the algorithm to find some likely clusters of the data, points that\nmay be similar or of related categories.\n\nAn algorithm that finds clusters is called a clustering algorithm. The previous data might have 2 clusters, or there\nmight even be 3 or 4 good clusters.\n\n## Applications of Clustering\n\n- market segmentation\n- social network analysis (coherent groups of people that form organically)\n- organize computing clusters\n- astronomical data analysis\n\n# Video W8 02: K Means Algorithm\n\n[YouTube Video Link](https://www.youtube.com/watch?v=6u19018FeHg&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW&index=78)\n\nThe K-means algorithm is an example of a clustering unsupervised learning algorithm. It is probably the simplest clustering\nalgorithm, but it is still quite effective. Thus it is still one of the most popular and most used clustering algorithms.\n\nK-means is an iterative algorithm. We start by specifying how many clusters (e.g. K clusters) we want the algorithm to \ndiscover. 
More formally, we can define the **K-means algorithm**\n\n- Input:\n - $K$ (number of clusters)\n - Training set of $m$ inputs $\\{x^{(1)}, x^{(2)}, \\ldots, x^{(m)}\\}$\n- Where $x^{(i)} \\in \\mathbb{R}^n$ (we drop the $x_0 = 1$ convention) \n\nAnd the **K-means algorithm** pseudocode\n\n- Randomly initialize $K$ cluster centroids $\\mu_1, \\mu_2, \\ldots, \\mu_K \\in \\mathbb{R}^n$\n\n- Repeat {\n - for $i = 1$ to $m$\n - $c^{(i)}$ := index (from 1 to $K$) of cluster centroid closest to $x^{(i)}$\n - for $k = 1$ to $K$\n - $\\mu_k$ := average (mean) of points assigned to cluster $k$\n- }\n\nThis basic algorithm for K-means clustering is really fairly simple, and it will help to understand it even further if we make\na quick and basic implementation of the algorithm in Python code. First of all, we will read in a small simple set of\ndata that appears to be well separated into 2 clusters. This dataset has $m = 32$ examples. The dataset has only 2 features\n$n = 2$, thus all of the points are in 2 dimensional space.\n\n\n```python\n#from sklearn.datasets import make_blobs\n#X, y = make_blobs(n_samples = 10, n_features=2, centers=2, cluster_std = 0.8, center_box=(2, 5))\n#np.savetxt('../data/lect-11-ex1data.csv', X, delimiter=',')\nX = np.loadtxt('../../data/lect-11-ex1data.csv', delimiter=',')\n\nplt.plot(X[:, 0], X[:, 1], 'go')\nplt.xlabel(r'$x_1$', fontsize=20)\nplt.ylabel(r'$x_2$', fontsize=20);\n```\n\nThe first step in the K-means algorithm is to randomly initialize a set of centroids. We usually initialize the centroids to be\nwithin the ranges of the data set. So for example, if we find the minimum and maximum values for the data for each of the dimensions,\nwe can use this to randomly initialize our centroids. 
In this case, we are going to try and find $K = 2$ clusters, so we want to\ncreate two centroids within the range of our data:\n\n\n```python\n# The number of clusters K we will find\nK = 2\n\n# The number of training data points, and the number of dimensions of our data set\nm, n = X.shape\n\n# randomly initialize K centroids\nmin_x1, max_x1 = min(X[:, 0]), max(X[:, 0])\nmin_x2, max_x2 = min(X[:, 1]), max(X[:, 1])\nprint(min_x1, max_x1)\nprint(min_x2, max_x2)\n\n# create K centroids mu, where each point is randomly chosen within the range of the data\nmu = np.zeros( (K, n) )\nfor k in range(K):\n mu[k, 0] = np.random.uniform(low = min_x1, high = max_x1)\n mu[k, 1] = np.random.uniform(low = min_x2, high = max_x2)\n \n# visualize the original data, with our randomly chosen initial centroid points\nplt.plot(X[:, 0], X[:, 1], 'go', label='training data')\nplt.plot(mu[:,0], mu[:,1], 'rx', markersize=15, label='centroids')\nplt.xlabel(r'$x_1$', fontsize=20)\nplt.ylabel(r'$x_2$', fontsize=20)\nplt.legend();\n```\n\nIn the video, the first step in the iterative part of the K-means algorithm is to assign each of the training data points to\none of our $\\mu$ clusters. As shown in the video, we do this by calculating the distance between each data point and our\ntwo centroids, and we assign the point to the closest centroid. The measure used to calculate the distance can actually be\ncalculated in different ways. The simplest is to use the euclidean distance. And since the individual coordinate differences can be negative\ndepending on the order in which we subtract the points, we square them so that all contributions are positive\n(e.g. we get the magnitude of the distance), and we can thus compare the squared distances directly and find the minimum.\n\n$$\n\\underset{k}{\\textrm{min}} \\;\\; \\| x^{(i)} - \\mu_k \\|^2\n$$\n\nFor example, let's calculate the distance between the first training data point and the two randomly generated centroids. 
Keep \nin mind that in Python, our arrays are indexed starting at 0, so the first training data example will be at $i = 0$. Also, with\n$K = 2$ cluster centroids, the $k$ clusters will range from $0$ to $1$.\n\nLet's start by defining a function that will take 2 $n$ dimensional points, and calculate the square of the distance between\nthe two points:\n\n\n```python\ndef distance(x, y):\n # calculate the square of the distance between 2 n dimensional points (passed as numpy arrays)\n # euclidean distance is sqrt( (x_1 - y_1)**2.0 + (x_2 - y_2)**2.0 ), but we then square this, so\n # we simply need the sum of the differences squared\n return np.sum( (x - y)**2.0 )\n```\n\n\n```python\n# distance between the 0th training example and cluster 0\ni = 0\nk = 0\nprint(\"%d training example: (%f, %f)\" % (i, X[i,0], X[i,1]))\nprint(\"%d cluster centroid: (%f, %f)\" % (k, mu[k,0], mu[k,1]))\nprint(\"square distance between input %d and cluster %d: %f\" % (i, k, distance(X[i,:], mu[k,:])))\n\n# distance between the 0th training example and cluster 1\ni = 0\nk = 1\nprint(\"\")\nprint(\"%d training example: (%f, %f)\" % (i, X[i,0], X[i,1]))\nprint(\"%d cluster centroid: (%f, %f)\" % (k, mu[k,0], mu[k,1]))\nprint(\"square distance between input %d and cluster %d: %f\" % (i, k, distance(X[i,:], mu[k,:])))\n```\n\n 0 training example: (-3.500000, -5.000000)\n 0 cluster centroid: (2.652079, 3.691993)\n square distance between input 0 and cluster 0: 113.398815\n \n 0 training example: (-3.500000, -5.000000)\n 1 cluster centroid: (2.742979, -0.253101)\n square distance between input 0 and cluster 1: 61.507844\n\n\nBy the way, the above function for calculating the distance basically does the same thing as calculating \nthe norm between the two point vectors\n\n\n```python\ndef distance_norm(x, y):\n # calculate the square of the distance between 2 n dimensional points (passed as numpy arrays)\n # using the linear algebra vector norm to calculate the distance\n return np.linalg.norm(x - 
y)**2.0\n\n```\n\n\n```python\n# distance between the 0th training example and cluster 0\ni = 0\nk = 0\nprint(\"%d training example: (%f, %f)\" % (i, X[i,0], X[i,1]))\nprint(\"%d cluster centroid: (%f, %f)\" % (k, mu[k,0], mu[k,1]))\nprint(\"square distance between input %d and cluster %d: %f\" % (i, k, distance_norm(X[i,:], mu[k,:])))\n\n# distance between the 0th training example and cluster 1\ni = 0\nk = 1\nprint(\"\")\nprint(\"%d training example: (%f, %f)\" % (i, X[i,0], X[i,1]))\nprint(\"%d cluster centroid: (%f, %f)\" % (k, mu[k,0], mu[k,1]))\nprint(\"square distance between input %d and cluster %d: %f\" % (i, k, distance_norm(X[i,:], mu[k,:])))\n```\n\n 0 training example: (-3.500000, -5.000000)\n 0 cluster centroid: (2.652079, 3.691993)\n square distance between input 0 and cluster 0: 113.398815\n \n 0 training example: (-3.500000, -5.000000)\n 1 cluster centroid: (2.742979, -0.253101)\n square distance between input 0 and cluster 1: 61.507844\n\n\nThe first part of the iterative algorithm is to calculate such distances between each training data item and every centroid, \nfind the minimum, and assign the training data item to be in the cluster whose centroid it is closest to. So for example,\nwe can determine the closest centroid for each training data point like this\n\n\n```python\n# This array will hold the index k of the cluster centroid each training data point is assigned to\nc = np.zeros(m)\n\n# for each training data point i\nfor i in range(m):\n # determine distance to cluster 0\n min_distance = distance_norm(X[i,:], mu[0, :])\n c[i] = 0\n # find out if any other cluster centroid k=1,...K is closer\n for k in range(1, K):\n another_distance = distance_norm(X[i,:], mu[k, :])\n if another_distance < min_distance:\n min_distance = another_distance\n c[i] = k\n \n# the above loop represents the code needed to assign each point to the closest cluster mu. 
Here were the\n# clusters that each point was assigned to\nfor i in range(m):\n print(\"point x[%d] in cluster: %d\" % (i, c[i]))\n \n# lets visualize the resulting assignments of the points to the current cluster centroids\ncluster_0 = np.where(c == 0)[0]\ncluster_1 = np.where(c == 1)[0]\n\n#plt.figure(figsize=(8,16))\nax = plt.gca()\nax.set_aspect('equal')\n\nplt.plot(X[cluster_0, 0], X[cluster_0, 1], 'ro', label='assg cluster 0')\nplt.plot(mu[0,0], mu[0,1], 'rx', markersize=15)\nplt.plot(X[cluster_1, 0], X[cluster_1, 1], 'bo', label='assg cluster 1')\nplt.plot(mu[1,0], mu[1,1], 'bx', markersize=15, label='centroids')\nplt.xlabel(r'$x_1$', fontsize=20)\nplt.ylabel(r'$x_2$', fontsize=20)\nplt.legend(loc=2)\nplt.axis([-5, 7, -7, 7]);\n```\n\nOnce we have found which centroid each training data item is closest to, it is time to update the centroids. We do this by calculating\nnew mu centroids which are simply the average of all of the points assigned to that centroid. For example, we can use numpy vector\noperations and the c array to find and average all of the points assigned to cluster $k = 0$\n\n\n```python\ncluster_0 = np.where(c == 0)[0]\nprint(X[cluster_0])\nprint(np.mean(X[cluster_0], axis=0))\n```\n\n [[0. 3.8 ]\n [0.4 1.9 ]\n [0.5 2. ]\n [0.55 2.3 ]\n [0.6 2.5 ]\n [0.55 3. ]\n [0.5 4.2 ]\n [2.6 2.9 ]\n [2.7 5. ]\n [3. 2.5 ]\n [6. 2. 
]]\n [1.58181818 2.91818182]\n\n\nWe can use the above idea to recalculate all $K$ centroids:\n\n\n```python\n# recalculate all cluster centroids\nfor k in range(K):\n cluster_pts = np.where(c == k)[0]\n mu[k] = np.mean(X[cluster_pts], axis=0)\n\n\n# show the resulting new cluster centroids\nprint(mu)\n\n\n# visualize the new centroid locations in relation to the assigned points in the clusters\ncluster_0 = np.where(c == 0)[0]\ncluster_1 = np.where(c == 1)[0]\n\n#plt.figure(figsize=(8,16))\nax = plt.gca()\nax.set_aspect('equal')\n\nplt.plot(X[cluster_0, 0], X[cluster_0, 1], 'ro', label='assg cluster 0')\nplt.plot(mu[0,0], mu[0,1], 'rx', markersize=15)\nplt.plot(X[cluster_1, 0], X[cluster_1, 1], 'bo', label='assg cluster 1')\nplt.plot(mu[1,0], mu[1,1], 'bx', markersize=15, label='centroids')\nplt.xlabel(r'$x_1$', fontsize=20)\nplt.ylabel(r'$x_2$', fontsize=20)\nplt.legend(loc=1)\nplt.axis([-5, 7, -7, 7]);\n```\n\nThe previous steps to randomly initialize a set of centroids, then repeatedly assign points to the closest centroid and move the\ncentroids, can easily be made into a function that performs the basic K-means algorithm. We will leave this as an exercise\nfor the student: try to bring these pieces together.\n\n## K-means for non-separated clusters\n\nThe previous example(s) had data that looked well separated. However you can still run k-means clustering on\ndata that is not so clearly separated. 
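To see this concretely, here is one way the pieces developed above might be combined into a single function and run on uniform random data, which has no real clusters at all. This is only a sketch: the `k_means` helper and the generated data are made up for this illustration, not part of the lecture's dataset.

```python
import numpy as np

def k_means(X, K, n_iters=20):
    """A minimal K-means: returns final assignments c and centroids mu."""
    m, n = X.shape
    # randomly choose K distinct training points as the initial centroids
    mu = X[np.random.choice(m, size=K, replace=False)].copy()
    c = np.zeros(m, dtype=int)
    for _ in range(n_iters):
        # assignment step: assign each point to its closest centroid
        dists = ((X[:, np.newaxis, :] - mu[np.newaxis, :, :])**2).sum(axis=2)
        c = dists.argmin(axis=1)
        # update step: move each centroid to the mean of its assigned points
        for k in range(K):
            pts = np.where(c == k)[0]
            if len(pts) > 0:
                mu[k] = X[pts].mean(axis=0)
    return c, mu

# K-means happily partitions even data with no real cluster structure
np.random.seed(0)
X_flat = np.random.uniform(0.0, 10.0, size=(100, 2))
c, mu = k_means(X_flat, K=3)
print(mu)
```

K-means will still return K centroids and a partition of the points; whether that segmentation is meaningful has to be judged by the application.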
Often analysis of a market to do market segmentation can benefit from\ndoing a k-means clustering even on data without real clear segmented groups.\n\n# Video W8 03: Optimization Objective\n\n[YouTube Video Link](https://www.youtube.com/watch?v=omcDeBY4lGE&index=79&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW)\n\nAs this video discusses, we can formally define a cost function and optimization objective for the K-means algorithm.\nThe cost function is high when the points in a cluster are far away from the cluster centroid, and it will be lower\nwhen the points in a cluster are close to the cluster centroid:\n\n$$\nJ(c^{(1)}, \\ldots, c^{(m)}, \\mu_1, \\ldots, \\mu_K) = \\frac{1}{m} \\sum_{i=1}^m \\| x^{(i)} - \\mu_{c^{(i)}} \\|^2\n$$\n\nThus for optimization we are trying to find the assignment of points to clusters, and the resulting cluster centroids, that minimize\nthis cost objective function:\n\n$$\n\\underset{c^{(1)}, \\ldots, c^{(m)}, \\\\ \\mu_1, \\ldots, \\mu_K}{\\textrm{min}} \\;\\; J(c^{(1)}, \\ldots, c^{(m)}, \\mu_1, \\ldots, \\mu_K)\n$$\n\nIn words, k-means is trying to find parameters $c^{(i)}$ and $\\mu_k$ that minimize the sum of the squared distances\nbetween each point and its assigned centroid. It should be obvious from the previous pseudocode that we\nassign each point to the centroid that it is closest to, thus we are minimizing the distance from each point to the\ncurrent set of centroids. K-means is another example of a greedy algorithm. But in this case, the next step\nafter cluster assignment, of recalculating centroid locations based on the points currently assigned to the cluster, can\nbe shown to lead to $\\mu_k$ points that will end up having minimal overall summed up costs, once the algorithm\nhas converged.\n\nSo in other words, the cluster assignment step minimizes the cost function of the centroids with respect to\nthe cluster assignments, while holding the centroids fixed. 
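Concretely, the cost $J$ is easy to evaluate for any particular assignment. The small self-contained sketch below (the `kmeans_cost` helper, points, and centroids are all made up for illustration) also checks that assigning each point to its closest centroid gives a lower cost than a deliberately bad assignment:

```python
import numpy as np

def kmeans_cost(X, c, mu):
    """J = mean squared distance from each point to its assigned centroid."""
    return np.mean(np.sum((X - mu[c])**2, axis=1))

# made-up points and centroids: two tight pairs, two centroids
X_demo = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0], [11.0, 10.0]])
mu_demo = np.array([[0.5, 0.0], [10.5, 10.0]])

good_c = np.array([0, 0, 1, 1])   # each point assigned to its closest centroid
bad_c = np.array([1, 0, 1, 0])    # two points assigned to the far centroid

print(kmeans_cost(X_demo, good_c, mu_demo))  # 0.25
print(kmeans_cost(X_demo, bad_c, mu_demo))   # 105.25, much larger
```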
Then in the second step to move the centroids,\nit chooses the values of $\\mu_k$ that minimize the cost function with respect to the changing centroids.\n\n# Video W8 04: Random Initialization\n\n[YouTube Video Link](https://www.youtube.com/watch?v=wniLibHEE2Y&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW&index=80)\n\nAbove we showed simply picking $K$ random points within the range of the training data examples in order to randomly choose the\ninitial cluster centroids. In this video, the instructor illustrates a different method, which usually works a bit better, and\nit is actually a bit easier to understand. If we want to discover $K$ clusters, we can simply choose $K$ of our input training\ndata points at random to be our initial centroids. We know that by picking $K$ of the input data points, the centroids will\nautomatically be within the range of the training data. So for example, in Python, we could choose K points at random\nto be our centroids like this:\n\n\n```python\n# choose the number of clusters we will be creating\nK = 2\n\n# this will choose 2 indexes in range 0 to m-1, that we will use as our initial points for the mu centroids\n# NOTE: in the next function, the replace=False ensures that the choice() function will not pick the same random\n# index.\nrandom_pts = np.random.choice(m, size=K, replace=False)\nprint(random_pts)\n\nmu = X[random_pts]\nprint(mu)\n```\n\n [10 12]\n [[-1.7 -1.7]\n [-0.2 -1.8]]\n\n\nThis method is the recommended way to choose the initial K centroids, and is what will normally be used\nby a K-means library like for example the scikit-learn K-means implementation.\n\nThe optimization cost function defined by K-means is not guaranteed to have only 1 global minimum (unlike\nsome previous cost functions we defined). Thus when you run a K-means with random initial points, the\nclusters the algorithm finds and converges on can be different depending on the random starting locations\npicked. 
Thus K-means is not deterministic: you can get different clustering results each time you run\nthe algorithm.\n\nOne solution to this is to run K-means multiple times with different starting random initializations. At the\nend of K-means clustering, once the algorithm has converged, you can find the final cost of the discovered\nclusters. If you are hitting local minima when clustering, running multiple times lets you compare\nthe final costs of the multiple runs, and usually the lower or lowest final costs achieved will be the better\nclusterings of your data.\n\nSomewhat backwards from what you might intuitively expect, local optima tend to be more of a problem when\nthe number of clusters K you want to determine is relatively small, say from 2 to 10 as a rule of thumb. For these\nnumbers of clusters it is usually a good idea to run 50 to 1000 or so K-means clustering attempts, keep track\nof the final cost of each, and examine/use the one that achieved the lowest cost at the end.\n\nHowever when you are trying to create a larger number of clusters, often the minima that exist are going to\nbe all relatively close to the same, so one clustering, even if different from another one found, will have\na similar overall cost. Thus when trying to determine a large number of clusters/segments it is not as useful\nor necessary to run multiple times to watch out for local minima results.\n\n# Video W8 05: Choosing the Number of Clusters\n\n[YouTube Video Link](https://www.youtube.com/watch?v=izCbbMbRWHw&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW&index=81)\n\nThe most common method is still to choose the number of clusters you want by hand. Sometimes the problem you are trying to\ncluster naturally lends itself to a particular number of clusters you desire (e.g. we want to design t-shirts for 3 sizes, S, M and L). 
Other times, you can do some visualization of the data, and get a rough idea of how many there might be, but often there can\nbe different interpretations of this.\n\n\nThe number of clusters in a data set is often genuinely ambiguous. Even if/when you can visualize, there will\noften be ambiguity, and different clusterings can be supported based on the needs of the application.\n\nThere are some things you can do to try and algorithmically pick a good size for K for your clustering.\n\nIn the Elbow method, you compute the clustering for $K = 1, 2, 3, \\ldots, N$, and look at the final cost function $J$ achieved\nfor each $K$ clustering size. If you plot the cost, often there will be some point where the cost changes from going\ndown rapidly to going down much slower. Often the \"Elbow\" of this curve, or somewhere around it, will be a good\nnumber of clusters for the data you have.\n\nHowever it is possible to get a much more ambiguous result, where there is no apparent elbow to your graph.\nIn practice this will often be the case. It can be worth a shot, but as often as not you won't get a good idea\nfrom this of what might be a good K size, thus you will have to resort to other means.\n\nAnother method is really application driven. 
If you have some metric downstream for evaluating the effectiveness\nof your application, you can then compare that metric when you try different clustering values K, and use the\nclustering/segmentation that works best for the application domain.\n\n# K-Means Clustering with Scikit-Learn\n\n`Scikit-Learn` has clustering algorithms in the `sklearn.cluster` submodule.\n\nWe can get a basic clustering of the made up data set we had previously using $K = 2$\nclusters like this:\n\n\n```python\n# fit a clustering estimator to the made up data\nfrom sklearn.cluster import KMeans\n\ncluster = KMeans(n_clusters=2)\ncluster.fit(X)\n\n# display/access the results, like the final cluster centers, the labels of the data, and the final cost\ncenters = cluster.cluster_centers_\nprint(centers)\nlabels = cluster.labels_\nprint(labels)\ncost = cluster.inertia_\nprint(cost)\n\n```\n\n [[-1.96875 -2.49375]\n [ 1.99375 2.28125]]\n [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]\n 139.75249999999997\n\n\n\n```python\n# visualize the final cluster labels found\ncluster_0 = np.where(labels == 0)[0]\ncluster_1 = np.where(labels == 1)[0]\n\nplt.plot(X[cluster_0, 0], X[cluster_0, 1], 'ro', label='assg cluster 0')\nplt.plot(centers[0,0], centers[0,1], 'rx', markersize=15)\nplt.plot(X[cluster_1, 0], X[cluster_1, 1], 'bo', label='assg cluster 1')\nplt.plot(centers[1,0], centers[1,1], 'bx', markersize=15, label='centroids')\nplt.xlabel(r'$x_1$', fontsize=20)\nplt.ylabel(r'$x_2$', fontsize=20)\nplt.legend(loc=2)\nplt.axis([-5, 7, -7, 7]);\n```\n\n## Check for Local Minima\n\nIn this case for our made up data the clusters are pretty well separated, and local minima are not so easy\nto find. 
Let's run the clustering 1000 times, and keep the worst and best K-means clusterings we find based on the\ncost (inertia) measure.\n\nBy default `scikit-learn` actually performs the clustering 10 times (controlled by the `n_init` parameter).\nIf we want to see different results, we should set `n_init = 1` so that only 1 clustering is performed.\n\n\n```python\nN = 1000\n\nclusters = []\ncost = np.empty(N)\n\n# now perform different clusterings, checking for ones that improve or make things worse\nfor n in range(N):\n cluster = KMeans(n_clusters=2, n_init=1)\n cluster.fit(X)\n clusters.append(cluster)\n cost[n] = cluster.inertia_\n```\n\n\n```python\n# in this case, we are never seeing any solution other than the 1 minimum that is discovered\nprint(cost.min())\nprint(cost.max())\n```\n\n 139.75249999999997\n 139.75249999999997\n\n\n## Determine K number of clusters\n\nLikewise let's try and illustrate the Elbow method, and fit the data with clusterings of size $K = 2 \\cdots 10$.\nAgain the data looks like 2 clusters is pretty optimal, so we won't get a very useful result here.\n\n\n```python\nMAX_K = 10\n\nclusters = []\ncost = np.empty(MAX_K+1)\n\n# perform clusterings for K ranging from 2 to MAX_K. 
If we were having local minima issues, we might also want\n# to run k-means multiple times for each K and find/choose the best cost achieved for the elbow graph\nfor k in range(2, MAX_K+1):\n cluster = KMeans(n_clusters=k)\n cluster.fit(X)\n clusters.append(cluster)\n cost[k] = cluster.inertia_\n \n# visualize the resulting costs as a function of K clustering size\nplt.plot(np.arange(2,MAX_K+1), cost[2:])\nplt.plot(np.arange(2,MAX_K+1), cost[2:], 'bo')\nplt.xlabel('K (clustering size)')\nplt.ylabel('J (cost or inertia) achieved')\nplt.title(\"Elbow plot of cost as a function of K clustering size\");\n```\n\n# K-Means Clustering on Iris Data\n\nUsing the simple data set does not give a great example of the potential for local minima and choosing K.\nHere we perform the previous steps again, but use the iris data set. The iris data set is 4 dimensional, and we will\nuse all 4 dimensions. We will try clustering into 3 clusters, which is of course the number of categories\nwe have for the original iris data set.\n\n\n\n```python\nfrom sklearn import datasets\n\niris = datasets.load_iris()\nX = iris.data\n```\n\n\n```python\n# create a K-means clustering with K=3\ncluster = KMeans(n_clusters=3)\ncluster.fit(X)\n\n# display/access the results, like the final cluster centers, the labels of the data, and the final cost\ncenters = cluster.cluster_centers_\nprint(centers)\nlabels = cluster.labels_\nprint(labels)\ncost = cluster.inertia_\nprint(cost)\n\n```\n\n [[5.9016129 2.7483871 4.39354839 1.43387097]\n [5.006 3.428 1.462 0.246 ]\n [6.85 3.07368421 5.74210526 2.07105263]]\n [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 2 2 2 2 0 2 2 2 2\n 2 2 0 0 2 2 2 2 0 2 0 2 0 2 2 0 0 2 2 2 2 2 0 2 2 2 2 0 2 2 2 0 2 2 2 0 2\n 2 0]\n 78.851441426146\n\n\n\n```python\n# visualize resulting clusters on the 2 dimensions of sepal 
length/width\ncluster_0 = np.where(labels == 0)[0]\ncluster_1 = np.where(labels == 1)[0]\ncluster_2 = np.where(labels == 2)[0]\n\nplt.plot(X[cluster_0, 0], X[cluster_0, 1], 'ro', label='assg cluster 0')\nplt.plot(centers[0,0], centers[0,1], 'rx', markersize=15)\nplt.plot(X[cluster_1, 0], X[cluster_1, 1], 'bo', label='assg cluster 1')\nplt.plot(centers[1,0], centers[1,1], 'bx', markersize=15, label='centroids')\nplt.plot(X[cluster_2, 0], X[cluster_2, 1], 'go', label='assg cluster 2')\nplt.plot(centers[2,0], centers[2,1], 'gx', markersize=15, label='centroids')\nplt.xlabel(r'sepal length', fontsize=20)\nplt.ylabel(r'sepal width', fontsize=20)\nplt.legend();\n```\n\n## Check for Local Minima\n\n\n```python\nN = 1000\n\nclusters = []\ncost = np.empty(N)\n\n# now perform different clusterings, checking for ones that improve or make things worse\nfor n in range(N):\n cluster = KMeans(n_clusters=3, n_init=1)\n cluster.fit(X)\n clusters.append(cluster)\n cost[n] = cluster.inertia_\n```\n\n\n```python\n# For the iris data\nprint(cost.min())\nprint(cost.max())\nprint(np.unique(cost))\n```\n\n 78.851441426146\n 145.4526917648503\n [ 78.85144143 78.85566583 142.7540625 145.45269176]\n\n\n\n```python\nsns.distplot(cost);\n```\n\nIn this case, it looks like we usually discover 3 or 4 unique minima. Most of the time we get a cost of a bit\nover 78. But sometimes we get 142 or 145, which are probably not as optimal clusterings. 
Let's plot one of\nthe high cost clusterings:\n\n\n```python\nlen(clusters)\nclusters[318]\n```\n\n\n\n\n KMeans(n_clusters=3, n_init=1)\n\n\n\n\n```python\ncluster_num = np.where(cost > 140)[0][0]\nprint(cluster_num)\ncluster = clusters[cluster_num]\n\ncenters = cluster.cluster_centers_\nprint(centers)\nlabels = cluster.labels_\nprint(labels)\ncost = cluster.inertia_\nprint(cost)\n\n```\n\n 4\n [[5.19375 3.63125 1.475 0.271875 ]\n [6.31458333 2.89583333 4.97395833 1.703125 ]\n [4.73181818 2.92727273 1.77272727 0.35 ]]\n [0 2 2 2 0 0 2 0 2 2 0 0 2 2 0 0 0 0 0 0 0 0 0 0 2 2 0 0 0 2 2 0 0 0 2 0 0\n 0 2 0 0 2 2 0 0 2 0 2 0 0 1 1 1 1 1 1 1 2 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1]\n 142.7540625000001\n\n\n\n```python\n# visualize resulting clusters on the 2 dimensions of sepal length/width\ncluster_0 = np.where(labels == 0)[0]\ncluster_1 = np.where(labels == 1)[0]\ncluster_2 = np.where(labels == 2)[0]\n\nplt.plot(X[cluster_0, 0], X[cluster_0, 1], 'ro', label='assg cluster 0')\nplt.plot(centers[0,0], centers[0,1], 'rx', markersize=15)\nplt.plot(X[cluster_1, 0], X[cluster_1, 1], 'bo', label='assg cluster 1')\nplt.plot(centers[1,0], centers[1,1], 'bx', markersize=15, label='centroids')\nplt.plot(X[cluster_2, 0], X[cluster_2, 1], 'go', label='assg cluster 2')\nplt.plot(centers[2,0], centers[2,1], 'gx', markersize=15, label='centroids')\nplt.xlabel(r'sepal length', fontsize=20)\nplt.ylabel(r'sepal width', fontsize=20)\nplt.legend();\n```\n\nThis is definitely not a good clustering: it has broken up the smaller separate group (which were the easy\nto classify Setosa samples) into 2 clusters, and grouped the other 2 species into 1 big cluster.\n\n## Determine K number of Clusters\n\n\n```python\nMAX_K = 10\n\nclusters = []\ncost = np.empty(MAX_K+1)\n\n# perform clusterings for K ranging from 2 to MAX_K. 
If we were having local minima issues, we might also want\n# to run k-means multiple times for each K and find/choose the best cost achieved for the elbow graph\nfor k in range(2, MAX_K+1):\n cluster = KMeans(n_clusters=k, n_init=1)\n cluster.fit(X)\n clusters.append(cluster)\n cost[k] = cluster.inertia_\n \n# visualize the resulting costs as a function of K clustering size\nplt.plot(np.arange(2,MAX_K+1), cost[2:])\nplt.plot(np.arange(2,MAX_K+1), cost[2:], 'bo')\nplt.xlabel('K (clustering size)')\nplt.ylabel('J (cost or inertia) achieved')\nplt.title(\"Elbow plot of cost as a function of K clustering size\");\n```\n\n\n```python\nimport sys\nsys.path.append(\"../../src\") # add our class modules to the system PYTHON_PATH\n\nfrom ml_python_class.custom_funcs import version_information\nversion_information()\n```\n\n Module Versions\n -------------------- ------------------------------------------------------------\n matplotlib: ['3.3.0']\n numpy: ['1.18.5']\n pandas: ['1.0.5']\n seaborn: ['0.10.1']\n\n
"max_forks_repo_name": "tgrasty/CSCI574-Machine-Learning", "max_forks_repo_head_hexsha": "bcf797262852c4b46a6702c69f69724b0b9e93f6", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-17T17:03:58.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-17T17:03:58.000Z", "avg_line_length": 181.9980171844, "max_line_length": 29068, "alphanum_fraction": 0.8946336291, "converted": true, "num_tokens": 9412, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.49218813572079556, "lm_q2_score": 0.26284183737131667, "lm_q1q2_score": 0.12936763392521688}} {"text": "```python\n# This mounts your Google Drive to the Colab VM.\nfrom google.colab import drive\ndrive.mount('/content/drive')\n\n# TODO: Enter the foldername in your Drive where you have saved the unzipped\n# assignment folder, e.g. 'cs231n/assignments/assignment1/'\nFOLDERNAME = 'CS231N/assignment/assignment2/'\nassert FOLDERNAME is not None, \"[!] Enter the foldername.\"\n\n# Now that we've mounted your Drive, this ensures that\n# the Python interpreter of the Colab VM can load\n# python files from within it.\nimport sys\nsys.path.append('/content/drive/My Drive/{}'.format(FOLDERNAME))\n\n# This downloads the CIFAR-10 dataset to your Drive\n# if it doesn't already exist.\n%cd /content/drive/My\\ Drive/$FOLDERNAME/cs231n/datasets/\n!bash get_datasets.sh\n%cd /content/drive/My\\ Drive/$FOLDERNAME\n```\n\n Mounted at /content/drive\n /content/drive/My Drive/CS231N/assignment/assignment2/cs231n/datasets\n /content/drive/My Drive/CS231N/assignment/assignment2\n\n\n# Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. 
One idea along these lines is batch normalization, proposed by [1] in 2015.\n\nTo understand the goal of batch normalization, it is important to first recognize that machine learning methods tend to perform better with input data consisting of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features. This will ensure that the first layer of the network sees data that follows a nice distribution. However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance, since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\n\nThe authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, they propose to insert into the network layers that normalize batches. At training time, such a layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\n\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. 
To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n\n[1] [Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.](https://arxiv.org/abs/1502.03167)\n\n\n```python\n# Setup cell.\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams[\"figure.figsize\"] = (10.0, 8.0) # Set default size of plots.\nplt.rcParams[\"image.interpolation\"] = \"nearest\"\nplt.rcParams[\"image.cmap\"] = \"gray\"\n\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\"Returns relative error.\"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\ndef print_mean_std(x,axis=0):\n print(f\" means: {x.mean(axis=axis)}\")\n print(f\" stds: {x.std(axis=axis)}\\n\")\n```\n\n\n```python\n# Load the (preprocessed) CIFAR-10 data.\ndata = get_CIFAR10_data()\nfor k, v in list(data.items()):\n print(f\"{k}: {v.shape}\")\n```\n\n X_train: (49000, 3, 32, 32)\n y_train: (49000,)\n X_val: (1000, 3, 32, 32)\n y_val: (1000,)\n X_test: (1000, 3, 32, 32)\n y_test: (1000,)\n\n\n# Batch Normalization: Forward Pass\nIn the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. 
Once you have done so, run the following to test your implementation.\n\nReferencing the paper linked to above in [1] may be helpful!\n\n\n```python\n# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization \n\n# Simulate the forward pass for a two-layer network.\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before batch normalization:')\nprint_mean_std(a,axis=0)\n\ngamma = np.ones((D3,))\nbeta = np.zeros((D3,))\n\n# Means should be close to zero and stds close to one.\nprint('After batch normalization (gamma=1, beta=0)')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\n\n# Now means should be close to beta and stds close to gamma.\nprint('After batch normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n```\n\n Before batch normalization:\n means: [ -2.3814598 -13.18038246 1.91780462]\n stds: [27.18502186 34.21455511 37.68611762]\n \n After batch normalization (gamma=1, beta=0)\n means: [5.32907052e-17 7.04991621e-17 1.85962357e-17]\n stds: [0.99999999 1. 1. ]\n \n After batch normalization (gamma= [1. 2. 3.] , beta= [11. 12. 13.] )\n means: [11. 12. 
13.]\n stds: [0.99999999 1.99999999 2.99999999]\n \n\n\n\n```python\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n\nfor t in range(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\n\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After batch normalization (test-time):')\nprint_mean_std(a_norm,axis=0)\n```\n\n After batch normalization (test-time):\n means: [-0.03927354 -0.04349152 -0.10452688]\n stds: [1.01531428 1.01238373 0.97819988]\n \n\n\n# Batch Normalization: Backward Pass\nNow implement the backward pass for batch normalization in the function `batchnorm_backward`.\n\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. 
Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\n\nOnce you have finished, run the following to numerically check your backward pass.\n\n\n```python\n# Gradient check batchnorm backward pass.\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\n\n# You should expect to see relative errors between 1e-13 and 1e-8.\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.7029261167605239e-09\n dgamma error: 7.420414216247087e-13\n dbeta error: 2.8795057655839487e-12\n\n\n# Batch Normalization: Alternative Backward Pass\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper.\n\nSurprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too! 
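Before diving into the derivation, it can help to see the forward computation in code. The following is a minimal NumPy sketch of the training-time forward pass being differentiated — a sketch only: the assignment's `batchnorm_forward` additionally maintains running averages and takes a `bn_param` dict, and the cache layout `(x_hat, gamma, sigma)` is an assumption made here purely for illustration.

```python
import numpy as np

def batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization (illustrative sketch only)."""
    mu = x.mean(axis=0)                    # per-feature mean over the minibatch
    sigma = np.sqrt(x.var(axis=0) + eps)   # per-feature std (eps for numerical stability)
    x_hat = (x - mu) / sigma               # zero mean, unit variance per feature
    out = gamma * x_hat + beta             # learnable scale and shift
    cache = (x_hat, gamma, sigma)          # intermediates the backward pass reuses
    return out, cache
```

With `gamma = 1` and `beta = 0`, each output column has mean approximately zero and standard deviation approximately one, matching the forward-pass checks earlier in the notebook.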
\n\nIn the forward pass, given a set of inputs $X=\\begin{bmatrix}x_1\\\\x_2\\\\...\\\\x_N\\end{bmatrix}$, \n\nwe first calculate the mean $\\mu$ and variance $v$.\nWith $\\mu$ and $v$ calculated, we can calculate the standard deviation $\\sigma$ and normalized data $Y$.\nThe equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$).\n\n\\begin{align}\n& \\mu=\\frac{1}{N}\\sum_{k=1}^N x_k & v=\\frac{1}{N}\\sum_{k=1}^N (x_k-\\mu)^2 \\\\\n& \\sigma=\\sqrt{v+\\epsilon} & y_i=\\frac{x_i-\\mu}{\\sigma}\n\\end{align}\n\n\n\nThe meat of our problem during backpropagation is to compute $\\frac{\\partial L}{\\partial X}$, given the upstream gradient we receive, $\\frac{\\partial L}{\\partial Y}.$ To do this, recall the chain rule in calculus gives us $\\frac{\\partial L}{\\partial X} = \\frac{\\partial L}{\\partial Y} \\cdot \\frac{\\partial Y}{\\partial X}$.\n\nThe unknown/hard part is $\\frac{\\partial Y}{\\partial X}$. We can find this by first deriving step-by-step our local gradients at \n$\\frac{\\partial v}{\\partial X}$, $\\frac{\\partial \\mu}{\\partial X}$,\n$\\frac{\\partial \\sigma}{\\partial v}$, \n$\\frac{\\partial Y}{\\partial \\sigma}$, and $\\frac{\\partial Y}{\\partial \\mu}$,\nand then use the chain rule to compose these gradients (which appear in the form of vectors!) appropriately to compute $\\frac{\\partial Y}{\\partial X}$.\n\nIf it's challenging to directly reason about the gradients over $X$ and $Y$ which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\\frac{\\partial L}{\\partial x_i}$, by relying on the Chain Rule to first calculate the intermediate $\\frac{\\partial \\mu}{\\partial x_i}, \\frac{\\partial v}{\\partial x_i}, \\frac{\\partial \\sigma}{\\partial x_i},$ then assemble these pieces to calculate $\\frac{\\partial y_i}{\\partial x_i}$. 
\n\nYou should make sure each of the intermediary gradient derivations are all as simplified as possible, for ease of implementation. \n\nAfter doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\n\n\n```python\nnp.random.seed(231)\nN, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint('dx difference: ', rel_error(dx1, dx2))\nprint('dgamma difference: ', rel_error(dgamma1, dgamma2))\nprint('dbeta difference: ', rel_error(dbeta1, dbeta2))\nprint('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))\n```\n\n dx difference: 6.284600172572596e-13\n dgamma difference: 0.0\n dbeta difference: 0.0\n speedup: 2.03x\n\n\n# Fully Connected Networks with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization.\n\nConcretely, when the `normalization` flag is set to `\"batchnorm\"` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. 
Once you are done, run the following to gradient-check your implementation.\n\n**Hint:** You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`.\n\n\n```python\nnp.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\n# You should expect relative errors between 1e-4~1e-10 for W, \n# relative errors between 1e-08~1e-10 for b,\n# and relative errors between 1e-08~1e-09 for beta and gammas.\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n normalization='batchnorm')\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n if reg == 0: print()\n```\n\n Running check with reg = 0\n Initial loss: 2.261195509168841\n W1 relative error: 1.10e-04\n W2 relative error: 3.89e-06\n W3 relative error: 8.12e-10\n b1 relative error: 2.22e-03\n b2 relative error: 5.55e-09\n b3 relative error: 5.81e-10\n beta1 relative error: 6.82e-09\n beta2 relative error: 2.40e-09\n gamma1 relative error: 1.83e-08\n gamma2 relative error: 2.82e-09\n \n Running check with reg = 3.14\n Initial loss: 6.996533219149863\n W1 relative error: 1.98e-06\n W2 relative error: 2.28e-06\n W3 relative error: 2.23e-08\n b1 relative error: 5.55e-09\n b2 relative error: 2.22e-08\n b3 relative error: 7.06e-10\n beta1 relative error: 6.32e-09\n beta2 relative error: 4.34e-09\n gamma1 relative error: 5.61e-09\n gamma2 relative error: 5.28e-09\n\n\n# Batch Normalization for Deep Networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.\n\n\n```python\nnp.random.seed(231)\n\n# 
Try training a very deep net with batchnorm.\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\nprint('Solver with batch norm:')\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True,print_every=20)\nbn_solver.train()\n\nprint('\\nSolver without batch norm:')\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=20)\nsolver.train()\n```\n\n Solver with batch norm:\n (Iteration 1 / 200) loss: 2.340974\n (Epoch 0 / 10) train acc: 0.107000; val_acc: 0.115000\n (Epoch 1 / 10) train acc: 0.313000; val_acc: 0.265000\n (Iteration 21 / 200) loss: 2.039345\n (Epoch 2 / 10) train acc: 0.396000; val_acc: 0.280000\n (Iteration 41 / 200) loss: 2.047471\n (Epoch 3 / 10) train acc: 0.484000; val_acc: 0.316000\n (Iteration 61 / 200) loss: 1.739554\n (Epoch 4 / 10) train acc: 0.525000; val_acc: 0.318000\n (Iteration 81 / 200) loss: 1.246973\n (Epoch 5 / 10) train acc: 0.595000; val_acc: 0.335000\n (Iteration 101 / 200) loss: 1.354766\n (Epoch 6 / 10) train acc: 0.638000; val_acc: 0.331000\n (Iteration 121 / 200) loss: 1.014049\n (Epoch 7 / 10) train acc: 0.673000; val_acc: 0.323000\n (Iteration 141 / 200) loss: 1.135644\n (Epoch 8 / 10) train acc: 0.682000; val_acc: 0.305000\n (Iteration 161 / 200) loss: 0.652241\n (Epoch 9 / 10) train acc: 0.781000; val_acc: 0.336000\n (Iteration 181 / 200) loss: 0.782712\n (Epoch 10 / 10) train acc: 0.749000; val_acc: 0.318000\n \n Solver without batch norm:\n 
(Iteration 1 / 200) loss: 2.302332\n (Epoch 0 / 10) train acc: 0.129000; val_acc: 0.131000\n (Epoch 1 / 10) train acc: 0.283000; val_acc: 0.250000\n (Iteration 21 / 200) loss: 2.041970\n (Epoch 2 / 10) train acc: 0.316000; val_acc: 0.277000\n (Iteration 41 / 200) loss: 1.900473\n (Epoch 3 / 10) train acc: 0.373000; val_acc: 0.282000\n (Iteration 61 / 200) loss: 1.713156\n (Epoch 4 / 10) train acc: 0.390000; val_acc: 0.310000\n (Iteration 81 / 200) loss: 1.662209\n (Epoch 5 / 10) train acc: 0.434000; val_acc: 0.300000\n (Iteration 101 / 200) loss: 1.696059\n (Epoch 6 / 10) train acc: 0.535000; val_acc: 0.345000\n (Iteration 121 / 200) loss: 1.557987\n (Epoch 7 / 10) train acc: 0.530000; val_acc: 0.304000\n (Iteration 141 / 200) loss: 1.432189\n (Epoch 8 / 10) train acc: 0.628000; val_acc: 0.339000\n (Iteration 161 / 200) loss: 1.034116\n (Epoch 9 / 10) train acc: 0.654000; val_acc: 0.342000\n (Iteration 181 / 200) loss: 0.905794\n (Epoch 10 / 10) train acc: 0.712000; val_acc: 0.328000\n\n\nRun the following to visualize the results from two networks trained above. 
You should find that using batch normalization helps the network to converge much faster.\n\n\n```python\ndef plot_training_history(title, label, baseline, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None):\n \"\"\"utility function for plotting training history\"\"\"\n plt.title(title)\n plt.xlabel(label)\n bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers]\n bl_plot = plot_fn(baseline)\n num_bn = len(bn_plots)\n for i in range(num_bn):\n label='with_norm'\n if labels is not None:\n label += str(labels[i])\n plt.plot(bn_plots[i], bn_marker, label=label)\n label='baseline'\n if labels is not None:\n label += str(labels[0])\n plt.plot(bl_plot, bl_marker, label=label)\n plt.legend(loc='lower center', ncol=num_bn+1) \n\n \nplt.subplot(3, 1, 1)\nplot_training_history('Training loss','Iteration', solver, [bn_solver], \\\n lambda x: x.loss_history, bl_marker='o', bn_marker='o')\nplt.subplot(3, 1, 2)\nplot_training_history('Training accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.train_acc_history, bl_marker='-o', bn_marker='-o')\nplt.subplot(3, 1, 3)\nplot_training_history('Validation accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o')\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n# Batch Normalization and Initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\n\nThe first cell will train eight-layer networks both with and without batch normalization using different scales for weight initialization. 
The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.\n\n\n```python\nnp.random.seed(231)\n\n# Try training a very deep net with batchnorm.\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers_ws = {}\nsolvers_ws = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers_ws[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers_ws[weight_scale] = solver\n```\n\n Running weight scale 1 / 20\n Running weight scale 2 / 20\n Running weight scale 3 / 20\n Running weight scale 4 / 20\n Running weight scale 5 / 20\n Running weight scale 6 / 20\n Running weight scale 7 / 20\n Running weight scale 8 / 20\n Running weight scale 9 / 20\n Running weight scale 10 / 20\n Running weight scale 11 / 20\n Running weight scale 12 / 20\n Running weight scale 13 / 20\n Running weight scale 14 / 20\n Running weight scale 15 / 20\n Running weight scale 16 / 20\n Running weight scale 17 / 20\n Running weight scale 18 / 20\n Running weight scale 19 / 20\n Running weight scale 20 / 20\n\n\n\n```python\n# Plot results of weight scale experiment.\nbest_train_accs, 
bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers_ws[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers_ws[ws].train_acc_history))\n \n best_val_accs.append(max(solvers_ws[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers_ws[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers_ws[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers_ws[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs. weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs. weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs. weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\nplt.gca().set_ylim(1.0, 3.5)\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n## Inline Question 1:\nDescribe the results of this experiment. 
How does the weight initialization scale affect models with/without batch normalization differently, and why?\n\n## Answer:\n\nModels trained with batch normalization are much less sensitive to the weight initialization scale: they reach high training accuracy and low final loss across a wide range of scales, while the baseline network only trains well within a narrow band of scales. Normalizing the activations at each layer removes the dependence of the forward and backward signal magnitudes on the scale of the weights, so training succeeds even from poorly scaled initializations.\n\n\n# Batch Normalization and Batch Size\nWe will now run a small experiment to study the interaction of batch normalization and batch size.\n\nThe first cell will train 6-layer networks both with and without batch normalization using different batch sizes. The second cell will plot training accuracy and validation set accuracy over time.\n\n\n```python\ndef run_batchsize_experiments(normalization_mode):\n np.random.seed(231)\n \n # Try training a very deep net with batchnorm.\n hidden_dims = [100, 100, 100, 100, 100]\n num_train = 1000\n small_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n }\n n_epochs=10\n weight_scale = 2e-2\n batch_sizes = [5,10,50]\n lr = 10**(-3.5)\n solver_bsize = batch_sizes[0]\n\n print('No normalization: batch size = ',solver_bsize)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n solver = Solver(model, small_data,\n num_epochs=n_epochs, batch_size=solver_bsize,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n solver.train()\n \n bn_solvers = []\n for i in range(len(batch_sizes)):\n b_size=batch_sizes[i]\n print('Normalization: batch size = ',b_size)\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=normalization_mode)\n bn_solver = Solver(bn_model, small_data,\n num_epochs=n_epochs, batch_size=b_size,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n bn_solver.train()\n bn_solvers.append(bn_solver)\n \n return bn_solvers, solver, batch_sizes\n\nbatch_sizes = [5,10,50]\nbn_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('batchnorm')\n```\n\n No normalization: batch size = 5\n 
Normalization: batch size = 5\n Normalization: batch size = 10\n Normalization: batch size = 50\n\n\n\n```python\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 2:\nDescribe the results of this experiment. What does this imply about the relationship between batch normalization and batch size? Why is this relationship observed?\n\n## Answer:\n\nWhen the batch size is small, batch normalization hurts performance: the per-batch estimates of the feature means and variances are noisy and do not represent the statistics of the whole dataset, so accuracy goes down. As the batch size grows, the estimates become more reliable and accuracy improves.\n\n\n# Layer Normalization\nBatch normalization has proved to be effective in making networks easier to train, but the dependency on batch size makes it less useful in complex networks which have a cap on the input batch size due to hardware limitations. \n\nSeveral alternatives to batch normalization have been proposed to mitigate this problem; one such technique is Layer Normalization [2]. Instead of normalizing over the batch, we normalize over the features. In other words, when using Layer Normalization, each feature vector corresponding to a single datapoint is normalized based on the sum of all terms within that feature vector.\n\n[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. \"Layer Normalization.\" stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)\n\n## Inline Question 3:\nWhich of these data preprocessing steps is analogous to batch normalization, and which is analogous to layer normalization?\n\n1. Scaling each image in the dataset, so that the RGB channels for each row of pixels within an image sum up to 1.\n2. 
Scaling each image in the dataset, so that the RGB channels for all pixels within an image sum up to 1. \n3. Subtracting the mean image of the dataset from each image in the dataset.\n4. Setting all RGB values to either 0 or 1 depending on a given threshold.\n\n## Answer:\n\nBatch: 3 - subtracting the dataset's mean image normalizes each image using a statistic computed across the whole dataset, just as batch normalization normalizes each feature using statistics computed across the batch.\n\nLayer: 2 - scaling an image by the sum over all of its own pixels uses only that single datapoint's values, just as layer normalization normalizes each datapoint using only its own features.\n\n\n# Layer Normalization: Implementation\n\nNow you'll implement layer normalization. This step should be relatively straightforward, as conceptually the implementation is almost identical to that of batch normalization. One significant difference though is that for layer normalization, we do not keep track of the moving moments, and the testing phase is identical to the training phase, where the mean and variance are directly calculated per datapoint.\n\nHere's what you need to do:\n\n* In `cs231n/layers.py`, implement the forward pass for layer normalization in the function `layernorm_forward`. \n\nRun the cell below to check your results.\n* In `cs231n/layers.py`, implement the backward pass for layer normalization in the function `layernorm_backward`. \n\nRun the second cell below to check your results.\n* Modify `cs231n/classifiers/fc_net.py` to add layer normalization to the `FullyConnectedNet`. When the `normalization` flag is set to `\"layernorm\"` in the constructor, you should insert a layer normalization layer before each ReLU nonlinearity. 
\n\nRun the third cell below to run the batch size experiment on layer normalization.\n\n\n```python\n# Check the training-time forward pass by checking means and variances\n# of features both before and after layer normalization.\n\n# Simulate the forward pass for a two-layer network.\nnp.random.seed(231)\nN, D1, D2, D3 =4, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before layer normalization:')\nprint_mean_std(a,axis=1)\n\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n\n# Means should be close to zero and stds close to one.\nprint('After layer normalization (gamma=1, beta=0)')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n\ngamma = np.asarray([3.0,3.0,3.0])\nbeta = np.asarray([5.0,5.0,5.0])\n\n# Now means should be close to beta and stds close to gamma.\nprint('After layer normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n```\n\n Before layer normalization:\n means: [-59.06673243 -47.60782686 -43.31137368 -26.40991744]\n stds: [10.07429373 28.39478981 35.28360729 4.01831507]\n \n After layer normalization (gamma=1, beta=0)\n means: [ 4.81096644e-16 -7.40148683e-17 2.22044605e-16 -5.92118946e-16]\n stds: [0.99999995 0.99999999 1. 0.99999969]\n \n After layer normalization (gamma= [3. 3. 3.] , beta= [5. 5. 5.] )\n means: [5. 5. 5. 
5.]\n stds: [2.99999985 2.99999998 2.99999999 2.99999907]\n \n\n\n\n```python\n# Gradient check layernorm backward pass.\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nln_param = {}\nfx = lambda x: layernorm_forward(x, gamma, beta, ln_param)[0]\nfg = lambda a: layernorm_forward(x, a, beta, ln_param)[0]\nfb = lambda b: layernorm_forward(x, gamma, b, ln_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = layernorm_forward(x, gamma, beta, ln_param)\ndx, dgamma, dbeta = layernorm_backward(dout, cache)\n\n# You should expect to see relative errors between 1e-12 and 1e-8.\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.4336158494902849e-09\n dgamma error: 4.519489546032799e-12\n dbeta error: 2.276445013433725e-12\n\n\n# Layer Normalization and Batch Size\n\nWe will now run the previous batch size experiment with layer normalization instead of batch normalization. 
Compared to the previous experiment, you should see a markedly smaller influence of batch size on the training history!\n\n\n```python\nln_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('layernorm')\n\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 4:\nWhen is layer normalization likely to not work well, and why?\n\n1. Using it in a very deep network\n2. Having a very small dimension of features\n3. Having a high regularization term\n\n\n## Answer:\n\n2. If the feature dimension is very small, layer normalization is likely to work poorly: the per-datapoint mean and variance are then estimated from only a few values, so they are noisy and not representative of the feature statistics.
"d8c1b7305f0cc86c37c23d1d438c731557c9e7d8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 441134.0, "max_line_length": 441134, "alphanum_fraction": 0.9374113081, "converted": true, "num_tokens": 9208, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.40733341443526055, "lm_q2_score": 0.31742627204485063, "lm_q1q2_score": 0.12929832722348492}} {"text": "\n\n# Lambda School Data Science Module 143\n\n## Introduction to Bayesian Inference\n\n!['Detector! What would the Bayesian statistician say if I asked him whether the--' [roll] 'I AM A NEUTRINO DETECTOR, NOT A LABYRINTH GUARD. SERIOUSLY, DID YOUR BRAIN FALL OUT?' [roll] '... yes.'](https://imgs.xkcd.com/comics/frequentists_vs_bayesians.png)\n\n*[XKCD 1132](https://www.xkcd.com/1132/)*\n\n\n## Prepare - Bayes' Theorem and the Bayesian mindset\n\nBayes' theorem possesses a near-mythical quality - a bit of math that somehow magically evaluates a situation. But this mythicalness has more to do with its reputation and advanced applications than the actual core of it - deriving it is actually remarkably straightforward.\n\n### The Law of Total Probability\n\nBy definition, the total probability of all outcomes (events) if some variable (event space) $A$ is 1. That is:\n\n$$P(A) = \\sum_n P(A_n) = 1$$\n\nThe law of total probability takes this further, considering two variables ($A$ and $B$) and relating their marginal probabilities (their likelihoods considered independently, without reference to one another) and their conditional probabilities (their likelihoods considered jointly). A marginal probability is simply notated as e.g. 
$P(A)$, while a conditional probability is notated $P(A|B)$, which reads "probability of $A$ *given* $B$".

The law of total probability states:

$$P(A) = \sum_n P(A | B_n) P(B_n)$$

In words - the total probability of $A$ is equal to the conditional probability of $A$ on any given event $B_n$ times the probability of that event $B_n$, summed over all possible events in $B$.

### The Law of Conditional Probability

What's the probability of something conditioned on something else? To determine this we have to go back to set theory and think about the intersection of sets.

The formula for actual calculation:

$$P(A|B) = \frac{P(A \cap B)}{P(B)}$$

Think of the overall rectangle as the whole probability space, $A$ as the left circle, $B$ as the right circle, and their intersection as the red area. Try to visualize the ratio described in the above formula, and how it differs from just $P(A)$ (not conditioned on $B$).

We can see how this relates back to the law of total probability - multiply both sides by $P(B)$ and you get $P(A|B)P(B) = P(A \cap B)$ - replaced back into the law of total probability we get $P(A) = \sum_n P(A \cap B_n)$.

This may not seem like an improvement at first, but try to relate it back to the picture - if you think of sets as physical objects, we're saying that the total probability of $A$ is all the little pieces of it intersected with $B$, added together.
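To make the two laws concrete, here is a small numeric check on a made-up joint distribution (the event names and numbers are purely illustrative, not data from this lecture):

```python
# Hypothetical joint distribution P(A, B) over A in {a0, a1} and B in {b0, b1}.
# Entries sum to 1 over the whole probability space.
joint = {
    ('a0', 'b0'): 0.10, ('a0', 'b1'): 0.30,
    ('a1', 'b0'): 0.20, ('a1', 'b1'): 0.40,
}

# Marginals: P(A=a) = sum over b of P(a, b); similarly for B.
P_A = {a: sum(p for (ai, _), p in joint.items() if ai == a) for a in ('a0', 'a1')}
P_B = {b: sum(p for (_, bi), p in joint.items() if bi == b) for b in ('b0', 'b1')}

# Law of conditional probability: P(A=a | B=b) = P(a, b) / P(b)
def cond(a, b):
    return joint[(a, b)] / P_B[b]

# Law of total probability: P(a0) = sum over b of P(a0 | b) P(b)
total = sum(cond('a0', b) * P_B[b] for b in ('b0', 'b1'))
print(P_A['a0'], total)  # both approximately 0.4
```

Swapping in any other joint table that sums to 1 leaves both identities intact, which is a useful sanity check before applying them to real problems.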
The conditional probability is then just that again, but divided by the probability of $B$ itself happening in the first place.

\begin{align}
P(A|B) &= \frac{P(A \cap B)}{P(B)}\\
\Rightarrow P(A|B)P(B) &= P(A \cap B)\\
P(B|A) &= \frac{P(B \cap A)}{P(A)}\\
\Rightarrow P(B|A)P(A) &= P(B \cap A)\\
P(A \cap B) &= P(B \cap A)\\
\Rightarrow P(A|B)P(B) &= P(B|A)P(A) \\
P(A|B) &= \frac{P(B|A) \times P(A)}{P(B)}
\end{align}

### Bayes' Theorem

Here it is, the seemingly magic tool:

$$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$$

In words - the probability of $A$ conditioned on $B$ is the probability of $B$ conditioned on $A$, times the probability of $A$, divided by the probability of $B$. The unconditioned probabilities are referred to as "prior beliefs", and the conditioned probabilities as "updated."

Why is this important? Scroll back up to the XKCD example - the Bayesian statistician draws a less absurd conclusion because their prior belief in the likelihood that the sun will go nova is extremely low. So, even when updated based on evidence from a detector that is $35/36 \approx 0.972$ accurate, the prior belief doesn't shift enough to change their overall opinion.

There are many applications of Bayes' theorem - one less absurd example is applying it to [breathalyzer tests](https://www.bayestheorem.net/breathalyzer-example/). You may think that a breathalyzer test that is 100% accurate for true positives (detecting somebody who is drunk) is pretty good, but what if it also has 8% false positives (indicating somebody is drunk when they're not)? And furthermore, the rate of drunk driving (and thus our prior belief) is 1/1000.

What is the likelihood somebody really is drunk if they test positive? Some may guess it's 92% - the difference between the true positives and the false positives. But we have a prior belief of the background/true rate of drunk driving.
Sounds like a job for Bayes' theorem! Expanding the denominator with the law of total probability:

$$
\begin{aligned}
P(Drunk | Positive) &= \frac{P(Positive | Drunk)P(Drunk)}{P(Positive)} \\
&= \frac{P(Positive | Drunk)P(Drunk)}{P(Positive | Drunk)P(Drunk) + P(Positive | Sober)P(Sober)} \\
&= \frac{1 \times 0.001}{1 \times 0.001 + 0.08 \times 0.999} \\
&= \frac{0.001}{0.0809} \\
&\approx 0.0124
\end{aligned}
$$

In other words, the likelihood that somebody is drunk given they tested positive with a breathalyzer in this situation is only about 1.24% - probably much lower than you'd guess. This is why, in practice, it's important to have a repeated test to confirm (the probability of two false positives in a row is $0.08 \times 0.08 = 0.0064$, much lower), and Bayes' theorem has been relevant in court cases where proper consideration of evidence was important.

## Live Lecture - Deriving Bayes' Theorem, Calculating Bayesian Confidence

Notice that $P(A|B)$ appears in the above laws - in Bayesian terms, this is the belief in $A$ updated for the evidence $B$. So all we need to do is solve for this term to derive Bayes' theorem. Let's do it together!

```
# Activity 2 - Use SciPy to calculate Bayesian confidence intervals
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.bayes_mvs.html#scipy.stats.bayes_mvs
```

```
from scipy import stats
import numpy as np

np.random.seed(seed=42)

coinflips = np.random.binomial(n=1, p=.5, size=100)
print(coinflips)
```

    [0 1 1 1 0 0 0 1 1 1 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 1 0 1 0 0 1 1 1 0
     0 1 0 0 0 0 1 0 1 0 1 1 0 1 1 1 1 1 1 0 0 0 0 0 0 1 0 0 1 0 1 0 1 1 0 0 1
     1 1 1 0 0 0 1 1 0 0 0 0 1 1 1 0 0 1 1 1 1 0 1 0 0 0]

```
def confidence_interval(data, confidence=.95):
    n = len(data)
    mean = sum(data) / n
    data = np.array(data)
    stderr = stats.sem(data)
    interval = stderr * stats.t.ppf((1 + confidence) / 2.0, n - 1)
    return (mean, mean - interval, mean + interval)
```

```
confidence_interval(coinflips, confidence=.95)
```

    (0.47, 0.3704689875017368, 0.5695310124982632)

```
mean_CI, _, _ = stats.bayes_mvs(coinflips, alpha=.95)
\nmean_CI\n```\n\n\n\n\n Mean(statistic=0.47, minmax=(0.37046898750173674, 0.5695310124982632))\n\n\n\n\n```\n??stats.bayes_mvs\n```\n\n\n```\ncoinflips_mean_dist, _, _ = stats.mvsdist(coinflips)\ncoinflips_mean_dist\n```\n\n\n\n\n \n\n\n\n\n```\ncoinflips_mean_dist.rvs(1000)\n```\n\n## Assignment - Code it up!\n\nMost of the above was pure math - now write Python code to reproduce the results! This is purposefully open ended - you'll have to think about how you should represent probabilities and events. You can and should look things up, and as a stretch goal - refactor your code into helpful reusable functions!\n\nSpecific goals/targets:\n\n1. Write a function `def prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk)` that reproduces the example from lecture, and use it to calculate and visualize a range of situations\n2. Explore `scipy.stats.bayes_mvs` - read its documentation, and experiment with it on data you've tested in other ways earlier this week\n3. Create a visualization comparing the results of a Bayesian approach to a traditional/frequentist approach\n4. 
In your own words, summarize the difference between Bayesian and Frequentist statistics

If you're unsure where to start, check out [this blog post of Bayes theorem with Python](https://dataconomy.com/2015/02/introduction-to-bayes-theorem-with-python/) - you could and should create something similar!

Stretch goals:

- Apply a Bayesian technique to a problem you previously worked on (in an assignment or project work) from a frequentist (standard) perspective
- Check out [PyMC3](https://docs.pymc.io/) (note this goes beyond hypothesis tests into modeling) - read the guides and work through some examples
- Take PyMC3 further - see if you can build something with it!


```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
```

```
# TODO - code!

def prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk):
    # Bayes' theorem: P(drunk | positive) = P(positive | drunk) P(drunk) / P(positive)
    return prob_positive_drunk * prob_drunk_prior / prob_positive
```

```
i = 1
df = pd.DataFrame(columns=['index', 'post_prob'])
index = []
post_list = []
```

```
def prob_drunk_positive_recursive(prob_drunk_prior, false_positive, true_positive, n):
    # Note: the denominator (false_positive + prob_drunk_prior) is a rough
    # approximation of P(positive) = true_positive*prior + false_positive*(1 - prior)
    post_prob = true_positive * prob_drunk_prior / (false_positive + prob_drunk_prior)

    global i
    index.append(int(i))
    post_list.append(post_prob)
    i += 1

    # feed the posterior back in as the new prior until n updates are recorded
    while i < n:
        prob_drunk_positive_recursive(post_prob, false_positive, true_positive, n)
```

```
prob_drunk_positive_recursive(.001, .08, 1, 11)
```

```
index, post_list
```

    ([1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
     [0.012345679012345678,
      0.1336898395721925,
      0.6256256256256255,
      0.8866254326732111,
      0.9172378490200085,
      0.9197784158727866,
      0.9199822693409904,
      0.9199985815221288,
      0.9199998865216094,
      0.9199999909217278])

```
df['index'] = 
np.array(index)\ndf['post_prob'] = np.array(post_list)\n```\n\n\n```\ndf.head(10)\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
indexpost_prob
010.012346
120.133690
230.625626
340.886625
450.917238
560.919778
670.919982
780.919999
890.920000
9100.920000
\n
```
df = df[df['index'] <= 10]
df.shape
```

    (0, 2)

```
plt.style.use('fivethirtyeight')
fig, ax = plt.subplots(figsize=(8,6))
plt.ylim(-.05, 1)

ax.axhline(y=0, color='black', linewidth=1.5, alpha=1)
ax.axhline(y=.92, color='black', linewidth=1.5, alpha=.5)
ax.axvline(x=0, color='black', linewidth=1.5, alpha=1)
ax.axvline(x=5, color='black', linewidth=1.5, alpha=.5)
plt.plot(df['index'], df['post_prob'], linestyle='--', marker='o', linewidth=2.0)

ax.set_yticks([0, .25, .50, .75, 1])
ax.set_yticklabels(labels=['0','25','50','75','100%'], fontsize=14)
```

After repeating the update many times, the final posterior probability is nearly identical to the value calculated with the frequentist approach, so the visualization of a Bayesian confidence interval looks essentially identical to its frequentist counterpart.

## Resources

- [Worked example of Bayes rule calculation](https://en.wikipedia.org/wiki/Bayes'_theorem#Examples) (helpful as it fully breaks out the denominator)
- [Source code for mvsdist in scipy](https://github.com/scipy/scipy/blob/90534919e139d2a81c24bf08341734ff41a3db12/scipy/stats/morestats.py#L139)

# Introduction to the Study of the Finite Element Method

## An Approach to Linear Static Analysis

### Session 01 - Introduction
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Carlos Alberto Alvarez Henao

***

***Instructor:*** Carlos Alberto Álvarez Henao, I.C. D.Sc.

***e-mail:*** carlosalvarezh@gmail.com

***skype:*** carlos.alberto.alvarez.henao

***Tool:*** [Jupyter Notebook](http://jupyter.org/)

***Kernel:*** Python 3.8

***MEDELLÍN - COLOMBIA***

***2021/02***

***

## Introduction
### Mechanics

[Mechanics](https://en.wikipedia.org/wiki/Mechanics) is the area of physics that studies the motion and rest of bodies and their evolution in time, more specifically the relationship between force, matter, and motion. Forces applied to objects result in displacements, that is, changes of an object's position relative to its surroundings.

The problems treated in this course fall within [Computational mechanics](https://en.wikipedia.org/wiki/Computational_mechanics), specifically linear static analysis, as the following scheme shows:

\begin{equation*}
\text{Mechanics} \left \{
\begin{aligned}
&\text{Theoretical} \\
&\text{Computational} \left \{
\begin{aligned}
&\text{Nano/micro} \\
&\text{Continuum} \left \{
\begin{aligned}
&\text{Fluids} \\
&\text{Solids} \left \{
\begin{aligned}
&\text{Statics} \left \{
\begin{aligned}
&\text{Linear} \\
&\text{Nonlinear}
\end{aligned}
\right.
\\
&\text{Dynamics}
\end{aligned}
\right.
\\
&\text{Multiphysics}
\end{aligned}
\right.
\\
&\text{System}
\end{aligned}
\right.
\\
&\text{Applied}
\end{aligned}
\right.
\end{equation*}

#### Computational Modeling and Simulation

Computational Science and Engineering is a discipline concerned with the use of computational methods and devices to simulate physical events and engineering systems.

#### Discretization
The process of [discretization](https://en.wikipedia.org/wiki/Discretization) consists of transforming a continuous model into a finite number of discrete components that can be processed by a machine. The [computational model](https://en.wikipedia.org/wiki/Computational_model) is the discretized version of a mathematical model, developed so that it can be implemented on a computing machine. This discipline includes [mathematical modeling](https://en.wikipedia.org/wiki/Mathematical_model): the set of algebraic, differential, and/or integral equations that govern a physical phenomenon in a particular system. The model is based on a series of assumptions and restrictions imposed on the phenomenon and on the governing physical laws.

Some methods used for spatial discretization are:

- ***[Finite Elements](https://en.wikipedia.org/wiki/Finite_element_method) (FEM)***


- ***[Boundary Elements](https://en.wikipedia.org/wiki/Boundary_element_method) (BEM)***


- ***[Finite Differences](https://en.wikipedia.org/wiki/Finite_difference_method) (FDM)***


- ***[Spectral](https://en.wikipedia.org/wiki/Spectral_method)***


- ***[Meshfree](https://en.wikipedia.org/wiki/Meshfree_methods)***

Each of them has advantages and disadvantages to be exploited depending on the type of problem to solve. This course focuses exclusively on the ***Finite Element Method (FEM)***.

#### What is a *Finite Element*?

To grasp the idea of a finite element, consider an example in which the value of $\pi$ is computed using the method originally proposed by [*Archimedes*](https://en.wikipedia.org/wiki/Approximations_of_%CF%80). He knew that the ratio of a circle's circumference to its diameter is $\pi$. His idea was that he could draw a regular polygon inscribed in the circle to approximate $\pi$, and the more sides he drew, the better the approximation. In modern mathematics, we would say that as the number of sides of the polygon tends to infinity, its perimeter tends to the circumference, $2\pi$ for a unit circle. However, instead of drawing the polygon, Archimedes computed the length using a geometric argument like the following:
Draw a circle of radius $1$ centered at $A$, and inscribe a polygon of $N$ sides in it. Our estimate of $\pi$ is half the perimeter of the polygon (the circumference of a circle is $2\pi r$, and with $r=1$ this gives $2\pi$). As the sides of the polygon get smaller and smaller, its perimeter gets closer and closer to $2\pi$.

The diagram shows a segment $ACE$ of the polygon. The polygon side $CE$ has length $d_n$. Assuming we know $d_n$ for a polygon of $N$ sides, if we can find an expression for the length $CD=d_{2n}$, the edge length of a polygon with $2N$ sides, in terms of $d_n$ alone, then we can improve our estimate of $\pi$. Let's try to do that.

We bisect the triangle $CAE$ so that $CAD$ and $DAE$ become two equal segments of the new polygon with $2N$ sides. Using the Pythagorean theorem on the right triangle $ABC$ (note that $AC$ is a radius, so the hypotenuse has length 1):

$$AB^2+BC^2=1^2=1$$
$$AB=\sqrt{1-BC^2}$$

Since

$$BC=\frac{d_n}{2}$$

substituting,

$$AB=\sqrt{1-\left( \frac{d_n}{2}\right)^2}$$
$$BD=1-AB=1-\sqrt{1-\frac{d^2_n}{4}}$$

Using the Pythagorean theorem on the right triangle $CBD$:

$$CD^2=BC^2+BD^2$$

and substituting,

$$CD=d_{2n}=\sqrt{2-2\sqrt{1-\frac{d^2_n}{4}}}$$

where $d_{2n}$ is the side length of the polygon with $2N$ sides.

This means that if we know the side length of a polygon with $N$ sides, we can compute the side length of a polygon with $2N$ sides. What does this mean? Let's start with a square. Inscribing a square in the circle gives a side length of $\sqrt{2}$. This yields an estimate of $\pi$ of $2\sqrt{2}$, which is poor ($\approx 2.828$), but it is only the start of the process.
We can compute the side length of an octagon from that of the square, the side length of a $16$-gon from the octagon, and so on.
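Before coding the full loop, one doubling step of the recurrence can be checked by hand: starting from the inscribed square with $d_4=\sqrt{2}$, the formula above gives the octagon side length and an improved estimate of $\pi$ (variable names here are illustrative):

```python
import math

d4 = math.sqrt(2)                                 # side of the inscribed square
d8 = math.sqrt(2 - 2 * math.sqrt(1 - d4**2 / 4))  # one application of the recurrence

pi_square = 4 * d4 / 2    # half the perimeter, N = 4
pi_octagon = 8 * d8 / 2   # half the perimeter, N = 8

print(pi_square, pi_octagon)  # ≈ 2.82843, ≈ 3.06147
```

A single doubling already reduces the relative error from about 10% to about 2.6%, in line with the table produced below.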
```python
import math

def piArchimedes(n):
    """
    Calculate n iterations of Archimedes' recurrence for pi.
    polygon_edge holds the *squared* side length of the inscribed polygon.
    """
    polygon_edge = 2.0    # inscribed square: d = sqrt(2), so d**2 = 2
    polygon_sides = 4
    for i in range(n):
        polygon_edge = 2 - 2 * math.sqrt(1 - polygon_edge / 4)
        polygon_sides *= 2
    return polygon_sides * math.sqrt(polygon_edge) / 2
```

```python
print("n \t pi approx \t rel. error")

for n in range(16):
    result = piArchimedes(n)
    error = abs(result - math.pi) / math.pi
    print("{0} \t {1:4.8f} \t {2:4.8f}".format(n, result, error))
```

    n 	 pi approx 	 rel. error
    0 	 2.82842712 	 0.09968368
    1 	 3.06146746 	 0.02550464
    2 	 3.12144515 	 0.00641315
    3 	 3.13654849 	 0.00160561
    4 	 3.14033116 	 0.00040155
    5 	 3.14127725 	 0.00010040
    6 	 3.14151380 	 0.00002510
    7 	 3.14157294 	 0.00000627
    8 	 3.14158773 	 0.00000157
    9 	 3.14159142 	 0.00000039
    10 	 3.14159235 	 0.00000010
    11 	 3.14159258 	 0.00000002
    12 	 3.14159263 	 0.00000001
    13 	 3.14159265 	 0.00000000
    14 	 3.14159265 	 0.00000000
    15 	 3.14159261 	 0.00000001

### What is the FEM?

The [Finite Element Method](https://en.wikipedia.org/wiki/Finite_element_method) (FEM) is a numerical technique used mainly to solve [differential equations](https://en.wikipedia.org/wiki/Differential_equation) arising from [mathematical modeling](https://en.wikipedia.org/wiki/Mathematical_model) in areas such as engineering, the basic sciences, the social sciences, and many others.

#### FEM or FEA?

The finite element method (FEM) is the numerical technique used to perform finite element analysis (FEA) of a given physical phenomenon.

Mathematics is necessary to comprehensively understand and quantify any physical phenomenon, such as structural or fluid behavior, thermal transport, wave propagation, and the growth of biological cells. Most of these processes are described by partial differential equations (PDEs). For a computer to solve these PDEs, numerical techniques have been developed over the last decades, and one of the most prominent today is the finite element method.

#### Applications of FEM

The finite element method started with significant promise in modeling several mechanical applications related to aerospace and civil engineering. Its applications are only now beginning to reach their full potential. One of the most exciting prospects is its application to [coupled problems](https://en.wikipedia.org/wiki/Coupling_(physics)) such as [fluid-structure interaction](https://en.wikipedia.org/wiki/Fluid%E2%80%93structure_interaction) (FSI), [thermomechanical](https://en.wikipedia.org/wiki/Thermomechanical_analysis), [thermochemical](https://en.wikipedia.org/wiki/Thermochemical_cycle), [biomechanics](https://en.wikipedia.org/wiki/Biomechanics), [biomedical engineering](https://en.wikipedia.org/wiki/Biomedical_sciences), [piezoelectric](https://en.wikipedia.org/wiki/Piezoelectricity), and [electromagnetic](https://en.wikipedia.org/wiki/Computational_electromagnetics) problems, among many others.

#### Partial Differential Equations

In mathematics, a [Partial Differential Equation](https://en.wikipedia.org/wiki/Partial_differential_equation) (PDE) is an equation that imposes relations between the various partial derivatives of a multivariable function.
There are different ways of writing a PDE; among the most common notations are:

$$u_x=\frac{\partial u}{\partial x}, \quad u_{xx}=\frac{\partial^2 u}{\partial x^2}, \quad u_{xy}=\frac{\partial^2 u}{\partial x \partial y}=\frac{\partial}{\partial y} \left( \frac{\partial u}{\partial x}\right)$$

It is important to understand the different classes of PDEs and their suitability for use with FEM, regardless of one's motivation for using finite element analysis. It is crucial to remember that FEM is a tool, and any tool is only as good as its user.

Although, as stated, FEM is a tool for solving PDEs, this course does not develop the theory of PDEs, so the reader is responsible for studying and properly understanding the topic. Chapter 12 of [Kreyszig's book](https://soaneemrana.org/onewebmedia/ADVANCED%20ENGINEERING%20MATHEMATICS%20BY%20ERWIN%20ERESZIG1.pdf) is a good study reference, as is the notebook [Ecuaciones Diferenciales con Python](https://relopezbriega.github.io/blog/2016/01/10/ecuaciones-diferenciales-con-python/) by professor Raúl López Briega (in Spanish), which includes applications programmed in `python`.

#### How does the FEM work?

The [principle of minimum energy](https://en.wikipedia.org/wiki/Principle_of_minimum_energy) forms the main backbone of the finite element method. In other words, when a specific boundary condition (such as a displacement or a force) is applied to a body, many configurations are conceivable, yet only one particular configuration is realistically possible or achieved.
Only the configuration with minimum total energy is selected. Even when the simulation is run several times, the same result prevails.

#### History of FEM

Depending on the historical perspective, the FEM can be said to have its origins in the work of Euler in the 18th century. However, the earliest mathematical papers on FEM can be found in the works of Schellbach (1851) and Courant (1943).

The FEM was developed independently by engineers to address structural mechanics problems related to aerospace and civil engineering. The developments began in the mid-1950s with the papers of [Turner, Clough, Martin and Topp](http://www.ce.memphis.edu/7117/notes/presentations/papers/Turner%20et%20al%20(1956)%20Stiffness%20and%20deflection%20analysis%20of%20comlex%20strucutres.pdf) (1956), [Argyris](https://en.wikipedia.org/wiki/John_Argyris) (1957), and [Babuska and Aziz](https://www.elsevier.com/books/the-mathematical-foundations-of-the-finite-element-method-with-applications-to-partial-differential-equations/aziz/978-0-12-068650-6) (1972). The books by [Zienkiewicz](https://en.wikipedia.org/wiki/Olgierd_Zienkiewicz) (1971) laid the foundations for future developments in FEM.

An interesting review of these historical developments can be found in Oden (1991). A review of the development of FEM over the last 75 years can be found in this blog article: [75 Years of the Finite Element Method](https://www.simscale.com/blog/2015/11/75-years-of-the-finite-element-method-fem/).
More information is available in the article by [Gupta and Meek](https://people.sc.fsu.edu/~jpeterson/history_fem.pdf) (1996), or in this course document by professor [Mohite](http://home.iitk.ac.in/~mohite/History_of_FEM.pdf).

### Different types of FEM

As discussed earlier, traditional FEM technology has shown shortcomings in modeling problems related to fluid mechanics and wave propagation. Several improvements have been made recently to streamline the solution process and extend the applicability of finite element analysis to a wide range of problems. Some important ones still in use include:

#### Extended finite element method (XFEM)

The Bubnov-Galerkin method requires continuity of displacement between elements, yet problems such as contact, fracture, and damage involve discontinuities and jumps that cannot be handled directly by the finite element method. To overcome this shortcoming, [XFEM](https://en.wikipedia.org/wiki/Extended_finite_element_method) was born in the 1990s. XFEM works by enriching the shape functions with Heaviside step functions. Extra degrees of freedom are assigned to the nodes around the point of discontinuity so that the jumps can be represented.

#### Generalized finite element method (GFEM)

[GFEM](https://www.sciencedirect.com/science/article/abs/pii/S0045782501001888) was introduced around the same time as XFEM, in the 1990s. It combines features of traditional FEM and meshless methods. The shape functions are primarily defined in global coordinates and then multiplied by a partition of unity to create the local element shape functions.
One advantage of GFEM is that it prevents re-meshing around singularities.

#### Mixed finite element method

The [mixed finite element method](https://en.wikipedia.org/wiki/Mixed_finite_element_method) is applied to several problems, such as contact or incompressibility, in which constraints are imposed using Lagrange multipliers. The extra degrees of freedom arising from the Lagrange multipliers are solved for independently, and the system of equations is solved as a coupled system.

#### hp finite element method

[hp-FEM](https://en.wikipedia.org/wiki/Hp-FEM) is a combination of automatic mesh refinement (h-refinement) and an increase in polynomial order (p-refinement). This is not the same as performing h- and p-refinement separately. When automatic hp-refinement is used and an element is split into smaller elements (h-refinement), each element can also have a different polynomial order.

#### Discontinuous Galerkin finite element method (DG-FEM)

[DG-FEM](https://en.wikipedia.org/wiki/Discontinuous_Galerkin_method) has shown significant promise for using the finite element idea to solve hyperbolic equations, where traditional finite element methods have been weak. It has also shown improvements for the bending and incompressibility problems typically observed in most material processes. Here, additional constraints are added to the weak form, including a penalty parameter (to prevent interpenetration) and terms to balance the stresses between the elements.

### The FEM analysis process

The behavior of a phenomenon in a system depends on the geometry or domain of the system, the properties of the material or medium, and the boundary, initial, and loading conditions. For an engineering system, the geometry or domain can be very complex, and the boundary and initial conditions can also be complicated. Hence, in general, it is very difficult to solve the governing differential equation by analytical means, and in practice most problems are solved using numerical methods. Among these, the domain-discretization methods championed by FEM are the most popular, owing to their practicality and versatility.

#### Modeling
The computational modeling procedure using FEM consists, in general, of four steps:


- Modeling of the geometry.


- [Meshing](https://en.wikipedia.org/wiki/Mesh_generation) (discretization).


- Specification of the material properties.


- Specification of the boundary, initial, and loading conditions.


***Example:*** one-dimensional elements
***Example:*** two-dimensional elements

Source: Springer.
#### Simulation

The stages of the simulation process using the FEM technique can be summarized in the following table:

Source: Miklos Kuczmann
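To connect back to the minimum-energy principle described earlier: for a linear system, the configuration that minimizes the total potential energy $\frac{1}{2}u^TKu - f^Tu$ is exactly the solution of $Ku = f$. A minimal sketch with a made-up two-degree-of-freedom spring system (the stiffness values and loads are purely illustrative):

```python
import numpy as np

# Illustrative symmetric positive-definite stiffness matrix and load vector
K = np.array([[3.0, -1.0],
              [-1.0, 2.0]])
f = np.array([1.0, 0.5])

def energy(u):
    """Total potential energy: 0.5 u^T K u - f^T u."""
    return 0.5 * u @ K @ u - f @ u

u_star = np.linalg.solve(K, f)  # stationary point of the energy

# The energy at u_star is lower than at any nearby perturbed configuration
rng = np.random.default_rng(0)
perturbed = [energy(u_star + 0.1 * rng.standard_normal(2)) for _ in range(100)]
print(energy(u_star) < min(perturbed))  # True
```

Because $K$ is symmetric positive definite, the energy is strictly convex and $u^* = K^{-1}f$ is its unique global minimizer, which is why solving the linear system and minimizing the energy are equivalent.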
\n\n#### Terminolog\u00eda FEM\n\n- ***Dominio:*** es el conjunto de variables independientes para las que se define una funci\u00f3n. En el *FEA*, un dominio es un sistema (regi\u00f3n) continuo sobre el cual gobiernan las leyes de la f\u00edsica. En la ingenier\u00eda estructural, un dominio podr\u00eda ser una viga o el marco de un edificio completo. En la ingenier\u00eda mec\u00e1nica, un dominio podr\u00eda ser una pieza de una m\u00e1quina o un campo t\u00e9rmico.\n\n\n- ***Ecuaciones gobernantes:*** son las ecuaciones derivadas de la f\u00edsica del sistema. Muchos sistemas de ingenier\u00eda pueden describirse mediante ecuaciones de gobierno, que determinan las caracter\u00edsticas y comportamientos del sistema.\n\n\n- ***Condiciones de frontera:*** son valores de la funci\u00f3n en el borde del rango de algunas de sus variables. Es necesario conocer algunas de las condiciones de contorno para resolver un problema de ingenier\u00eda o para encontrar una funci\u00f3n desconocida.\n\n\n- ***Elemento:*** es una parte del dominio del problema y, por lo general, tiene una forma simple como un tri\u00e1ngulo o cuadril\u00e1tero en 2D, o un tetraedro o un s\u00f3lido rectangular en 3D.\n\n\n- ***Nodo:*** es un punto en el dominio y, a menudo, es el v\u00e9rtice de varios elementos. Un nodo tambi\u00e9n se llama punto nodal.\n\n\n- ***Malla:*** los elementos y nodos, juntos, forman una malla (mesh o grid, en ingl\u00e9s), que es la estructura de datos central en FEA.\n\n
\n\n\n- ***Mesh generation:*** most FEA software automatically generates a refined mesh to achieve more accurate results. For complex or large-scale finite element analyses, it is often imperative that computers generate the finite element mesh automatically. There are many different algorithms for automatic mesh generation. [This link](http://www.argusone.com/MeshGeneration.html) shows some samples of automatically generated meshes.\n\n\n- ***Linear analysis:*** is based on the following assumptions: (1) static; (2) small displacements; (3) the material is linearly elastic.\n\n\n- ***Nonlinear analysis:*** considers material nonlinearity and/or geometric nonlinearity of an engineering system. Geometrically nonlinear analysis is also called large-deformation analysis.\n\n\n- ***Degrees of freedom:*** [degrees of freedom](https://en.wikipedia.org/wiki/Degrees_of_freedom_(mechanics)) (DOF) are the main unknowns in the equations that make up a finite element model. Solving the equations determines the values of the degrees of freedom at each node of the model. These values are called "primary data" in the documentation. Derived data (stresses, strains, gradients, fluxes, etc.) are computed from the DOF solution. DOFs include displacements (UX, UY, UZ), rotations (ROTX, ROTY, ROTZ), and temperature (TEMP), among others. The degrees of freedom included in each element type reflect the physics of the underlying problem. You should choose elements that offer only the degrees of freedom you need, since extra degrees of freedom increase computation time and provide no benefit. 
For example, structural analysis (generally) does not involve heat transfer or voltages. Therefore, the elements used to model structures should not have TEMP or VOLT DOFs.\n\n The following table summarizes the most common DOFs for different types of structural elements:\n\n|Element | DOFs |\n|:------:|:----------------------------------:|\n|Truss |$$UX, UY, UZ$$ |\n|Beam |$$UX, UY, UZ; RotX, RotY, RotZ$$|\n|Brick |$$UX, UY, UZ$$ |\n|Plate |$$UX, UY, UZ; 2 \\text{ in-plane Rot DOFs}^*$$ |\n\n $^*$The out-of-plane rotational *DOF* is not considered for plate elements. \n\n\n\n### Steps in the FEM solution\n\nA finite element analysis involves a series of steps leading to the nodal forces and displacements of the structure. Each of these steps is stated below in very general terms; they will be covered in more detail throughout the course.\n\n#### Discretize the structure into elements\n\nThese elements are connected to each other through nodes.\n\n
\n\n
Source: mechanicalc.com
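As a minimal illustration of the discretization step, a mesh can be represented by a node-coordinate array plus an element-connectivity array. This is only a sketch: the coordinates and connectivity below are made-up values for a small frame, not data from the figure above.

```python
import numpy as np

# Node coordinates (x, y) for a small 2D frame -- hypothetical values
nodes = np.array([[0.0, 0.0],
                  [0.0, 3.0],
                  [4.0, 3.0],
                  [4.0, 0.0]])

# Element connectivity: each row lists the two node indices of a beam element
elements = np.array([[0, 1],
                     [1, 2],
                     [2, 3]])

# Element lengths follow directly from the node coordinates
lengths = np.linalg.norm(nodes[elements[:, 1]] - nodes[elements[:, 0]], axis=1)
print(lengths)  # [3. 4. 3.]
```

Everything the solver needs about the geometry is contained in these two arrays, which is why the mesh is called the central data structure of FEA.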
\n\n\nIn this first step, a mathematical model of the structure is composed. This model is an approximation of the structure: while the physical structure is continuous, the model consists of discrete elements.\n\nThis example uses beam elements based on [Euler-Bernoulli beam theory](https://en.wikipedia.org/wiki/Euler%E2%80%93Bernoulli_beam_theory). The element has $2$ nodes, each of which has $3$ degrees of freedom: translation in $x$, translation in $y$, and rotation.\n\n#### Determine the local stiffness matrix for each element\n\nThis matrix represents the stiffness of each node in the element in a specific degree of freedom (that is, it determines the displacement of each node in each degree of freedom under a given load). Because each of the nodes in the beam element has $3$ degrees of freedom, a $6 \\times 6$ matrix can completely describe the stiffness of the element.\n\n\\begin{equation}\nk^{(e)}=\n\\begin{bmatrix}\n\\frac{AE}{L} & 0 & 0 & -\\frac{AE}{L} & 0 & 0 \\\\\n0 & \\frac{12EI}{L^3} & \\frac{6EI}{L^2} & 0 & -\\frac{12EI}{L^3} & \\frac{6EI}{L^2}\\\\\n0 & \\frac{6EI}{L^2} & \\frac{4EI}{L} & 0 & -\\frac{6EI}{L^2} & \\frac{2EI}{L}\\\\\n-\\frac{AE}{L} & 0 & 0 & \\frac{AE}{L} & 0 & 0 \\\\\n0 & -\\frac{12EI}{L^3} & -\\frac{6EI}{L^2} & 0 & \\frac{12EI}{L^3} & -\\frac{6EI}{L^2}\\\\\n0 & \\frac{6EI}{L^2} & \\frac{2EI}{L} & 0 & -\\frac{6EI}{L^2} & \\frac{4EI}{L}\n\\end{bmatrix}\n\\end{equation}\n\n\n#### Assemble the global stiffness matrix\n\nThe global stiffness matrix for the overall structure is assembled by combining the local stiffness matrices. 
At a high level, the global stiffness matrix is created by summing the local stiffness matrices:\n\n$$\\left[ K\\right]=\\sum_i \\left[ k_i^{(e)}\\right]$$\n\nwhere $[k^{(e)}_i]$ is the local stiffness matrix of element $i$.\n\nThe global stiffness matrix will be a square $n \\times n$ matrix, where $n$ is $3$ times the number of nodes in the mesh (since each node has $3$ degrees of freedom). When assembling the global stiffness matrix, the stiffness terms for each node in the elemental stiffness matrix are placed in the corresponding location in the global matrix. For any elements that share a node, the stiffness contributions at that node are summed over those elements.\n\n#### Calculate the applied force vector\n\nThe applied force vector will be an $n \\times 1$ vector, where $n$ is $3$ times the number of nodes in the mesh. The force vector is assembled by including the forces applied at each degree of freedom at each node of the mesh:\n\n$$\\left\\{ F\\right\\} =\\sum_i \\left\\{ f_i^{(e)} \\right\\}$$\n\nOnce the global stiffness matrix and the applied force vector are constructed, the nodal displacements can be solved for. The following equation relates the forces and displacements in the overall structure:\n\n$$\\left\\{F\\right\\}+\\left\\{R\\right\\}=\\left[K\\right]\\left\\{U\\right\\}$$\n\nwhere \n\n- $\\left\\{F\\right\\}$ is the applied force vector, \n\n\n- $\\left\\{R\\right\\}$ is the external reaction vector, \n\n\n- $\\left[K\\right]$ is the global stiffness matrix, and\n\n\n- $\\left\\{U\\right\\}$ is the nodal displacement vector.\n\nIn the equation above, $\\left\\{R\\right\\}$ and $\\left\\{U\\right\\}$ are unknowns. To simplify the solution of this equation, we want to solve for the reactions and the displacements independently of each other. 
To do this, we can use the fact that for every constraint that was applied (whether a constraint on translation in $x$, translation in $y$, or rotation), the displacement associated with that constraint will be zero. Furthermore, external reactions only occur where constraints were applied. Therefore, for each degree of freedom at each node we know that:\n\n- If a constraint is applied, the displacement at that constraint will be zero and the external reaction may be nonzero.\n\n\n- If there is no constraint, the external reaction will be zero and the displacement may be nonzero.\n\n#### Apply boundary conditions and solve for the nodal displacements\n\nWe will first solve the equation above for the nodal displacements, $\\left\\{U\\right\\}$. Based on the reasoning above, it is possible to eliminate the vector $\\left\\{R\\right\\}$ from the equation. The boundary conditions are applied to the equation by "zeroing out" the rows of the matrices corresponding to the applied constraints. In doing so, we have not lost any information about the nodal displacements, since the displacements are known to be zero for each row that was zeroed out. The equation has now been reduced to:\n\n$$\\left\\{F\\right\\}=\\left[K\\right]\\left\\{U\\right\\}$$\n\n#### Solve for the external reactions\n\nThe equation above can be solved for $\\left\\{U\\right\\}$, after which all nodal displacements are known. 
The only unknown remaining in the original equation is $\\left\\{R\\right\\}$, which can now be solved for:\n\n$$\\left\\{R\\right\\}=\\left[K\\right]\\left\\{U\\right\\}-\\left\\{F\\right\\}$$\n\nIt is now possible to solve for the forces at each node.\n\n#### Solve for the nodal forces\n\nFor any reaction force found, the nodal force is equal to the reaction force. However, for nodes where there is no external reaction, the forces at the node are still unknown. These can be found by extracting the appropriate displacements from the global $\\left\\{U\\right\\}$ vector to construct a local $\\left\\{u\\right\\}$ vector for each element. Each of these local vectors will be $6 \\times 1$. The nodal forces in each element can then be solved for using:\n\n$$\\left\\{f\\right\\} = \\left[k\\right] \\left\\{u\\right\\}$$\n\nwhere \n\n- $\\left\\{f\\right\\}$ is the (local) element force vector, \n\n\n- $\\left[k\\right]$ is the element stiffness matrix, and \n\n\n- $\\left\\{u\\right\\}$ is the element displacement vector. \n\n#### Solve for the stresses\n\nOnce the forces at each node of the mesh are known, it is possible to solve for the stresses at each node:\n\n| Axial stress |Shear stress|Bending stress|von Mises stress|\n|:--------------:|:---------------:|:----------------:|:--------------------:|\n|$$\\sigma_{ax}=\\frac{F_{ax}}{A}$$ |$$\\tau_{sh}=\\frac{F_{sh}}{A}$$|$$\\sigma_b=\\frac{Mc}{I}$$|$$\\sigma_{vm}=\\sqrt{(\\sigma_{ax}+\\sigma_b)^2+3\\tau_{sh}^2}$$|\n\n\n\n\n### Element attributes\n\nThere is a wide range of element technology in the FEM. Among the most common element types are:\n\n
\n\nThe following image shows the comparison between a physical structural component and its idealized finite element equivalent.\n\n
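The solution steps described above (assemble the global stiffness matrix, apply boundary conditions, solve for the displacements, then recover the reactions) can be tied together with a minimal end-to-end sketch. For simplicity this uses two axial (truss) elements with the `AE/L` stiffness from the beam matrix rather than the full 6-DOF beam element; the material properties and load are made-up values for illustration:

```python
import numpy as np

# Two axial elements in series: 3 nodes, 1 DOF (UX) per node
E, A, L = 200e9, 1e-4, 1.0   # hypothetical steel bar: E [Pa], A [m^2], L [m]
k = A * E / L                # axial stiffness AE/L of each element

# Assemble the global stiffness matrix [K] by summing local contributions
K = np.zeros((3, 3))
for i, j in [(0, 1), (1, 2)]:            # element connectivity
    ke = k * np.array([[1, -1], [-1, 1]])  # local stiffness of an axial element
    K[np.ix_([i, j], [i, j])] += ke

F = np.array([0.0, 0.0, 1000.0])         # 1 kN applied at the free end

# Apply the boundary condition UX=0 at node 0 by removing its row/column,
# then solve {F} = [K]{U} on the remaining free DOFs
free = [1, 2]
U = np.zeros(3)
U[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])

# External reactions: {R} = [K]{U} - {F}
R = K @ U - F
print(U)   # nodal displacements: [0.e+00 5.e-05 1.e-04]
print(R)   # reaction of -1000 N at the fixed node balances the applied load
```

The same pattern generalizes to the 6x6 beam element: only the size of the local matrix and the DOF bookkeeping change.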
\n\n\n\n\n\n### Computational aspects of the FEM\n\nComing soon...\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n    styles = open('./nb_style.css', 'r').read()\n    return HTML(styles)\ncss_styling()\n```\n\n---\n\n```python\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nnp.random.seed(1789)\n\nfrom IPython.core.display import HTML\ndef css_styling():\n    styles = open("styles/custom.css", "r").read()\n    return HTML(styles)\ncss_styling()\n```\n\n# Statistical Data Modeling\n\nPandas, NumPy and SciPy provide the core functionality for building statistical models of our data. We use models to:\n\n- Concisely **describe** the components of our data\n- Provide **inference** about underlying parameters that may have generated the data\n- Make **predictions** about unobserved data, or expected future observations.\n\nThis section of the tutorial illustrates how to use Python to build statistical models of low to moderate difficulty from scratch, and use them to extract estimates and associated measures of uncertainty.\n\nEstimation\n==========\n\nA recurring statistical problem is finding estimates of the relevant parameters that correspond to the distribution that best represents our data.\n\nIn **parametric** inference, we specify *a priori* a suitable distribution, then choose the parameters that best fit the data.\n\n* e.g. the mean $\\mu$ and the variance $\\sigma^2$ in the case of the normal distribution\n\n\n```python\nx = np.array([ 1.00201077, 1.58251956, 0.94515919, 6.48778002, 1.47764604,\n 5.18847071, 4.21988095, 2.85971522, 3.40044437, 3.74907745,\n 1.18065796, 3.74748775, 3.27328568, 3.19374927, 8.0726155 ,\n 0.90326139, 2.34460034, 2.14199217, 3.27446744, 3.58872357,\n 1.20611533, 2.16594393, 5.56610242, 4.66479977, 2.3573932 ])\n_ = plt.hist(x, bins=7)\n```\n\n### Fitting data to probability distributions\n\nWe start with the problem of finding values for the parameters that provide the best fit between the model and the data, called point estimates. 
First, we need to define what we mean by 'best fit'. There are two commonly used criteria:\n\n* **Method of moments** chooses the parameters so that the sample moments (typically the sample mean and variance) match the theoretical moments of our chosen distribution.\n* **Maximum likelihood** chooses the parameters to maximize the likelihood, which measures how likely it is to observe our given sample.\n\n### Discrete Random Variables\n\n$$X = \\{0,1\\}$$\n\n$$Y = \\{\\ldots,-2,-1,0,1,2,\\ldots\\}$$\n\n**Probability Mass Function**: \n\nFor discrete $X$,\n\n$$Pr(X=x) = f(x|\\theta)$$\n\n\n\n***e.g. Poisson distribution***\n\nThe Poisson distribution models unbounded counts:\n\n
\n$$Pr(X=x)=\\frac{e^{-\\lambda}\\lambda^x}{x!}$$\n
\n\n* $X=\\{0,1,2,\\ldots\\}$\n* $\\lambda > 0$\n\n$$E(X) = \\text{Var}(X) = \\lambda$$\n\n### Continuous Random Variables\n\n$$X \\in [0,1]$$\n\n$$Y \\in (-\\infty, \\infty)$$\n\n**Probability Density Function**: \n\nFor continuous $X$,\n\n$$Pr(x \\le X \\le x + dx) = f(x|\\theta)dx \\, \\text{ as } \\, dx \\rightarrow 0$$\n\n\n\n***e.g. normal distribution***\n\n
\n$$f(x) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp\\left[-\\frac{(x-\\mu)^2}{2\\sigma^2}\\right]$$\n
\n\n* $X \\in \\mathbf{R}$\n* $\\mu \\in \\mathbf{R}$\n* $\\sigma>0$\n\n$$\\begin{align}E(X) &= \\mu \\cr\n\\text{Var}(X) &= \\sigma^2 \\end{align}$$\n\n### Example: Nashville Precipitation\n\nThe dataset `nashville_precip.txt` contains [NOAA precipitation data for Nashville measured since 1871](http://bit.ly/nasvhville_precip_data). \n\n\n\nThe gamma distribution is often a good fit to aggregated rainfall data, and will be our candidate distribution in this case.\n\n\n```python\nprecip = pd.read_table(\"../data/nashville_precip.txt\", index_col=0, na_values='NA', delim_whitespace=True)\nprecip.head()\n```\n\n\n\n\n
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
JanFebMarAprMayJunJulAugSepOctNovDec
Year
18712.764.585.014.133.302.981.582.360.951.312.131.65
18722.322.113.145.913.095.176.101.654.501.582.252.38
18732.967.144.113.596.314.204.632.361.814.284.365.94
18745.229.235.3611.841.492.872.653.523.122.636.124.19
18756.153.068.144.221.735.638.121.603.791.255.464.30
\n
\n\n\n\n\n```python\n_ = precip.hist(sharex=True, sharey=True, grid=False)\nplt.tight_layout()\n```\n\nThe first step is recognizing what sort of distribution to fit our data to. A couple of observations:\n\n1. The data are skewed, with a longer tail to the right than to the left\n2. The data are positive-valued, since they are measuring rainfall\n3. The data are continuous\n\nThere are a few possible choices, but one suitable alternative is the **gamma distribution**:\n\n
\n$$x \\sim \\text{Gamma}(\\alpha, \\beta) = \\frac{x^{\\alpha-1}e^{-x/\\beta}}{\\beta^{\\alpha}\\Gamma(\\alpha)}$$\n
\n\n\n\nThe ***method of moments*** simply assigns the empirical mean and variance to their theoretical counterparts, so that we can solve for the parameters.\n\nSo, for the gamma distribution, the mean and variance are:\n\n
\n$$ \\hat{\\mu} = \\bar{X} = \\alpha \\beta $$\n$$ \\hat{\\sigma}^2 = S^2 = \\alpha \\beta^2 $$\n
\n\nSo, if we solve for these parameters, we can use a gamma distribution to describe our data:\n\n
\n$$ \\alpha = \\frac{\\bar{X}^2}{S^2}, \\, \\beta = \\frac{S^2}{\\bar{X}} $$\n
\n\nLet's deal with the missing value in the October data. Given what we are trying to do, it is most sensible to fill in the missing value with the average of the available values. We will learn more sophisticated methods for handling missing data later in the course.\n\n\n```python\nprecip.fillna(value={'Oct': precip.Oct.mean()}, inplace=True)\n```\n\n\n\n\n
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n 
\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
JanFebMarAprMayJunJulAugSepOctNovDec
Year
18712.764.585.014.133.302.981.582.360.951.312.131.65
18722.322.113.145.913.095.176.101.654.501.582.252.38
18732.967.144.113.596.314.204.632.361.814.284.365.94
18745.229.235.3611.841.492.872.653.523.122.636.124.19
18756.153.068.144.221.735.638.121.603.791.255.464.30
18766.412.225.283.623.405.657.155.772.522.681.260.95
18774.051.064.989.471.256.023.254.165.402.614.932.49
18783.342.103.486.882.333.289.435.021.282.173.206.04
18796.323.133.812.882.882.508.474.625.182.905.859.15
18803.7412.378.165.264.133.975.692.225.397.245.773.32
18813.545.482.795.123.673.700.861.816.574.804.894.85
188214.518.619.383.597.382.544.065.541.611.113.601.52
18833.767.903.989.124.823.824.944.472.235.273.114.97
18847.208.188.893.513.586.533.182.812.362.431.573.78
18856.292.002.333.754.363.725.261.025.602.992.732.90
18865.183.824.762.362.107.691.905.503.680.515.761.48
18875.138.473.362.673.432.313.772.896.851.922.295.31
18886.293.786.464.182.974.682.367.033.822.824.331.77
18893.831.842.472.835.305.332.741.576.811.546.881.17
18908.1010.958.643.844.162.230.466.595.863.012.014.12
18916.156.9610.312.242.396.501.493.721.250.846.714.26
18922.812.734.107.454.035.015.133.394.780.253.916.43
18931.274.883.374.117.314.742.121.926.433.682.973.50
18944.288.652.694.052.533.555.452.433.070.531.922.81
18955.710.985.093.072.052.907.141.406.691.572.144.09
18961.373.656.452.924.051.827.331.402.740.985.711.79
18973.133.848.495.791.221.828.532.340.190.922.834.93
18989.460.635.363.161.804.974.506.564.873.213.092.41
18995.595.197.813.253.360.756.442.531.501.831.554.64
19002.613.802.204.041.8610.352.871.244.553.938.872.22
.......................................
19826.504.803.004.364.192.285.473.463.231.913.876.36
19832.562.933.446.8011.043.931.711.360.452.776.987.75
19841.792.385.148.419.684.496.632.420.976.006.202.38
19853.023.302.702.912.651.532.003.912.521.593.810.98
19860.193.592.290.523.362.380.773.382.192.197.433.31
19871.614.871.181.034.412.822.560.731.950.213.405.46
19883.732.022.182.091.860.453.262.392.451.545.493.95
19894.529.365.312.684.617.873.183.676.303.623.941.97
19902.764.733.261.602.802.374.863.122.134.414.2910.76
19912.925.444.253.355.631.252.821.795.473.882.877.27
19922.972.604.500.773.124.315.893.253.451.624.482.88
19932.763.335.503.334.505.313.641.762.902.202.536.62
19944.366.187.565.723.768.084.825.054.203.314.042.69
19955.611.813.873.957.663.691.953.405.005.603.982.32
19963.822.465.153.684.483.685.451.094.883.166.004.77
19974.193.109.642.424.926.663.263.525.752.716.592.19
19983.684.113.136.314.4611.954.632.931.391.591.306.53
19999.282.334.272.294.353.563.193.051.972.042.992.50
20003.523.753.346.237.661.742.251.951.900.266.393.44
20013.218.542.732.425.544.472.774.071.794.615.093.32
20024.931.999.404.313.983.765.643.136.294.482.915.81
20031.598.472.304.6910.737.082.873.888.701.804.173.19
20043.605.774.816.696.903.393.194.244.554.905.215.93
20054.423.843.906.931.032.702.396.891.440.023.292.46
20066.572.692.904.144.952.192.645.204.002.984.053.41
20073.321.842.262.753.302.371.471.381.994.956.203.83
20084.762.535.567.205.542.214.321.670.885.031.756.72
20094.592.852.924.138.454.536.032.1411.086.490.673.99
20104.132.773.523.4816.434.965.866.991.172.495.411.87
20112.315.544.597.514.385.043.461.786.200.936.154.25
\n

141 rows \u00d7 12 columns

\n
\n\n\n\nNow, let's calculate the sample moments of interest, the means and variances by month:\n\n\n```python\nprecip_mean = precip.mean()\nprecip_mean\n```\n\n\n\n\n Jan 4.523688\n Feb 4.097801\n Mar 4.977589\n Apr 4.204468\n May 4.325674\n Jun 3.873475\n Jul 3.895461\n Aug 3.367305\n Sep 3.377660\n Oct 2.610500\n Nov 3.685887\n Dec 4.176241\n dtype: float64\n\n\n\n\n```python\nprecip_var = precip.var()\nprecip_var\n```\n\n\n\n\n Jan 6.928862\n Feb 5.516660\n Mar 5.365444\n Apr 4.117096\n May 5.306409\n Jun 5.033206\n Jul 3.777012\n Aug 3.779876\n Sep 4.940099\n Oct 2.741659\n Nov 3.679274\n Dec 5.418022\n dtype: float64\n\n\n\nWe then use these moments to estimate $\\alpha$ and $\\beta$ for each month:\n\n\n```python\nalpha_mom = precip_mean ** 2 / precip_var\nbeta_mom = precip_var / precip_mean\n```\n\n\n```python\nalpha_mom, beta_mom\n```\n\n\n\n\n (Jan 2.953407\n Feb 3.043866\n Mar 4.617770\n Apr 4.293694\n May 3.526199\n Jun 2.980965\n Jul 4.017624\n Aug 2.999766\n Sep 2.309383\n Oct 2.485616\n Nov 3.692511\n Dec 3.219070\n dtype: float64, Jan 1.531684\n Feb 1.346249\n Mar 1.077920\n Apr 0.979219\n May 1.226724\n Jun 1.299403\n Jul 0.969593\n Aug 1.122522\n Sep 1.462581\n Oct 1.050243\n Nov 0.998206\n Dec 1.297344\n dtype: float64)\n\n\n\nWe can use the `gamma.pdf` function in `scipy.stats.distributions` to plot the distributions implied by the calculated alphas and betas. 
For example, here is January. Since `beta_mom` is a scale parameter, it must be passed to `gamma.pdf` via the `scale` keyword (the third positional argument is `loc`):\n\n\n```python\nfrom scipy.stats.distributions import gamma\n\nprecip.Jan.hist(normed=True, bins=20)\nplt.plot(np.linspace(0, 10), gamma.pdf(np.linspace(0, 10), alpha_mom[0], scale=beta_mom[0]))\n```\n\nLooping over all months, we can create a grid of plots for the distribution of rainfall, using the gamma distribution:\n\n\n```python\naxs = precip.hist(normed=True, figsize=(12, 8), sharex=True, sharey=True, bins=15, grid=False)\n\nfor ax in axs.ravel():\n\n    # Get month\n    m = ax.get_title()\n\n    # Plot fitted distribution\n    x = np.linspace(*ax.get_xlim())\n    ax.plot(x, gamma.pdf(x, alpha_mom[m], scale=beta_mom[m]))\n\n    # Annotate with parameter estimates\n    label = 'alpha = {0:.2f}\\nbeta = {1:.2f}'.format(alpha_mom[m], beta_mom[m])\n    ax.annotate(label, xy=(10, 0.2))\n\nplt.tight_layout()\n```\n\nMaximum Likelihood\n==================\n\n**Maximum likelihood** (ML) fitting is usually more work than the method of moments, but it is preferred as the resulting estimator is known to have good theoretical properties. \n\nThere is a ton of theory regarding ML. We will restrict ourselves to the mechanics here.\n\nSay we have some data $y = y_1,y_2,\\ldots,y_n$ that is distributed according to some distribution:\n\n
\n$$Pr(Y_i=y_i | \\theta)$$\n
\n\nHere, for example, is a **Poisson distribution** that describes the distribution of some discrete variables, typically *counts*: \n\n\n```python\ny = np.random.poisson(5, size=100)\nplt.hist(y, bins=12, normed=True)\nplt.xlabel('y'); plt.ylabel('Pr(y)')\n```\n\nThe product $\\prod_{i=1}^n Pr(y_i | \\theta)$ gives us a measure of how **likely** it is to observe values $y_1,\\ldots,y_n$ given the parameters $\\theta$. \n\nMaximum likelihood fitting consists of choosing the appropriate function $l= Pr(Y|\\theta)$ to maximize for a given set of observations. We call this function the *likelihood function*, because it is a measure of how likely the observations are if the model is true.\n\n> Given these data, how likely is this model?\n\nIn the above model, the data were drawn from a Poisson distribution with parameter $\\lambda =5$.\n\n$$L(y|\\lambda=5) = \\frac{e^{-5} 5^y}{y!}$$\n\nSo, for any given value of $y$, we can calculate its likelihood:\n\n\n```python\npoisson_like = lambda x, lam: np.exp(-lam) * (lam**x) / (np.arange(x)+1).prod()\n\nlam = 6\nvalue = 10\npoisson_like(value, lam)\n```\n\n\n\n\n 0.041303093412337726\n\n\n\n\n```python\nnp.sum(poisson_like(yi, lam) for yi in y)\n```\n\n\n\n\n 11.499056250911673\n\n\n\n\n```python\nlam = 8\nnp.sum(poisson_like(yi, lam) for yi in y)\n```\n\n\n\n\n 7.8028592304816868\n\n\n\nWe can plot the likelihood function for any value of the parameter(s):\n\n\n```python\nlambdas = np.linspace(0,15)\nx = 5\nplt.plot(lambdas, [poisson_like(x, l) for l in lambdas])\nplt.xlabel('$\\lambda$')\nplt.ylabel('L($\\lambda$|x={0})'.format(x))\n```\n\nHow is the likelihood function different than the probability distribution function (PDF)? The likelihood is a function of the parameter(s) *given the data*, whereas the PDF returns the probability of data given a particular parameter value. 
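In practice, the product of many small probabilities quickly underflows, so computations maximize the log-likelihood instead. A sketch using `scipy.stats.poisson` (the seed and the grid of candidate values are arbitrary choices for this illustration):

```python
import numpy as np
from scipy.stats import poisson

np.random.seed(42)
y = np.random.poisson(5, size=100)

def log_like(lam, data):
    # log L(lambda | y) = sum_i log Pr(y_i | lambda)
    return poisson.logpmf(data, lam).sum()

# Evaluate the log-likelihood over a grid of candidate lambda values
lambdas = np.linspace(0.5, 15, 500)
ll = np.array([log_like(lam, y) for lam in lambdas])
lam_hat = lambdas[ll.argmax()]

# For the Poisson, the MLE has a closed form: the sample mean
print(lam_hat, y.mean())   # grid maximizer is close to the sample mean
```

Because the log is monotonic, the maximizer of the log-likelihood is the same as the maximizer of the likelihood itself.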
Here is the PDF of the Poisson for $\\lambda=5$.\n\n\n```python\nlam = 5\nxvals = np.arange(15)\nplt.bar(xvals, [poisson_like(x, lam) for x in xvals], width=0.2)\nplt.xlabel('x')\nplt.ylabel('Pr(X|$\\lambda$=5)')\n```\n\n*Why are we interested in the likelihood function?*\n\nA reasonable estimate of the true, unknown value for the parameter is one which **maximizes the likelihood function**. So, inference is reduced to an optimization problem.\n\nGoing back to the rainfall data, if we are using a gamma distribution (here parameterized by the rate $\\beta$) we need to maximize:\n\n$$\\begin{align}l(\\alpha,\\beta) &= \\sum_{i=1}^n \\log[\\beta^{\\alpha} x^{\\alpha-1} e^{-\\beta x}\\Gamma(\\alpha)^{-1}] \\cr \n&= n[(\\alpha-1)\\overline{\\log(x)} - \\bar{x}\\beta + \\alpha\\log(\\beta) - \\log\\Gamma(\\alpha)]\\end{align}$$\n\n*N.B.: It's usually easier to work in the log scale*\n\nwhere $n = 2012 - 1871 = 141$ and the bar indicates an average over all *i*. We choose $\\alpha$ and $\\beta$ to maximize $l(\\alpha,\\beta)$.\n\nNotice $l$ is infinite if any $x$ is zero. We do not have any zeros, but we do have an NA value for one of the October data, which we dealt with above.\n\n### Finding the MLE\n\nTo find the maximum of any function, we typically take the *derivative* with respect to the variable to be maximized, set it to zero and solve for that variable. \n\n$$\\frac{\\partial l(\\alpha,\\beta)}{\\partial \\beta} = n\\left(\\frac{\\alpha}{\\beta} - \\bar{x}\\right) = 0$$\n\nwhich can be solved as $\\beta = \\alpha/\\bar{x}$. However, plugging this into the derivative with respect to $\\alpha$ yields:\n\n$$\\frac{\\partial l(\\alpha,\\beta)}{\\partial \\alpha} = \\log(\\alpha) + \\overline{\\log(x)} - \\log(\\bar{x}) - \\frac{\\Gamma(\\alpha)'}{\\Gamma(\\alpha)} = 0$$\n\nThis has no closed form solution. 
We must use ***numerical optimization***!\n\nNumerical optimization algorithms take an initial "guess" at the solution, and **iteratively** improve the guess until it gets "close enough" to the answer.\n\nHere, we will use the *Newton-Raphson* method, which is a **root-finding algorithm**:\n\n
\n$$x_{n+1} = x_n - \\frac{f(x_n)}{f'(x_n)}$$\n
which is available to us via SciPy:


```python
from scipy.optimize import newton
```

Here is a graphical example of how Newton-Raphson converges on a solution, using an arbitrary function:


```python
%run newton_raphson_plot.py
```

To apply the Newton-Raphson algorithm, we need functions that return the **first and second derivatives** of the log likelihood with respect to the variable of interest: the first derivative is the function whose root we seek, and the second derivative is what `newton` uses to update its guess. The second derivative of the gamma log likelihood with respect to $\alpha$ is:

$$\frac{\partial^2 l(\alpha,\beta)}{\partial \alpha^2} = \frac{1}{\alpha} - \frac{\partial}{\partial \alpha} \left[ \frac{\Gamma'(\alpha)}{\Gamma(\alpha)} \right]$$


```python
from scipy.special import psi, polygamma

dlgamma = lambda a, log_mean, mean_log: np.log(a) - psi(a) - log_mean + mean_log
dl2gamma = lambda a, *args: 1./a - polygamma(1, a)
```

where `log_mean` and `mean_log` are $\log{\bar{x}}$ and $\overline{\log(x)}$, respectively. `psi` (the digamma function) and `polygamma` are the special functions that arise when you take first and second derivatives of $\log\Gamma(\alpha)$.


```python
# Calculate statistics
log_mean = precip.mean().apply(np.log)
mean_log = precip.apply(np.log).mean()
```

Time to optimize!


```python
# Alpha MLE for December
alpha_mle = newton(dlgamma, 2, dl2gamma, args=(log_mean[-1], mean_log[-1]))
alpha_mle
```




    3.5189679152399647



And now plug this back into the solution for beta:

$$ \beta = \frac{\alpha}{\bar{X}} $$
```python
beta_mle = alpha_mle/precip.mean()[-1]
beta_mle
```




    0.84261607548413797



We can compare the fit of the estimates derived from MLE to those from the method of moments:


```python
dec = precip.Dec
dec.hist(density=True, bins=10, grid=False)  # `normed` was renamed to `density` in newer matplotlib
x = np.linspace(0, dec.max())
# our beta is a *rate* parameter, while SciPy's gamma expects a *scale*, so pass scale=1/beta
plt.plot(x, gamma.pdf(x, alpha_mom[-1], scale=1./beta_mom[-1]), 'm-', label='Moment estimator')
plt.plot(x, gamma.pdf(x, alpha_mle, scale=1./beta_mle), 'r--', label='ML estimator')
plt.legend()
```

For some common distributions, SciPy includes methods for fitting via MLE:


```python
from scipy.stats import gamma

gamma.fit(precip.Dec)
```




    (2.2427517753152308, 0.65494604470188622, 1.570073932063466)



This fit is not directly comparable to our estimates, however, because SciPy's `gamma.fit` method fits a three-parameter version of the gamma distribution, with an additional location (shift) parameter.

### Model checking

An informal way of checking the fit of our parametric model is to compare the observed quantiles of the data to those of the theoretical model we are fitting it to. If the model is a good fit, the points should fall on a 45-degree reference line. This is called a **probability plot**.

SciPy includes a `probplot` function that generates probability plots based on the data and a specified distribution.


```python
from scipy.stats import probplot

# again, convert the fitted rate into SciPy's scale parameterization
probplot(precip.Dec, dist=gamma(3.51, scale=1./0.84), plot=plt);
```

### Example: truncated distribution

Suppose that we observe $Y$ truncated below at $a$ (where $a$ is known). If $X$ is the distribution of our observation, then:

$$ P(X \le x) = P(Y \le x|Y \gt a) = \frac{P(a \lt Y \le x)}{P(Y \gt a)}$$

(so, $Y$ is the original variable and $X$ is the truncated variable) 

Then $X$ has the density:

$$f_X(x) = \frac{f_Y (x)}{1-F_Y (a)} \quad \text{for } x \gt a$$ 

Suppose $Y \sim N(\mu, \sigma^2)$ and $x_1,\ldots,x_n$ are independent observations of $X$.
We can use maximum likelihood to find $\mu$ and $\sigma$. 

First, we can simulate a truncated distribution using a `while` statement to eliminate samples that are outside the support of the truncated distribution.


```python
x = np.random.normal(size=10000)

# Truncation point
a = -1

# Resample until all points meet criterion
x_small = x < a
while x_small.sum():
    x[x_small] = np.random.normal(size=x_small.sum())
    x_small = x < a

_ = plt.hist(x, bins=100)
```

We can construct a log likelihood for this function using the conditional form:

$$f_X(x) = \frac{f_Y (x)}{1-F_Y (a)} \quad \text{for } x \gt a$$ 

The denominator normalizes the truncated distribution so that it integrates to one. Note that the function below returns the *negative* log likelihood, since we will pass it to a minimizer:


```python
from scipy.stats.distributions import norm

trunc_norm = lambda theta, a, x: -(np.log(norm.pdf(x, theta[0], theta[1])) - 
                                   np.log(1 - norm.cdf(a, theta[0], theta[1]))).sum()
```

For this example, we will use the **Nelder-Mead simplex algorithm** as our optimization method. It has a couple of advantages: 

- it does not require derivatives
- it can optimize (minimize) a vector of parameters

SciPy implements this algorithm in its `fmin` function:


```python
from scipy.optimize import fmin

fmin(trunc_norm, np.array([1, 2]), args=(-1, x))
```

    Optimization terminated successfully.
             Current function value: 11084.885149
             Iterations: 47
             Function evaluations: 88





    array([ 0.00531007,  1.00652051])



In general, simulating data is a terrific way of testing your model before using it with real data.

## Kernel density estimates

In some instances, we may not be interested in the parameters of a particular distribution of data, but just a smoothed representation of the data at hand. In this case, we can estimate the distribution *non-parametrically* (i.e.
making no assumptions about the form of the underlying distribution) using kernel density estimation.


```python
# Some random data
y = np.random.normal(10, size=15)
y
```




    array([  9.68444175,   6.86641455,   8.90236824,   8.48662651,
             9.4599326 ,   9.83248454,   9.45367613,  12.95585035,
            10.58747989,  10.94829904,  10.48694903,   9.13200438,
             9.73979452,  10.14852737,   8.62582257])



The kernel estimator is a sum of symmetric densities centered at each observation. The selected kernel function determines the shape of each component, while the **bandwidth** determines their spread. For example, if we use a Gaussian kernel function, the variance acts as the bandwidth.


```python
x = np.linspace(7, 13, 100)
# Smoothing parameter
s = 0.3
# Calculate the kernels
kernels = np.transpose([norm.pdf(x, yi, s) for yi in y])
plt.plot(x, kernels, 'k:')
plt.plot(x, kernels.sum(1))
plt.plot(y, np.zeros(len(y)), 'ro', ms=10)
```

SciPy implements a Gaussian KDE that automatically chooses an appropriate bandwidth. Let's create a bi-modal distribution of data that is not easily summarized by a parametric distribution:


```python
# Create a bi-modal distribution with a mixture of Normals.
x1 = np.random.normal(0, 2, 50)
x2 = np.random.normal(5, 1, 50)

# Append by row
x = np.r_[x1, x2]
```


```python
plt.hist(x, bins=10, density=True)  # `normed` was renamed to `density` in newer matplotlib
```


```python
from scipy.stats import kde

density = kde.gaussian_kde(x)
xgrid = np.linspace(x.min(), x.max(), 100)
plt.hist(x, bins=8, density=True)
plt.plot(xgrid, density(xgrid), 'r-')
```

### Exercise: Comparative Chopstick Effectiveness

A few researchers set out to determine the optimal chopstick length. The dataset `chopstick-effectiveness.csv` includes measurements of "Food Pinching Efficiency" across a range of chopstick lengths for 31 individuals.

Use the method of moments or MLE to calculate the mean and variance of food pinching efficiency for each chopstick length.
This means you need to select an appropriate distributional form for this data.


```python
# Write your answer here
```
---

#### Digital Signal Processing (Procesamiento Digital de Señales)

# Practical Assignment No. 0
#### Yanina Corsaro


# Introduction
Jupyter Notebook is a tool for producing technical reports, since it allows the following to interact in the same environment: 
1. an elementary text processor (Markdown format) that lets you emphasize text in *italics* or **bold** very legibly (double-clicking this text shows the Markdown-style source). It comes with predefined styles:

# Heading 1
## Heading 2
### Heading 3

and can also include links to other pages, such as [this page](https://medium.com/ibm-data-science-experience/markdown-for-jupyter-notebooks-cheatsheet-386c05aeebed) where you will find more features of the **Markdown** language

2. the ability to include LaTeX-style mathematical notation, both in display form

\begin{equation}
T(z) = \frac{Y(z)}{X(z)} = \frac{ b_2 \, z^{-2} + b_1 \, z^{-1} + b_0 }
{a_2 \, z^{-2} + a_1 \, z^{-1} + a_0}
\end{equation}

and *inline* within the paragraph itself $y[k] = \frac{1}{a_0} \left( \sum_{m=0}^{M} b_m \; x[k-m] - \sum_{n=1}^{N} a_n \; y[k-n] \right) $

3. the possibility of including Python scripts, like the ones we will use for the simulations in the course assignments. In this case we will use *testbench0.py* as an example. Once we have tried it and are sure it behaves as expected in *Spyder*, we can include the simulation results almost transparently.
We just have to add a code cell containing the code, and the results are included directly in this document.


```python
# Modules for Jupyter
import warnings
warnings.filterwarnings('ignore')

import numpy as np
import matplotlib as mpl
#%% Library initialization
# Setup inline graphics: we do this so that the size of the output
# is a bit better suited to the size of the document
mpl.rcParams['figure.figsize'] = (10,10)

import matplotlib.pyplot as plt
import pdsmodulos as pds

#%% These lines only affect the presentation of the plots,
# NOT IMPORTANT
fig_sz_x = 14
fig_sz_y = 13
fig_dpi = 80 # dpi

fig_font_family = 'Ubuntu'
fig_font_size = 16

plt.rcParams.update({'font.size':fig_font_size})
plt.rcParams.update({'font.family':fig_font_family})

##############################################
#%% The IMPORTANT part starts here           #
##############################################

def my_testbench( sig_type ):
    
    # General simulation data
    fs = 1000.0 # sampling frequency (Hz)
    N = 1000    # number of samples
    
    ts = 1/fs   # sampling period
    df = fs/N   # spectral resolution
    
    # temporal sampling grid
    tt = np.linspace(0, (N-1)*ts, N).flatten()
    
    # frequency sampling grid
    ff = np.linspace(0, (N-1)*df, N).flatten()

    # Matrix concatenation:
    # we store the signals we create by populating the following empty matrix
    x = np.array([], dtype=float).reshape(N,0)  # note: np.float is deprecated, plain float works
    ii = 0
    
    # flow-control structures
    if sig_type['tipo'] == 'senoidal':
        
        # compute each sinusoid according to its parameters
        for this_freq in sig_type['frecuencia']:
            # note that the tuples inside the dictionaries can also be indexed with "ii"
            aux = sig_type['amplitud'][ii] * np.sin( 2*np.pi*this_freq*tt + sig_type['fase'][ii] )
            # to concatenate horizontally, the number of ROWS must match
            x = np.hstack([x, aux.reshape(N,1)] )
            ii += 1
        
    elif sig_type['tipo'] == 'ruido':
        
        # compute each uncorrelated (white) Gaussian noise signal according to
        # its variance parameter
        for this_var in sig_type['varianza']:
            aux = np.sqrt(this_var) * np.random.randn(N,1)
            # to concatenate horizontally, the number of ROWS must match
            x = np.hstack([x, aux] )
        
        # We can add some extra data to the description programmatically
        # {0:.3f} means 0: first argument of format
        #               .3f floating-point format, with 3 decimals
        # $ ... $ indicates that we include LaTeX syntax: $\hat{{\sigma}}^2$
        # (note: we update the *parameter* sig_type here, not the global sig_props)
        sig_type['descripcion'] = [ sig_type['descripcion'][ii] + ' - $\hat{{\sigma}}^2$ :{0:.3f}'.format( np.var(x[:,ii])) for ii in range(0,len(sig_type['descripcion'])) ]
        
    else:
        
        print("Signal type not implemented.") 
        return
    
    #%% Graphical presentation of the results
    
    plt.figure(1)
    line_hdls = plt.plot(tt, x)
    plt.title('Signal: ' + sig_type['tipo'] )
    plt.xlabel('time [seconds]')
    plt.ylabel('Amplitude [V]')
    # plt.grid(which='both', axis='both')
    
    # show a legend for each signal
    axes_hdl = plt.gca()
    
    # this kind of syntax is *VERY* Pythonic
    axes_hdl.legend(line_hdls, sig_type['descripcion'], loc='upper right' )
    
    plt.show()
```

Since our *testbench* has been written in a functional style, we can obtain different behaviors by calling the function *my_testbench()* with different parameters, as we show below, first with sinusoids:


```python
sig_props = { 'tipo': 'senoidal', 
              'frecuencia': (3, 10, 20), # tuples used for the frequencies
              'amplitud': (1, 1, 1),
              'fase': (0, 0, 0)
            } 
sig_props2 = { 'tipo': 'senoidal', 
               'frecuencia': (3, 10, 20), # tuples used for the frequencies
               'amplitud': (1, 1, 1),
               'fase': (10, 5, 0)
             } 
# We can also add a description field programmatically
# this kind of syntax is *VERY* Pythonic
sig_props['descripcion'] = [ str(a_freq) + ' Hz' for a_freq in sig_props['frecuencia'] ]
sig_props2['descripcion'] = [ str(a_freq) + ' Hz' for a_freq in sig_props2['frecuencia'] ] 
# We simply invoke our testbench: 
my_testbench( sig_props )
my_testbench( sig_props2 )
```

And now with a random signal, in this case uncorrelated white Gaussian noise of variance $\sigma^2$:


```python
# Use CTRL+1 to comment or uncomment the block below.
sig_props = { 'tipo': 'ruido', 
              'varianza': (1, 1, 1) # tuples used for the variances
            } 
sig_props['descripcion'] = [ '$\sigma^2$ = ' + str(a_var) for a_var in sig_props['varianza'] ]
 
# We simply invoke our testbench: 
my_testbench( sig_props )

```

As the figure above shows, when we sample a statistical distribution with zero mean and variance $\sigma^2=1$, we obtain realizations whose estimated parameter $\hat\sigma^2$ deviates from the true value (bias). We will study the bias and variance of some estimators when we cover **Spectral Estimation**.

# Once you are done ...
Once you have finished preparing the document, we can take advantage of a very important feature of this kind of document: the possibility of sharing it *online* through the [nbviewer page](http://nbviewer.jupyter.org/). For that to work, your notebook and all its associated resources must be hosted in a [Github](https://github.com/) repository.
As an example, you can see this very document available [online](http://nbviewer.jupyter.org/github/marianux/pdstestbench/blob/master/notebook0.ipynb).

# Test heading

I added my name and ran the code so that the figures appear. I also added another signal, arbitrarily changing its phase.
---

Probabilistic Programming
=====
and Bayesian Methods for Hackers 
========

##### Version 0.1

`Original content created by Cam Davidson-Pilon`

`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`
___


Welcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!

Chapter 1
======
***

The Philosophy of Bayesian Inference
------
 
> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...

If you think this way, then congratulations, you already are thinking Bayesian! **Bayesian inference is simply updating your beliefs after considering new evidence.** A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, **we can never be 100% sure that our code is bug-free unless we test it on every possible problem**; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain.
Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. 


### The Bayesian state of mind


Bayesian inference differs from more traditional statistical inference by **preserving *uncertainty***. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. 

The Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. 

For this to be clearer, we consider an alternative interpretation of probability: ***frequentists*, adherents of the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title)**. For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that across all these realities, the frequency of occurrences defines the probability. 

Bayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of ***belief***, or **confidence**, in an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you that candidate *A* will win?

Notice in the paragraph above, **I assigned the belief (probability) measure to an *individual*, not to Nature**. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:

- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. 

- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. 

- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. 


This philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. 

To align ourselves with traditional probability notation, **we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability***.

John Maynard Keynes, a great economist and thinker, said "When the facts change, I change my mind. What do you do, sir?" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:

1\. $P(A): \;\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\;\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.

2\. $P(A): \;\;$ This big, complex code likely has a bug in it. $P(A | X): \;\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.

3\. $P(A):\;\;$ The patient could have any number of diseases. $P(A | X):\;\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.


**It's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence** (i.e. we put more weight, or confidence, on some beliefs versus others). 

By introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*. 


### Bayesian Inference in Practice

If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would differ in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average), whereas the Bayesian function would return *probabilities*.

For example, in our debugging problem above, calling the frequentist function with the argument "My code passed all $X$ tests; is my code bug-free?" would return a *YES*. On the other hand, asking our Bayesian function "Often my code has bugs. My code passed all $X$ tests; is my code bug-free?" would return something very different: probabilities of *YES* and *NO*. The function might return:

> *YES*, with probability 0.8; *NO*, with probability 0.2

This is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *"Often my code has bugs"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see that excluding it has its own consequences. 


#### Incorporating evidence

As we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like "I expect the sun to explode today", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.

Denote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \rightarrow \infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, **for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty*** that reflects the instability of statistical inference of a small-$N$ dataset. 

One may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1] before making such a decision:

> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is "large enough," you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were "enough" you'd already be on to the next problem for which you need more data.

### Are frequentist methods incorrect then? 

**No.**

Frequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.


#### A note on *Big Data*

Paradoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead in the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask "Do I really have big data?")

The much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big-enough* datasets. 


### Our Bayesian framework

We are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.

Secondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:

\begin{align}
 P( A | X ) = & \frac{ P(X | A) P(A) } {P(X) } \\\\[5pt]
& \propto P(X | A) P(A)\;\; (\propto \text{is proportional to })
\end{align}

The above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference.
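As a quick numeric check of the formula in a plain conditional-probability setting, here is a small sketch (not from the book); every number below is a made-up assumption for illustration, modeled on a diagnostic test:

```python
# A numeric check of Bayes' Theorem as plain conditional probability.
# All numbers are assumptions for illustration: a test with 95% sensitivity,
# a 5% false-positive rate, and a condition present in 1% of the population.
p_A = 0.01                 # P(A): condition present
p_X_given_A = 0.95         # P(X | A): test positive given condition
p_X_given_not_A = 0.05     # P(X | ~A): false positive

# P(X) by the law of total probability
p_X = p_X_given_A * p_A + p_X_given_not_A * (1 - p_A)

# Bayes' Theorem
p_A_given_X = p_X_given_A * p_A / p_X
print(round(p_A_given_X, 3))   # -> 0.161
```

Even with a positive result, the posterior stays modest because the prior is small; the identity itself is just arithmetic.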
Bayesian inference merely uses it to connect prior probabilities $P(A)$ with updated posterior probabilities $P(A | X )$.

##### Example: Mandatory coin-flip example

Every statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. 

We begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?

Below we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).


```python
"""
The book uses a custom matplotlibrc file, which provides the unique styles for
matplotlib plots. If executing this book, and you wish to use the book's
styling, provided are two options:
    1. Overwrite your own matplotlibrc file with the rc-file provided in the
       book's styles/ dir. See http://matplotlib.org/users/customizing.html
    2. Also in the styles is bmh_matplotlibrc.json file. This can be used to
       update the styles in only this notebook. Try running the following code:

        import json
        s = json.load(open("../styles/bmh_matplotlibrc.json"))
        matplotlib.rcParams.update(s)

"""

# The code below can be passed over, as it is currently not important, plus it
# uses advanced topics we have not covered yet. LOOK AT PICTURE, MICHAEL!
%matplotlib inline
from IPython.core.pylabtools import figsize
import numpy as np
from matplotlib import pyplot as plt
figsize(11, 9)

import scipy.stats as stats

dist = stats.beta
n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
data = stats.bernoulli.rvs(0.5, size=n_trials[-1])
x = np.linspace(0, 1, 100)

# For the already prepared, I'm using Binomial's conj. prior.
for k, N in enumerate(n_trials):
    # integer division: `plt.subplot` expects an integer row count in Python 3
    sx = plt.subplot(len(n_trials)//2, 2, k+1)
    plt.xlabel("$p$, probability of heads") \
        if k in [0, len(n_trials)-1] else None
    plt.setp(sx.get_yticklabels(), visible=False)
    heads = data[:N].sum()
    y = dist.pdf(x, 1 + heads, 1 + N - heads)
    plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads))
    plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4)
    plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)

    leg = plt.legend()
    leg.get_frame().set_alpha(0.4)
    plt.autoscale(tight=True)


plt.suptitle("Bayesian updating of posterior probabilities",
             y=1.02,
             fontsize=14)

plt.tight_layout()
```

The posterior probabilities are represented by the curves, and **our uncertainty is proportional to the width of the curve**. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). 

Notice that the plots are not always *peaked* at 0.5. There is no reason they should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?).
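To make that extreme-data scenario concrete, here is a small sketch (not from the book) of the conjugate update behind the plots above: a flat Beta(1, 1) prior updated to Beta(1 + heads, 1 + tails), using the 8-tails-and-1-head counts from the text:

```python
# Conjugate Beta-Binomial update with a flat Beta(1, 1) prior.
heads, tails = 1, 8
a, b = 1 + heads, 1 + tails           # posterior is Beta(2, 9)

post_mean = a / (a + b)               # posterior mean of p
post_mode = (a - 1) / (a + b - 2)     # posterior mode (peak of the curve)

print(post_mean)   # ~0.182, well below 0.5
print(post_mode)   # ~0.111
```

Both summaries sit far below 0.5, matching the intuition that the posterior has drifted away from "fair coin".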
As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
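Before plotting, it is worth sanity-checking the algebra numerically. The short snippet below is an editorial addition, not part of the original text: it evaluates Bayes' rule directly, with $P(X|A) = 1$ and $P(X|\sim A) = 0.5$ as assumed above, and confirms the result matches the closed form $2p/(1+p)$.

```python
# Direct evaluation of Bayes' rule for the "bugs in my code" example.
# P(A|X) = P(X|A) P(A) / [ P(X|A) P(A) + P(X|~A) P(~A) ]
def posterior_no_bugs(p, p_pass_given_bugs=0.5):
    return 1.0 * p / (1.0 * p + p_pass_given_bugs * (1.0 - p))

# agrees with the closed form 2p / (1 + p) for any prior p
for p in [0.1, 0.2, 0.5, 0.9]:
    assert abs(posterior_no_bugs(p) - 2 * p / (1 + p)) < 1e-12

print(posterior_no_bugs(0.2))  # a 20% prior updates to 1/3
```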
\n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability **distribution** function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous outcomes, i.e. they combine the above two categories. \n\n### Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability **mass** function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's start with a very useful one. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots $$\n\n$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\\lambda$ can be any positive number. By increasing $\\lambda$, we add more probability to larger values, and conversely by decreasing $\\lambda$ we add more probability to smaller values. One can describe $\\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0, 1, 2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. 
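As a quick check (this snippet is an editorial addition, not from the book), the mass function above can be evaluated directly and compared against `scipy.stats.poisson.pmf`:

```python
import numpy as np
import scipy.stats as stats
from scipy.special import factorial

def poisson_pmf(k, lam):
    # P(Z = k) = lam**k * exp(-lam) / k!
    return lam**k * np.exp(-lam) / factorial(k)

k = np.arange(10)
assert np.allclose(poisson_pmf(k, 4.25), stats.poisson.pmf(k, 4.25))

# E[Z | lam] = lam; the truncated sum below is already very close to 1.5
print((k * poisson_pmf(k, 1.5)).sum())
```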
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\n**We will use this property often, so it's useful to remember**. Below, we plot the probability mass function for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability to larger values. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=3)\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=3)\n\nplt.xticks(a, a)  # bars are centred on their x positions in modern matplotlib\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\");\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability **density** function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of a continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. 
But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. 
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. 
Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. 
Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. **A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data**. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=1}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC3\n-----\n\nPyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. 
One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.\n\nWe will model the problem above using PyMC3. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC3 code is easy to read. The only novel thing should be the syntax. 
Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables.\n\n\n```python\nimport pymc3 as pm\nimport theano.tensor as tt\n\nwith pm.Model() as model:\n alpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\n lambda_1 = pm.Exponential(\"lambda_1\", alpha)\n lambda_2 = pm.Exponential(\"lambda_2\", alpha)\n \n tau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data - 1)\n```\n\nIn the code above, we create the PyMC3 variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC3's *stochastic variables*, so-called because they are treated by the back end as random number generators.\n\n\n```python\nwith model:\n idx = np.arange(n_count_data) # Index\n lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`.\n\nNote that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n\n```python\nwith model:\n observation = pm.Poisson(\"obs\", lambda_, observed=count_data)\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword. \n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. 
This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```python\n### Mysterious code to be explained in Chapter 3.\nwith model:\n step = pm.Metropolis()\n trace = pm.sample(10000, tune=5000,step=step)\n```\n\n Multiprocess sampling (4 chains in 4 jobs)\n CompoundStep\n >Metropolis: [tau]\n >Metropolis: [lambda_2]\n >Metropolis: [lambda_1]\n Sampling 4 chains: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 60000/60000 [00:13<00:00, 4577.12draws/s]\n The number of effective samples is smaller than 25% for some parameters.\n\n\n\n```python\nlambda_1_samples = trace['lambda_1']\nlambda_2_samples = trace['lambda_2']\ntau_samples = trace['tau']\n```\n\n\n```python\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", density=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", density=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, 
len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\n**Notice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential**. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. 
By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. 
\n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of the tau samples for which 'day' precedes the\n # switchpoint (day < tau), i.e. 'day' falls in the lambda_1 regime\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. 
(In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\n#type your code here.\nlambda_1_samples.mean(), lambda_2_samples.mean()\n```\n\n\n\n\n (17.760179819690123, 22.710926545663497)\n\n\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```python\n#type your code here.\n(lambda_1_samples/lambda_2_samples).mean()\n```\n\n\n\n\n 0.7832074809697906\n\n\n\n\n```python\nlambda_1_samples.mean()/lambda_2_samples.mean()\n```\n\n\n\n\n 0.7820103589336522\n\n\n\nNote that the two results are indeed different: in general, the mean of a ratio is not equal to the ratio of the means.\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45? That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\n#type your code here.\nidx = tau_samples < 45\nlambda_1_samples[idx].mean()\n```\n\n\n\n\n 17.757353124712516\n\n\n\n### References\n\n\n- [1] Gelman, Andrew. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg). Web. 22 Jan 2013.\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Salvatier, J, Wiecki TV, and Fonnesbeck C. (2016) Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55.\n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. 
Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. .\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "09c222405244e55dc877caa546e30a031e7b27f2", "size": 301422, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chapter1_Introduction/Ch1_Introduction_PyMC3.ipynb", "max_stars_repo_name": "abarbosa94/bayesian_methods_hackers", "max_stars_repo_head_hexsha": "55713602fb823a5524d2ed078ac070f75c9ef28d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-09-09T00:44:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-09T00:44:57.000Z", "max_issues_repo_path": "Chapter1_Introduction/Ch1_Introduction_PyMC3.ipynb", "max_issues_repo_name": "abarbosa94/bayesian_methods_hackers", "max_issues_repo_head_hexsha": "55713602fb823a5524d2ed078ac070f75c9ef28d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter1_Introduction/Ch1_Introduction_PyMC3.ipynb", "max_forks_repo_name": "abarbosa94/bayesian_methods_hackers", "max_forks_repo_head_hexsha": "55713602fb823a5524d2ed078ac070f75c9ef28d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 262.56271777, "max_line_length": 90704, "alphanum_fraction": 0.8980134164, "converted": true, "num_tokens": 11586, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.43014734858584286, "lm_q2_score": 0.2974699363766584, "lm_q1q2_score": 0.127955904416419}} {"text": "```python\n%run common-imports.ipynb\n```\n\n\n# Residuals variance ($\\sigma_\\epsilon$), RSS and $R^2$\n\n## Q:\n''*What is the relationship between variance of the response and $R^2$?*''\n\n## Answer\n\n\n\nWe assume that this question is asked in the context of (ordinary) linear regression models, which we are currently studying. \n\nFirst, let us clarify our notation. We will take $y_i$ to be the response value of a datum used to train a linear regression model. Once linear regression has learned an estimator $\\hat y = f(\\mathbf{x})$ from the training data, the prediction it provides for a given input $\\mathbf x_i$ is $\\hat y_i$. For a given dataset $\\mathscr{D}$ of $n$ instances, one can then compute the residual error of each prediction as:\n\n\n\\begin{equation} \\epsilon_i = y_i - \\hat y_i \\end{equation}\n\nThen a measure of error, the residual sum of squares (RSS), is:\n\n$$ \\begin{aligned} \nRSS &= \\sum_{i=1}^n \\epsilon_i^2 \\\\\n &= \\sum_{i=1}^n (y_i -\\hat y_i)^2 \\\\\n\\end{aligned}\n$$ \n\n\nIn fact, $R^2$ is closely related to the variance of the **residuals**, $\\sigma_\\epsilon^2$, rather than to the variance of the response variable itself. Let us explore this systematically.\n\nOnce a linear regression model has been fit to the data, in general, the mean of the residuals $\\mu_\\epsilon$ will be close to zero, i.e. $\\mathbf{\\mu_\\epsilon \\approx 0}$. 
With this in hand, let us consider the variance of the residual errors over the training dataset $\mathscr{D}$ of $n$ sample instances. Then:\n\n$$\n\begin{aligned}\sigma_\epsilon^2 &= \frac{1}{n-1}\sum_1^n (\epsilon_i - \mu_\epsilon)^2 \\ \n& = \frac{1}{n-1}\sum_1^n \epsilon_i^2 \text{ since } \mu_\epsilon \approx 0 \\\n\end{aligned}\n$$\n\nNow consider the definition of TSS (the total sum of squares), which is the RSS of the null hypothesis $\phi$, where $\epsilon_{i,\phi} = (y_i -\hat y_i^\phi), \text{ here } \hat y_i^\phi \text { is the prediction of the null hypothesis and thus always equal to } \bar y$. Thus $\epsilon_{i,\phi} = (y_i - \bar y)$.\n\n$$\begin{aligned} \nTSS &= \sum_{i=1}^n \epsilon_{i,\phi}^2 \\\n&=(n-1)\sigma_\phi^2 \\\n\end{aligned}\n$$\n\nLikewise, the RSS of a given hypothesis $h$, with errors $\epsilon_{i, h} = (y_i - \hat y_{i,h})$, is:\n\n$$\begin{aligned} \nRSS &= \sum_{i=1}^n \epsilon_{i,h}^2 \\\n&=(n-1)\sigma_h^2 \\\n\end{aligned}\n$$\n\nThe coefficient of determination, $R^2$, is defined as:\n\n\begin{equation}\nR^2 = \frac{TSS - RSS}{TSS}\n\end{equation}\n\n\nSubstituting the values of TSS and RSS from the prior equations in terms of the variances of the residuals, we therefore get:\n\n\begin{equation}\nR^2 = \frac{\sigma_\phi^2 - \sigma_h^2}{\sigma_\phi^2}\n\end{equation}\n\n\nThus, we have arrived at the relationship between RSS, $R^2$ and the variance of the residual errors.\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "01c6df90b6119bb7c2040130fe0f4c8a983e9b4f", "size": 10359, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebook/others-01-rss-variance-r2.ipynb", "max_stars_repo_name": "praveenhm/machine-learning-tutorials", "max_stars_repo_head_hexsha": "0343004b3740f87d3636948428e8e28e6cf43ce0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, 
"max_issues_repo_path": "notebook/others-01-rss-variance-r2.ipynb", "max_issues_repo_name": "praveenhm/machine-learning-tutorials", "max_issues_repo_head_hexsha": "0343004b3740f87d3636948428e8e28e6cf43ce0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebook/others-01-rss-variance-r2.ipynb", "max_forks_repo_name": "praveenhm/machine-learning-tutorials", "max_forks_repo_head_hexsha": "0343004b3740f87d3636948428e8e28e6cf43ce0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.5754716981, "max_line_length": 436, "alphanum_fraction": 0.5038131094, "converted": true, "num_tokens": 1645, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3380771374883919, "lm_q2_score": 0.3775406687981454, "lm_q1q2_score": 0.12763786859273005}} {"text": "```python\n%run ../../common/import_all.py\n\nfrom common.setup_notebook import set_css_style, setup_matplotlib, config_ipython\nconfig_ipython()\nsetup_matplotlib()\nset_css_style()\n```\n\n\n\n\n\n\n\n\n\n\n# The Maximum Likelihood, Maximum a Posteriori and Expectation-maximisation estimation methods\n\n## The likelihood\n\nImagine you have a statistical model, that is, a mathematical description of your data which depends on some parameters $\\theta$. 
The *likelihood function*, usually indicated as $\mathcal{L}$, is a function of these parameters and represents the probability of observing evidence (observed data) $E$ given said parameters:\n\n$$\n\mathcal{L} = P(E \ | \ \theta)\n$$\n\nBecause it is a function of the parameters given the outcome, you write\n\n$$\n\mathcal{L}(\theta \ | \ E) = P(E \ | \ \theta)\n$$\n\nThe difference between *probability* and *likelihood* is quite subtle in that in common language they are casually swapped, but they represent different things. The probability measures the outcomes observed as a function of the parameters $\theta$ of the underlying model. But in reality $\theta$ is unknown and in fact we go through the reverse process: estimating the parameters given the evidence we observe. For this, we use the likelihood, which is defined as above because we maximise it so as to satisfy the equality above. This is exactly what the ML estimation does, as per below.\n\nBear in mind that the likelihood is a function of $\theta$. 
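To make this concrete, here is a small sketch (not from the original notebook) that evaluates the Bernoulli likelihood as a function of the parameter $p$ for a fixed, hypothetical set of coin flips, and locates its maximum on a grid:

```python
import numpy as np

# Hypothetical evidence E: ten coin flips, seven heads
flips = np.array([1, 1, 1, 0, 1, 0, 1, 1, 0, 1])

def likelihood(p, data):
    """Bernoulli likelihood L(p | data) = prod_i p^x_i * (1 - p)^(1 - x_i)."""
    return np.prod(p ** data * (1 - p) ** (1 - data))

grid = np.linspace(0.01, 0.99, 99)
L = np.array([likelihood(p, flips) for p in grid])
p_hat = grid[np.argmax(L)]        # the maximiser is the sample mean, 0.7
```

The maximiser coincides with the sample mean, anticipating the analytical MLE result derived below.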
\n\n## The MLE method\n\nMaximum Likelihood Estimation (MLE) is a procedure to find the parameters of a statistical model via the maximisation of the likelihood, so as to maximise the agreement between the model and the observed data.\n\nThe maximisation of the likelihood is usually performed via the maximisation of its logarithm, as it is much more convenient; the logarithm is a monotonic function, so the procedure is legitimate.\n\n### Example: a Bernoulli distribution\n\nThe likelihood function for a [Bernoulli distribution](../distributions-measures/famous-distributions.ipynb#Bernoulli) ($x_i \in \{0, 1\}$) is, for parameter $p$: \n \n\begin{align}\n\mathcal{L}(x_1, x_2, \ldots, x_n \ | \ p) &= P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \ | \ p) \\\n&= p^{x_1}(1-p)^{1-x_1} \cdot \ldots \cdot p^{x_n}(1-p)^{1-x_n} \\\n&= p^{\sum_i x_i}(1-p)^{\sum_i(1-x_i)} \\\n&= p^{\sum_i x_i}(1-p)^{n -\sum_i x_i}\n\end{align}\n\nso that if we take the logarithm, we get\n\n$$\n\log \mathcal{L} = \sum_i x_i \log p + \Big(n - \sum_i x_i\Big) \log (1-p) \ .\n$$\n\nTo maximise it, we compute and nullify the first derivative\n\n$$\n\frac{d \log \mathcal{L}}{d p} = \frac{\sum_i x_i}{p} - \frac{n - \sum_i x_i}{1-p} = 0\n$$\n\nwhich leads to\n\n$$\n\sum_i x_i - p \sum_i x_i = np - p \sum_i x_i\n$$\n\nand finally to\n\n$$\np = \frac{\sum_i x_i}{n}\n$$\n\n### Example: estimating the best mean of some data\n\nThis example is reported from [[here]](#2). Let us assume we know the weights of women are normally distributed with a mean $\mu$ and standard deviation $\sigma$. A random sample of $10$ women is (in pounds):\n\n$$\n115, 122, 130, 127, 149, 160, 152, 138, 149, 180\n$$\n\nWe want to estimate $\mu$. 
We know\n\n$$\nP(x_i ; \mu, \sigma) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{- \frac{(x_i - \mu)^2}{2 \sigma^2}}\n$$\n\nThe likelihood is (note that the $X_i$ are independent)\n\n\begin{align}\n\mathcal{L}(x_i | \mu, \sigma) &= P(X_1=x_1, \ldots, X_n=x_n) \\\n&= \prod_i P(x_i; \mu, \sigma) \\\n&= \sigma^{-n} (2 \pi)^{-n/2} e^{- \frac{1}{2 \sigma^2} \sum_i (x_i - \mu)^2}\n\end{align}\n\nNow, again it is easier to work with the logarithm:\n\n$$\n\log \mathcal{L} = -n \log \sigma - \frac{n}{2} \log 2 \pi - \frac{1}{2 \sigma^2} \sum_i (x_i - \mu)^2\n$$\n\nso that \n\n$$\n\frac{d \log \mathcal{L}}{d \mu} = \frac{1}{\sigma^2} \sum_i (x_i - \mu) = 0\n$$\n\n$$\n\sum_i x_i - n \mu = 0\n$$\n\n$$\n\mu = \frac{\sum_i x_i}{n}\n$$\n\nand so the maximum likelihood estimate for the given sample is 142.2. We could do the same to estimate $\sigma$, obtaining (it can be proven through the second derivative that this is a maximum)\n\n$$\n\sigma^2 = \frac{\sum_i (x_i - \mu)^2}{n}\n$$\n\n## The MAP method\n\nThe Maximum a Posteriori (MAP) estimation method uses the mode of the posterior to estimate the unknown parameters.\n\nFrom Bayes' theorem, the posterior is expressed as \n\n$$\nP(\theta \ | \ x) = \frac{P(x \ | \ \theta) P(\theta)}{\int d \theta' P(x \ | \ \theta') P(\theta')} \ ,\n$$\n\nwith $\theta$ being the parameters of the statistical model and $x$ the observed data. The MAP method estimates $\theta$ as the one which maximises the posterior; note that the denominator is just a normalisation factor: \n\n$$\n\hat{\theta}_{MAP}(x) = arg \max_\theta P(\theta \ | \ x) = arg \max_\theta P(x \ | \ \theta) P(\theta) \ .\n$$\n\nThis means exactly taking the mode of the posterior distribution.\n\nIn the case of a uniform prior, the MAP estimation is equal to the ML estimation, as we just get to maximise the likelihood because the prior becomes a constant factor. 
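That uniform-prior case is easy to check numerically with a conjugate Beta prior on a Bernoulli parameter (a sketch with hypothetical data; Beta$(1,1)$ is the uniform prior on $[0, 1]$):

```python
import numpy as np

x = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0, 1])    # hypothetical Bernoulli sample
n, s = len(x), int(x.sum())

def beta_bernoulli_map(alpha, beta):
    """MAP estimate: mode of the conjugate Beta(alpha + s, beta + n - s) posterior."""
    return (alpha + s - 1) / (alpha + beta + n - 2)

p_mle = s / n
p_flat = beta_bernoulli_map(1, 1)    # uniform prior: MAP coincides with the MLE
p_info = beta_bernoulli_map(5, 5)    # informative prior pulls the estimate toward 0.5
```

With the flat prior the MAP estimate reproduces the MLE exactly, while the informative prior shrinks the estimate, which is the regularisation effect discussed below.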
For the computation, conjugate priors are particularly handy. \n\nAs in the case of the MLE, what we really do is maximise the logarithm of the posterior rather than the posterior itself, so we do \n\n$$\n\hat \theta_{MAP}(x) = arg \max_{\theta} \log P(\theta \ | \ x) = arg \max_{\theta} [\log P(x \ | \ \theta) + \log P(\theta)] \ .\n$$\n\n## MAP and ML\n\nIn the last equation, if we only had the first term to maximise, we would be doing a ML estimation. The second term is the one accounting for the presence of a prior: this is why the MAP method can be seen as a regularised ML, since prior knowledge is factored into the computation. \n\nWhile the ML method can be seen as responding to a frequentist approach, the MAP method responds to a Bayesian approach. \n\n## The Expectation-Maximisation algorithm\n\nThe EM algorithm can be used to find the solution of MLE or MAP when some data is missing, meaning there are some latent variables not observed.\n\nLet's say that for the random variable $x$ we have the $n$ observations\n\n$$\nx_1, \ldots, x_n \ ,\n$$\n\nwhich depend on parameters $\theta$, and that the goal is to find the parameters $\theta$ that maximise a likelihood of the form \n\n$$\nL = \sum_z P_\theta(x, z) \ ,\n$$\n\nmeaning it is a sum over the latent variables $z$; this makes the problem difficult to solve analytically. 
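To make the latent-variable setup concrete, here is a minimal sketch (not part of the original notebook) of EM fitting a two-component Gaussian mixture with known unit variances; the unobserved component label of each point plays the role of $z$, and the two alternating steps are described below.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data: two Gaussian clusters with unknown means and known sigma = 1
x = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])

mu = np.array([-1.0, 1.0])      # initial guesses for the component means
pi = np.array([0.5, 0.5])       # mixing weights

def normal_pdf(v, m):
    return np.exp(-0.5 * (v - m) ** 2) / np.sqrt(2 * np.pi)

for _ in range(50):
    # E step: responsibilities P(z = k | x_i, current parameters)
    dens = pi * normal_pdf(x[:, None], mu[None, :])
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M step: parameters maximising the expected complete-data log-likelihood
    pi = resp.mean(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
```

After a few dozen iterations the estimated means settle near the true values of the two clusters.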
\n\nThe EM algorithm updates the parameters in steps, which means it risks obtaining a local rather than a global maximum.\n\n### The E step\n\nIn the E phase (time $t$), the expected value of $\log L$ is computed with respect to the conditional distribution of $z$ given $x$ under the current estimate of the parameters $\theta^t$:\n\n$$\n\bar L(\theta | \theta^t) = \mathbb E_{z | x, \theta^t} [\log L (\theta, x)]\n$$\n\nThis means that the log-likelihood is evaluated using the current state of the parameters.\n\n### The M step\n\nIn the M phase (time $t+1$), we find the parameters which maximise the expected log-likelihood found in the E step:\n\n$$\n\theta^{t+1} = arg \max_{\theta} \bar L(\theta | \theta^t)\n$$\n\n## References\n\n1. [Cross Validated on the difference between Probability and Likelihood](https://stats.stackexchange.com/questions/2641/what-is-the-difference-between-likelihood-and-probability)\n2. 
[An assignment on the method from Carnegie Mellon](http://www.cs.cmu.edu/~aarti/Class/10601/homeworks/hw2Solutions.pdf)\n\n\n```python\n\n```\n", "meta": {"hexsha": "369777327017688ba2aa1b0aaae1eb6399b1852e", "size": 13593, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "prob-stats-data-analysis/methods-theorems-laws/mle-map-em.ipynb", "max_stars_repo_name": "walkenho/tales-science-data", "max_stars_repo_head_hexsha": "4f271d78869870acf2b35ce54d40766af7dfa348", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-11T09:39:10.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-11T09:39:10.000Z", "max_issues_repo_path": "prob-stats-data-analysis/methods-theorems-laws/mle-map-em.ipynb", "max_issues_repo_name": "walkenho/tales-science-data", "max_issues_repo_head_hexsha": "4f271d78869870acf2b35ce54d40766af7dfa348", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "prob-stats-data-analysis/methods-theorems-laws/mle-map-em.ipynb", "max_forks_repo_name": "walkenho/tales-science-data", "max_forks_repo_head_hexsha": "4f271d78869870acf2b35ce54d40766af7dfa348", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.6771653543, "max_line_length": 617, "alphanum_fraction": 0.5301993673, "converted": true, "num_tokens": 2720, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3886180267058489, "lm_q2_score": 0.3276682942552091, "lm_q1q2_score": 0.1273378059275308}} {"text": "# [ATM 623: Climate Modeling](../index.ipynb)\n[Brian E. J. 
Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany\n# Lecture 6: Elementary greenhouse models\n\n### About these notes:\n\nThis document uses the interactive [`IPython notebook`](http://ipython.org/notebook.html) format (now also called [`Jupyter`](https://jupyter.org)). The notes can be accessed in several different ways:\n\n- The interactive notebooks are hosted on `github` at https://github.com/brian-rose/ClimateModeling_courseware\n- The latest versions can be viewed as static web pages [rendered on nbviewer](http://nbviewer.ipython.org/github/brian-rose/ClimateModeling_courseware/blob/master/index.ipynb)\n- A complete snapshot of the notes as of May 2015 (end of spring semester) are [available on Brian's website](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/Notes/index.html).\n\nMany of these notes make use of the `climlab` package, available at https://github.com/brian-rose/climlab\n\n## Contents\n\n1. [A single layer atmosphere](#section1)\n2. [Introducing the two-layer leaky greenhouse](#section2)\n3. [Tuning the leaky greenhouse model to observations](#section3)\n4. [Level of emission](#section4)\n5. [Radiative forcing in the 2-layer leaky greenhouse](#section5)\n6. [Radiative equilibrium in the 2-layer leaky greenhouse](#section6)\n\n____________\n\n\n## 1. 
A single layer atmosphere\n____________\n\nWe will make our first attempt at quantifying the greenhouse effect in the simplest possible greenhouse model: a single layer of atmosphere that is able to absorb and emit longwave radiation.\n\n\n```python\nfrom IPython.display import Image\nImage('../images/MarshallPlumbFig2.6.png')\n```\n\n*Figure reproduced from Marshall and Plumb (2008): Atmosphere, Ocean, and Climate Dynamics*\n\n### Assumptions\n\n- Atmosphere is a single layer of air at temperature $T_a$\n- Atmosphere is **completely transparent to shortwave** solar radiation.\n- Atmosphere is **completely opaque to infrared** radiation\n- Both surface and atmosphere emit radiation as **blackbodies**\n- Atmosphere radiates **equally up and down** ($A\uparrow = A\downarrow = \sigma T_a^4$)\n- There are no other heat transfer mechanisms\n\nWe can now use the concept of energy balance to ask what the temperatures need to be in order to balance the energy budgets at the surface and the atmosphere, i.e. the **radiative equilibrium temperatures**.\n\n### Energy balance at the surface\n\n\begin{align}\n\text{energy in} &= \text{energy out} \\\n(1-\alpha) Q + \sigma T_a^4 &= \sigma T_s^4 \\\n\end{align}\n\nThe presence of the atmosphere above means there is an additional source term: downwelling infrared radiation from the atmosphere.\n\nWe call this the **back radiation**.\n\n### Energy balance for the atmosphere\n\n\begin{align}\n\text{energy in} &= \text{energy out} \\\n\sigma T_s^4 &= A\uparrow + A\downarrow = 2 \sigma T_a^4 \\\n\end{align}\n\nwhich means that \n$$ T_s = 2^\frac{1}{4} T_a \approx 1.2 T_a $$\n\nSo we have just determined that, in order to have a purely **radiative equilibrium**, we must have $T_s > T_a$. 
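Putting numbers to this single-layer balance (a quick sketch, assuming the observed global mean insolation $Q \approx 341.3$ W m$^{-2}$ and planetary albedo $\alpha \approx 0.299$, so that $(1-\alpha)Q \approx 239$ W m$^{-2}$):

```python
sigma = 5.67E-8     # Stefan-Boltzmann constant, W m-2 K-4
Q = 341.3           # global mean insolation, W m-2 (assumed observed value)
alpha = 0.299       # planetary albedo (assumed observed value)

# Top-of-atmosphere balance: OLR = sigma * T_a**4 = (1 - alpha) * Q
T_a = ((1 - alpha) * Q / sigma) ** 0.25   # also the emission temperature, ~255 K
# Surface balance then gives T_s = 2**(1/4) * T_a, about 303 K
T_s = 2 ** 0.25 * T_a
```

The surface comes out roughly 48 K warmer than the atmospheric layer, consistent with the inequality just derived.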
\n\nThe surface must be warmer than the atmosphere.\n\n### Solve for the radiative equilibrium surface temperature\n\nNow plug this into the surface equation to find\n\n$$ \frac{1}{2} \sigma T_s^4 = (1-\alpha) Q $$\n\nand use the definition of the emission temperature $T_e$ to write\n\n$$ (1-\alpha) Q = \sigma T_e^4 $$\n\n*In fact, in this model, $T_e$ is identical to the atmospheric temperature $T_a$, since all the OLR originates from this layer.*\n\nSolve for the surface temperature:\n$$ T_s = 2^\frac{1}{4} T_e $$\n\nPutting in observed numbers, $T_e = 255$ K gives a surface temperature of \n$$T_s = 303 ~\text{K}$$\n\nThis model is one small step closer to reality: surface is warmer than atmosphere, emissions to space generated in the atmosphere, atmosphere heated from below and helping to keep surface warm.\n\nBUT our model now overpredicts the surface temperature by about 15\u00baC (or K).\n\nIdeas about why?\n\nBasically we just need to read our list of assumptions above and realize that none of them are very good approximations:\n\n- Atmosphere absorbs some solar radiation.\n- Atmosphere is NOT a perfect absorber of longwave radiation\n- Absorption and emission vary strongly with wavelength (atmosphere does not behave like a blackbody).\n- Emissions are not determined by a single temperature $T_a$ but by the detailed vertical profile of air temperature.\n- Energy is redistributed in the vertical by a variety of dynamical transport mechanisms (e.g. convection and boundary layer turbulence).\n\n\n\n____________\n\n\n## 2. Introducing the two-layer leaky greenhouse\n____________\n\nLet's generalize the above model just a little bit to build a slightly more realistic model of longwave radiative transfer.\n\nWe will address two shortcomings of our single-layer model:\n1. No vertical structure\n2. 
100% longwave opacity\n\nRelaxing these two assumptions gives us what turns out to be a very useful prototype model for **understanding how the greenhouse effect works**.\n\n### Assumptions\n\n- The atmosphere is **transparent to shortwave radiation** (still)\n- Divide the atmosphere up into **two layers of equal mass** (the dividing line is thus at 500 hPa pressure level)\n- Each layer **absorbs only a fraction $\epsilon$** of whatever longwave radiation is incident upon it.\n- We will call the fraction $\epsilon$ the **absorptivity** of the layer.\n- Assume $\epsilon$ is the same in each layer\n\nNote that this last assumption is appropriate if the absorption is actually carried out by a gas that is **well-mixed** in the atmosphere.\n\nOut of our two most important absorbers:\n\n- CO$_2$ is well mixed\n- H$_2$O is not (mostly confined to lower troposphere due to strong temperature dependence of the saturation vapor pressure).\n\nBut we will ignore this aspect of reality for now.\n\nIn order to build our model, we need to introduce one additional piece of physics known as **Kirchhoff's Law**:\n$$ \text{absorptivity} = \text{emissivity} $$\n\nSo if a layer of atmosphere at temperature $T$ absorbs a fraction $\epsilon$ of incident longwave radiation, it must emit\n$$ \epsilon ~\sigma ~T^4 $$\nboth up and down.\n\n### A sketch of the radiative fluxes in the 2-layer atmosphere\n\n\n```python\nImage('../images/2layerAtm_sketch.png', retina=True)\n```\n\n- Surface temperature is $T_s$\n- Atm. 
temperatures are $T_0, T_1$ where $T_0$ is closest to the surface.\n- absorptivity of atm layers is $\epsilon$\n- Surface emission is $\sigma T_s^4$\n- Atm emission is $\epsilon \sigma T_0^4, \epsilon \sigma T_1^4$ (up and down)\n- Absorptivity = emissivity for atmospheric layers\n- a fraction $(1-\epsilon)$ of the longwave beam is **transmitted** through each layer\n\n### Longwave emissions\nLet's denote the emissions from each layer as\n\begin{align}\nE_s &= \sigma T_s^4 \\\nE_0 &= \epsilon \sigma T_0^4 \\\nE_1 &= \epsilon \sigma T_1^4 \n\end{align}\nrecognizing that $E_0$ and $E_1$ contribute to **both** the upwelling and downwelling beams.\n\n### Shortwave radiation\nSince we have assumed the atmosphere is transparent to shortwave, the incident beam $Q$ passes unchanged from the top to the surface, where a fraction $\alpha$ is reflected upward out to space.\n\n### Upwelling beam\n\nLet $U$ be the upwelling flux of longwave radiation. \n\nThe upward flux from the surface to layer 0 is\n$$ U_0 = E_s $$\n(just the emission from the surface).\n\nFollowing this beam upward, we can write the upward flux from layer 0 to layer 1 as the sum of the transmitted component that originated below layer 0 and the new emissions from layer 0:\n$$ U_1 = (1-\epsilon) U_0 + E_0 $$\n\nContinuing to follow the same beam, the upwelling flux above layer 1 is\n$$ U_2 = (1-\epsilon) U_1 + E_1 $$\n\nSince there is no more atmosphere above layer 1, this upwelling flux is our Outgoing Longwave Radiation for this model:\n\n\begin{align}\nOLR &= U_2 \\\n &= (1-\epsilon) \bigg((1-\epsilon) \Big( \sigma T_s^4 \Big) + \epsilon \sigma T_0^4 \bigg) + \epsilon \sigma T_1^4 \\\n &= (1-\epsilon)^2 \sigma T_s^4 + \epsilon(1-\epsilon) \sigma T_0^4 + \epsilon \sigma T_1^4\n\end{align}\n\nNotice that the three terms in the OLR represent the **contributions to the total OLR that originate from each of the three levels**.\n\n### Downwelling beam\n\nLet $D$ be the downwelling 
longwave beam. Since there is no longwave radiation coming in from space, we begin with \n$$ D_2 = 0$$\n\nBetween layer 1 and layer 0 the beam contains emissions from layer 1:\n$$ D_1 = E_1 = \\epsilon \\sigma T_1^4$$\n( in general we can write $D_1 = (1-\\epsilon)D_2 + E_1$ if we are dealing with a non-zero $D_2$)\n\nFinally between layer 0 and the surface the beam contains a transmitted component and the emissions from layer 0:\n$$ D_0 = (1-\\epsilon) D_1 + E_0 = \\epsilon(1-\\epsilon) \\sigma T_1^4 + \\epsilon \\sigma T_0^4$$\n\nThis $D_0$ is what we call the **back radiation**, i.e. the longwave radiation from the atmosphere to the surface.\n\n____________\n\n\n## 3. Tuning the leaky greenhouse model to observations\n____________\n\nIn building our new model we have introduced exactly one parameter, the absorptivity $\\epsilon$. We need to choose a value for $\\epsilon$.\n\nWe will tune our model so that it **reproduces the observed global mean OLR** given **observed global mean temperatures**.\n\nTo get appropriate temperatures for $T_s, T_0, T_1$, let's revisit the [global, annual mean lapse rate plot from NCEP Reanalysis data](Lecture05 -- Radiation.ipynb) from the previous lecture.\n\n### Temperatures\nFirst, we set \n$$T_s = 288 \\text{ K} $$\n\nFrom the lapse rate plot, an average temperature for the layer between 1000 and 500 hPa is \n\n$$ T_0 = 275 \\text{ K}$$\n\nDefining an average temperature for the layer between 500 and 0 hPa is more ambiguous because of the lapse rate reversal at the tropopause. We will choose\n\n$$ T_1 = 230 \\text{ K}$$\n\nFrom the graph, this is approximately the observed global mean temperature at 275 hPa or about 10 km.\n\n### OLR\n\nFrom the [observed global energy budget](Lecture01 -- Planetary energy budget.ipynb) we set \n\n$$ OLR = 239 \\text{ W m}^{-2} $$\n\n### Solving for $\\epsilon$\n\nWe wrote down the expression for OLR as a function of temperatures and absorptivity in our model above. 
We just need to equate this to the observed value and solve a quadratic equation for $\epsilon$.\n\nWe will use the symbolic math package called `sympy` to help us out here.\n\n\n```python\nimport sympy\nsympy.init_printing()\nepsilon, T_s, T_0, T_1, sigma = sympy.symbols('epsilon, T_s, T_0, T_1, sigma')\n\n# Define the contributions to OLR originating from each level\nOLR_s = (1-epsilon)**2 *sigma*T_s**4\nOLR_0 = epsilon*(1-epsilon)*sigma*T_0**4\nOLR_1 = epsilon*sigma*T_1**4\n\nOLR = OLR_s + OLR_0 + OLR_1\n\nprint 'The expression for OLR is'\nOLR\n```\n\nSubstitute in the numerical values we are interested in:\n\n\n```python\nOLR2 = OLR.subs([(sigma, 5.67E-8), (T_s, 288.), (T_0, 275.), (T_1, 230.)])\nOLR2\n```\n\nNow use the `sympy.solve` function to solve the quadratic equation for $\epsilon$:\n\n\n```python\nsympy.solve(OLR2 - 239., epsilon)\n```\n\nThere are two roots, but the second one is unphysical since we must have $0 < \epsilon < 1$.\n\nWe conclude that our tuned value is\n\n$$ \epsilon = 0.58$$\n\nThis is the absorptivity that guarantees that our model reproduces the observed OLR given the observed temperatures.\n\n____________\n\n\n## 4. 
Level of emission\n____________\n\nEven in this very simple greenhouse model, there is **no single level** at which the OLR is generated.\n\nThe three terms in our formula for OLR tell us the contributions from each level.\n\nLet's make a row vector of these three terms:\n\n\n```python\nOLRterms = sympy.transpose(sympy.Matrix([OLR_s, OLR_0, OLR_1]))\nOLRterms\n```\n\nNow evaluate these expressions for our tuned temperatures and absorptivity:\n\n\n```python\ntuned = [(T_s, 288), (T_0, 275), (T_1, 230), (epsilon, 0.58), (sigma, 5.67E-8)]\nOLRtuned = OLRterms.subs(tuned)\nOLRtuned\n```\n\nSo we are getting about 69 W m$^{-2}$ from the surface, 79 W m$^{-2}$ from layer 0, and 92 W m$^{-2}$ from the top layer.\n\nIn terms of fractional contributions to the total OLR, we have (limiting the output to two decimal places):\n\n\n```python\nsympy.N(OLRtuned / 239., 2)\n```\n\nNotice that the largest single contribution is coming from the top layer. This is in spite of the fact that the emissions from this layer are weak, because it is so cold.\n\nComparing to observations, the actual contribution to OLR from the surface is about 22 W m$^{-2}$ (or about 9% of the total), not 69 W m$^{-2}$. So we certainly don't have all the details worked out yet!\n\nAs we will see later, to really understand what sets that observed 22 W m$^{-2}$, we will need to start thinking about the spectral dependence of the longwave absorptivity.\n\n____________\n\n\n## 5. Radiative forcing in the 2-layer leaky greenhouse\n____________\n\nAdding some extra greenhouse absorbers will mean that a greater fraction of incident longwave radiation is absorbed in each layer.\n\nThus **$\epsilon$ must increase** as we add greenhouse gases.\n\nSuppose we have $\epsilon$ initially, and the absorptivity increases to $\epsilon_2 = \epsilon + \delta_\epsilon$.\n\nSuppose further that this increase happens abruptly so that there is no time for the temperatures to respond to this change. 
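This thought experiment is easy to carry out numerically (a sketch, not part of the original notebook): evaluate the two-layer OLR at the tuned values, and again with $\epsilon$ increased by 0.01 while holding $T_s, T_0, T_1$ fixed:

```python
sigma = 5.67E-8
Ts, T0, T1 = 288., 275., 230.     # observed temperatures, held fixed

def olr(eps):
    """Two-layer leaky greenhouse OLR: surface + layer 0 + layer 1 contributions."""
    return ((1 - eps) ** 2 * sigma * Ts ** 4
            + eps * (1 - eps) * sigma * T0 ** 4
            + eps * sigma * T1 ** 4)

eps, d_eps = 0.58, 0.01
forcing = -(olr(eps + d_eps) - olr(eps))   # R = -delta OLR; positive = energy gain
```

With these observed temperatures the OLR drops when $\epsilon$ increases, giving a positive radiative forcing of roughly 2.2 W m$^{-2}$.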
**We hold the temperatures fixed** in the column and ask how the radiative fluxes change.\n\n**Do you expect the OLR to increase or decrease?**\n\nLet's use our two-layer leaky greenhouse model to investigate the answer.\n\nThe components of the OLR before the perturbation are\n\n\n```python\nOLRterms\n```\n\nAfter the perturbation we have\n\n\n```python\ndelta_epsilon = sympy.symbols('delta_epsilon')\nOLRterms_pert = OLRterms.subs(epsilon, epsilon+delta_epsilon)\nOLRterms_pert\n```\n\nLet's take the difference\n\n\n```python\ndeltaOLR = OLRterms_pert - OLRterms\ndeltaOLR\n```\n\nTo make things simpler, we will neglect the terms in $\delta_\epsilon^2$. This is perfectly reasonable because we are dealing with **small perturbations** where $\delta_\epsilon << \epsilon$.\n\nTelling `sympy` to set the quadratic terms to zero gives us\n\n\n```python\ndeltaOLR_linear = sympy.expand(deltaOLR).subs(delta_epsilon**2, 0)\ndeltaOLR_linear\n```\n\nRecall that the three terms are the contributions to the OLR from the three different levels. In this case, these are the **changes** in those contributions after adding more absorbers.\n\nNow let's divide through by $\delta_\epsilon$ to get the normalized change in OLR per unit change in absorptivity:\n\n\n```python\ndeltaOLR_per_deltaepsilon = sympy.simplify(deltaOLR_linear / delta_epsilon)\ndeltaOLR_per_deltaepsilon\n```\n\nNow look at the **sign** of each term. Recall that $0 < \epsilon < 1$. 
**Which terms in the OLR go up and which go down?**\n\n**THIS IS VERY IMPORTANT, SO STOP AND THINK ABOUT IT.**\n\nThe contribution from the **surface** must **decrease**, while the contribution from the **top layer** must **increase**.\n\n**When we add absorbers, the average level of emission goes up!**\n\n### \"Radiative forcing\" is the change in radiative flux at TOA after adding absorbers\n\nIn this model, only the longwave flux can change, so we define the radiative forcing as\n\n$$ R = - \\delta OLR $$\n\n(with the minus sign so that $R$ is positive when the climate system is gaining extra energy).\n\nWe just worked out that whenever we add some extra absorbers, the emissions to space (on average) will originate from higher levels in the atmosphere. \n\nWhat does this mean for OLR? Will it increase or decrease?\n\nTo get the answer, we just have to sum up the three contributions we wrote above:\n\n\n```python\nR = -sum(deltaOLR_per_deltaepsilon)\nR\n```\n\nIs this a positive or negative number? The key point is this:\n\n**It depends on the temperatures, i.e. on the lapse rate.**\n\n### Greenhouse effect for an isothermal atmosphere\n\nStop and think about this question:\n\nIf the **surface and atmosphere are all at the same temperature**, does the OLR go up or down when $\\epsilon$ increases (i.e. we add more absorbers)?\n\nUnderstanding this question is key to understanding how the greenhouse effect works.\n\n#### Let's solve the isothermal case\n\nWe will just set $T_s = T_0 = T_1$ in the above expression for the radiative forcing.\n\n\n```python\nR.subs([(T_0, T_s), (T_1, T_s)])\n```\n\nwhich then simplifies to\n\n\n```python\nsympy.simplify(R.subs([(T_0, T_s), (T_1, T_s)]))\n```\n\n#### The answer is zero\n\nFor an isothermal atmosphere, there is **no change** in OLR when we add extra greenhouse absorbers. Hence, no radiative forcing and no greenhouse effect.\n\nWhy?\n\nThe level of emission still must go up. 
But since the temperature at the upper level is the **same** as everywhere else, the emissions are exactly the same.\n\n### The radiative forcing (change in OLR) depends on the lapse rate!\n\nFor a more realistic example of radiative forcing due to an increase in greenhouse absorbers, we can substitute in our tuned values for temperature and $\\epsilon$. \n\nWe'll express the answer in W m$^{-2}$ for a 1% increase in $\\epsilon$.\n\nThe three components of the OLR change are\n\n\n```python\ndeltaOLR_per_deltaepsilon.subs(tuned) * 0.01\n```\n\nAnd the net radiative forcing is\n\n\n```python\nR.subs(tuned) * 0.01\n```\n\nSo in our example, **the OLR decreases by 2.2 W m$^{-2}$**, or equivalently, the radiative forcing is +2.2 W m$^{-2}$.\n\nWhat we have just calculated is this:\n\nGiven the observed lapse rates, a small increase in absorbers will cause a small decrease in OLR.\n\nThe greenhouse effect thus gets stronger, and energy will begin to accumulate in the system -- which will eventually cause temperatures to increase as the system adjusts to a new equilibrium.\n\n____________\n\n\n## 6. Radiative equilibrium in the 2-layer leaky greenhouse\n____________\n\nIn the previous section we made no assumptions about the processes that actually set the temperatures. We used the model to calculate radiative fluxes, **given observed temperatures**. We stressed the importance of knowing the lapse rates in order to know how an increase in emission level would affect the OLR, and thus determine the radiative forcing.\n\nA key question in climate dynamics is therefore this:\n\n**What sets the lapse rate?**\n\nIt turns out that lots of different physical processes contribute to setting the lapse rate. 
Understanding how these processes act together and how they change as the climate changes is one of the key reasons for which we need more complex climate models.\n\nFor now, we will use our prototype greenhouse model to do the most basic lapse rate calculation: the **radiative equilibrium temperatures**.\n\nWe assume that\n\n- the only exchange of energy between layers is longwave radiation\n- equilibrium is achieved when the net radiative flux convergence in each layer is zero.\n\n\n```python\nE_s = sigma*T_s**4\nE_0 = epsilon*sigma*T_0**4\nE_1 = epsilon*sigma*T_1**4\nE = sympy.Matrix([E_s, E_0, E_1])\nE\n```\n\n#### The upwelling beam\n\n\n```python\nU = sympy.Matrix([E_s, (1-epsilon)*E_s + E_0, (1-epsilon)*((1-epsilon)*E_s + E_0) + E_1])\nU\n```\n\n#### The downwelling beam\n\n\n```python\nfromspace = 0\nD = sympy.Matrix([(1-epsilon)*((1-epsilon)*fromspace + E_1) + E_0, (1-epsilon)*fromspace + E_1, fromspace])\nD\n```\n\n\n```python\n# Net flux, positive up\nF = sympy.simplify(U - D)\nF\n```\n\n\n```python\n# The absorption is then simply the flux convergence in each layer\n\n# define a vector of absorbed radiation -- same size as emissions\nA = E.copy()\n\n# absorbed radiation at surface\nA[0] = F[0]\n# Get the convergence\nfor n in range(2):\n A[n+1] = -(F[n+1]-F[n])\n\nA = sympy.simplify(A)\nA\n```\n\n\n```python\n# Solve for radiative equilibrium by setting this equal to zero\nT_e = sympy.symbols('T_e')\nsympy.solve(A - sympy.Matrix([sigma*T_e**4, 0, 0]),[T_s**4, T_1**4, T_0**4])\n```\n\nThe radiative equilibrium solution is thus\n\n\begin{align} \nT_s &= T_e \left( \frac{2+\epsilon}{2-\epsilon} \right)^{1/4} \\\nT_0 &= T_e \left( \frac{1+\epsilon}{2-\epsilon} \right)^{1/4} \\\nT_1 &= T_e \left( \frac{ 1}{2 - \epsilon} \right)^{1/4}\n\end{align}\n\nPlugging in $\epsilon = 0.58$ gives\n\n\begin{align}\nT_s &= 296 \text{ K} \\\nT_0 &= 262 \text{ K} \\\nT_1 &= 234 \text{ K} \\\n\end{align}\n\nCompare these to the values we 
derived from the observed lapse rates:\n\\begin{align}\nT_s &= 288 \\text{ K} \\\\\nT_0 &= 275 \\text{ K} \\\\\nT_1 &= 230 \\text{ K} \\\\\n\\end{align}\n\nThe **radiative equilibrium** solution is substantially **warmer at the surface** and **colder in the lower troposphere** than reality.\n\nThis is a very general feature of radiative equilibrium, and we will see it again very soon in this course.\n\n### A follow-up assignment\n\nYou are now ready to tackle [Assignment 5](../Assignments/Assignment05 -- Radiative forcing in a grey radiation atmosphere.ipynb), where you are asked to extend this grey-gas analysis to many layers. \n\nFor more than a few layers, the analytical approach we used here is no longer very useful. You will code up a numerical solution to calculate OLR given temperatures and absorptivity, and look at how the lapse rate determines radiative forcing for a given increase in absorptivity.\n\n
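Before moving on to the many-layer numerical version, the 2-layer numbers quoted in this lecture are easy to verify with plain floats instead of sympy. A minimal sketch (an illustration, not the assignment solution; it assumes the tuned values $T_s = 288$ K, $T_0 = 275$ K, $T_1 = 230$ K and $\epsilon \approx 0.58$ used earlier):

```python
# Numerical check of the 2-layer grey-gas OLR and the radiative forcing.
# Assumed values: Ts = 288 K, T0 = 275 K, T1 = 230 K, epsilon ~ 0.58.
sigma = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4

def OLR(eps, Ts=288.0, T0=275.0, T1=230.0):
    # Surface emission is attenuated by both layers, layer 0 by layer 1,
    # and layer 1 emits straight to space.
    return ((1 - eps)**2 * sigma * Ts**4
            + eps * (1 - eps) * sigma * T0**4
            + eps * sigma * T1**4)

# Radiative forcing for a 0.01 increase in epsilon: R = -delta(OLR)
forcing = -(OLR(0.58 + 0.01) - OLR(0.58))
print(round(OLR(0.58), 1), round(forcing, 1))  # → 239.8 2.2
```

The OLR of about 239 W m$^{-2}$ and the forcing of about +2.2 W m$^{-2}$ agree with the symbolic results earlier in the lecture.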
\n[Back to ATM 623 notebook home](../index.ipynb)\n
\n\n____________\n## Version information\n____________\n\n\n\n```python\n%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py\n%load_ext version_information\n%version_information numpy, sympy, climlab\n```\n\n Installed version_information.py. To use it, type:\n %load_ext version_information\n\n\n\n\n\n
| Software | Version |
|----------|---------|
| Python | 2.7.9 64bit [GCC 4.2.1 (Apple Inc. build 5577)] |
| IPython | 3.1.0 |
| OS | Darwin 14.3.0 x86_64 i386 64bit |
| numpy | 1.9.2 |
| sympy | 0.7.6 |
| climlab | 0.2.11 |

Thu May 14 16:02:40 2015 EDT
\n\n\n\n____________\n## Credits\n\nThe author of this notebook is [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany.\n\nIt was developed in support of [ATM 623: Climate Modeling](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/), a graduate-level course in the [Department of Atmospheric and Environmental Sciences](http://www.albany.edu/atmos/index.php), offered in Spring 2015.\n____________\n
# Lab 1.1: Introduction to Labs\n\n\n```python\n%matplotlib widget\nimport rad.css as css\nimport rad.example as ex\nimport rad.quiz as qz\nfrom rad.const import c, k\nfrom rad.radar import to_db, from_db, deg2rad, rad2deg\nfrom math import sqrt, sin, asin, cos, acos, tan, atan2, pi, log, log10\ncss.add_custom_css()\n```\n\nWelcome class! To begin, there will be a short overview of the interactive tools that we will use throughout the course.\n\nThe backbones of our tutorials are called **Jupyter notebooks**[[1]](#ref_jupyter). A Jupyter notebook is a convenient way to weave together text, images, math formulae, and working, editable code. Behind the scenes of the notebook is a programming language of choice: MATLAB, Python, Julia, Java, C++, etc. For this course, we will use Python[[2]](#ref_python) as it is openly available. Do *not* worry if you have little or no experience with Python: the course will require very little coding, and we will cover what is needed.\n\n## Appearance\n\nBefore doing anything, we should start by making the notebooks easy to read. \n\n### Font Size\nFirst, we can change the font size: to increase the font size, go to *Settings > JupyterLab Theme > Increase Content Font Size*. This can be done repeatedly until the text is comfortable to read.\n\n
\n\n### Hidden Code\nTo make the notebooks easier to read, most background code needed for examples and quizzes has been hidden by default; anytime you see a large ellipsis , there is code being suppressed (e.g., one of them is right below the title at the top of this notebook). To see what is going on in the background, we can click on the ellipsis to reveal it. \n\n## Preparing Notebooks\n\nThe first thing we will want to do upon opening a new notebook is click the *Run All* button at the top; it will ask if it is okay to restart, we can select *Restart*. This ensures that all examples and interactive code will be loaded and ready. Go ahead and do it for this notebook.\n\n## Notebooks and Cells\n\nEach notebook is made up of a list of **cells**; each cell can be either text or code. The cell you are reading right now is an example of a **text cell** (also called a *Markdown* cell; Markdown[[3]](#ref_markdown) is a quick way to format text). Below this, we will see our first example of a **code cell**:\n\n\n```python\nx = 1\nprint(\"The value of x is \" + str(x) + \".\")\n```\n\nThe code cell above sets the value of a variable ``x`` to one and then prints its value. To run the code, first single-click on the cell to highlight it. Then, to **evaluate** the cell, you can either: \n\n * Click the *Run Cell* button on the top bar of the notebook\n \n * Press *Shift+Enter* \n\nAfter evaluating the cell, you can see the output printed below it. Also, in brackets to the left of the cell, you can see the evaluation number; if you evaluate the cell again, you can see the number go up. The evaluation numbers help you keep track of what you have run and in what order.\n\n
\n\nTo **edit** any cell, you can double-click on it; this means you can edit any code you want and rerun it to see what changed. As an example, you can double-click the code cell above and change the value of ``x`` to any number you want and then re-evaluate.\n\n
\n\nTo **add** a cell, single-click on an existing cell and then click the *Add Cell Below* button. A new cell will be created below the one you selected. To change the type of cell (i.e., code, text), select the cell by single-clicking and then use the dropdown menu at the top.\n\n
\n\n**Note:** You can also edit and evaluate text cells. Evaluating a text cell turns it from raw text into a properly formatted web snippet. This can be a helpful way to add notes to your personal version of a notebook.\n\n***\n\n### Try It Out\n\nBelow is an empty code cell. Enter code to assign a value of 3 to a variable ``y`` and then print its value. Try evaluating the code to make sure it works. *Hint: You can copy and paste from the code cell above.*\n\n\n```python\n# Enter code here\n```\n\n***\n\n## Interactive Plots\n\nThe primary tools for interaction in Jupyter notebooks are interactive plots with controls called *widgets*. To see how widgets work, we can first look at plotting a simple sine wave. There will be three variables to change: *Amplitude*, *Frequency*, and *Initial Phase* (we will discuss these more later). Try moving the sliders for each variable to see the plot adjust.\n\n\n```python\nex.ex_1_1_1()\n```\n\nThere are two useful features in the interactive plots: *zooming* and *live cursor data*. To zoom in on a particular part of the plot, you can use the *Zoom Box* button. Try clicking it and drawing a box around a part of the plot. To drag the plot around, you can use the *Pan* button. To return to the original plot view at any time, you can click the *Reset View* button. The current location of the cursor inside the plot is continuously displayed below the plot; this is very helpful for finding values and locations of points on a curve or image.\n\n## Quizzes\n\nThere will also be a number of quizzes to challenge you as the course progresses. These will most often be in the form of a blank text box and a submit button. 
For example, below see an example quiz that asks you to enter an answer of one.\n\n***\n\n### Question 1\n\nEnter a value of one.\n\n\n```python\nqz.quiz_1_1_1()\n```\n\n***\n\n**Note:** Unless otherwise stated, the tolerance for quiz answers will be within \u00b11% of the true answer.\n\n## Calculator\n\nFor some of the quizzes, it will be helpful to have a calculator handy. If one is not available, no problem! We can use the Jupyter notebook for calculations. In most spots where calculations are necessary, a *Scratch Space* code cell is provided that can be used. As discussed above, we can also simply add a code cell anywhere using the *Add Cell Below* button. \n\nMost mathematical operations are as you would expect. For instance, to calculate the value of $2\\sin(5 \\pi /2) + 1$, we would use the following code cell:\n\n\n```python\n2*sin(5*pi/2) + 1\n```\n\nHere is a short list of the mathematical operators:\n\n| Operation | Symbol |\n|---------------------|--------|\n| Addition | `+` |\n| Subtraction | `-` |\n| Multiplication | `*` |\n| Division | `/` |\n| Exponent | `**` |\n| Square root | `sqrt` |\n| Logarithm | `log` |\n| Logarithm (Base 10) | `log10`|\n\nHere is a list of trigonometric functions:\n\n| Function | Symbol |\n|----------------|-----------|\n| Sine | `sin` |\n| Arcsine | `asin` |\n| Cosine | `cos` |\n| Arccosine | `acos` |\n| Tangent | `tan` |\n| Arctangent | `atan2` |\n\nHere are some handy constants:\n\n| Constant | Symbol |\n|----------------|--------|\n| $\\pi$ | `pi` |\n| Speed of light | `c` |\n\n**Note:** All of these functions (along with useful definitions and formulae) are kept in the [Reference](Reference.ipynb) notebook.\n\n***\n\n### Question 2\n\nUsing the functions in the tables above, calculate the value of $\\log_{10}\\left(2.72^3\\right) + \\cos(4\\pi/7)$:\n\n\n```python\nqz.quiz_1_1_2()\n```\n\n\n```python\n# Scratch space\n```\n\n***\n\n### Engineering Notation\n\nFor very large or small numbers, it can be convenient to 
write them in the form $a \\times 10^b$, such as $0.000005 = 5 \\times 10^{-6}$. These can be represented in notebook calculations using *engineering notation*. All this means is instead of writing $a \\times 10^b$, you type `aEb`. For instance, if you wanted to use $5 \\times 10^{-6}$, you type `5E-6`. \n\n***\n\n### Question 3\n\nCalculate the value of: \n$$\\frac{7.76 \\times 10^{4}}{5.1 \\times 10^{-3}}$$\n\n\n```python\nqz.quiz_1_1_3()\n```\n\n\n```python\n# Scratch space\n```\n\n***\n\n**Note:** One of the easiest ways to satisfy the \u00b11% tolerance for quiz answers is to write answers in engineering notation with at least two decimal places defined.\n\nThere are a few preliminary topics to cover before we get to the radar course itself. In the following, we will review the basics of: \n\n- *Decibels*, a useful unit for simplifying radar analysis\n- *Sine waves*, the main structure of radar transmissions\n\n## Review: Decibels\n\nMany quantities dealing with radar systems can be extremely large or small. For instance, it is common to use megawatts of power for transmission ($10^6~\\mathrm{W}$), and receive signals on the order of picowatts ($10^{-12}~\\mathrm{W}$). Instead of dealing with that immense scale of numbers, the quantities are often converted to **decibels** (dB); this makes the values much easier to remember and/or manipulate without pen and paper. Here is a short list of example numbers and their corresponding decibel value:\n\n
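The referenced table is an image and may not render in every copy; the values are easy to regenerate. A small sketch using the conversion $x_d = 10\log_{10}(x)$ (the example numbers here are chosen for illustration, not copied from the original table):

```python
from math import log10

# A few illustrative numbers and their decibel equivalents, x_d = 10*log10(x)
for x in [0.001, 0.5, 1, 2, 10, 100, 1e6]:
    print(x, "->", round(10 * log10(x), 1), "dB")
```

Handy anchors to remember: a factor of 2 is about $3~\mathrm{dB}$, and a factor of 10 is exactly $10~\mathrm{dB}$.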
\n\nTo convert a number, $x$, to the value in decibels, $x_d$, we use the following formula:\n$$ x_d = 10\\log_{10}(x) $$\nIn code for calculations, the conversion to decibels looks like `x_d = 10*log10(x)`. Going back to the example values for radar, instead of using $10^6~\\mathrm{W}$, we can write $60~\\mathrm{dBW}$; likewise, we can talk about receiving signals with a power of $-120~\\mathrm{dBW}$ instead of dealing with $10^{-12}~\\mathrm{W}$.\n\n**Note:** For this course, the original unit will be written at the end of $\\mathrm{dB}$ to keep track of it; for instance, $\\mathrm{dBW}$ means it is in decibels and the original unit was watts.\n\n***\n\n### Question 4\n\n**(a)** What is $1.5 \\times 10^5$ in decibels?\n\n\n```python\nqz.quiz_1_1_4a()\n```\n\n\n```python\n# Scratch space\n```\n\n \n\n**(b)** What is $7.2 \\times 10^{-7}$ in decibels?\n\n\n```python\nqz.quiz_1_1_4b()\n```\n\n\n```python\n# Scratch space\n```\n\n***\n\nTo convert from the value in decibels, $x_d$, back to the original value, $x$, we use:\n\n$$\n\\displaystyle x = 10^{x_d/10}\n$$\n\nIn other words: you first divide the number in decibels by ten, then take ten to that power. In code, this would be written as `x = 10**(x_d/10)`. You can verify this from the examples above.\n\n***\n\n### Question 5\n\n**(a)** What is $51.2~\\mathrm{dBW}$ in watts (W)?\n\n\n```python\nqz.quiz_1_1_5a()\n```\n\n\n```python\n# Scratch space\n```\n\n \n\n**(b)** What is $-20.1~\\mathrm{dBW}$ in watts (W)?\n\n\n```python\nqz.quiz_1_1_5b()\n```\n\n\n```python\n# Scratch space\n```\n\n***\n\nTo save time in calculations, we have added two functions that you can use in your notebook calculations: \n\n- `to_db()`, convert from original units to decibels\n- `from_db()`, convert from decibels to original units\n\nFor instance, to convert $567.2~\\mathrm{W}$ to $\\mathrm{dBW}$, we can write:\n\n\n```python\nto_db(567.2)\n```\n\nwhich gives us $27.5374~\\mathrm{dBW}$. 
Now to convert back, we write:\n\n\n```python\nfrom_db(27.5374)\n```\n\nA helpful property of decibels is *turning multiplication into addition*. This is convenient because multiplying very large or small numbers together quickly can be cumbersome, whereas adding their equivalent values in decibels is easy. In general, \n$$\n\begin{align}\nz &= xy\\\nz_{d} &= x_{d} + y_{d}\\\n\end{align}\n$$\nand\n$$\n\begin{align}\nz &= x/y\\\nz_{d} &= x_{d} - y_{d}\\\n\end{align}\n$$\n\n\nThis means we can turn the following multiplication: $z = (4.4 \times 10^{3})(2.8 \times 10^{-4})$ into its decibel equivalent: $z_d = 36.4345 + (-35.5284)$, which is much easier to estimate quickly, i.e., it is going to be roughly $1~\mathrm{dB}$.\n\nThe last property of decibels we will cover is *converting exponents into scalars*. Because the decibel is calculated using a logarithm, the following is true:\n\n$$\n\begin{align}\nx &= a^b\\\nx_{d} &= b \cdot a_d\n\end{align}\n$$\n\nSo, if we want a quantity $a^b$ in decibels, we simply find $a$ in decibels (i.e., $a_d$) and then multiply it by $b$. For example, the value $43^5$ in decibels can be calculated as $5 \cdot 10\log_{10}(43)$, or using code `5*to_db(43)`.\n\n***\n\n### Question 6\n\nWhat is $3.7^5$ in decibels? (Hint: $3.7$ in decibels is $5.682~\mathrm{dB}$)\n\n\n```python\nqz.quiz_1_1_6()\n```\n\n\n```python\n# Scratch space\n```\n\n***\n\n## Review: Sine Waves\n\nSine waves form the main backbone for ranging sensor transmissions because they are easy to generate with hardware and are well understood. 
A sine wave as a function of time, $y(t)$, is described mathematically as\n\n$$\ny(t) = a\\sin(2\\pi f t + \\phi)\n$$\n\nwhere\n- $a$ is the **amplitude**, which decides the size of the wave\n- $f$ is the **frequency** of the wave (in Hz), which dictates how quickly the sine wave varies as a function of time, i.e., how many sine wave cycles per second\n- $\\phi$ is the initial **phase** (in radians), which defines where the sine wave starts at time $t = 0$\n\nAnother important property is the **period**, $T$, which is the inverse of the frequency (i.e., $T = 1/f$) and the duration of a sine wave cycle. This can be measured by calculating the time from peak-to-peak or valley-to-valley.\n\nBy eye, you can see the period and the amplitude of an example sine wave below.\n\n
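(The example figure may not display in this copy; it shows a wave with amplitude $5$ and period $5~\mathrm{s}$.) A quick numeric sketch of such a wave, assuming $a = 5$ and $f = 0.2~\mathrm{Hz}$:

```python
from math import sin, pi

a, f = 5, 0.2   # assumed amplitude and frequency (Hz) of the pictured wave
T = 1 / f       # period: T = 1/f = 5 s

def y(t, phi=0.0):
    return a * sin(2 * pi * f * t + phi)

# Shifting the time by one full period reproduces the same value
print(T, round(abs(y(1.3) - y(1.3 + T)), 6))  # → 5.0 0.0
```

Evaluating `y(t)` at any two times separated by the period $T$ gives identical values, which is exactly the peak-to-peak measurement described above.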
\n\nWe can see one peak at $t = 1~\mathrm{s}$ and the next one at $t = 6~\mathrm{s}$, so the period of this sine wave is $T = 6 - 1 = 5~\mathrm{s}$. Since the frequency is the inverse of the period, this gives a frequency of $f = 1/5~\mathrm{Hz}$. The amplitude is the distance from zero to the peak (or valley) of a wave; in this case, the amplitude is $a = 5$.\n\nWe can now revisit the interactive plot from above, showing how amplitude, phase, and frequency affect a sine wave. Try putting in the values we just calculated, i.e., $a = 5, f = 0.2~\mathrm{Hz}, \phi = 0~\mathrm{deg}$, and verifying that the sine wave matches the picture above.\n\n\n```python\nex.ex_1_1_1()\n```\n\n***\n\n### Question 7\n\nFrom the following plot of a sine wave, estimate its amplitude, period, and frequency. *Hint: Remember you can zoom in, pan, and use your cursor to find values of the curve.*\n\n\n```python\nex.ex_1_1_2()\n```\n\n**(a)** What is the amplitude?\n\n\n```python\nqz.quiz_1_1_7a()\n```\n\n\n```python\n# Scratch space\n```\n\n**(b)** What is the period?\n\n\n```python\nqz.quiz_1_1_7b()\n```\n\n\n```python\n# Scratch space\n```\n\n**(c)** What is the frequency?\n\n\n```python\nqz.quiz_1_1_7c()\n```\n\n\n```python\n# Scratch space\n```\n\n***\n\n## Summary\n\nIn this lab, we covered the basics of Jupyter notebooks, along with reviews of decibels and sine wave properties. The main operations for interacting with Jupyter notebooks are:\n- Click the *Run All* button to prepare a newly-opened notebook\n- Create a new cell using the *Add Cell Below* button\n- Evaluate a cell using the *Run Cell* button\n- Interact with plots using the *Zoom Box*, *Pan*, and *Reset View* buttons\n\nDecibels are a great way to take extremely large and small numbers and convert them to a scale that is much easier to remember and manipulate. Conversion to decibels is $x_d = 10\log_{10}(x)$ and conversion back is $x = 10^{x_d/10}$. 
You can convert multiplications $z = xy$ into additions $z_d = x_d + y_d$ and exponentiation $x = a^b$ to scaling $x_d = b \cdot a_d$.\n\nSine waves are very important to radar as they provide the basis for transmissions. A sine wave is defined by its amplitude, frequency, and initial phase. The frequency of a sine wave is the number of cycles that occur within a second, and its period is the time between cycles; the period is the inverse of the frequency.\n\n## Footnotes\n\nn/a\n\n## References\n\n[1] *Project Jupyter* [https://jupyter.org](https://jupyter.org)\n\n[2] *Python* [https://python.org](https://python.org)\n\n[3] *Markdown Guide* [https://www.markdownguide.org/](https://www.markdownguide.org/)\n
"alphanum_fraction": 0.5550590384, "converted": true, "num_tokens": 4489, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.34864512179822543, "lm_q2_score": 0.3629692124105861, "lm_q1q2_score": 0.12654744526989475}} {"text": "# ... and so we begin\n\n## Critical information\n\n# First steps\n\n## Order of the day\n\n* Learn to use Jupyter / iPython Notebook\n\n* Get familiar with basic Python\n\n* Start with Spyder, a traditional editor\n\n* Fundamental Python-in-Science skills\n\n# What is Jupyter\n## (previously iPython Notebook)\n\n\nAn interactive Q&A-style Python prompt, with output in formatted text, images, graphs and more (and it even works with other languages too)\n\nA bit like a cross between Mathematica and Wolfram Alpha, that runs in your browser, but you can save all your worksheets locally. We will explore this, as it is very useful for doing quick calculations, collaborating on research, as a whiteboard, nonlinear discussions where you can adjust graphs or calculations, as teaching tool (I hope), or simply storing your train of thought in computations and notes. This series of slides was prepared in Jupyter, which is why I can do this...\n\nIt lets you do things like...\n\n\n```python\nimport datetime\nprint(datetime.date.today())\n```\n\n \n\n\nYou want to add in .weekday()\n\n## Lets you output LaTeX-style (formatted) maths\n\nExample calculating the output of $ \\int x^3 dx $:\n\n\n```python\nfrom sympy import *\ninit_printing()\nx = Symbol(\"x\")\nintegrate(x ** 3, x)\n```\n\nand just to prove I'm not making it up... 
(change 3 to 6 and Ctrl+Enter)\n\n# How we're going to approach this\n\n* Motivated by analysing scientific data\n\n* Learning about basic debugging and standard bits of the language\n\n* Using Etherpad for discussion, group notes and info\n\n* For a more detailed intro to Python in science, definitely check out [swcarpentry.github.io/python-novice-inflammation/](swcarpentry.github.io/python-novice-inflammation/)\n\n* Also, highly recommended, UCL's resources: [http://development.rc.ucl.ac.uk/training/introductory/](http://development.rc.ucl.ac.uk/training/introductory/)\n\nThe dataset used in this course is that generated by the Software Carpentry team, ensuring it has been well tested across a large number of sessions internationally. They make their resources freely available under a Creative Commons license, so you should definitely check it out. I should mention that, while it has been helpful for preparing these sessions, this course is not endorsed by or affiliated with Software Carpentry.\n\n# Today's tools\n\n## *Highly* **technical** ***sticky*** notes!\n\n* Stars say **I'm just working away over here** - put them up when you start\n\n - use them to show you're busy\n\n* Arrows say **I've completed the task** - put them up when you finish\n\nhelps us see when most people are ready to move on\n\n# Linux\n\nYou all have machines running Linux - common in scientific computing, easier to manage libraries/programs, easier for me to help you.\n\n1. Click on the word `Activities` (*top-left*)\n2. Click the Firefox button (1/2 way down on left)\n\n This isn't a Linux course, so we're going to keep it very basic, and stick to programming\n\n# Etherpad\n## Live questions, notes and comments\n\nIn Firefox, go to https://etherpad.mozilla.org and enter\n\n**qub-python-course-23Yn9**\n\n1. Enter your name at the top-right\n2. In the chat window (bottom right) say, **hi**, **hello**, **bout ye** *<Return>*\n3. 
In the big window, stick your name at the end, with some info\n\nUse the name box - you can create a new pad, but you *will* be talking to yourself.\n2 - , or anything else (polite, obviously)\n3 - on a new line - suggested info is on that page\n4 - don't forget to put up your arrow sticker when done\n5 - you can even pick your favourite colour by clicking the little box next to your name\n\n(Etherpad name: **qub-python-course-23Yn9**)\nhttps://etherpad.mozilla.org\n\n
\n\n# Jupyter\n\n## Today's first Python tool\n\n1. Press Alt-F2 and type **jupyter notebook** *<Return>*\n2. When the new window appears, click on **Basic control structures** to open it\n\nThis is our complicated command for the day - the rest we get to do by point-and-click\n\nnote the 'y'\n
# Chapter 8: Tree-Based Methods\n- **Chapter 8 from the book [An Introduction to Statistical Learning](https://www.statlearning.com/)**\n- **By Gareth James, Daniela Witten, Trevor Hastie and Rob Tibshirani**\n- **Pages from $332$ to $333$**\n- **By [Mosta Ashour](https://www.linkedin.com/in/mosta-ashour/)**\n\n\n**Exercises:**\n- **[1.](#1)**\n- **[2.](#2)**\n- **[3.](#3)**\n- **[4.](#4) [(a)](#4a) [(b)](#4b)**\n- **[5.](#5)**\n- **[6.](#6)**\n\n# 8.4 Exercises\n## Conceptual\n\n\n### $1.$ Draw an example (of your own invention) of a partition of two-dimensional feature space that could result from recursive binary splitting. Your example should contain at least six regions. Draw a decision tree corresponding to this partition. Be sure to label all aspects of your figures, including the regions $R_1, R_2, \dots,$ the cutpoints $t_1, t_2, \dots,$ and so forth.\n### *Hint: Your result should look something like Figures $8.1$ and $8.2$.*\n\n                [x1 < t1]\n               |         |\n        [x2 < t2]       [x2 < t3]\n        |       |       |       |\n       R1  [x1 < t4]   R4   [x2 < t5]\n           |       |        |       |\n          R2      R3       R5      R6\n\n### $2.$ It is mentioned in Section $8.2.3$ that boosting using depth-one trees (or *stumps*) leads to an *additive* model: that is, a model of the form\n\n$$f(X) = \sum_{j=1}^{p}{f_j(X_j)}$$\n\n### Explain why this is the case. You can begin with $(8.12)$ in Algorithm $8.2$.\n\n**Answer:**\n>- Boosting has three tuning parameters; the third one is the number $d$ of splits in each tree, which controls the complexity of the boosted ensemble.\n>- When $d = 1$, each tree is a *stump*, consisting of a *single split*.\n>- In this case, the boosted ensemble is fitting an additive model, since each term involves only a single variable.\n>- With this setting the Boosting for Regression Trees algorithm becomes:\n> 1. Set $\hat{f}(x) = 0$ and $r_i = y_i$ for all $i$ in the training set.\n> 2. For $b = 1, 2, ..., B,$ repeat:
\n> $(a)$ Fit a tree $\hat{f}^b$ with **1** split to the training data $(X, r)$.\n>\n> $(b)$ Update $\hat{f}$ by adding the new tree:\n>\n> $$\hat{f}(x) \leftarrow \hat{f}(x) + \hat{f}^b(X_j)$$\n>\n> **Note:** it can't be shrunken because it is already as small as it can possibly be; also note that it is a function of a single variable.
\n> $(c)$ Update the residuals,\n> $$r_i \\leftarrow r_i - \\hat{f}^b(X_j)$$\n> 3. Output the boosted model,\n$$\\hat{f}(x) = \\sum_{b=1}^{B}{\\hat{f}^b(X_j)}$$\n>- Because the model is using depth-one trees, each $\\hat{f}^b$ generated in step $2$ is a function of a single feature $\\hat{f}^b(X_j)$ and so the model output in step $3$ is a sum of functions of a given single variable which is presented as the additive model.\n\n\n### 3. Consider the Gini index, classification error, and cross-entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of $\\hat{p}_{m1}$. The $x$-axis should display $\\hat{p}_{m1}$, ranging from $0$ to $1$, and the $y$-axis should display the value of the Gini index, classification error, and entropy.\n\n### *Hint: In a setting with two classes, $\\hat{p}_{m1} = 1 - \\hat{p}_{m2}$. You could make this plot by hand, but it will be much easier to make in R.*\n\n**Answer:**\n- Recall these settings from the book page $312$. For $k= \\{1,2\\}$:\n\n>- $\\text{Classification Error}:$\n> $$ \\begin{align}\nE &= 1 - \\mathop{max}_k(\\hat{p}_{mk}) \\tag{8.5}\\\\\n &= 1 - \\mathop{max}_k\\{\\hat{p}_{m1}, \\hat{p}_{m2}\\}\n\\end{align}$$\n\n>- $\\text{Gini Index}:$\n> $$ \\begin{align}\nG &= \\sum_{k=1}^K \\hat{p}_{mk}(1 - \\hat{p}_{mk}), \\tag{8.6}\\\\\n &= \\hat{p}_{m1}(1 - \\hat{p}_{m1}) + \\hat{p}_{m2}(1 - \\hat{p}_{m2})\n\\end{align}$$\n\n>- $\\text{Entropy}:$\n> $$ \\begin{align}\nD &= - \\sum_{k=1}^K \\hat{p}_{mk} log \\hat{p}_{mk}. 
\\tag{8.7}\\\\\n &= - \\hat{p}_{m1} log \\hat{p}_{m1} - \\hat{p}_{m2} log \\hat{p}_{m2}\n\\end{align}$$\n\n\n```python\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\npm1 = np.arange(0.01, 1, 0.01)\npm2 = 1 - pm1\n\n# Classification Error:\nclf_err = 1 - np.maximum(pm1, pm2)\n\n# Gini:\ngini = (pm1 * (1 - pm1)) + (pm2 * (1 - pm2))\n\n# Entropy:\nentropy = - (pm1 * np.log(pm1)) - (pm2 * np.log(pm2))\n\n# dataframe\ndf = pd.DataFrame({'Classification_Error': clf_err,\n 'Gini': gini,\n 'Entropy': entropy}).set_index(pm1)\n\nplt.figure(figsize=(10, 8))\nsns.lineplot(data=df);\n```\n\n\n### $4.$ This question relates to the plots in Figure $8.12$.\n\n\n**$(a)$ Sketch the tree corresponding to the partition of the predictor space illustrated in the left-hand panel of Figure $8.12$. The numbers inside the boxes indicate the mean of $Y$ within each region.**\n [X1 < 1]\n | |\n [X2 < 1] 5\n | |\n[ X1 < 0 ] 15\n| |\n3 [X2 < 0]\n | |\n 10 0\n\n**$(b)$ Create a diagram similar to the left-hand panel of Figure $8.12$, using the tree illustrated in the right-hand panel of the same figure. You should divide up the predictor space into the correct regions, and indicate the mean for each region.**\n\n\n```python\n# dividing up the predictor space into regions\nplt.figure(figsize=(8, 8))\nplt.xlim([-1, 3])\nplt.ylim([-1, 3])\n\nplt.axhline(1)\nplt.axhline(2)\nplt.axvline(1, ymax=.5)\nplt.axvline(0, ymin=.5, ymax=.75)\n\n# plot the mean for each region\nmeans = {'-1.08': (0, 0),\n '0.63': (2, 0),\n '-1.06': (-0.5, 1.5),\n '0.21': (1.5, 1.5),\n '2.49': (1, 2.5)}\n\nfor k, (p1, p2) in means.items():\n plt.text(p1, p2, k, fontsize='xx-large', ha='center');\n```\n\n\n### $5.$ Suppose we produce ten bootstrapped samples from a data set containing red and green classes. We then apply a classification tree to each bootstrapped sample and, for a specific value of $X$, produce $10$ estimates of $\\text{P(Class is Red|X)}$:

$$0.1, 0.15, 0.2, 0.2, 0.55, 0.6, 0.6, 0.65, 0.7, \\text{and } 0.75.$$

There are two common ways to combine these results together into a single class prediction. One is the majority vote approach discussed in this chapter. The second approach is to classify based on the average probability. In this example, what is the final classification under each of these two approaches?


```python
res = np.array([0.1, 0.15, 0.2, 0.2, 0.55, 0.6, 0.6, 0.65, 0.7, 0.75])

# Majority vote approach: count how many of the 10 estimates exceed 0.5,
# then predict red if more than half of them do
majority = (0.5 < res).sum()

# Average probability approach
average = np.mean(res)

print(f'Is red (by Majority): {majority > len(res) / 2}, majority = {majority}')
print(f'Is red (by Average) : {average > .5}, avg = {average}')
```

    Is red (by Majority): True, majority = 6
    Is red (by Average) : False, avg = 0.45



### $6.$ Provide a detailed explanation of the algorithm that is used to fit a regression tree.
**Answer:**
> The Regression Tree algorithm is given as Algorithm $8.1$ in the book. It grows a single tree over the predictor space; the best individual tree is then selected by applying **cost complexity pruning** together with **K-fold cross-validation** to choose the best $\alpha$.

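As a concrete aside (not part of the original answer), the pruning-and-cross-validation procedure just described can be sketched with scikit-learn, whose `DecisionTreeRegressor` implements minimal cost complexity pruning through the `ccp_alpha` parameter. The data here is synthetic and purely illustrative:

```python
# Illustrative sketch: cost complexity pruning with cross-validation
# to pick alpha, on synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200)

# Grow a large tree and recover the nested sequence of subtrees,
# indexed by their effective alpha values
path = DecisionTreeRegressor(random_state=0).cost_complexity_pruning_path(X, y)

# Choose alpha by K-fold cross-validation (K = 5 here)
scores = [cross_val_score(DecisionTreeRegressor(ccp_alpha=a, random_state=0),
                          X, y, cv=5,
                          scoring='neg_mean_squared_error').mean()
          for a in path.ccp_alphas]
best_alpha = path.ccp_alphas[int(np.argmax(scores))]

# Re-fit on the full data set with the chosen alpha
final_tree = DecisionTreeRegressor(ccp_alpha=best_alpha,
                                   random_state=0).fit(X, y)
```

Note that `cost_complexity_pruning_path` returns only the nested sequence of effective $\alpha$ values, so just those candidates need to be cross-validated rather than every possible subtree.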
It requires $4$ steps to build a Regression Tree, as shown in Algorithm $8.1$:

> $\text{Step 1:}$ Use **recursive binary splitting** to grow a large tree on the training data, stopping only when each terminal node has fewer than some minimum number of observations.
>- We take a top-down, greedy approach known as **recursive binary splitting**. The approach is *top-down* because it begins at the top of the tree (at which point all observations belong to a single region) and then successively splits the predictor space; each split is indicated via two new branches further down on the tree. It is *greedy* because at each step of the tree-building process, the best split is made at that particular step, rather than looking ahead and picking a split that will lead to a better tree in some future step.
>- We first select the predictor $X_j$ and the cutpoint $s$ such that splitting the predictor space into the regions $\{X|X_j < s\}$ and $\{X|X_j \geq s\}$ leads to the greatest possible reduction in $\text{RSS}$; that is, we choose the predictor and cutpoint for which the resulting tree has the lowest $\text{RSS}$.
>- Next, we repeat the process, looking for the best predictor and best cutpoint in order to split the data further so as to minimize the $\text{RSS}$ within each of the resulting regions.
>- However, this time, instead of splitting the entire predictor space, we split one of the two previously identified regions.
>- We now have three regions. Again, we look to split one of these three regions further, so as to minimize the $\text{RSS}$.
The process continues until a stopping criterion is reached; for instance, we may continue until no region contains more than five observations (as in Figure $8.3$).
>- Once the regions $R_1, \dots , R_J$ have been created, we predict the response for a given test observation using the mean of the training observations in the region to which that test observation belongs.
>- Finally, note that this process may produce good predictions on the training set, but it is likely to **overfit** the data, leading to poor test set performance.

> $\text{Step 2:}$ Apply **cost complexity pruning** to the large tree in order to obtain a sequence of best subtrees, as a function of $\alpha$.
>- One possible strategy is to build the tree only so long as the decrease in the $\text{RSS}$ due to each split exceeds some (high) threshold; however, this is short-sighted, since a seemingly worthless split early on might be followed by a very good split later.
>- Therefore, a better strategy is to grow a very large tree $T_0$, and then $\text{prune}$ it back in order to obtain a $\text{subtree}$.
>- To select a subtree with the lowest test error rate, we can estimate its test error using cross-validation or the validation set approach. But estimating the CV error for every possible subtree would be computationally cumbersome.
>- $\text{Cost complexity pruning}$, also known as $\text{weakest link pruning}$, gives us a way to do just this. Rather than considering every possible subtree, we consider a sequence of trees indexed by a nonnegative tuning parameter $\alpha$.
>- For each value of $\alpha$ there corresponds a subtree $T \subset T_0$ for which the following quantity is as small as possible:
$$
\sum_{m=1}^{|T|} \sum_{i: x_i \in R_m} (y_i - \hat{y}_{R_m})^2 + \alpha |T| \tag{8.4}
$$
>- Here the tuning parameter $\alpha$ controls a trade-off between the subtree's complexity and its fit to the training data.

> - **Quick reminder:** Equation $(8.4)$ has the same $\text{cost + penalty}$ format as regularization methods such as the **lasso** $(6.7)$ from Chapter $6$, in which a similar formulation was used in order to control the complexity of a linear model.
>- When $\alpha = 0$, the subtree $T$ simply equals $T_0$, because then $(8.4)$ just measures the training error (the "cost" term).
>- As we start to increase $\alpha$, branches get pruned from the tree in a nested and predictable fashion, so obtaining the whole sequence of subtrees as a function of $\alpha$ is easy.

> $\text{Step 3:}$ We can select a value of $\alpha$ using a **validation set** or using **cross-validation**. We then return to the full data set and obtain the subtree corresponding to $\alpha$ as follows.
>- Use **K-fold cross-validation** to choose $\alpha$. That is, divide the training observations into $K$ folds. For each $k = 1, \dots ,K$:
> - $(a)$ Repeat Steps $1$ and $2$ on all but the $k$th fold of the training data.
> - $(b)$ Evaluate the mean squared prediction error on the data in the left-out $k$th fold, as a function of $\alpha$.
Average the results for each value of $\alpha$, and pick $\alpha$ to minimize the average error.

> $\text{Step 4:}$ Return the subtree from Step $2$ that corresponds to the chosen value of $\alpha$; that is, re-fit the regression tree on the full data set with the chosen $\alpha$, and use this final subtree for prediction.

# Done!
Probabilistic Programming
=====
and Bayesian Methods for Hackers 
========

##### Version 0.1

`Original content created by Cam Davidson-Pilon`

`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`
___


Welcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!


```python
%pylab inline
```

    Populating the interactive namespace from numpy and matplotlib


Chapter 1
======
***

The Philosophy of Bayesian Inference
------
 
> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...

If you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice.
Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives.


### The Bayesian state of mind


Bayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. 

The Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. 

For this to be clearer, we consider an alternative interpretation of probability: the *frequentist* interpretation, known as the more *classical* version of statistics, assumes that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that, across all these realities, the frequency of occurrences defines the probability. 

Bayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion.
An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. 
There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. \n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. 
$P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. 
Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). 
$N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\")\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. 
Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:

\begin{align}
 P( A | X ) = & \frac{ P(X | A) P(A) } {P(X) } \\[5pt]
& \propto P(X | A) P(A)\;\; (\propto \text{is proportional to })
\end{align}

The above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. Bayesian inference merely uses it to connect a prior probability $P(A)$ with an updated posterior probability $P(A | X )$.

##### Example: Mandatory coin-flip example

Every statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. 

We begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data? 

Below we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).


```python
"""
The book uses a custom matplotlibrc file, which provides the unique styles for
matplotlib plots. If executing this book, and you wish to use the book's
styling, provided are two options:
    1. Overwrite your own matplotlibrc file with the rc-file provided in the
    book's styles/ dir. See http://matplotlib.org/users/customizing.html
    2. Also in the styles is bmh_matplotlibrc.json file. This can be used to
    update the styles in only this notebook. 
Try running the following code:\n\n import json\n s = json.load(open(\"../styles/bmh_matplotlibrc.json\"))\n matplotlib.rcParams.update(s)\n\n\"\"\"\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. LOOK AT PICTURE, MICHAEL!\n%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj. prior.\nfor k, N in enumerate(n_trials):\n sx = plt.subplot(len(n_trials)//2, 2, k+1)\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials)-1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. 
In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. 
Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? \n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...

- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.

- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. they combine the above two categories. 

### Discrete Case
If $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's start with a very useful one. We say $Z$ is *Poisson*-distributed if:

$$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots $$

$\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the *intensity* of the Poisson distribution. 

Unlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0, 1, 2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members.
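As a quick sanity check (my addition, not in the original text), the mass function above can be evaluated directly and compared against `scipy.stats.poisson`, which the rest of this chapter uses for plotting:

```python
import math
import scipy.stats as stats

def poisson_pmf(k, lam):
    # P(Z = k) computed directly from the formula above
    return lam**k * math.exp(-lam) / math.factorial(k)

# the hand-computed mass function matches scipy's implementation
for k in range(10):
    assert abs(poisson_pmf(k, 4.25) - stats.poisson.pmf(k, 4.25)) < 1e-12
```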
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\");\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. 
But unlike a Poisson variable, the exponential can take on *any* non-negative value, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvin, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\lambda$ values.

When a random variable $Z$ has an exponential distribution with parameter $\lambda$, we say *$Z$ is exponential* and write

$$Z \sim \text{Exp}(\lambda)$$

Given a specific $\lambda$, the expected value of an exponential random variable is equal to the inverse of $\lambda$, that is:

$$E[\; Z \;|\; \lambda \;] = \frac{1}{\lambda}$$


```python
a = np.linspace(0, 4, 100)
expo = stats.expon
lambda_ = [0.5, 1]

for l, c in zip(lambda_, colours):
    plt.plot(a, expo.pdf(a, scale=1./l), lw=3,
             color=c, label=r"$\lambda = %.1f$" % l)
    plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)

plt.legend()
plt.ylabel("PDF at $z$")
plt.xlabel("$z$")
plt.ylim(0, 1.2)
plt.title(r"Probability density function of an exponential random variable;"
          r" differing $\lambda$");
```


### But what is $\lambda \;$?


**This question is what motivates statistics**. In the real world, $\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\lambda$. Many different methods have been created to solve the problem of estimating $\lambda$, but since $\lambda$ is never actually observed, no one can say for certain which method is best!

Bayesian inference is concerned with *beliefs* about what $\lambda$ might be.
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. 
Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\lambda$ increases at some point during the observations. (Recall that a higher value of $\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)

How can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\tau$), the parameter $\lambda$ suddenly jumps to a higher value. So we really have two $\lambda$ parameters: one for the period before $\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:

$$
\lambda = 
\begin{cases}
\lambda_1 & \text{if } t \lt \tau \cr
\lambda_2 & \text{if } t \ge \tau
\end{cases}
$$

If, in reality, no sudden change occurred and indeed $\lambda_1 = \lambda_2$, then the $\lambda$s' posterior distributions should look about equal.

We are interested in inferring the unknown $\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\lambda$. What would be good prior probability distributions for $\lambda_1$ and $\lambda_2$? Recall that $\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\alpha$.

\begin{align}
&\lambda_1 \sim \text{Exp}( \alpha ) \\
&\lambda_2 \sim \text{Exp}( \alpha )
\end{align}

$\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters.
Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC3\n-----\n\nPyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. 
One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.\n\nWe will model the problem above using PyMC3. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC3 code is easy to read. The only novel thing should be the syntax. 
Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables.\n\n\n```python\nimport pymc3 as pm\n\nwith pm.Model() as model:\n alpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\n lambda_1 = pm.Exponential(\"lambda_1\", alpha)\n lambda_2 = pm.Exponential(\"lambda_2\", alpha)\n \n tau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data - 1)\n```\n\nIn the code above, we create the PyMC3 variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC3's *stochastic variables*, so-called because they are treated by the back end as random number generators.\n\n\n```python\nwith model:\n idx = np.arange(n_count_data) # Index\n lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`.\n\nNote that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n\n```python\nwith model:\n observation = pm.Poisson(\"obs\", lambda_, observed=count_data)\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword. \n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. 
We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```python\n### Mysterious code to be explained in Chapter 3.\nwith model:\n step = pm.Metropolis()\n trace = pm.sample(10000, tune=5000, step=step, return_inferencedata=False)\n```\n\n Multiprocess sampling (4 chains in 4 jobs)\n CompoundStep\n >Metropolis: [tau]\n >Metropolis: [lambda_2]\n >Metropolis: [lambda_1]\n\n\n\n\n

\n\n\n\n Sampling 4 chains for 5_000 tune and 10_000 draw iterations (20_000 + 40_000 draws total) took 7 seconds.\n The number of effective samples is smaller than 25% for some parameters.\n\n\n\n```python\nlambda_1_samples = trace['lambda_1']\nlambda_2_samples = trace['lambda_2']\ntau_samples = trace['tau']\n```\n\n\n```python\nlen(tau_samples)\n```\n\n\n\n\n 40000\n\n\n\n\n```python\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", density=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", density=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. 
The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? 
Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. \n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper 
left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\nprint(\"lambda_1 mean: \", lambda_1_samples.mean())\nprint(\"lambda_2 mean: \", lambda_2_samples.mean())\n```\n\n lambda_1 mean: 17.765772805654386\n lambda_2 mean: 22.722578782820666\n\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```python\n#type your code here.\nprint(\"Expected percentage increase: \", (lambda_1_samples / lambda_2_samples).mean())\n```\n\n Expected percentage increase: 0.7830307437472597\n\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45. That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\n#type your code here.\nprint(\"lambda_1 mean when tau is less than 45: \", lambda_1_samples[tau_samples < 45].mean())\n```\n\n lambda_1 mean when tau is less than 45: 17.766456411899487\n\n\n### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. 
[N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg).
- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).
- [3] Salvatier, J., Wiecki, T.V., and Fonnesbeck, C. (2016). Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55.
- [4] Lin, Jimmy, and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.
- [5] Cronin, Beau. "Why Probabilistic Programming Matters." Online posting, 24 Mar 2013.


```python
from IPython.core.display import HTML
def css_styling():
    styles = open("../styles/custom.css", "r").read()
    return HTML(styles)
css_styling()
```
**Francisco José Navarro-Brull**

*NUMERICAL COMPUTING TECHNIQUES APPLIED TO PHYSICS AND CHEMISTRY - Master's Degree in Materials Science, University of Alicante*

# OriginLab vs Python (a comparison of plots)

[OriginLab®](http://www.originlab.com/) is one of the most widely used software packages in academia thanks to its versatility and computing power. In particular, OriginLab® has a number of advantages over its *competitors* (Excel®, MATLAB®...), since it manages to bring both worlds together, letting someone used to the interface of the former carry out the most common numerical-computing tasks of the latter.
But if there is one thing that is truly useful for a researcher using OriginLab®, it is its "publication-ready" plots.

In this context, [Python](https://www.python.org/) has magnificent libraries capable of carrying out numerical-computing tasks (NumPy, SciPy) as well as producing plots (matplotlib) of quality equal or superior to OriginLab's.

If you have read carefully this far and the software mentioned so far sounds familiar, the question that will come to mind is: how is a programming language like Python going to replace OriginLab® and its Excel®-like interface?

Well, Python together with its libraries can nowadays cover the same workflow that OriginLab® does. The steps are:
1. Importing data (xlrd, NumPy, csv, pandas)
2. Data processing and computation (SciPy, NumPy)
3. Visualization (matplotlib)
4. Iterating steps 2 and 3 (IPython Notebook, Spyder)
5. Publishing results (matplotlib)

So, **why isn't everyone using Python? What is the catch?**

Well, in this whole picture Python *fails* on one point: OriginLab® requires no programming knowledge for its most basic use; Python does.

As a general rule:

* Many scientists have no programming knowledge (or only very limited knowledge)
* They have no time and want to carry out these tasks as quickly as possible
* Plotting via text commands can be frustrating
Un fan\u00e1tico de Python (u otro lenguaje) te intentar\u00e1 convencer habl\u00e1ndote de las ventajas de aprender a programar, que ciertamente son muchas, pero esto te llevar\u00e1 tiempo (que *no* tienes) y m\u00e1s problemas (cuando t\u00fa lo que buscabas era una soluci\u00f3n).


## "Use the right tool"

If you feel comfortable using OriginLab® and run into no limitations with it, keep using it and get your money's worth out of the 850 or 1800 dollars its academic or professional license costs.

If, on the other hand, you want to (and have time to):
* Automate data import and processing
* Have advanced (and free) algorithms for statistics, machine learning, artificial intelligence, computer vision... within reach
* Save on licenses and be able to use this software at whatever company you move on to
* Learn to program in one of the most versatile languages around
* Make your research reproducible, adding to your paper not only the information but also the tools to reuse your work and gain more impact
* Produce plots for your articles of quality equal or superior to Origin

Welcome to the world of Python!

## What is this Notebook about?

As a proof of concept, IPython Notebook will be used to reproduce many of the plots that OriginLab® advertises. You can take a look at the [OriginLab® Gallery](http://www.originlab.com/www/products/graphgallery.aspx) and the [matplotlib Gallery](http://matplotlib.org/gallery.html) to see how similar they can get.

IPython Notebook is an elegant format and solution that, besides code, can contain text, $\LaTeX$ formulas, videos, images and plots.
Note: the best way to work with matplotlib is to search its [gallery](http://matplotlib.org/gallery.html) for a plot similar to the result we want to achieve and use that code as a guide or working template.
\n\n## \u00bfDe qu\u00e9 _NO_ va este Notebook?\n\nPese a que se comentar\u00e1n brevemente ciertas l\u00edneas de c\u00f3digo, este notebook no pretende ser un tutorial de matplotlib y/o Python. Para una introducci\u00f3n a los mismos te recomendamos este [Curso online (gratuito) de introducci\u00f3n a Python para cient\u00edficos e ingenieros](http://cacheme.org/curso-online-python-cientifico-ingenieros/) por [@Pybonacci](http://pybonacci.wordpress.com) y organizado por [CAChemE.org](http://cacheme.org).\n\nPor cierto, tambi\u00e9n puedes echarle un vistazo a [Avoplot](https://www.youtube.com/watch?v=_Bm8M9IwuFk), un proyecto muy interesante que pretende simiplificar la vida a muchos cient\u00edficos pero que de momento se encuentra en fase de desarrollo.\n\n## Librer\u00edas gr\u00e1ficas en Python:\n\nLos siguientes ejemplos har\u00e1n uso de matplotlib y ajustar\u00e1n el estilo de las gr\u00e1ficas para hacerlas similares las de OriginLab\u00ae. Matplotlib puede parecer viejo, est\u00e1tico y tener una configuraci\u00f3n por defecto cutre. Sin embargo, su uso es muy, muy extenso y ha demostrado ser una librer\u00eda a prueba de todo. Tal y como mencionaba [@Pybonacci](http://twitter.com/ptbonacci), es v\u00e1lido para el 99% de las personas. El 1% restante tiene varias alternativas que elegir. Muchas de ellas se basan matplotlib, lo que demuestra su robustez, como [seaborn](http://www.stanford.edu/~mwaskom/software/seaborn/), [vincent](http://vincent.readthedocs.org/en/latest/), [ggplot-py](http://blog.yhathq.com/posts/ggplot-for-python.html), [prettyplot](http://olgabot.github.io/prettyplotlib/), [plot.ly](https://plot.ly/), [bearcart](https://github.com/wrobstory/bearcart) o una de las m\u00e1s recientes e interesantes [mpld3](http://mpld3.github.io/). 
On the other hand, there are alternatives that have built a Python plotting engine from scratch, such as [Bokeh](http://bokeh.pydata.org/).

In any case, matplotlib offers the following:

* API: MATLAB-style (stateful, terse, less powerful) or object-oriented (stateless, verbose, more powerful)
* Abstraction: basically a powerful internal model of SVG (vector) objects
* Output:
  * Final static backends, whichever you may need: pdf, png, svg, eps, ps, pgf, jpeg...
  * As well as GUI backends: Tk, Agg, OSX, GTK, Qt4, WebAgg...

To learn more about matplotlib, it is worth reading [Python in the browser age](http://nbviewer.ipython.org/urls/raw.githubusercontent.com/jakevdp/OpenVisConf2014/master/Index.ipynb) by Jake VanderPlas (the original source of what is described in this last paragraph).

# Comparing plotting capabilities (Origin vs Python)

First of all, we want the plots to appear in this very notebook, so we give the following instruction:


```python
%matplotlib inline
```

Right after that, we import the libraries needed to generate and plot the results:


```python
import numpy as np
import matplotlib.pyplot as plt
```

If working with Python 2.7:


```python
from __future__ import division
```

### The power of matplotlib: results combined with $\LaTeX$

Before starting, let's look at an example of matplotlib's ability to generate "publication-ready" figures and its superb integration with $\LaTeX$. Besides plotting numerical values, matplotlib allows adding annotations and text in this format.
An example of this functionality is the following:


```python
# Comments in Python start with a hash sign;
# these lines are ignored during execution


# Define the function we want to plot (and integrate)
def func(x):
    return (x - 3) * (x - 5) * (x - 7) + 85

# Specify the integration limits
a, b = 2, 9

# Create a vector with values from 0 to 10
x = np.linspace(0, 10)

# Compute the corresponding y values
y = func(x)

# Use matplotlib (this is the object-oriented style;
# matplotlib can also be used MATLAB(R)-style)
fig, ax = plt.subplots()

# Plot the results with a red line of the given width
plt.plot(x, y, 'r', linewidth=2)

# Set the lower limit of the y axis
plt.ylim(bottom=0)

# Build the shaded region that represents the integral
ix = np.linspace(a, b)
iy = func(ix)

# Coordinates of the vertices of the polygon representing the integral
verts = [(a, 0)] + list(zip(ix, iy)) + [(b, 0)]

# Import the Polygon class from the library
from matplotlib.patches import Polygon

# Draw the polygon with a light gray fill ('0.9') and a darker gray edge ('0.5')
poly = Polygon(verts, facecolor='0.9', edgecolor='0.5')
ax.add_patch(poly)

# Add LaTeX text with the following instruction:
# plt.text(x_coordinate, y_coordinate, text, options)
plt.text(0.5 * (a + b), 30, r"$\int_a^b f(x)\mathrm{d}x$",
         horizontalalignment='center', fontsize=20)

# Add text at relative positions to label the axes
plt.figtext(0.9, 0.05, '$x$')
plt.figtext(0.1, 0.9, '$y$')

# Hide the top and right spines of the plot box
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)

# Add x-axis ticks at the integration limits
# Tick positions
ax.set_xticks((a, b))
# Tick labels
ax.set_xticklabels(('$a$', '$b$'))

# Show ticks only on the bottom x axis
ax.xaxis.set_ticks_position('bottom')

# Remove ticks from the y axis
ax.set_yticks([])

# Show the figure
plt.show()

# Save the figure as png or pdf
# For more formats and/or options, see the documentation:
# http://matplotlib.org/faq/howto_faq.html#save-multiple-plots-to-one-pdf-file

# Since we named the figure "fig" when we created it,
# we call its .savefig method

#fig.savefig(r'figuras/mi_figura.pdf')
#fig.savefig(r'figuras/mi_figura.png')
```

### Plotting error bars

Even though Excel® can display error bars on its charts, this is an important feature worth demonstrating. Let's see how we can do it with matplotlib.


```python
n_grupos = 11

valores = (2.5, 7.5, 20, 26, 16, 11, 22.5, 24, 29, 25, 20)
errores = (0.5, 1, 2, 2, 1.5, 1, 2, 2, 2, 2, 2)

fig, ax = plt.subplots()

ax.yaxis.grid()

index = np.arange(n_grupos)
ancho_banda = 0.8

opacity = 0.6
error_config = {'ecolor': '0.3'}

distancia_inicio = 0.5

coordenadas_x_barras = index + distancia_inicio

rects1 = plt.bar(coordenadas_x_barras, valores, ancho_banda,
                 alpha=opacity,
                 color='orange',
                 yerr=errores,
                 error_kw=error_config,
                 )

plt.xlabel('Bin')
plt.ylabel('Count')

etiquetas_eje_x = ('7-8', '9-10', '11-12', '13-14', '15-16',
                   '17-18', '19-20', '21-22', '23-24', '25-26', '27-28')

coordenadas_etiquetas_x = index + ancho_banda/2 + distancia_inicio

plt.xticks(coordenadas_etiquetas_x, etiquetas_eje_x)
plt.tight_layout()

# Optional, if we want to save the figures to files
#fig.savefig(r'figuras/barras_errores.pdf')
#fig.savefig(r'figuras/barras_errores.png', dpi=150)

plt.show()
```

The same result reproduced with Origin [(view online)](http://cloud.originlab.com/www/resources/graph_gallery/images_galleries_new/Error_Bar_Scatter_and_Column_Plot.gif).



### Box-and-whisker plots

Let's look at something more interesting: making [box-and-whisker plots](http://es.wikipedia.org/wiki/Diagrama_de_caja) with Excel® is not so easy any more. With matplotlib, however:


```python
def fakeBootStrapper(n):
    '''
    Returns an arbitrary median and confidence intervals
    as a tuple
    '''
    if n == 1:
        med = 0.1
        CI = (-0.25, 0.25)
    else:
        med = 0.2
        CI = (-0.35, 0.50)

    return med, CI


# Seed the random number generator
np.random.seed(2)
inc = 0.1
e1 = np.random.normal(0, 1, size=(500,))
e2 = np.random.normal(0, 1, size=(500,))
e3 = np.random.normal(0, 1 + inc, size=(500,))
e4 = np.random.normal(0, 1 + 2*inc, size=(500,))

tratamientos = [e1, e2, e3, e4]
med1, CI1 = fakeBootStrapper(1)
med2, CI2 = fakeBootStrapper(2)
medianas = [None, None, med1, med2]
conf_intervalos = [None, None, CI1, CI2]

fig, ax = plt.subplots()

# Positions of the boxes
pos = np.array(range(len(tratamientos))) + 1

# Draw the box plot
bp = ax.boxplot(tratamientos, sym='k+', positions=pos,
                notch=1, bootstrap=5000,
                usermedians=medianas,
                conf_intervals=conf_intervalos)

# Axis labels
ax.set_xlabel('Tratamiento')
ax.set_ylabel('Respuesta')

# Style of the whiskers and outliers
plt.setp(bp['whiskers'], color='k', linestyle='-')
plt.setp(bp['fliers'], markersize=5.0)

# Optional, if we want to save the figures to files
# fig.savefig(r'figuras/diagra-cajas.pdf')
# fig.savefig(r'figuras/diagra-cajas.png', dpi=150)

plt.show()
```

The same result reproduced with Origin [(view online)](http://cloud.originlab.com/www/resources/graph_gallery/images_galleries_new/Box_Width_by_Variable.png).



### Scatter matrix


```python
# The easiest way to build this visualization is with the
# pandas library, which lets us work in a similar fashion to R

import pandas as pd

# Read the data (it must be in this notebook's folder)
# http://en.wikipedia.org/wiki/Iris_flower_data_set

iris = pd.read_csv("data/iris.csv")

# Build the dataframe
df = pd.DataFrame(iris, columns=['Longitudo-sepalo', 'Anchura-sepalo',
                                 'Longitud-petalo', 'Anchura-petalo'])

# Compute and plot the results
# (pd.scatter_matrix in very old pandas versions)
pd.plotting.scatter_matrix(df, alpha=0.6, figsize=(10, 10))

# Optional, if we want to save the figures to files
#plt.savefig(r'figuras/matriz-dispersion.pdf')
#plt.savefig(r'figuras/matriz-dispersion.png', dpi=150)

plt.show()
```

The same result reproduced with Origin [(view online)](http://cloud.originlab.com/www/resources/graph_gallery/images_galleries/Scatter_Matrix_1127.png).



### Polar plots:


```python
N = 50
radio = 2 * np.random.rand(N)
theta = 2 * np.pi * np.random.rand(N)
area = 75 * radio**2 * np.random.rand(N)
colores = np.random.rand(N)

# Create a figure with polar coordinates
ax = plt.subplot(111, polar=True)

# Draw a scatter plot
c = plt.scatter(theta, radio, c=colores,
                s=area, alpha=0.70)

# Optional, if we want to save the figures to files
#plt.savefig(r'figuras/coordenadas-polares.pdf')
#plt.savefig(r'figuras/coordenadas-polares.png', dpi=150)

plt.show()
```

The same result reproduced with Origin:



### Contour plots:

Another type of plot where Excel® shows its limitations is the contour plot.
Let's see how to do it with matplotlib:


```python
# Function we want to plot
def g(x, y):
    return -(np.cos(x) * np.sin(y))**3

# Generate two vectors for the grid
# (a high number of points gives smooth curves)
x = np.linspace(-2, 4, 1000)
y = np.linspace(-2, 3, 1000)

# Build the mesh
xx, yy = np.meshgrid(x, y)

# Evaluate the function at every point of the mesh
zz = g(xx, yy)

# Adjust the figure size with figsize
fig, axes = plt.subplots(figsize=(10, 8))

# Assign the output to the variable cs so we can create the colorbar later
cs = axes.contourf(xx, yy, zz, np.linspace(-1, 2, 100), cmap=plt.cm.BrBG)

# With `colors='k'` we draw all the lines in black
# Assign the output to the variable cs2 so we can create the labels
cs2 = axes.contour(xx, yy, zz, np.linspace(-1, 2, 9), colors='k')

# Create the labels on top of the lines
axes.clabel(cs2)

# Create the colorbar (note that it belongs to fig)
fig.colorbar(cs)

# Set the axis labels
axes.set_xlabel("Eje x")
axes.set_ylabel("Eje y")
axes.set_title(u"Función representada: $g(x, y) = - (\cos{x} \, \sin{y})^3$", fontsize=20)


# Optional, if we want to save the figures to files
#fig.savefig(r'figuras/diagrama-contorno.pdf')
#fig.savefig(r'figuras/diagrama-contorno.png', dpi=150)

# Show the figure
plt.show()

```

The same result reproduced with Origin [(view online)](http://cloud.originlab.com/www/resources/graph_gallery/images_galleries_new/Overlay_Contour_Plot.png).



## 3D plots

One of the most eye-catching things OriginLab® advertises is the use of its tool to plot results in 3D. Admittedly, Excel® offers few tools in this area, and MATLAB® has the same problem as Python (you have to learn to program) with the added cost of an extra license.
That is why the most convenient solution is to use OriginLab®: you do not save on the license, but you avoid having to learn to program.

There is an unwritten rule that goes something like this: **_"if the only way to present your results is a 3D plot, you are probably doing something wrong"_**

Without being quite that radical: we live in a 2D world. The paper or screen where your results will be seen is 2D, so it is generally better to avoid presenting results as 3D plots. That said, for interactive exploration and visualization of results they are a very powerful tool. Note that matplotlib only started supporting 3D figures relatively recently. If more complex visualizations are required, [Mayavi](http://docs.enthought.com/mayavi/mayavi/auto/examples.html) is recommended.

Let's see how to do this with matplotlib:


```python
# Load matplotlib's 3D toolkit
from mpl_toolkits.mplot3d import axes3d

# Quick access to the colormaps
from matplotlib import cm
```

## 3D surfaces

Let's start with something simple.
We are going to plot the surface of a paraboloid.


```python
# Create the x and y vectors
x = np.arange(-2, 2, 0.05)
y = np.arange(-2, 2, 0.05)

# Generate the 2D mesh
X, Y = np.meshgrid(x, y)

# Compute Z: a (hyperbolic) paraboloid, i.e. a saddle surface
Z = (X)**2 - (Y)**2

# Figure size
fig = plt.figure(figsize=(8, 8))

# 3D projection for the figure
# (fig.gca(projection='3d') in older matplotlib versions)
ax = fig.add_subplot(projection='3d')

# Plot the results
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.jet,
                linewidth=0, antialiased=False)

# Label text
ax.set_xlabel("Eje X")
ax.set_ylabel("Eje Y")
ax.set_zlabel("Eje Z")
ax.set_title("Paraboloide", fontsize=16)

# Optional, if we want to save the figures to files
#fig.savefig(r'figuras/superficie-3D.pdf')
#fig.savefig(r'figuras/superficie-3D.png', dpi=150)

plt.show()
```

The same result reproduced with Origin [(view online)](http://cloud.originlab.com/www/resources/graph_gallery/images_galleries_new/3d_surface_from_virtual_matrix_opengl.png).



### 3D surfaces combined with contour plots


```python
fig = plt.figure(figsize=(8, 6))

ax = fig.add_subplot(projection='3d')

# Load test data
X, Y, Z = axes3d.get_test_data(0.01)

surf = ax.plot_surface(X, Y, Z, rstride=10, cstride=10, alpha=1,
                       cmap=cm.jet, linewidth=0.1)

# Projection onto the base plane
cset = ax.contourf(X, Y, Z, 25, zdir='z', offset=-100, cmap=cm.jet)

# Line projection onto the side plane
cset = ax.contour(X, Y, Z, zdir='y', offset=-30, cmap=cm.jet)

# Add a colorbar with the Z values
fig.colorbar(surf, shrink=0.5, aspect=10)

# Axis limits and labels
ax.set_xlabel('X')
ax.set_xlim(-30, 30)
ax.set_ylabel('Y')
ax.set_ylim(-30, 30)
ax.set_zlabel('Z')
ax.set_zlim(-100, 100)

ax.set_title("Generado con matplotlib", fontsize=16)

# Optional, if we want to save the figures to files
#fig.savefig(r'figuras/superficie-3D-combinado.pdf')
#fig.savefig(r'figuras/superficie-3D-combinado.png', dpi=150)

plt.show()
```

The same result reproduced with Origin [(view online)](http://cloud.originlab.com/www/resources/graph_gallery/images_galleries_new/3D_Surface_Plot.png).



### 3D waterfall


```python
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.collections import PolyCollection
from matplotlib.colors import colorConverter
import matplotlib.pyplot as plt
import numpy as np

# To create the signals we will use a Gaussian function
# and add white noise to it

def gauss_ruido(x, posicion, desviacion):
    '''Generates a Gaussian curve with white noise,
    given the input position and standard deviation'''
    y = np.exp(-(x - posicion)**2) / (2 * np.sqrt(desviacion))
    ruido = 0.01 * np.random.randn(len(x))
    senyal = y + ruido
    return senyal


fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(projection='3d')

cc = lambda arg: colorConverter.to_rgba(arg, alpha=0.6)


posicion = [50, 60, 70, 80]


verts = []
for pos in posicion:
    x = np.linspace(0, 150, 100)

    y = np.abs(gauss_ruido(x, pos, 5))

    y[0], y[-1] = 0, 0
    verts.append(list(zip(x, y)))


poly = PolyCollection(verts, facecolors=[cc('r'), cc('g'),
                                         cc('b'), cc('y')])
poly.set_alpha(0.7)

# Positions along the depth axis
zs = [0.1, 0.25, 0.50, 0.75]

# Draw the generated polygons, selecting
# the y signal as the vertical axis
ax.add_collection3d(poly, zs=zs, zdir='y')

ax.set_xlabel('Espectro')
ax.set_xlim3d(0, 150)
ax.set_ylabel('t muestra / s')
ax.set_ylim3d(0, 1)
ax.set_zlabel(u'C / mg/ml')
ax.set_zlim3d(0, 0.25)

# Viewing angle
ax.view_init(20, 280)

# Optional, if we want to save the figures to files
#fig.savefig(r'figuras/cascada-3D.pdf')
#fig.savefig(r'figuras/cascada-3D.png', dpi=150)


plt.show()
```

The same result reproduced with Origin [(view online)](http://cloud.originlab.com/www/resources/graph_gallery/images_galleries/3D_waterfall_w_4plane_2.png).



### 3D scatter plots:

For simplicity, the following example creates the data with random number functions. Nevertheless, one could also cluster the [Iris](http://en.wikipedia.org/wiki/Iris_flower_data_set) data with Python and [scikit-learn](http://claudiovz.github.io/scipy-lecture-notes-ES/packages/scikit-learn/index.html#k-means-clustering).


```python
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(projection='3d')

# Generate the points with a standard normal distribution of shape (n_points, dimensions)
# and draw them in 3D together with their projection onto the x-y plane
z_offset = 3

coordenadas_clase1 = np.array([2, 1, 5])
clase1 = 0.15 * np.random.standard_normal((50, 3)) + coordenadas_clase1
ax.plot(clase1[:, 0], clase1[:, 1], clase1[:, 2], 'ko', alpha=0.6, label='Setosa')

ax.plot(clase1[:, 0], clase1[:, 1], np.zeros_like(clase1[:, 2]) + z_offset, 'ko')

coordenadas_clase2 = np.array([3.5, 2.5, 6])
clase2 = 0.3 * np.random.standard_normal((50, 3)) + coordenadas_clase2
ax.plot(clase2[:, 0], clase2[:, 1], clase2[:, 2], 'ro', alpha=0.6, label='Versicolor')
ax.plot(clase2[:, 0], clase2[:, 1], np.zeros_like(clase2[:, 2]) + z_offset, 'ro')

coordenadas_clase3 = np.array([6, 3, 7])
clase3 = 0.4 * np.random.standard_normal((50, 3)) + coordenadas_clase3
ax.plot(clase3[:, 0], clase3[:, 1], clase3[:, 2], 'go', alpha=0.6, label='Virginica')
ax.plot(clase3[:, 0], clase3[:, 1], np.zeros_like(clase3[:, 2]) + z_offset, 'go')

# Generate the ellipsoids around each class

u1 = np.linspace(0, 2 * np.pi, 100)
v1 = np.linspace(0, np.pi, 100)

x_esfera_1 = 1 * np.outer(np.cos(u1), np.sin(v1)) + coordenadas_clase1[0]
y_esfera_1 = 0.5 * np.outer(np.sin(u1), np.sin(v1)) + coordenadas_clase1[1]
z_esfera_1 = 1.5 * np.outer(np.ones(np.size(u1)), np.cos(v1)) + coordenadas_clase1[2]
ax.plot_surface(x_esfera_1, y_esfera_1, z_esfera_1,
                rstride=10, cstride=10, linewidth=0.1, color='b', alpha=0.1)

u2 = np.linspace(0, 2 * np.pi, 100)
v2 = np.linspace(0, np.pi, 100)

x_esfera_2 = 1.5 * np.outer(np.cos(u2), np.sin(v2)) + coordenadas_clase2[0]
y_esfera_2 = 1 * np.outer(np.sin(u2), np.sin(v2)) + coordenadas_clase2[1]
z_esfera_2 = 1.8 * np.outer(np.ones(np.size(u2)), np.cos(v2)) + coordenadas_clase2[2]
ax.plot_surface(x_esfera_2, y_esfera_2, z_esfera_2,
                rstride=10, cstride=10, linewidth=0.1, color='r', alpha=0.1)

u3 = np.linspace(0, 2 * np.pi, 100)
v3 = np.linspace(0, np.pi, 100)

x_esfera_3 = 1.5 * np.outer(np.cos(u3), np.sin(v3)) + coordenadas_clase3[0]
y_esfera_3 = 1 * np.outer(np.sin(u3), np.sin(v3)) + coordenadas_clase3[1]
z_esfera_3 = 2 * np.outer(np.ones(np.size(u3)), np.cos(v3)) + coordenadas_clase3[2]
ax.plot_surface(x_esfera_3, y_esfera_3, z_esfera_3,
                rstride=10, cstride=10, linewidth=0.1, color='g', alpha=0.1)

# Set axis limits and labels
ax.set_xlim3d(0, 8)
ax.set_ylim3d(0, 4)
ax.set_zlim3d(z_offset, 9)

ax.set_xlabel(u'Longitud del pétalo (cm)')
ax.set_ylabel(u'Anchura del pétalo (cm)')
ax.set_zlabel(u'Longitud del sépalo (cm)')

# Show the legend
ax.legend()

# Optional, if we want to save the figures to files
#fig.savefig(r'figuras/dispersion-3D.pdf')
#fig.savefig(r'figuras/dispersion-3D.png', dpi=150)

plt.show()
```

The same result reproduced with Origin [(view online)](http://cloud.originlab.com/www/resources/graph_gallery/images_galleries_new/3D_Scatter_combined_with_Parametric_Surfaces.png).



## One step further

As mentioned, Python offers a wide range of libraries. One of the most interesting is [SymPy](http://sympy.org/en/index.html), which provides a powerful computer algebra system (CAS). The following code can be run without executing the previous cells, since it is self-contained.
Its purpose is to demonstrate the interactivity of the new [IPython Notebook 2.0 widgets](http://nbviewer.ipython.org/github/ipython/ipython/blob/2.x/examples/Interactive%20Widgets/Index.ipynb).


```python
# Load the interactive notebook widgets
# (in current Jupyter installations: from ipywidgets import interact)

from IPython.html.widgets import interact
from IPython.display import display
```


```python
# Load the parts of SymPy we are going to use

from sympy import Symbol, Eq, factor, init_printing
init_printing(use_latex='mathjax')
```


```python
# Create the symbolic variable x
x = Symbol('x')
```


```python
# Create the factorization function for the example
def factorit(n):
    display(Eq(x**n - 1, factor(x**n - 1)))
```

When we run the function, it returns a result rendered in $\LaTeX$


```python
factorit(12)
```


$$x^{12} - 1 = \left(x - 1\right) \left(x + 1\right) \left(x^{2} + 1\right) \left(x^{2} - x + 1\right) \left(x^{2} + x + 1\right) \left(x^{4} - x^{2} + 1\right)$$



```python
interact(factorit, n=(2, 20));
```


$$x^{11} - 1 = \left(x - 1\right) \left(x^{10} + x^{9} + x^{8} + x^{7} + x^{6} + x^{5} + x^{4} + x^{3} + x^{2} + x + 1\right)$$


It's time to do something more interesting.
Let's do a Taylor series expansion of $\frac{\sin{x}}{x}$


```python
from sympy import Symbol, sin, series, exp
x = Symbol('x')

ecuacion_ejemplo = sin(x)/x
orden = 5

series(ecuacion_ejemplo, x, n=orden)

```


$$1 - \frac{x^{2}}{6} + \frac{x^{4}}{120} + \mathcal{O}\left(x^{5}\right)$$


Let's plot the results interactively


```python
from sympy.plotting import plot

# Render figures inline in the document (not in a pop-up window)
%matplotlib inline
```


```python
def taylor_graf(n):

    e = exp(1)
    ecuacion = e**x

    # Compute the expansion, removing the error term
    ecuacion_aprox = series(ecuacion, x, n=n+1).removeO()

    p1 = plot(ecuacion, (x, -3, 3), show=False, line_color='b', label='ecuacion')
    p2 = plot(ecuacion_aprox, (x, -3, 3), show=False, line_color='r', label='aprox')

    # Make the second function part of the first plot
    p1.extend(p2)
    # Note: this workaround is needed because plotting two functions
    # together with different colors is not yet implemented
    # http://stackoverflow.com/questions/21429866/change-color-implicit-plot-sympy

    p1.show()
```

The exponential function $e^x$ (in blue) and the sum of the first n+1 terms of its Taylor series around zero (in red).


```python
interact(taylor_graf, n=(0, 10));
```

If you are viewing this IPython Notebook offline, this is the result:


Finally, SymPy can also plot mathematical expressions in 3D:


```python
from sympy import symbols
from sympy.plotting import plot3d

x, y = symbols('x y')
```


```python
plot3d(abs(x*y), (x, -5, 5), (y, -5, 5))
```

This IPython Notebook can be printed to PDF by (first) generating a $\LaTeX$ ".tex" file with [pandoc](http://johnmacfarlane.net/pandoc/).
For example, to obtain the file, just run the following command from the command line

`ipython nbconvert --to latex originlab-python.ipynb`

(in current Jupyter installations the equivalent command is `jupyter nbconvert --to latex originlab-python.ipynb`)

For more information, see the [documentation](http://ipython.org/ipython-doc/stable/notebook/nbconvert.html).


Probabilistic Programming
=====
and Bayesian Methods for Hackers
========

##### Version 0.1

`Original content created by Cam Davidson-Pilon`

`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`
___


Welcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!

Chapter 1
======
***

The Philosophy of Bayesian Inference
------

> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...

If you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain.
Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives.


### The Bayesian state of mind


Bayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians.

The Bayesian world-view interprets probability as a measure of the *believability of an event*, that is, how confident we are in that event occurring. In fact, we will see in a moment that this is the natural interpretation of probability.

To make this clearer, we consider an alternative interpretation of probability: *frequentists*, adherents of the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrence. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that across all these realities, the frequency of occurrences defines the probability.

Bayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, in an event occurring. Simply put, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of the event occurring.
Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you that candidate *A* will win?

Notice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs about events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:

- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's result. Thus we assign different probabilities to the result.

- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug.

- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs.


This philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist.

To align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.

John Maynard Keynes, a great economist and thinker, said "When the facts change, I change my mind. What do you do, sir?" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A | X)$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:

1\. $P(A): \;\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\;\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.

2\. $P(A): \;\;$ This big, complex code likely has a bug in it. $P(A | X): \;\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.

3\. $P(A):\;\;$ The patient could have any number of diseases. $P(A | X):\;\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.

It's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others).
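The re-weighting in the code-bug example can be sketched numerically. The prior and likelihood values below are illustrative assumptions (they appear nowhere in the text): suppose we start with a prior belief $P(\text{bug}) = 0.2$, buggy code still passes any single test with probability 0.5, and bug-free code always passes. Applying Bayes' rule once per passing test shows the prior being re-weighted by each new piece of evidence:

```python
# Illustrative sequential Bayesian update for "is my code buggy?".
# All numbers here are assumptions chosen for the sketch, not facts from the text.
p_bug = 0.2             # prior belief that the code contains a bug
p_pass_given_bug = 0.5  # a buggy implementation may still pass a given test
p_pass_given_ok = 1.0   # bug-free code always passes

for n_tests in range(1, 6):
    # P(pass) by the law of total probability
    p_pass = p_pass_given_bug * p_bug + p_pass_given_ok * (1 - p_bug)
    # Bayes' rule: P(bug | pass) = P(pass | bug) P(bug) / P(pass)
    p_bug = p_pass_given_bug * p_bug / p_pass
    # P(bug) shrinks with every passing test, but never reaches exactly zero
    print(f"after {n_tests} passing test(s): P(bug) = {p_bug:.4f}")
```

Each passing test moves the belief toward "bug-free" without ever making it certain, which is exactly the re-weighting described above.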


By introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*.


### Bayesian Inference in Practice

If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.

For example, in our debugging problem above, calling the frequentist function with the argument "My code passed all $X$ tests; is my code bug-free?" would return a *YES*. On the other hand, asking our Bayesian function "Often my code has bugs. My code passed all $X$ tests; is my code bug-free?" would return something very different: probabilities of *YES* and *NO*. The function might return:

> *YES*, with probability 0.8; *NO*, with probability 0.2

This is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *"Often my code has bugs"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences.


#### Incorporating evidence

As we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected.
For example, if your prior belief is something ridiculous, like "I expect the sun to explode today", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.

Denote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \rightarrow \infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset.

One may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:

> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is "large enough," you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were "enough" you'd already be on to the next problem for which you need more data.

### Are frequentist methods incorrect then?

**No.**

Frequentist methods are still useful or state-of-the-art in many areas.
Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\")\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to })\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. 
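To make the theorem concrete, here is a minimal numeric sketch in plain Python. It normalizes $P(X|A)P(A)$ over the two possibilities $A$ and $\sim A$; the prior and likelihood numbers are made-up illustrative values, not results from any dataset:

```python
# A minimal sketch of Bayes' Theorem with two hypotheses: A and not-A.
# All numbers below are illustrative assumptions.
def bayes_posterior(prior, likelihood_a, likelihood_not_a):
    """Return P(A | X) = P(X | A) P(A) / P(X), with P(X) expanded
    over the two possibilities A and not-A."""
    evidence = likelihood_a * prior + likelihood_not_a * (1 - prior)
    return likelihood_a * prior / evidence

# a 20% prior belief that the code is bug-free, tests that always pass
# when it is bug-free, and pass half the time even when a bug is present:
posterior = bayes_posterior(prior=0.2, likelihood_a=1.0, likelihood_not_a=0.5)
print(posterior)  # ~0.33: passing the tests raises our belief from 0.20
```

Observing further evidence is just another call to the same update, with the previous posterior fed back in as the new prior.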
Bayesian inference merely uses it to connect prior probabilities $P(A)$ with updated posterior probabilities $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example; I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data? \n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).\n\n\n```python\n\"\"\"\nThe book uses a custom matplotlibrc file, which provides the unique styles for\nmatplotlib plots. If executing this book, and you wish to use the book's\nstyling, provided are two options:\n 1. Overwrite your own matplotlibrc file with the rc-file provided in the\n book's styles/ dir. See http://matplotlib.org/users/customizing.html\n 2. Also in the styles is bmh_matplotlibrc.json file. This can be used to\n update the styles in only this notebook. Try running the following code:\n\n import json\n s = json.load(open(\"../styles/bmh_matplotlibrc.json\"))\n matplotlib.rcParams.update(s)\n\n\"\"\"\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. 
%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 1000]\np_true = 0.6\ndata = stats.bernoulli.rvs(p_true, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj. prior.\nfor k, N in enumerate(n_trials):\n sx = plt.subplot(len(n_trials)//2, 2, k+1)  # integer division: subplot expects ints\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials)-1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(p_true, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value $p = 0.6$ (the `p_true` in the code, marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason they should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). 
As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
\n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e., they are a combination of the above two categories. \n\n### Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots $$\n\n$\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. 
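The mass function above is easy to evaluate directly. The sketch below (plain Python, standard library only; $\lambda = 4.25$ is just one of the values plotted next) checks that the probabilities sum to one and that the mean recovers $\lambda$:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(Z = k) for a Poisson random variable, straight from the formula."""
    return lam**k * exp(-lam) / factorial(k)

lam = 4.25
# the support is infinite, but the tail beyond k = 100 is negligible here
probs = [poisson_pmf(k, lam) for k in range(101)]

print(sum(probs))                               # ~1.0: probabilities sum to one
print(sum(k * p for k, p in enumerate(probs)))  # ~4.25: the mean recovers lambda
```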
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\");\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. 
But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. 
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. 
Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. 
Our initial guess at $\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\frac{1}{N}\sum_{i=1}^N \;C_i \approx E[\; \lambda \; |\; \alpha ] = \frac{1}{\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\lambda_i$. Creating two exponential distributions with different $\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \tau \sim \text{DiscreteUniform(1,70) }\\\\\\\\\n& \Rightarrow P( \tau = k ) = \frac{1}{70}\n\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC3\n-----\n\nPyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. 
One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.\n\nWe will model the problem above using PyMC3. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC3 code is easy to read. The only novel thing should be the syntax. 
Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables.\n\n\n```python\nimport pymc3 as pm\nimport theano.tensor as tt\n\nwith pm.Model() as model:\n alpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\n lambda_1 = pm.Exponential(\"lambda_1\", alpha)\n lambda_2 = pm.Exponential(\"lambda_2\", alpha)\n \n tau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data - 1)\n```\n\n Applied log-transform to lambda_1 and added transformed lambda_1_log to model.\n Applied log-transform to lambda_2 and added transformed lambda_2_log to model.\n\n\nIn the code above, we create the PyMC3 variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC3's *stochastic variables*, so-called because they are treated by the back end as random number generators.\n\n\n```python\nwith model:\n idx = np.arange(n_count_data) # Index\n lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`.\n\nNote that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n\n```python\nwith model:\n observation = pm.Poisson(\"obs\", mu=lambda_, observed=count_data)\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword. \n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. 
The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```python\n### Mysterious code to be explained in Chapter 3.\nwith model:\n step = pm.Metropolis()\n trace = pm.sample(draws=10000, tune=5000,step=step)\n```\n\n [-----------------100%-----------------] 10000 of 10000 complete in 3.9 sec\n\n\n```python\ntrace['lambda_1'].size\n```\n\n\n\n\n 10000\n\n\n\n\n```python\nlambda_1_samples = trace['lambda_1']\nlambda_2_samples = trace['lambda_2']\ntau_samples = trace['tau']\n```\n\n\n```python\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in 
days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. 
By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. 
\n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. 
(In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\lambda_1$ and $\lambda_2$?\n\n\n```python\nprint('lambda_1 mean: {}'.format(lambda_1_samples.mean()))\nprint('lambda_2 mean: {}'.format(lambda_2_samples.mean()))\n```\n\n lambda_1 mean: 17.747493644\n lambda_2 mean: 22.6835346022\n\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `(lambda_2_samples - lambda_1_samples) / lambda_1_samples`. Note that this quantity is very different from `(lambda_2_samples.mean() - lambda_1_samples.mean()) / lambda_1_samples.mean()`.\n\n\n```python\nwith model:\n increase = pm.Deterministic('perc_increase', (lambda_2 - lambda_1) / lambda_1)\n\nwith model:\n step = pm.Metropolis()\n trace = pm.sample(draws=50000, tune=5000, step=step)\n```\n\n [-----------------100%-----------------] 50000 of 50000 complete in 20.6 sec\n\n\n```python\nprint('percent increase: {}'.format(trace['perc_increase'].mean() * 100))\n```\n\n percent increase: 27.8453634501\n\n\n3\\. What is the mean of $\lambda_1$ **given** that we know $\tau$ is less than 45? That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\lambda_1$ now? (You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\nless_than_45 = tau_samples < 45\nprint('Expected value of lambda_1 prior to day 45: {}'.format(lambda_1_samples[less_than_45].mean()))\n```\n\n Expected value of lambda_1 prior to day 45: 17.7647471828\n\n\n### References\n\n\n- [1] Gelman, Andrew. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg). Blog post, 31 Jul 2005. Web. 22 Jan 2013.\n- [2] Norvig, Peter. 2009. 
[The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Salvatier, J., Wiecki, T.V., and Fonnesbeck, C. (2016) Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55.\n- [4] Lin, Jimmy, and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" Online posting, 24 Mar 2013.\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n
Probabilistic Programming\n=====\nand Bayesian Methods for Hackers \n========\n\n##### Version 0.1\n\n`Original content created by Cam Davidson-Pilon`\n\n`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`\n___\n\n\nWelcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. 
Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. \n\nThe Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nFor this to be clearer, we consider an alternative interpretation of probability: *Frequentists*, practitioners of the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that, across all these realities, the frequency of occurrences defines the probability. \n\nBayesians, on the other hand, have a more intuitive approach. 
Bayesians interpret a probability as a measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you that candidate *A* will win?\n\nNotice that in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. 
\n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. \n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. 
$P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. 
Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). 
$N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead in the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\")\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using an argument similar to Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. 
Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\begin{align}\n P( A | X ) = & \frac{ P(X | A) P(A) } {P(X) } \\\\[5pt]\n& \propto P(X | A) P(A)\;\; (\propto \text{is proportional to })\n\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. Bayesian inference merely uses it to connect prior probabilities $P(A)$ with updated posterior probabilities $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is: how does our inference change as we observe more and more data? More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?\n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).\n\n\n```python\n\"\"\"\nThe book uses a custom matplotlibrc file, which provides the unique styles for\nmatplotlib plots. If executing this book, and you wish to use the book's\nstyling, provided are two options:\n 1. Overwrite your own matplotlibrc file with the rc-file provided in the\n book's styles/ dir. See http://matplotlib.org/users/customizing.html\n 2. Also in the styles is bmh_matplotlibrc.json file. This can be used to\n update the styles in only this notebook. 
Try running the following code:\n\n import json\n s = json.load(open(\"../styles/bmh_matplotlibrc.json\"))\n matplotlib.rcParams.update(s)\n\n\"\"\"\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. LOOK AT PICTURE, MICHAEL!\n%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj. prior.\nfor k, N in enumerate(n_trials):\n sx = plt.subplot(len(n_trials)//2, 2, k+1)  # integer division for Python 3\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials)-1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason they should be: recall we assumed we did not have a prior opinion of what $p$ is. 
In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. 
Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? \n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become more clear when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. they combine the above two categories. \n\n### Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots $$\n\n$\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. 
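To see the mass function in action, we can evaluate the formula above by hand and check it against `scipy.stats.poisson`. This is a quick sketch; the value $\lambda = 4.25$ is arbitrary, chosen only for illustration:

```python
from math import exp, factorial

import numpy as np
import scipy.stats as stats

def poisson_pmf(k, lam):
    # P(Z = k) = lambda^k * exp(-lambda) / k!
    return lam**k * exp(-lam) / factorial(k)

lam = 4.25
for k in [0, 1, 5, 10]:
    # the hand-written formula agrees with scipy's implementation
    assert np.isclose(poisson_pmf(k, lam), stats.poisson.pmf(k, lam))

# the probabilities over all non-negative integers sum to 1
# (truncated here at k = 100, far into the negligible tail)
print(sum(poisson_pmf(k, lam) for k in range(100)))
```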
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \sim \text{Poi}(\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\large[ \;Z\; | \; \lambda \;\large] = \lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\lambda$ values. The first thing to notice is that by increasing $\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a, a)  # bars are centered on the integers in modern matplotlib\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\lambda$ values\");\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of a continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \lambda) = \lambda e^{-\lambda z }, \;\; z\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. 
But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. 
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. 
Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. 
Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=1}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to Pyro, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: Pyro\n-----\n\n\n\n```python\nimport torch\nimport pyro\nimport pyro.distributions as dist\n```\n\n\n```python\ncount_data = torch.Tensor(count_data)\n```\n\n\n```python\ndef model(data):\n alpha = (1. 
/ data.mean())\n lambda1 = pyro.sample(\"lambda_1\", dist.Exponential(rate=alpha))\n lambda2 = pyro.sample(\"lambda_2\", dist.Exponential(rate=alpha))\n\n tau = pyro.sample(\"tau\", dist.Uniform(0, 1))\n lambda1_size = (tau * data.size(0) + 1).long()\n lambda2_size = data.size(0) - lambda1_size\n lambda_ = torch.cat([lambda1.expand((lambda1_size,)),\n lambda2.expand((lambda2_size,))])\n \n with pyro.plate(\"data\", data.size(0)):\n pyro.sample(\"obs\", dist.Poisson(lambda_), obs=data)\n```\n\n\n```python\n# hmc_kernel = pyro.infer.HMC(model, jit_compile=True, ignore_jit_warnings=True) # Really Slow\nnuts_kernel = pyro.infer.NUTS(model, jit_compile=True, ignore_jit_warnings=True) # Slow move to numpyro\nposterior = pyro.infer.MCMC(nuts_kernel, num_samples=1000, warmup_steps=50) # 10000, 5000 too slow\nposterior.run(count_data)\n```\n\n Sample: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1050/1050 [01:05, 16.06it/s, step size=2.76e-02, acc. prob=0.464]\n\n\n\n```python\nlambda_1_samples = posterior.get_samples()['lambda_1']\nlambda_2_samples = posterior.get_samples()['lambda_2']\ntau_samples = posterior.get_samples()['tau']\n```\n\n\n```python\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", density=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", density=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\ntau_samples = np.array((tau_samples * count_data.size(0) + 1), 
dtype=np.int32)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. 
We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. 
\n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. 
(In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\n#type your code here.\n```\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```python\n#type your code here.\n```\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45? That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the inference. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\n#type your code here.\n```\n\n### References\n\n\n- [1] Gelman, Andrew. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg). Blog post, 31 Jul 2005.\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Salvatier J, Wiecki TV, and Fonnesbeck C. (2016) Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55.\n- [4] Lin, Jimmy and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" Online posting to Google+, 24 Mar 2013.
.\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\n\n\n\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "855d5d14788978415cc88a4bbeb74806b0852793", "size": 288428, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chapter1_Introduction/Ch1_Introduction_Pyro.ipynb", "max_stars_repo_name": "davinnovation/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_stars_repo_head_hexsha": "7280687a4cadb2d273a41f037aeb8d4619cc160a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter1_Introduction/Ch1_Introduction_Pyro.ipynb", "max_issues_repo_name": "davinnovation/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_issues_repo_head_hexsha": "7280687a4cadb2d273a41f037aeb8d4619cc160a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter1_Introduction/Ch1_Introduction_Pyro.ipynb", "max_forks_repo_name": "davinnovation/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_forks_repo_head_hexsha": "7280687a4cadb2d273a41f037aeb8d4619cc160a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 277.8689788054, "max_line_length": 84324, "alphanum_fraction": 0.9005228341, "converted": true, "num_tokens": 10783, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
\n# CS109A Introduction to Data Science \n\n\n## Lab 3: plotting, K-NN Regression, Simple Linear Regression\n\n**Harvard University**
\n**Fall 2019**
\n**Instructors:** Pavlos Protopapas, Kevin Rader, and Chris Tanner
\n\n**Material prepared by**: David Sondak, Will Claybaugh, Pavlos Protopapas, and Eleni Kaxiras.\n\n## Extended Edition\n\nSame as the one done in class with the following additions/clarifications:\n\n* I added another example to illustrate the difference between `.iloc` and `.loc` in `pandas` -- > [here](#iloc)\n* I added some notes on why we are adding a constant in our linear regression model --> [here](#constant)\n* How to run the solutions: Uncomment the following line and run the cell:\n\n```python\n# %load solutions/knn_regression.py\n```\nThis will bring up the code in the cell but WILL NOT RUN it. You need to run the cell again in order to actually run the code\n\n---\n\n\n```python\n#RUN THIS CELL \nimport requests\nfrom IPython.core.display import HTML\nstyles = requests.get(\"https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css\").text\nHTML(styles)\n```\n\n\n\n\n\n\n\n\n\n\n## Learning Goals\n\nBy the end of this lab, you should be able to:\n* Review `numpy` including 2-D arrays and understand array reshaping\n* Use `matplotlib` to make plots\n* Feel comfortable with simple linear regression\n* Feel comfortable with $k$ nearest neighbors\n\n**This lab corresponds to lectures 4 and 5 and maps on to homework 2 and beyond.**\n\n## Table of Contents\n\n#### HIGHLIGHTS FROM PRE-LAB \n\n* [1 - Review of numpy](#first-bullet)\n* [2 - Intro to matplotlib plus more ](#second-bullet)\n\n#### LAB 3 MATERIAL \n\n* [3 - Simple Linear Regression](#third-bullet)\n* [4 - Building a model with `statsmodels` and `sklearn`](#fourth-bullet)\n* [5 - Example: Simple linear regression with automobile data](#fifth-bullet)\n* [6 - $k$Nearest Neighbors](#sixth-bullet)\n\n\n```python\nimport numpy as np\nimport scipy as sp\nimport matplotlib as mpl\nimport matplotlib.cm as cm\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport time\npd.set_option('display.width', 500)\npd.set_option('display.max_columns', 
100)\npd.set_option('display.notebook_repr_html', True)\n#import seaborn as sns\nimport warnings\nwarnings.filterwarnings('ignore')\n# Displays the plots for us.\n%matplotlib inline\n```\n\n\n```python\n# Use this as a variable to load solutions: %load PATHTOSOLUTIONS/exercise1.py. It will be substituted in the code\n# so do not worry if it disappears after you run the cell.\nPATHTOSOLUTIONS = 'solutions'\n```\n\n\n## 1 - Review of the `numpy` Python library\n\nIn lab1 we learned about the `numpy` library [(documentation)](http://www.numpy.org/) and its fast array structure, called the `numpy array`. \n\n\n```python\n# import numpy\nimport numpy as np\n```\n\n\n```python\n# make an array\nmy_array = np.array([1,4,9,16])\nmy_array\n```\n\n\n```python\nprint(f'Size of my array: {my_array.size}, or length of my array: {len(my_array)}')\nprint (f'Shape of my array: {my_array.shape}')\n```\n\n#### Notice the way the shape appears in numpy arrays\n\n- For a 1D array, .shape returns a tuple with 1 element (n,)\n- For a 2D array, .shape returns a tuple with 2 elements (n,m)\n- For a 3D array, .shape returns a tuple with 3 elements (n,m,p)\n\n\n```python\n# How to reshape a 1D array to a 2D\nmy_array.reshape(-1,2)\n```\n\nNumpy arrays support the same operations as lists! Below we slice and iterate. \n\n\n```python\nprint(\"array[2:4]:\", my_array[2:4]) # A slice of the array\n\n# Iterate over the array\nfor ele in my_array:\n print(\"element:\", ele)\n```\n\nRemember `numpy` gains a lot of its efficiency from being **strongly typed** (all elements are of the same type, such as integer or floating point). 
If the elements of an array are of a different type, `numpy` will force them into the same type (the longest in terms of bytes)\n\n\n```python\nmixed = np.array([1, 2.3, 'eleni', True])\nprint(type(1), type(2.3), type('eleni'), type(True))\nmixed # all elements will become strings\n```\n\nNext, we push ahead to two-dimensional arrays and begin to dive into some of the deeper aspects of `numpy`.\n\n\n```python\n# create a 2d-array by handing a list of lists\nmy_array2d = np.array([ [1, 2, 3, 4], \n [5, 6, 7, 8], \n [9, 10, 11, 12] \n])\n\nmy_array2d\n```\n\n### Array Slicing (a reminder...)\n\nNumpy arrays can be sliced, and can be iterated over with loops. Below is a schematic illustrating slicing two-dimensional arrays. \n\n \n \nNotice that the list slicing syntax still works! \n`array[2:,3]` says \"in the array, get rows 2 through the end, column 3]\" \n`array[3,:]` says \"in the array, get row 3, all columns\".\n\n\n### Pandas Slicing (a reminder...)\n\n`.iloc` is by position (position is unique), `.loc` is by label (label is not unique)\n\n\n```python\n# import cast dataframe \ncast = pd.read_csv('../data/cast.csv', encoding='utf_8')\ncast.head()\n```\n\n\n```python\n# get me rows 10 to 13 (python slicing style : exclusive of end) \ncast.iloc[10:13]\n```\n\n\n```python\n# get me columns 0 to 2 but all rows - use head()\ncast.iloc[:, 0:2].head()\n```\n\n\n```python\n# get me rows 10 to 13 AND only columns 0 to 2\ncast.iloc[10:13, 0:2]\n```\n\n\n```python\n# COMPARE: get me rows 10 to 13 (pandas slicing style : inclusive of end)\ncast.loc[10:13]\n```\n\n\n```python\n# give me columns 'year' and 'type' by label but only for rows 5 to 10\ncast.loc[5:10,['year','type']]\n```\n\n#### Another example of positioning with `.iloc` and `loc`\n\nLook at the following data frame. It is a bad example because we have duplicate values for the index but that is legal in pandas. 
It's just a bad practice and we are doing it to illustrate the difference between positioning with `.iloc` and `loc`. To keep rows unique, though, internally, `pandas` has its own index which in this dataframe runs from `0` to `2`.\n\n\n```python\nindex = ['A', 'Z', 'A']\nfamous = pd.DataFrame({'Elton': ['singer', 'Candle in the wind', 'male'],\n 'Maraie': ['actress' , 'Do not know', 'female'],\n 'num': np.random.randn(3)}, index=index)\nfamous\n```\n\n\n```python\n# accessing elements by label can bring up duplicates!!\nfamous.loc['A'] # since we want all rows is the same as famous.loc['A',:]\n```\n\n\n```python\n# accessing elements by position is unique - brings up only one row\nfamous.iloc[1]\n```\n\n\n## 2 - Plotting with matplotlib and beyond\n
\n \n\n`matplotlib` is a very powerful `python` library for making scientific plots. \n\nWe will not focus too much on the internal aspects of `matplotlib` in today's lab. There are many excellent tutorials out there for `matplotlib`. For example,\n* [`matplotlib` homepage](https://matplotlib.org/)\n* [`matplotlib` tutorial](https://github.com/matplotlib/AnatomyOfMatplotlib)\n\nConveying your findings convincingly is an absolutely crucial part of any analysis. Therefore, you must be able to write well and make compelling visuals. Creating informative visuals is an involved process and we won't cover that in this lab. However, part of creating informative data visualizations means generating *readable* figures. If people can't read your figures or have a difficult time interpreting them, they won't understand the results of your work. Here are some non-negotiable commandments for any plot:\n* Label $x$ and $y$ axes\n* Axes labels should be informative\n* Axes labels should be large enough to read\n* Make tick labels large enough\n* Include a legend if necessary\n* Include a title if necessary\n* Use appropriate line widths\n* Use different line styles for different lines on the plot\n* Use different markers for different lines\n\nThere are other important elements, but that list should get you started on your way.\n\nWe will work with `matplotlib` and `seaborn` for plotting in this class. `matplotlib` is a very powerful `python` library for making scientific plots. `seaborn` is a little more specialized in that it was developed for statistical data visualization. We will cover some `seaborn` later in class. 
In the meantime you can look at the [seaborn documentation](https://seaborn.pydata.org)\n\nFirst, let's generate some data.\n\n#### Let's plot some functions\n\nWe will use the following three functions to make some plots:\n\n* Logistic function:\n \\begin{align*}\n f\\left(z\\right) = \\dfrac{1}{1 + be^{-az}}\n \\end{align*}\n where $a$ and $b$ are parameters.\n* Hyperbolic tangent:\n \\begin{align*}\n g\\left(z\\right) = b\\tanh\\left(az\\right) + c\n \\end{align*}\n where $a$, $b$, and $c$ are parameters.\n* Rectified Linear Unit:\n \\begin{align*}\n h\\left(z\\right) = \n \\left\\{\n \\begin{array}{lr}\n z, \\quad z > 0 \\\\\n \\epsilon z, \\quad z\\leq 0\n \\end{array}\n \\right.\n \\end{align*}\n where $\\epsilon < 0$ is a small, positive parameter.\n\nYou are given the code for the first two functions. Notice that $z$ is passed in as a `numpy` array and that the functions are returned as `numpy` arrays. Parameters are passed in as floats.\n\nYou should write a function to compute the rectified linear unit. 
The input should be a `numpy` array for $z$ and a positive float for $\\epsilon$.\n\n\n```python\nimport numpy as np\n\ndef logistic(z: np.ndarray, a: float, b: float) -> np.ndarray:\n \"\"\" Compute logistic function\n Inputs:\n a: exponential parameter\n b: exponential prefactor\n z: numpy array; domain\n Outputs:\n f: numpy array of floats, logistic function\n \"\"\"\n \n den = 1.0 + b * np.exp(-a * z)\n return 1.0 / den\n\ndef stretch_tanh(z: np.ndarray, a: float, b: float, c: float) -> np.ndarray:\n \"\"\" Compute stretched hyperbolic tangent\n Inputs:\n a: horizontal stretch parameter (a>1 implies a horizontal squish)\n b: vertical stretch parameter\n c: vertical shift parameter\n z: numpy array; domain\n Outputs:\n g: numpy array of floats, stretched tanh\n \"\"\"\n return b * np.tanh(a * z) + c\n\ndef relu(z: np.ndarray, eps: float = 0.01) -> np.ndarray:\n \"\"\" Compute rectificed linear unit\n Inputs:\n eps: small positive parameter\n z: numpy array; domain\n Outputs:\n h: numpy array; relu\n \"\"\"\n return np.fmax(z, eps * z)\n```\n\nNow let's make some plots. First, let's just warm up and plot the logistic function.\n\n\n```python\nx = np.linspace(-5.0, 5.0, 100) # Equally spaced grid of 100 pts between -5 and 5\n\nf = logistic(x, 1.0, 1.0) # Generate data\n```\n\n\n```python\nplt.plot(x, f)\nplt.xlabel('x')\nplt.ylabel('f')\nplt.title('Logistic Function')\nplt.grid(True)\n```\n\n#### Figures with subplots\n\nLet's start thinking about the plots as objects. We have the `figure` object which is like a matrix of smaller plots named `axes`. You can use array notation when handling it. \n\n\n```python\nfig, ax = plt.subplots(1,1) # Get figure and axes objects\n\nax.plot(x, f) # Make a plot\n\n# Create some labels\nax.set_xlabel('x')\nax.set_ylabel('f')\nax.set_title('Logistic Function')\n\n# Grid\nax.grid(True)\n```\n\nWow, it's *exactly* the same plot! Notice, however, the use of `ax.set_xlabel()` instead of `plt.xlabel()`. 
The difference is tiny, but you should be aware of it. I will use this plotting syntax from now on.\n\nWhat else do we need to do to make this figure better? Here are some options:\n* Make labels bigger!\n* Make line fatter\n* Make tick mark labels bigger\n* Make the grid less pronounced\n* Make figure bigger\n\nLet's get to it.\n\n\n```python\nfig, ax = plt.subplots(1,1, figsize=(10,6)) # Make figure bigger\n\n# Make line plot\nax.plot(x, f, lw=4)\n\n# Update ticklabel size\nax.tick_params(labelsize=24)\n\n# Make labels\nax.set_xlabel(r'$x$', fontsize=24) # Use TeX for mathematical rendering\nax.set_ylabel(r'$f(x)$', fontsize=24) # Use TeX for mathematical rendering\nax.set_title('Logistic Function', fontsize=24)\n\nax.grid(True, lw=1.5, ls='--', alpha=0.75)\n```\n\nNotice:\n* `lw` stands for `linewidth`. We could also write `ax.plot(x, f, linewidth=4)`\n* `ls` stands for `linestyle`.\n* `alpha` stands for transparency.\n\nThe only thing remaining to do is to change the $x$ limits. Clearly these should go from $-5$ to $5$.\n\n\n```python\n#fig.savefig('logistic.png')\n\n# Put this in a markdown cell and uncomment this to check what you saved.\n# \n```\n\n#### Resources\nIf you want to see all the styles available, please take a look at the documentation.\n* [Line styles](https://matplotlib.org/2.0.1/api/lines_api.html#matplotlib.lines.Line2D.set_linestyle)\n* [Marker styles](https://matplotlib.org/2.0.1/api/markers_api.html#module-matplotlib.markers)\n* [Everything you could ever want](https://matplotlib.org/2.0.1/api/lines_api.html#matplotlib.lines.Line2D.set_marker)\n\nWe haven't discussed it yet, but you can also put a legend on a figure. You'll do that in the next exercise. Here are some additional resources:\n* [Legend](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.legend.html)\n* [Grid](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.grid.html)\n\n`ax.legend(loc='best', fontsize=24);`\n\n
Exercise
\n\nDo the following:\n* Make a figure with the logistic function, hyperbolic tangent, and rectified linear unit.\n* Use different line styles for each plot\n* Put a legend on your figure\n\nHere's an example of a figure:\n\n\n\n```python\n# your code here\n\n# First get the data\nf = logistic(x, 2.0, 1.0)\ng = stretch_tanh(x, 2.0, 0.5, 0.5)\nh = relu(x)\n\nfig, ax = plt.subplots(1,1, figsize=(10,6)) # Create figure object\n\n# Make actual plots\n# (Notice the label argument!)\nax.plot(x, f, lw=4, ls='-', label=r'$L(x;1)$')\nax.plot(x, g, lw=4, ls='--', label=r'$\\tanh(2x)$')\nax.plot(x, h, lw=4, ls='-.', label=r'$relu(x; 0.01)$')\n\n# Make the tick labels readable\nax.tick_params(labelsize=24)\n\n# Set axes limits to make the scale nice\nax.set_xlim(x.min(), x.max())\nax.set_ylim(h.min(), 1.1)\n\n# Make readable labels\nax.set_xlabel(r'$x$', fontsize=24)\nax.set_ylabel(r'$h(x)$', fontsize=24)\nax.set_title('Activation Functions', fontsize=24)\n\n# Set up grid\nax.grid(True, lw=1.75, ls='--', alpha=0.75)\n\n# Put legend on figure\nax.legend(loc='best', fontsize=24);\n\nfig.savefig('../images/nice_plots.png')\n```\n\n
Exercise
\n\nThese figures look nice in the plot and it makes sense for comparison. Now let's put the 3 different figures in separate plots.\n\n* Make a separate plot for each figure and line them up on the same row.\n\n\n```python\n# your code here\n\n```\n\n\n```python\n# %load solutions/three_subplots.py\n\n```\n\n
Exercise
\n\n* Make a grid of 2 x 3 separate plots, 3 will be empty. Just plot the functions and do not worry about cosmetics. We just want you to see the functionality.\n\n\n```python\n# your code here\n\n```\n\n\n```python\n# %load solutions/six_subplots.py\n\n```\n\n\n## 3 - Simple Linear Regression\n\nLinear regression and its many extensions are a workhorse of the statistics and data science community, both in application and as a reference point for other models. Most of the major concepts in machine learning can be and often are discussed in terms of various linear regression models. Thus, this section will introduce you to building and fitting linear regression models and some of the process behind it, so that you can 1) fit models to data you encounter 2) experiment with different kinds of linear regression and observe their effects 3) see some of the technology that makes regression models work.\n\n\n### Linear regression with a toy dataset\nWe first examine a toy problem, focusing our efforts on fitting a linear model to a small dataset with three observations. Each observation consists of one predictor $x_i$ and one response $y_i$ for $i = 1, 2, 3$,\n\n\\begin{align*}\n(x , y) = \\{(x_1, y_1), (x_2, y_2), (x_3, y_3)\\}.\n\\end{align*}\n\nTo be very concrete, let's set the values of the predictors and responses.\n\n\\begin{equation*}\n(x , y) = \\{(1, 2), (2, 3), (3, 6)\\}\n\\end{equation*}\n\nThere is no line of the form $\\beta_0 + \\beta_1 x = y$ that passes through all three observations, since the data are not collinear. Thus our aim is to find the line that best fits these observations in the *least-squares sense*, as discussed in lecture.\n\n
Exercise (for home)
\n\n* Make two numpy arrays out of this data, x_train and y_train\n* Check the dimensions of these arrays\n* Try to reshape them into a different shape\n* Plot the points in a very simple scatterplot\n* Make a better scatterplot\n\n\n```python\n# your code here\n```\n\n\n```python\n# solution\nx_train = np.array([1,2,3])\ny_train = np.array([2,2,4])\ntype(x_train)\n```\n\n\n```python\nx_train.shape\n```\n\n\n```python\nx_train = x_train.reshape(3,1)\nx_train.shape\n```\n\n\n```python\n# %load solutions/simple_scatterplot.py\n# Make a simple scatterplot\nplt.scatter(x_train,y_train)\n\n# check dimensions \nprint(x_train.shape,y_train.shape)\n\n```\n\n\n```python\n# %load solutions/nice_scatterplot.py\ndef nice_scatterplot(x, y, title):\n # font size\n f_size = 18\n \n # make the figure\n fig, ax = plt.subplots(1,1, figsize=(8,5)) # Create figure object\n\n # set axes limits to make the scale nice\n ax.set_xlim(np.min(x)-1, np.max(x) + 1)\n ax.set_ylim(np.min(y)-1, np.max(y) + 1)\n\n # adjust size of tickmarks in axes\n ax.tick_params(labelsize = f_size)\n \n # remove tick labels\n ax.tick_params(labelbottom=False, bottom=False)\n \n # adjust size of axis label\n ax.set_xlabel(r'$x$', fontsize = f_size)\n ax.set_ylabel(r'$y$', fontsize = f_size)\n \n # set figure title label\n ax.set_title(title, fontsize = f_size)\n\n # you may set up grid with this \n ax.grid(True, lw=1.75, ls='--', alpha=0.15)\n\n # make actual plot (Notice the label argument!)\n #ax.scatter(x, y, label=r'$my points$')\n #ax.scatter(x, y, label='$my points$')\n ax.scatter(x, y, label=r'$my\\,points$')\n ax.legend(loc='best', fontsize = f_size);\n \n return ax\n\nnice_scatterplot(x_train, y_train, 'hello nice plot')\n\n```\n\n\n#### Formulae\nLinear regression is special among the models we study because it can be solved explicitly. 
While most other models (and even some advanced versions of linear regression) must be solved iteratively, linear regression has a formula where you can simply plug in the data.\n\nFor the single predictor case it is:\n \\begin{align}\n \\beta_1 &= \\frac{\\sum_{i=1}^n{(x_i-\\bar{x})(y_i-\\bar{y})}}{\\sum_{i=1}^n{(x_i-\\bar{x})^2}}\\\\\n \\beta_0 &= \\bar{y} - \\beta_1\\bar{x}\n \\end{align}\n \nwhere $\\bar{y}$ and $\\bar{x}$ are the mean of the y values and the mean of the x values, respectively.\n\n### Building a model from scratch\nIn this part, we will solve the equations for simple linear regression and find the best fit solution to our toy problem.\n\n\n```python\n\n```\n\nThe snippets of code below implement the linear regression equations on the observed predictors and responses, which we'll call the training data set. Let's walk through the code.\n\nWe have to reshape our arrays to 2D. We will see later why.\n\n
Exercise
\n\n* make an array with shape (2,3)\n* reshape it to a size that you want\n\n\n```python\n# your code here\n\n```\n\n\n```python\n#solution\nxx = np.array([[1,2,3],[4,6,8]])\nxxx = xx.reshape(-1,2)\nxxx.shape\n```\n\n\n```python\n# Reshape to be a proper 2D array\nx_train = x_train.reshape(x_train.shape[0], 1)\ny_train = y_train.reshape(y_train.shape[0], 1)\n\nprint(x_train.shape)\n```\n\n\n```python\n# first, compute means\ny_bar = np.mean(y_train)\nx_bar = np.mean(x_train)\n\n# build the two terms\nnumerator = np.sum( (x_train - x_bar)*(y_train - y_bar) )\ndenominator = np.sum((x_train - x_bar)**2)\n\nprint(numerator.shape, denominator.shape) #check shapes\n```\n\n* Why the empty brackets? (The numerator and denominator are scalars, as expected.)\n\n\n```python\n#slope beta1\nbeta_1 = numerator/denominator\n\n#intercept beta0\nbeta_0 = y_bar - beta_1*x_bar\n\nprint(\"The best-fit line is {0:3.2f} + {1:3.2f} * x\".format(beta_0, beta_1))\nprint(f'The best fit is {beta_0}')\n```\n\n
Exercise
\n\nTurn the code from the above cells into a function called `simple_linear_regression_fit`, that inputs the training data and returns `beta0` and `beta1`.\n\nTo do this, copy and paste the code from the above cells below and adjust the code as needed, so that the training data becomes the input and the betas become the output.\n\n```python\ndef simple_linear_regression_fit(x_train: np.ndarray, y_train: np.ndarray) -> np.ndarray:\n \n return\n```\n\nCheck your function by calling it with the training data from above and printing out the beta values.\n\n\n```python\n# Your code here\n```\n\n\n```python\n# %load solutions/simple_linear_regression_fit.py\ndef simple_linear_regression_fit(x_train: np.ndarray, y_train: np.ndarray) -> np.ndarray:\n \"\"\"\n Inputs:\n x_train: a (num observations by 1) array holding the values of the predictor variable\n y_train: a (num observations by 1) array holding the values of the response variable\n\n Returns:\n beta_vals: a (2 by 1) array holding the intercept and slope coefficients\n \"\"\"\n \n # Check input array sizes\n if len(x_train.shape) < 2:\n print(\"Reshaping features array.\")\n x_train = x_train.reshape(x_train.shape[0], 1)\n\n if len(y_train.shape) < 2:\n print(\"Reshaping observations array.\")\n y_train = y_train.reshape(y_train.shape[0], 1)\n\n # first, compute means\n y_bar = np.mean(y_train)\n x_bar = np.mean(x_train)\n\n # build the two terms\n numerator = np.sum( (x_train - x_bar)*(y_train - y_bar) )\n denominator = np.sum((x_train - x_bar)**2)\n \n # slope beta1\n beta_1 = numerator/denominator\n\n # intercept beta0\n beta_0 = y_bar - beta_1*x_bar\n\n return np.array([beta_0, beta_1])\n\n```\n\n* Let's run this function and see the coefficients\n\n\n```python\nx_train = np.array([1, 2, 3])\ny_train = np.array([2, 2, 4])\n\nbetas = simple_linear_regression_fit(x_train, y_train)\n\nbeta_0 = betas[0]\nbeta_1 = betas[1]\n\nprint(\"The best-fit line is {0:8.6f} + {1:8.6f} * x\".format(beta_0, 
beta_1))\n```\n\n
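As a quick cross-check (not part of the original lab), the same two numbers can be recovered with NumPy's built-in least-squares polynomial fit:

```python
import numpy as np

x_train = np.array([1, 2, 3])
y_train = np.array([2, 2, 4])

# np.polyfit returns coefficients with the highest degree first: [slope, intercept]
slope, intercept = np.polyfit(x_train, y_train, deg=1)
print(intercept, slope)  # intercept should be about 0.6667, slope about 1.0
```

Agreement with the hand-rolled `simple_linear_regression_fit` is a good sanity check that the formulae were implemented correctly.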
Exercise
\n\n* Do the values of `beta0` and `beta1` seem reasonable?\n* Plot the training data using a scatter plot.\n* Plot the best fit line with `beta0` and `beta1` together with the training data.\n\n\n```python\n# Your code here\n```\n\n\n```python\n# %load solutions/best_fit_scatterplot.py\nfig_scat, ax_scat = plt.subplots(1,1, figsize=(10,6))\n\n# Plot best-fit line\nx_train = np.array([[1, 2, 3]]).T\n\nbest_fit = beta_0 + beta_1 * x_train\n\nax_scat.scatter(x_train, y_train, s=300, label='Training Data')\nax_scat.plot(x_train, best_fit, ls='--', label='Best Fit Line')\n\nax_scat.set_xlabel(r'$x_{train}$')\nax_scat.set_ylabel(r'$y$');\n\n```\n\nThe values of `beta0` and `beta1` seem roughly reasonable. They capture the positive correlation. The line does appear to be trying to get as close as possible to all the points.\n\n\n## 4 - Building a model with `statsmodels` and `sklearn`\n\nNow that we can concretely fit the training data from scratch, let's learn two `python` packages to do it all for us:\n* [statsmodels](http://www.statsmodels.org/stable/regression.html) and \n* [scikit-learn (sklearn)](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).\n\nOur goal is to show how to implement simple linear regression with these packages. For an important sanity check, we compare the $\\beta$ values from `statsmodels` and `sklearn` to the $\\beta$ values that we found above with our own implementation.\n\nFor the purposes of this lab, `statsmodels` and `sklearn` do the same thing. More generally though, `statsmodels` tends to be easier for inference \\[finding the values of the slope and intercept and discussing uncertainty in those values\\], whereas `sklearn` has machine-learning algorithms and is better for prediction \\[guessing y values for a given x value\\]. 
(Note that both packages make the same guesses; it's just a question of which activity they provide more support for.)\n\n**Note:** `statsmodels` and `sklearn` are different packages! Unless we specify otherwise, you can use either one.\n\n\n### Why do we need to add a constant in our simple linear regression model? \n\nLet's say we have a data set of two observations with one predictor and one response variable each. We would then have the following two equations if we run a simple linear regression model. $$y_1=\\beta_0 + \\beta_1*x_1$$ $$y_2=\\beta_0 + \\beta_1*x_2$$
For simplicity and calculation efficiency we want to \"absorb\" the constant $\\beta_0$ into an array with $\\beta_1$ so that we have only multiplication. To do this we introduce the constant ${x}^0=1$
$$y_1=\\beta_0*{x_1}^0 + \\beta_1*x_1$$ $$y_2=\\beta_0 * {x_2}^0 + \\beta_1*x_2$$
That becomes: \n$$y_1=\\beta_0*1 + \\beta_1*x_1$$ $$y_2=\\beta_0 * 1 + \\beta_1*x_2$$
\n \nIn matrix notation: \n \n$$\n\\left [\n\\begin{array}{c}\ny_1 \\\\ y_2 \\\\\n\\end{array}\n\\right] =\n\\left [\n\\begin{array}{cc}\n1& x_1 \\\\ 1 & x_2 \\\\\n\\end{array}\n\\right] \n\\cdot\n\\left [\n\\begin{array}{c}\n\\beta_0 \\\\ \\beta_1 \\\\\n\\end{array}\n\\right]\n$$\n
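The matrix form can be checked numerically. The sketch below (an illustration, not part of the original lab) builds the design matrix for the three-point toy data, with the explicit column of ones absorbing the intercept, and solves the normal equations $X^\top X \beta = X^\top y$:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 2.0, 4.0])

# design matrix: the first column of ones absorbs the intercept (x^0 = 1)
X = np.column_stack([np.ones_like(x), x])

# least-squares solution via the normal equations X^T X beta = X^T y
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)  # beta_0 ~ 0.667, beta_1 ~ 1.0 -- the same values as the scratch solution
```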

\n \n`sklearn` adds the constant for us, whereas in `statsmodels` we need to add it explicitly using `sm.add_constant`.\n\nBelow is the code for `statsmodels`: since it does not include the column of ones in the $X$ matrix by default, we add it manually with `sm.add_constant`.\n\n\n```python\nimport statsmodels.api as sm\n```\n\n\n```python\n# create the X matrix by appending a column of ones to x_train\nX = sm.add_constant(x_train)\n\n# this is the same matrix as in our scratch problem!\nprint(X)\n\n# build the OLS model (ordinary least squares) from the training data\ntoyregr_sm = sm.OLS(y_train, X)\n\n# do the fit and save regression info (parameters, etc) in results_sm\nresults_sm = toyregr_sm.fit()\n\n# pull the beta parameters out from results_sm\nbeta0_sm = results_sm.params[0]\nbeta1_sm = results_sm.params[1]\n\nprint(f'The regression coefficients from statsmodels are: beta_0 = {beta0_sm:8.6f} and beta_1 = {beta1_sm:8.6f}')\n```\n\nBesides the beta parameters, `results_sm` contains a ton of other potentially useful information.\n\n\n```python\nimport warnings\nwarnings.filterwarnings('ignore')\nprint(results_sm.summary())\n```\n\nNow let's turn our attention to the `sklearn` library.\n\n\n```python\nfrom sklearn import linear_model\n```\n\n\n```python\n# build the least squares model\ntoyregr = linear_model.LinearRegression()\n\n# save regression info (parameters, etc) in results_skl\nresults = toyregr.fit(x_train, y_train)\n\n# pull the beta parameters out from results_skl\nbeta0_skl = toyregr.intercept_\nbeta1_skl = toyregr.coef_[0]\n\nprint(\"The regression coefficients from the sklearn package are: beta_0 = {0:8.6f} and beta_1 = {1:8.6f}\".format(beta0_skl, beta1_skl))\n```\n\nWe should feel pretty good about ourselves now, and we're ready to move on to a real problem!\n\n### The `scikit-learn` library and the shape of things\n\nBefore diving into a \"real\" problem, let's discuss more of the details of `sklearn`.\n\n`Scikit-learn` is the main `Python` 
machine learning library. It consists of many learners which can fit models to data, as well as a lot of utility functions such as `train_test_split()`. \n\nUse the following to import the library in your code:\n\n```python\nimport sklearn \n```\n\nIn `scikit-learn`, an **estimator** is a Python object that implements the methods `fit(X, y)` and `predict(T)`.\n\nLet's see the structure of `scikit-learn` needed to make these fits. `fit()` always takes two arguments:\n```python\nestimator.fit(Xtrain, ytrain)\n```\nWe will consider two estimators in this lab: `LinearRegression` and `KNeighborsRegressor`.\n\nIt is very important to understand that `Xtrain` must be in the form of a **2D array** of shape `(num_samples, num_features)`, with each row corresponding to one sample and each column corresponding to the feature values for that sample.\n\n`ytrain` on the other hand is a simple 1D array of responses. These are continuous for regression problems.\n\n### Practice with `sklearn` and a real dataset\nWe begin by loading up the `mtcars` dataset. This data was extracted from the 1974 Motor Trend US magazine, and comprises fuel consumption and 10 aspects of automobile design and performance for 32 automobiles (1973–74 models). We will load this data into a dataframe with 32 observations on 11 (numeric) variables. 
Here is an explanation of the features:\n\n- `mpg` is Miles/(US) gallon \n- `cyl` is Number of cylinders, \n- `disp` is\tDisplacement (cu.in.), \n- `hp` is\tGross horsepower, \n- `drat` is\tRear axle ratio, \n- `wt` is the Weight (1000 lbs), \n- `qsec` is 1/4 mile time,\n- `vs` is Engine (0 = V-shaped, 1 = straight), \n- `am` is Transmission (0 = automatic, 1 = manual), \n- `gear` is the Number of forward gears, \n- `carb` is\tNumber of carburetors.\n\n\n```python\nimport pandas as pd\n\n#load mtcars\ndfcars = pd.read_csv(\"../data/mtcars.csv\")\ndfcars.head()\n```\n\n\n```python\n# Fix the column title \ndfcars = dfcars.rename(columns={\"Unnamed: 0\":\"car name\"})\ndfcars.head()\n```\n\n\n```python\ndfcars.shape\n```\n\n#### Searching for values: how many cars have 4 gears?\n\n\n```python\nlen(dfcars[dfcars.gear == 4].drop_duplicates(subset='car name', keep='first'))\n```\n\nNext, let's split the dataset into a training set and test set.\n\n\n```python\n# split into training set and testing set\nfrom sklearn.model_selection import train_test_split\n\n#set random_state to get the same split every time\ntraindf, testdf = train_test_split(dfcars, test_size=0.2, random_state=42)\n```\n\n\n```python\n# testing set is around 20% of the total data; training set is around 80%\nprint(\"Shape of full dataset is: {0}\".format(dfcars.shape))\nprint(\"Shape of training dataset is: {0}\".format(traindf.shape))\nprint(\"Shape of test dataset is: {0}\".format(testdf.shape))\n```\n\nNow we have training and test data. We still need to select a predictor and a response from this dataset. Keep in mind that we need to choose the predictor and response from both the training and test set. You will do this in the exercises below. However, we provide some starter code for you to get things going.\n\n\n```python\ntraindf.head()\n```\n\n\n```python\n# Extract the response variable that we're interested in\ny_train = traindf.mpg\ny_train\n```\n\n
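The `random_state` argument above is what makes the split reproducible. Under the hood the split is essentially a seeded shuffle of the row indices; here is a numpy-only sketch of the idea (illustrative only — not sklearn's actual implementation or its exact rounding):

```python
import numpy as np

n_rows, test_frac = 32, 0.2  # mtcars has 32 rows

rng = np.random.default_rng(42)  # fixed seed -> same split every time
perm = rng.permutation(n_rows)   # shuffled row indices 0..31

n_test = int(round(n_rows * test_frac))
test_idx, train_idx = perm[:n_test], perm[n_test:]

print(len(train_idx), len(test_idx))  # 26 6
```

Every row lands in exactly one of the two index sets, which is what guarantees the train and test sets are disjoint.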
Exercise
\n\nUse slicing to get the same vector `y_train`\n\n----\n\nNow, notice the shape of `y_train`.\n\n\n```python\ny_train.shape, type(y_train)\n```\n\n### Array reshape\nThis is a 1D array, as should be the case with the **Y** array. Remember, `sklearn` requires a 2D array only for the predictor array. You will have to pay close attention to this in the exercises later. `Sklearn` doesn't care too much about the shape of `y_train`.\n\nThe reason we went through that whole process was to show you how to reshape your data into the correct format.\n\n**IMPORTANT:** Remember that your response variable `ytrain` can be a vector, but your predictor variable `xtrain` ***must*** be a 2D array!\n\n\n## 5 - Example: Simple linear regression with automobile data\nWe will now use `sklearn` to predict automobile mileage per gallon (mpg) and evaluate these predictions. We already loaded the data and split them into a training set and a test set.\n\nWe need to choose the variables that we think will be good predictors for the dependent variable `mpg`.\n\n
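The reshape requirement can be demonstrated in isolation; the numbers below are made-up illustrative values, not the real `mtcars` rows:

```python
import numpy as np

wt = np.array([2.62, 2.88, 2.32, 3.44])  # 1D predictor vector, shape (4,)

# sklearn expects predictors as a 2D array of shape (n_samples, n_features);
# -1 lets numpy infer the number of rows from the data
X = wt.reshape(-1, 1)
print(wt.shape, X.shape)  # (4,) (4, 1)
```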
Exercise in pairs
\n\n* Pick one variable to use as a predictor for simple linear regression. Discuss your reasons with the person next to you. \n* Justify your choice with some visualizations. \n* Is there a second variable you'd like to use? We're not doing multiple linear regression here, but if we were, which other variable would you include as a second predictor?\n\n\n```python\nx_wt = dfcars.wt\nx_wt.shape\n```\n\n\n```python\n# Your code here\n\n```\n\n\n```python\n# %load solutions/cars_simple_EDA.py\n```\n\n
Exercise
\n\n* Use `sklearn` to fit the training data using simple linear regression.\n* Use the model to make mpg predictions on the test set. \n* Plot the data and the prediction. \n* Print out the mean squared error for the training set and the test set and compare.\n\n\n```python\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import mean_squared_error\n\ndfcars = pd.read_csv(\"../data/mtcars.csv\")\ndfcars = dfcars.rename(columns={\"Unnamed: 0\":\"name\"})\n\ndfcars.head()\n```\n\n\n```python\ntraindf, testdf = train_test_split(dfcars, test_size=0.2, random_state=42)\n\ny_train = np.array(traindf.mpg)\nX_train = np.array(traindf.wt)\nX_train = X_train.reshape(X_train.shape[0], 1)\n```\n\n\n```python\ny_test = np.array(testdf.mpg)\nX_test = np.array(testdf.wt)\nX_test = X_test.reshape(X_test.shape[0], 1)\n```\n\n\n```python\n# Let's take another look at our data\ndfcars.head()\n```\n\n\n```python\n# And our train and test sets \ny_train.shape, X_train.shape\n```\n\n\n```python\ny_test.shape, X_test.shape\n```\n\n\n```python\n# create linear model\nregression = LinearRegression()\n\n# fit linear model\nregression.fit(X_train, y_train)\n\npredicted_y = regression.predict(X_test)\n\nr2 = regression.score(X_test, y_test)\nprint(f'R^2 = {r2:.5}')\n```\n\n\n```python\nprint(regression.score(X_train, y_train))\n\nprint(mean_squared_error(y_test, predicted_y))\nprint(mean_squared_error(y_train, regression.predict(X_train)))\n\nprint('Coefficients: \\n', regression.coef_[0], regression.intercept_)\n```\n\n\n```python\nfig, ax = plt.subplots(1,1, figsize=(10,6))\nax.plot(y_test, predicted_y, 'o')\ngrid = np.linspace(np.min(dfcars.mpg), np.max(dfcars.mpg), 100)\nax.plot(grid, grid, color=\"black\") # 45 degree line\nax.set_xlabel(\"actual y\")\nax.set_ylabel(\"predicted y\")\n\nfig1, ax1 = plt.subplots(1,1, figsize=(10,6))\nax1.plot(dfcars.wt, dfcars.mpg, 'o')\nxgrid = np.linspace(np.min(dfcars.wt), 
np.max(dfcars.wt), 100)\nax1.plot(xgrid, regression.predict(xgrid.reshape(100, 1)))\n```\n\n\n## 6 - $k$-nearest neighbors\n\nNow that you're familiar with `sklearn`, you're ready to do a KNN regression. \n\nSklearn's regressor is called `sklearn.neighbors.KNeighborsRegressor`. Its main parameter is the number of nearest neighbors (`n_neighbors`). There are other parameters such as the distance metric (by default the Minkowski metric of order 2, i.e. the Euclidean distance). For a list of all the parameters see the [Sklearn kNN Regressor Documentation](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html).\n\nLet's use $5$ nearest neighbors.\n\n\n```python\n# Import the library\nfrom sklearn.neighbors import KNeighborsRegressor\n```\n\n\n```python\n# Set number of neighbors\nk = 5\nknnreg = KNeighborsRegressor(n_neighbors=k)\n```\n\n\n```python\n# Fit the regressor - make sure your numpy arrays are the right shape\nknnreg.fit(X_train, y_train)\n\n# Evaluate the outcome on the train set using R^2\nr2_train = knnreg.score(X_train, y_train)\n\n# Print results\nprint(f'kNN model with {k} neighbors gives R^2 on the train set: {r2_train:.5}')\n```\n\n\n```python\nknnreg.predict(X_test)\n```\n\n
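What `KNeighborsRegressor` computes can be reproduced by hand for the default uniform weights: the prediction for a query point is the mean response of the $k$ training points closest to it. A numpy-only sketch (an illustration, not sklearn's actual implementation):

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k):
    """Mean of the k nearest training responses for each query point."""
    preds = []
    for x in X_query:
        # Euclidean distance from the query to every training point
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(dists)[:k]       # indices of the k closest points
        preds.append(y_train[nearest].mean()) # uniform-weight average
    return np.array(preds)

X_tr = np.array([[1.0], [2.0], [3.0], [4.0]])
y_tr = np.array([2.0, 2.0, 4.0, 6.0])

# the two nearest neighbors of 2.5 have responses 2.0 and 4.0 -> prediction 3.0
print(knn_predict(X_tr, y_tr, np.array([[2.5]]), k=2))
```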
Exercise
\n\nCalculate and print the $R^{2}$ score on the test set\n\n\n```python\n# Your code here\n```\n\nNot so good? Let's vary the number of neighbors and see what we get.\n\n\n```python\n# Make our lives easy by storing the different regressors in a dictionary\nregdict = {}\n\n# Make our lives easier by entering the k values from a list\nk_list = [1, 2, 4, 15]\n\n# Do a bunch of KNN regressions\nfor k in k_list:\n knnreg = KNeighborsRegressor(n_neighbors=k)\n knnreg.fit(X_train, y_train)\n # Store the regressors in a dictionary\n regdict[k] = knnreg \n\n# Print the dictionary to see what we have\nregdict\n```\n\nNow let's plot all the k values in the same plot.\n\n```python\nfig, ax = plt.subplots(1,1, figsize=(10,6))\n\nax.plot(dfcars.wt, dfcars.mpg, 'o', label=\"data\")\n\nxgrid = np.linspace(np.min(dfcars.wt), np.max(dfcars.wt), 100)\n\n# let's unpack the dictionary to its elements (items) which is the k and Regressor\nfor k, regressor in regdict.items():\n predictions = regressor.predict(xgrid.reshape(-1,1)) \n ax.plot(xgrid, predictions, label=\"{}-NN\".format(k))\n\nax.legend();\n```\n\n
Exercise
\n\nExplain what you see in the graph. **Hint:** Notice how the $1$-NN goes through every point on the training set but utterly fails elsewhere. \n\nLet's look at the scores on the training set.\n\n\n```python\nks = range(1, 15) # Grid of k's\nscores_train = [] # R2 scores\nfor k in ks:\n # Create KNN model\n knnreg = KNeighborsRegressor(n_neighbors=k) \n \n # Fit the model to training data\n knnreg.fit(X_train, y_train) \n \n # Calculate R^2 score\n score_train = knnreg.score(X_train, y_train) \n scores_train.append(score_train)\n\n# Plot\nfig, ax = plt.subplots(1,1, figsize=(12,8))\nax.plot(ks, scores_train,'o-')\nax.set_xlabel(r'$k$')\nax.set_ylabel(r'$R^{2}$')\n```\n\n
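The `score` method used in the loop above is the coefficient of determination, $R^2 = 1 - \sum_i (y_i - \hat{y}_i)^2 / \sum_i (y_i - \bar{y})^2$. Computing it by hand (a small illustrative sketch) makes the two reference points explicit: a model that reproduces every point exactly scores $1$, while always predicting the mean scores $0$:

```python
import numpy as np

def r2_score(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1 - ss_res / ss_tot

y = np.array([2.0, 2.0, 4.0])
print(r2_score(y, y))                          # 1.0: perfect predictions
print(r2_score(y, np.full_like(y, y.mean())))  # 0.0: predicting the mean
```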
Exercise
\n\n* Why do we get a perfect $R^2$ at k=1 for the training set?\n* Make the same plot as above on the *test* set.\n* What is the best $k$?\n\n\n```python\n# Your code here\n\n```\n\n\n```python\n# %load solutions/knn_regression.py\n```\n\n\n```python\n# solution to previous exercise\nr2_test = knnreg.score(X_test, y_test)\nprint(f'kNN model with {k} neighbors gives R^2 on the test set: {r2_test:.5}')\n```\n
\n\n# Technical information on how this blog is made\n> Using Fastpages and Github Pages to host interactive Jupyter Notebooks in blog format\n\n- toc: true \n- badges: true\n- comments: true\n- categories: [jupyter]\n- hide: true\n- permalink: /technical\n- image: images/chart-preview.png\n\n# Introduction\n\nThis blog is written using [fastpages](https://github.com/fastai/fastpages) {% cite fastpages_introduction %}, an easy-to-use (and to set up) way to convert Jupyter Notebooks to blogs, which are automatically hosted on a Github Pages website. This post serves as an introduction for potential users, as a way for me to test some things, and as a reference for some more obscure parts of the Fastpages process. Let's begin!\n\n\n\n# Why Fastpages?\n\nA big part of my thesis will be working in python, specifically in Jupyter Notebooks. These are great ways to blend python scripts with markdown and some $\\LaTeX$ features, meaning that in one cell I can mathematically derive some expression and in the next I can script it in Python. This back and forth got me thinking about using these notebooks as the foundation for my thesis, which led me to fastpages. This means that if I write proper jupyter notebooks, I will be able to:\n\n* Work on the problem at hand\n* Share the problem at hand in a more informal way\n* Copy-paste a section straight into my final report\n\nall from the same source file. Sounds great!\n\nHowever, there are some limitations to *vanilla* fastpages.\n\n\n\n# Pedantic Limitations and how to solve them\n\n## Equations\n\nUsing jupyter notebooks you can enter inline math, for example for variables like $x$ and $\\eta_{NEP}$ by using the syntax `$some expression$`, just like you would do in $\\LaTeX$ markdown files. However, a centered expression such as\n ```\n$$\nComplicated math\n$$\n```\n will not be numbered and is not referenceable. 
Thankfully Anthoine C. {% cite math_with_fastpages %} gave the solution, which is to override the default math renderer with a couple of lines of code, and that seems to do the trick.\n\n $$\\begin{equation}\\label{Schrodinger}\ni\\hbar\\frac{\\partial\\mathbf{\\Psi}(x,t)}{\\partial t}=\\left(-\\frac{\\hbar^2}{2m}\\frac{\\partial^2}{\\partial x^2}+V(x,t)\\right)\\mathbf{\\Psi}(x,t) \n \\end{equation}$$\n and as we can see in \\eqref{Schrodinger}, we have a beautifully typeset, referenced Schrödinger equation. However, as we can see from the identity matrix on $\\mathbb{R}^4$ \\eqref{I_R4}, the equation reference varies in size with the size of the equation, so maybe don't throw away your favorite $\\LaTeX$ renderer yet.\n$$\\begin{equation}\\label{I_R4}\nI = \\begin{pmatrix}1&0&0&0 \\\\ 0&1&0&0 \\\\ 0&0&1&0 \\\\ 0&0&0&1\\end{pmatrix}\n\\end{equation}$$\n\n\n## BibTeX\n\nFastpages has a built-in way to handle in-text references, but in order to minimize the manual work in copy-pasting sections of notebook to blog to thesis, I have decided to swap it out for a BibTeX-compliant renderer, as described by the fastpages team {% cite bibtex_with_fastpages %}. As you can see from this post, this works well. The big advantage is that the entire website uses a single `references.bib` file, so I can cite the same reference across multiple posts and still end up with a single reference file when I want to consolidate multiple posts into a $\\LaTeX$ thesis or a single post.\n\nOne caveat with in-text referencing is that the usual `\\cite{}` doesn't work and `{{'{'}}% cite %}` is needed instead. 
This can be changed in bulk using the find-and-replace feature of any editor, however.\n\n# Bibliography\n{% bibliography --cited %}\n\nOsnabrück University - Machine Learning (Summer Term 2016) - Prof. Dr.-Ing. G. Heidemann, Ulf Krumnack\n\n# Exercise Sheet 03: Basics of Data Mining\n\n## Introduction\n\nThis week's sheet should be solved and handed in before the end of **Sunday, April 29, 2018**. 
If you need help (and Google and other resources were not enough), feel free to contact your group's designated tutor or whichever of us you run into first. Please upload your results to your group's studip folder.\n\nThere are a lot of implementation tasks and fewer theory questions on this sheet, but don't worry: To be able to implement most of the code, you have to understand the theory.\n\nThis week's assignments make use of two packages: `numpy` and `matplotlib`. We already expected you to install those as part of sheet 1. If you did not do so, go back to those instructions or just run the following command in the `terminal`/`cmd.exe` to do so. This will also upgrade your current installation.\n\n    conda install jupyter numpy matplotlib\n\nOne note about `matplotlib`: If you run code which contains a plot like the cell below, it can sometimes take a while to execute the code and show the results. During that process the invocation count will be shown as a little asterisk (\\*) like this:\n\n    In [*]:\n\nJust be patient for a few seconds. The following cell tests if `numpy` and `matplotlib` are installed and work:\n\n\n```python\n%matplotlib notebook\nimport importlib\n\nassert importlib.util.find_spec('numpy') is not None, 'numpy not found'\nassert importlib.util.find_spec('matplotlib') is not None, 'matplotlib not found'\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfigure_intro = plt.figure('Example plot')\nplt.plot(np.random.randn(1000, 1))\nfigure_intro.canvas.draw()\n```\n\n## Assignment 0: Math recap (vector spaces) [2 Bonus Points]\n\nThis exercise is supposed to be very easy and is voluntary. There will be a similar exercise on every sheet.\nIt is intended to revise some basic mathematical notions that are assumed throughout this class and to allow you to check if you are comfortable with them.\nUsually you should have no problem answering these questions offhand, but if you feel unsure, this is a good time to look them up again. 
You are always welcome to discuss questions with the tutors or in the practice session.\nAlso, if you have a (math) topic you would like to recap, please let us know.\n\n**a)** What is a *vector space*? What is the *basis* of a vector space and what is its *dimensionality*? Can you provide examples for finite- and infinite-dimensional vector spaces?\n\nA vector space (or linear space) is a collection of vectors on which **vector addition** and **scalar multiplication** are defined and satisfy a set of axioms. A basis is a finite or infinite set of *linearly independent* vectors such that every vector of the space can be expressed as a *linear combination* of them. \n\nThe dimension of a vector space is the cardinality of its basis. An example of a finite-dimensional vector space is the *Euclidean space* $\\mathbb{R}^3$, which has the basis \n\n$$\\begin{Bmatrix}\n\\begin{pmatrix}\n1\\\\ 0\\\\ 0\n\\end{pmatrix},\n\\begin{pmatrix}\n0\\\\ 1\\\\ 0\n\\end{pmatrix},\n\\begin{pmatrix}\n0\\\\ 0\\\\ 1\n\\end{pmatrix}\n\\end{Bmatrix}$$ \n\nand thereby dimension 3. Infinite-dimensional vector spaces occur, for example, as function spaces such as *Hilbert spaces*.\n\n**b)** What is a *linear map*? What is the *image* and the *kernel* of such a map?\n\nA linear map is a function between two vector spaces that is compatible with addition and scalar multiplication, i.e. that satisfies the rules listed below.\n\n\\begin{align}\nf(\\vec{u}+\\vec{v}) &= f(\\vec{u}) + f(\\vec{v})\\\\\nf(c\\vec{u}) &= cf(\\vec{u})\n\\end{align}\n\nThe kernel and image of such a linear map can be defined as follows: \n\n\\begin{align}\nker(f) &= \\{ x \\in V: f(x) = 0 \\}\\\\\nim(f) &= \\{ w \\in W: f(x) = w, x \\in V \\}\n\\end{align}\n\nThat is, the kernel contains all vectors that $f$ maps to $0$, and the image contains all vectors in $W$ that are reached by applying $f$ to some vector in $V$.\n\n**c)** What is a *matrix*? 
What is the relation to linear maps?\n\nA matrix is an $n\\times m$ rectangular array of values. With respect to chosen bases, every linear map between finite-dimensional vector spaces can be represented by a matrix, and applying the map then corresponds to matrix multiplication. A popular application of such matrices is the rotation or scaling of 2- or 3-dimensional objects in space (computer graphics).\n\n## Assignment 1: Rosner test [5 Points]\n\nThe Rosner test is an iterative procedure to remove outliers from a data set via a z-test. In this exercise you will implement it and apply it to a sample data set.\n\n### a) Outliers\n\nFirst of all, think about why we use procedures like this and answer the following questions:\n\nWhat are causes of outliers? And what are our options to deal with them?\n\nA common cause of outliers is the imperfection of measuring technology, leading to incorrect data values. Other causes include the possibility that the data actually exhibits high variation, or that the underlying model accounts for the extreme data values while the data set *at hand* does not.\n\nThere are several options available to deal with outliers:\n* Remove them\n* Weight them according to their z-values (distance to $\\mu$ in terms of $\\sigma$)\n* Remove them and replace the missing data according to a specified procedure\n\n### b) Rosner test\n\nIn the following you find a stub for the implementation. The dataset is already generated. Now it is your turn to write the Rosner test and detect the outliers in the data.\n\n`data` is a `np.array` of `[x, y]` coordinates. 
`outliers` is a list of `[x, y]` coordinates.\n\n\n```python\n%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom math import fabs\n\n# generate dataset\ndata = list(zip(np.random.uniform(size=100), np.random.normal(size=100)))\ndata += list(zip(np.random.uniform(size=10), np.random.normal(0, 10, size=10)))\ndata = np.array(data)\noutliers = []\n\n# just to check if everything is pretty\nfig_rosner_data = plt.figure('The Dataset')\nplt.scatter(data[:,0], data[:,1], marker='x')\nplt.axis([0, 1, -20, 20])\nfig_rosner_data.canvas.draw()\n\n# Now find the outliers, add them to 'outliers', and remove them from 'data'.\nmax_iter = 100\nfor i in range(max_iter):\n    # use the median as a robust estimate of the center and the\n    # standard deviation as the scale for the z-values\n    median = np.median(data, axis=0)\n    dev = np.std(data, axis=0)\n    # find the item with the largest z-value in either coordinate\n    z = np.array([[fabs(coord[0]-median[0])/dev[0], fabs(coord[1]-median[1])/dev[1]] for coord in data])\n    maxz = np.unravel_index(np.argmax(z, axis=None), z.shape)\n    if z[maxz] > 3.5:\n        outliers.append(data[maxz[0]])\n        data = np.delete(data, maxz[0], axis=0)\n    else:\n        # no remaining point exceeds the threshold: stop early\n        break\n\n# plot results\noutliers = np.array(outliers)\nfig_rosner = plt.figure('Rosner Result')\nplt.scatter(data[:,0], data[:,1], c='b', marker='x', label='cleared data')\nplt.scatter(outliers[:,0], outliers[:,1], c='r', marker='x', label='outliers')\nplt.axis([0, 1, -20, 20])\nplt.legend(loc='lower right');\nfig_rosner.canvas.draw()\n```\n\n## Assignment 2: p-norm [5 Points]\n\nA very well-known norm is the Euclidean distance. However, it is not the only norm: it is in fact just one of many p-norms — the one with $p = 2$. 
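Before implementing the p-norm by hand, it is worth knowing that `numpy` already provides vector p-norms through `np.linalg.norm`, whose `ord` argument plays the role of $p$; this makes a handy cross-check for your own implementation (the sample vector here is made up):

```python
import numpy as np

x = np.array([3.0, -4.0])

for p in [1, 2, 3]:
    # definition of the p-norm: (sum over |x_i|^p)^(1/p)
    manual = np.sum(np.abs(x) ** p) ** (1.0 / p)
    # numpy's built-in vector norm with ord=p
    reference = np.linalg.norm(x, ord=p)
    assert abs(manual - reference) < 1e-10

print(np.linalg.norm(x, ord=2))  # the Euclidean norm of (3, -4) is 5.0
```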
In this assignment you will take a look at other p-norms and see how they behave.\n\nImplement a function `pnorm` which expects a vector $x \\in \\mathbb{R}^n$ and a scalar $p \\geq 1, p \\in \\mathbb{R}$ and returns the p-norm of $x$, which is defined as:\n\n$$||x||_p = \\left(\\sum\\limits_{i=1}^n |x_i|^p \\right)^{\\frac{1}{p}}$$\n\n*Note:* Even though the norm is only defined for $p \\geq 1$, values $0 < p < 1$ are still interesting. In that case we cannot talk about a norm anymore, as the triangle inequality ($||a|| + ||b|| \\geq ||a + b||$) does not hold. We will still take a look at some of these values, so your function should handle them as well.\n\n\n```python\nimport numpy as np\nfrom math import fsum, fabs\n\ndef pnorm(x, p):\n    \"\"\"\n    Calculates the p-norm of x.\n\n    Args:\n        x (array): the vector for which the norm is to be computed.\n        p (float): the p-value (a positive real number).\n\n    Returns:\n        The p-norm of x.\n    \"\"\"\n    if p <= 0:\n        raise ValueError(\"P-Norm: p must be a positive number\")\n\n    # allow scalars by wrapping them into a list\n    if not isinstance(x, list):\n        x = [x]\n\n    return pow(fsum([pow(fabs(a), p) for a in x]), 1.0/p)\n```\n\n\n```python\n# 1e-10 is 0.0000000001\nassert abs(pnorm(1, 2) - 1) < 1e-10, \"pnorm is incorrect for x = 1, p = 2\"\nassert abs(pnorm(2, 2) - 2) < 1e-10, \"pnorm is incorrect for x = 2, p = 2\"\nassert abs(pnorm([2, 1], 2) - np.sqrt(5)) < 1e-10, \"pnorm is incorrect for x = [2, 1], p = 2\"\nassert abs(pnorm(2, 0.5) - 2) < 1e-10, \"pnorm is incorrect for x = 2, p = 0.5\"\n```\n\nImplement another function `pdist` which expects two vectors $x_0 \\in \\mathbb{R}^n, x_1 \\in \\mathbb{R}^n$ and a scalar $p \\geq 1, p \\in \\mathbb{R}$ and returns the distance between $x_0$ and $x_1$ in the p-norm defined by $p$. 
Again handle $0 < p < 1$ as well.\n\n\n```python\nimport numpy as np\nfrom math import fabs\n\ndef pdist(x0, x1, p):\n    \"\"\"\n    Calculates the distance between x0 and x1\n    using the p-norm.\n\n    Arguments:\n        x0 (array): the first vector.\n        x1 (array): the second vector.\n        p (float): the p-value (a positive real number).\n\n    Returns:\n        The p-distance between x0 and x1.\n    \"\"\"\n    if p <= 0:\n        raise ValueError(\"P-Dist: p must be a positive number\")\n\n    # allow scalars by wrapping them into lists\n    if not isinstance(x0, list):\n        x0 = [x0]\n\n    if not isinstance(x1, list):\n        x1 = [x1]\n\n    # the p-distance is the p-norm of the difference vector\n    return pnorm([a-b for a, b in zip(x0, x1)], p)\n```\n\n\n```python\n# 1e-10 is 0.0000000001\nassert abs(pdist(1, 2, 2) - 1) < 1e-10, \"pdist is incorrect for x0 = 1, x1 = 2, p = 2\"\nassert abs(pdist(2, 5, 2) - 3) < 1e-10, \"pdist is incorrect for x0 = 2, x1 = 5, p = 2\"\nassert abs(pdist([2, 1], [1, 2], 2) - np.sqrt(2)) < 1e-10, \"pdist is incorrect for x0 = [2, 1], x1 = [1, 2], p = 2\"\nassert abs(pdist([2, 1], [0, 0], 2) - np.sqrt(5)) < 1e-10, \"pdist is incorrect for x0 = [2, 1], x1 = [0, 0], p = 2\"\nassert abs(pdist(2, 0, 0.5) - 2) < 1e-10, \"pdist is incorrect for x0 = 2, x1 = 0, p = 0.5\"\n```\n\nNow we will compare some different p-norms. Below is part of the code to plot data in nice scatter plots.\n\nYour task is to calculate the data to plot. The variable `data` is currently simply filled with zeros. Instead, fill it as follows:\n\n- Use the function `np.linspace()` to create a vector of `50` evenly distributed values between `-100` and `100` (inclusively).\n- Fill `data`: `data` is basically the Cartesian product of the vector you created before with itself, extended by each point's norm. It should have 2500 rows. Each of the 2500 rows should contain `[x, y, d]`, where `x` is the x coordinate and `y` the y coordinate of a point, and `d` the p-norm of `(x, y)`. Use either `pnorm` or `pdist` to calculate `d`.\n- Normalize the data in `data[:,2]` (i.e. 
all d-values) so that they are between 0 and 1.\n\nRun your code and take a look at your results. Darker colors mean that a value is closer to the center (0, 0) according to the p-norm used.\n\n*Hint:* To give you an idea of what `data` should look like, here is an example for three evenly distributed values between `-1` and `1` and a p-norm with `p = 2`.\n\nBefore normalization of the d-column, `data` prints as:\n\n```\n[[-1.         -1.          1.41421356]\n [-1.          0.          1.        ]\n [-1.          1.          1.41421356]\n [ 0.         -1.          1.        ]\n [ 0.          0.          0.        ]\n [ 0.          1.          1.        ]\n [ 1.         -1.          1.41421356]\n [ 1.          0.          1.        ]\n [ 1.          1.          1.41421356]]\n```\n\nAfter normalization of the d-column:\n\n```\n[[-1.         -1.          1.        ]\n [-1.          0.          0.70710678]\n [-1.          1.          1.        ]\n [ 0.         -1.          0.70710678]\n [ 0.          0.          0.        ]\n [ 0.          1.          0.70710678]\n [ 1.         -1.          1.        ]\n [ 1.          0.          0.70710678]\n [ 1.          1.          1.        ]]\n```\n\n\n```python\n%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import ColorConverter\nfrom itertools import product\n\ncolor = ColorConverter()\nfigure_norms = plt.figure('p-norm comparison')\n\n# create the linspace vector (np.linspace defaults to 50 values)\nls = np.linspace(-100, 100)\n\nassert len(ls) == 50, 'ls should be of length 50.'\nassert (min(ls), max(ls)) == (-100, 100), 'ls should range from -100 to 100, inclusively.'\n\nfor i, p in enumerate([1/8, 1/4, 1/2, 1, 1.5, 2, 4, 8, 128]):\n    # each row holds [x, y, pnorm((x, y))] for the cartesian product of ls with itself\n    data = np.array([[x[0], x[1], pnorm([x[0], x[1]], p)] for x in product(ls, ls)])\n\n    # normalize the norm column to [0, 1]\n    normalize = np.amax(data[:,2])\n    data[:,2] /= normalize\n\n    assert all(data[:,2] <= 1), 'The third column should be normalized.'\n\n    # Plot the data.\n    colors = [color.to_rgb((1, 1-a, 1-a)) for a in data[:,2]]\n    a = plt.subplot(3, 3, i + 1)\n    plt.scatter(data[:,0], data[:,1], marker='.', color=colors)\n    a.set_ylim([-100, 100])\n    a.set_xlim([-100, 100])\n    a.set_title('{:.3g}-norm'.format(p))\n    a.set_aspect('equal')\n    
plt.tight_layout()\n    figure_norms.canvas.draw()\n```\n\n## Assignment 3: Expectation Maximization [10 Points]\n\nIn this assignment you will implement the Expectation Maximization algorithm (EM) for 1D data sets.\n\nAs some parts of this exercise would require more knowledge of Python than what was already discussed in the practice sessions, we built a small number of templates for you to use. However, if you prefer to do so, you are also allowed to just go ahead and implement everything yourself! **Don't forget [task b)](#b%29-EM-and-missing-values)**!\n\n### a) Implement Expectation Maximization\n\nUse the next cell to implement your own solution or, if you want some more guidance, skip the next cell and continue the exercise at [Step 1) Load the data](#Step-1%29-Load-the-data).\n\nHere is an overview of what you have to do:\n\n**1) Load the data:**\n\nLoad the provided data set. It is stored in `em_normdistdata.txt`. We call the set $X$ and each individual data point $x \\in X$.\n\n**2) Initialize EM:**\n\nInitialize three normal distributions whose parameters will be changed iteratively by the EM to converge close to the original distributions.\n\nEach normal distribution $j$ has three parameters: $\\mu_j$ (the mean), $\\sigma_j$ (the standard deviation), and $\\alpha_j$ (the proportion of the normal distribution in the mixture, which means $\\sum\\limits_j\\alpha_j=1$).\n\nInitialize the three parameters using three random partitions $S_j$ of the data set. Calculate each $\\mu_j$ and $\\sigma_j$ and set $\\alpha_j = \\frac{|S_j|}{|X|}$.\n\n**3) Implement the expectation step:**\n\nPerform a soft classification of the data samples with the three normal distributions. That means: Calculate the likelihood that a data sample $x_i$ belongs to distribution $j$ given parameters $\\mu_j$ and $\\sigma_j$. Or in other words, what is the likelihood of $x_i$ to be drawn from $N_j(\\mu_j, \\sigma_j)$? 
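Written out, the likelihood in question is simply the normal probability density of component $j$ evaluated at the sample:

$$N_j(x_i) = \frac{1}{\sqrt{2\pi}\,\sigma_j} \exp\left(-\frac{(x_i - \mu_j)^2}{2\sigma_j^2}\right)$$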
Once you have the likelihood, weight the result by $\\alpha_j$.\n\nAs a last step normalize the results such that the likelihoods of a data sample $x_i$ sum up to $1$.\n\n**4) Implement the maximization step:**\n\nIn the maximization step each $\\mu_j$, $\\sigma_j$ and $\\alpha_j$ is updated. First calculate the new means:\n\n$$\\mu_j = \\frac{1}{\\sum\\limits_{i=1}^{|X|} p_{ij}} \\sum\\limits_{i=1}^{|X|} p_{ij}x_i$$\n\nThat means $\\mu_j$ is the weighted mean of all samples, where the weight is their likelihood of belonging to distribution $j$.\n\nThen calculate the new $\\sigma_j$. Each new $\\sigma_j$ is the standard deviation of the normal distribution with the new $\\mu_j$, so for the calculation you already use the new $\\mu_j$:\n\n$$\\sigma_j = \\sqrt{ \\frac{1}{\\sum\\limits_{i=1}^{|X|} p_{ij}} \\sum\\limits_{i=1}^{|X|} p_{ij} \\left(x_i - \\mu_j\\right)^2 }$$\n\nTo calculate the new $\\alpha_j$ for each distribution, just take the mean of $p_j$ for each normal distribution $j$.\n\n**5) Perform the complete EM and plot your results:**\n\nBuild a loop around the iterative procedure of expectation and maximization which stops when the changes in all $\\mu_j$ and $\\sigma_j$ are sufficiently small.\n\nPlot your results after each step and mark which data points belong to which normal distribution. If you don't get it to work, just plot your final solution of the distributions.\n\n\n```python\n# Free space to implement your own solution -- either use this OR use the following step by step guide.\n# You may use scipy.stats.norm.pdf for your own implementation.\n\n\n\n\n```\n\n#### Step 1) Load the data\n\n\nLoad the provided data set. It is stored in `em_normdistdata.txt`. We call the set $X$ and each individual data point $x \\in X$. 
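If you do not have the provided `em_normdistdata.txt` at hand, a stand-in data set of the same shape — 200 one-dimensional samples drawn from three normal distributions — can be generated and saved like this (the component means, standard deviations, and sizes are made up for illustration; the original file's parameters are unknown):

```python
import numpy as np

rng = np.random.default_rng(42)

# three made-up mixture components: (mean, standard deviation, number of samples)
components = [(-2.0, 0.5, 100), (0.5, 1.0, 60), (3.0, 0.3, 40)]

# draw from each component and pool the samples into one flat array
samples = np.concatenate([rng.normal(mu, sigma, size=n) for mu, sigma, n in components])
rng.shuffle(samples)

# save in the plain-text format that np.loadtxt can read back
np.savetxt('em_normdistdata.txt', samples)
print(samples.shape)  # (200,)
```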
\n\n*Hint:* Figure out how `numpy` can load text data.\n\n\n```python\nimport numpy as np\n\ndef load_data(file_name):\n    \"\"\"\n    Loads the data stored in file_name into a numpy array.\n    \"\"\"\n    return np.loadtxt(file_name)\n\nassert load_data('em_normdistdata.txt').shape == (200,), \"The data was not properly loaded.\"\n```\n\n*Optional:* The data consists of 200 data points drawn from three normal distributions. To get a feeling for the data set you can plot the data with the following cell. Change the number of bins to get a rough idea of how the three distributions might look.\n\n\n```python\n%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndata = load_data('em_normdistdata.txt')\n\nfig_data_test = plt.figure('Data overview')\nplt.hist(data, bins=5)\nfig_data_test.canvas.draw()\n```\n\n#### Step 2) Initialize EM\n\nBelow is a class definition `NormPDF` which represents the probability density function (pdf) of the normal distribution with an additional parameter $\\alpha$. 
The class is explained in the next cells.\n\n\n```python\nimport numpy as np\n\nclass NormPDF:\n    \"\"\"\n    A representation of the probability density function of the normal distribution\n    for the EM Algorithm.\n    \"\"\"\n\n    def __init__(self, mu=0, sigma=1, alpha=1):\n        \"\"\"\n        Initializes the normal distribution with mu, sigma and alpha.\n        The defaults are 0, 1, and 1 respectively.\n        \"\"\"\n        self.mu = mu\n        self.sigma = sigma\n        self.alpha = alpha\n\n    def __call__(self, x):\n        \"\"\"\n        Returns the evaluation of this normal distribution at x.\n        Does not take alpha into account!\n        \"\"\"\n        return np.exp(-(x - self.mu) ** 2 / (2 * self.sigma ** 2)) / (np.sqrt(np.pi * 2) * self.sigma)\n\n    def __repr__(self):\n        \"\"\"\n        A simple string representation of this instance.\n        \"\"\"\n        return 'NormPDF({self.mu:.2f},{self.sigma:.2f},{self.alpha:.2f})'.format(self=self)\n```\n\nThe class `NormPDF` offers several special methods: `__init__`, `__call__`, and `__repr__`. They are all special Python methods which are overloaded so they can be used in a convenient way. Note that all methods take `self` as the first parameter: this is just the Python way of passing the instance itself to the method so that it becomes possible to access its data. You can always ignore it for now and just assume that the methods only need the parameters which follow.\n\n`__init__`: This is the constructor. When a new instance of the class is created this method is used. It takes the parameters `mu`, `sigma`, and `alpha`. 
Note that if you leave out parameters, they will be set to some default values.\nSo you can create `NormPDF` instances like this:\n\n\n```python\na = NormPDF() # No parameters: mu = 0, sigma = 1, alpha = 1\nb = NormPDF(1) # mu = 1, sigma = 1, alpha = 1\nc = NormPDF(1, alpha=0.4) # skips sigma but sets alpha, thus: mu = 1, sigma = 1, alpha = 0.4\nd = NormPDF(0, 0.5) # mu = 0, sigma = 0.5, alpha = 1\ne = NormPDF(0, 0.5, 0.9) # mu = 0, sigma = 0.5, alpha = 0.9\n```\n\n`__call__`: This is a very cool feature of Python. By implementing this method one can make an instance *callable*. That basically means one can use it as if it was a function. The `NormPDF` instances can be called with an x value (or a numpy array of x values) to get the evaluation of the normal distribution at x.\n\n\n```python\nnormpdf = NormPDF()\nprint(normpdf(0))\nprint(normpdf(0.5))\nprint(normpdf(np.linspace(-2, 2, 10)))\n```\n\n 0.3989422804014327\n 0.3520653267642995\n [0.05399097 0.11897819 0.21519246 0.31944801 0.38921247 0.38921247\n 0.31944801 0.21519246 0.11897819 0.05399097]\n\n\n`__repr__`: This method will be used in Python when one calls `repr(NormPDF())`. As long as `__str__` is not implemented (which you saw in last week's sheet) `str(NormPDF())` will also use this method. 
This comes in handy for printing:\n\n\n```python\nnormpdf1 = NormPDF()\nnormpdf2 = NormPDF(1, 0.5, 0.9)\nprint(normpdf1)\nprint([normpdf1, normpdf2])\n```\n\n NormPDF(0.00,1.00,1.00)\n [NormPDF(0.00,1.00,1.00), NormPDF(1.00,0.50,0.90)]\n\n\nIt is also possible to change the values of an instance of the NormPDF:\n\n\n```python\nnormpdf1 = NormPDF()\nprint(normpdf1)\nprint(normpdf1(np.linspace(-2, 2, 10)))\n\nnormpdf1.mu = 1\nnormpdf1.sigma = 2\nnormpdf1.alpha = 0.9\nprint(normpdf1)\nprint(normpdf1(np.linspace(-2, 2, 10)))\n```\n\n NormPDF(0.00,1.00,1.00)\n [0.05399097 0.11897819 0.21519246 0.31944801 0.38921247 0.38921247\n 0.31944801 0.21519246 0.11897819 0.05399097]\n NormPDF(1.00,2.00,0.90)\n [0.0647588 0.08817395 0.11427077 0.14095594 0.16549503 0.18494385\n 0.19671986 0.19916355 0.19192205 0.17603266]\n\n\nNow that you know how the `NormPDF` class works, it is time for the implementation of the initialization function. Here is the task again:\n\nWrite a function `gaussians = initialize_EM(data, num_distributions)` to initialize the EM.\n\nEach normal distribution $j$ has three parameters: $\\mu_j$ (the mean), $\\sigma_j$ (the standard deviation), $\\alpha_j$ (the proportion of the normal distribution in the mixture, that means $\\sum\\limits_j\\alpha_j=1$).\nInitialize the three parameters using three random partitions $S_j$ of the data set. Calculate each $\\mu_j$ and $\\sigma_j$ and set $\\alpha_j = \\frac{|S_j|}{|X|}$.\n\n\n```python\ndef initialize_EM(data, num_distributions):\n \"\"\"\n Initializes the EM algorithm by calculating num_distributions NormPDFs\n from a random partitioning of data. 
I.e., the data set is randomly\n    divided into num_distributions parts, and each part is used to initialize\n    the mean, standard deviation and alpha parameter of a NormPDF object.\n\n    Args:\n        data (array): A collection of data.\n        num_distributions (int): The number of distributions to return.\n\n    Returns:\n        A list of num_distributions NormPDF objects, initialized from a\n        random partitioning of the data.\n    \"\"\"\n    gaussians = []\n    size = len(data)\n\n    # Shuffle and split data to get a 'randomized' partitioning\n    # (note that np.random.shuffle shuffles the array in place)\n    np.random.shuffle(data)\n    subsets = np.array_split(data, num_distributions)\n    # For each subset, estimate a pdf from the contained data\n    for aset in subsets:\n        gaussians.append(NormPDF(np.mean(aset), np.std(aset), len(aset)/size))\n\n    return gaussians\n\n\nnormpdfs_ = initialize_EM(np.linspace(-1, 1, 100), 2)\nassert len(normpdfs_) == 2, \"The number of initialized distributions is not correct.\"\n# 1e-10 is 0.0000000001\nassert abs(1 - sum([normpdf.alpha for normpdf in normpdfs_])) < 1e-10, \"Sum of all alphas is not 1.0!\"\n```\n\n#### Step 3) Implement the expectation step\n\nPerform a soft classification of the data samples with the normal distributions. That means: Calculate the likelihood that a data sample $x_i$ belongs to distribution $j$ given parameters $\\mu_j$ and $\\sigma_j$. Or in other words, what is the likelihood of $x_i$ to be drawn from $N_j(\\mu_j, \\sigma_j)$? 
Once you have the likelihood, weight the result by $\\alpha_j$.\n\nAs a last step normalize the results such that the likelihoods of a data sample $x_i$ sum up to $1$.\n\n*Hint:* Store the data in a different array before you normalize it, to not run into problems with partly normalized data.\n\n\n```python\nfrom math import fsum\n\ndef expectation_step(gaussians, data):\n    \"\"\"\n    Performs the expectation step of the EM.\n\n    Args:\n        gaussians (list): A list of NormPDF objects.\n        data (array): The data vector.\n\n    Returns:\n        An array of shape (len(data), len(gaussians))\n        which contains normalized likelihoods denoting to which\n        of the normal distributions each sample most likely belongs.\n    \"\"\"\n    # alpha-weighted likelihood of each sample under each gaussian\n    tmp = np.array([[g(entry)*g.alpha for g in gaussians] for entry in data])\n    expectation = np.array(tmp)\n\n    # normalize so that the likelihoods of each sample sum up to 1\n    for i in range(len(data)):\n        whole = fsum(tmp[i,:])\n        expectation[i,:] = [x/whole for x in tmp[i,:]]\n\n    return expectation\n\nassert expectation_step([NormPDF(), NormPDF()], np.linspace(-2, 2, 100)).shape == (100, 2), \"Shape is not correct!\"\n```\n\n#### Step 4) Implement the maximization step\n\nIn the maximization step each $\\mu_j$, $\\sigma_j$ and $\\alpha_j$ is updated. First calculate the new means:\n\n$$\\mu_j = \\frac{1}{\\sum\\limits_{i=1}^{|X|} p_{ij}} \\sum\\limits_{i=1}^{|X|} p_{ij}x_i$$\n\nThat means $\\mu_j$ is the weighted mean of all samples, where the weight is their likelihood of belonging to distribution $j$.\n\nThen calculate the new $\\sigma_j$. 
Each new $\\sigma_j$ is the standard deviation of the normal distribution with the new $\\mu_j$, so for the calculation you already use the new $\\mu_j$:\n\n$$\\sigma_j = \\sqrt{ \\frac{1}{\\sum\\limits_{i=1}^{|X|} p_{ij}} \\sum\\limits_{i=1}^{|X|} p_{ij} \\left(x_i - \\mu_j\\right)^2 }$$\n\nTo calculate the new $\\alpha_j$ for each distribution, just take the mean of $p_j$ for each normal distribution $j$.\n\n**Caution:** For the next step it is necessary to know how much all $\\mu$ and $\\sigma$ changed. For that, the function `maximization_step` should return a numpy array of those (absolute) changes. For example, if $\\mu_0$ changed from 0.1 to 0.15, $\\sigma_0$ from 1 to 0.9, $\\mu_1$ from 0.5 to 0.6, and $\\sigma_1$, $\\mu_2$, and $\\sigma_2$ stayed the same, we expect the function to return `np.array([0.05, 0.1, 0.1, 0, 0, 0])` (however, the order is not important).\n\n\n```python\nfrom math import sqrt, fabs, fsum\n\ndef weighted_mean(data, pij):\n    \"\"\"\n    Calculates the new weighted mean.\n\n    Args:\n        data (array): the data vector\n        pij (array): expected values for the data\n\n    Returns:\n        The new weighted mean for the distribution\n    \"\"\"\n    return fsum([x*y for x, y in zip(data, pij)])/fsum(pij)\n\ndef std_dev(mu, data, pij):\n    \"\"\"\n    Calculates the new standard deviation.\n\n    Args:\n        mu (float): mean\n        data (array): the data vector\n        pij (array): expected values for the data\n\n    Returns:\n        The new standard deviation for the distribution\n    \"\"\"\n    return sqrt(fsum([y*pow(x-mu, 2) for x, y in zip(data, pij)])/fsum(pij))\n\ndef maximization_step(gaussians, data, expectation):\n    \"\"\"\n    Performs the maximization step of the EM.\n    Modifies the gaussians by updating their mus and sigmas.\n\n    Args:\n        gaussians (list): A list of NormPDF objects.\n        data (array): The data vector.\n        expectation (array): The expectation values for each data element\n            (as computed by expectation_step()).\n\n    Returns:\n        A numpy array of absolute changes in any mu or sigma,\n        that means the 
returned array has twice as many elements as\n        the supplied list of gaussians.\n    \"\"\"\n    changes = []\n    for i in range(len(gaussians)):\n        new_mu = weighted_mean(data, expectation[:,i])\n        new_sigma = std_dev(new_mu, data, expectation[:,i])\n        # the new alpha is the mean responsibility of this gaussian\n        gaussians[i].alpha = sum(expectation[:,i])/len(data)\n\n        changes.append(fabs(gaussians[i].mu - new_mu))\n        changes.append(fabs(gaussians[i].sigma - new_sigma))\n        gaussians[i].mu = new_mu\n        gaussians[i].sigma = new_sigma\n\n    return np.array(changes)\n```\n\n**5) Perform the complete EM and plot your results:**\n\nInitialize three normal distributions whose parameters will be changed iteratively by the EM to converge close to the original distributions.\n\nBuild a loop around the iterative procedure of expectation and maximization which stops when the changes in all $\\mu_j$ and $\\sigma_j$ are sufficiently small.\n\nPlot your results after each step and mark which data points belong to which normal distribution. If you don't get it to work, just plot your final solution.\n\n*Hint:* Remember to load the data and initialize the EM before the loop.\n\n*Hint:* A function `plot_intermediate_result` to plot your result after each step is already defined in the next cell. Take a look at what arguments it takes and try to use it in your loop.\n\n*Hint:* To plot your final result, the first three images and corresponding code examples in the tutorial of [`plt.plot(...)`](http://matplotlib.org/users/pyplot_tutorial.html) should help you.\n\n*Optional:* Run the code multiple times. 
If your results are changing, use `np.random.seed(2)` in the beginning of the cell to get consistent results (any other integer will work as well, but 2 has some good results for the example solutions).\n\n\n```python\n%matplotlib notebook\nimport time\nimport itertools\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n# Sets the random seed to a fixed value to make results consistent\n# np.random.seed(2)\n\ncolors = itertools.cycle(['r', 'g', 'b', 'c', 'm', 'y', 'k'])\nfigure, axis = plt.subplots(1)\naxis.set_xlim(-5, 5)\naxis.set_ylim(-0.2, 4)\naxis.set_title('Intermediate Results')\nfinal_figure = plt.figure('Final Result')\n\ndef plot_intermediate_result(gaussians, data, mapping):\n    \"\"\"\n    Gets a list of gaussians and data input. The mapping\n    parameter is a list of indices of gaussians. Each value\n    corresponds to the data value at the same position and\n    maps this data value to the proper gaussian.\n    \"\"\"\n    x = np.linspace(-5, 5, 100)\n    if len(axis.lines):\n        for j, N in enumerate(gaussians):\n            axis.lines[j * 2].set_xdata(x)\n            axis.lines[j * 2].set_ydata(N(x))\n            axis.lines[j * 2 + 1].set_xdata(data[mapping == j])\n            axis.lines[j * 2 + 1].set_ydata([0] * len(data[mapping == j]))\n    else:\n        for j, N in enumerate(gaussians):\n            axis.plot(x, N(x), data[mapping == j], [0] * len(data[mapping == j]), 'x', color=next(colors), markersize=5)\n    figure.canvas.draw()\n    time.sleep(1.0)\n\ndef plot_final(gaussians, data, mapping):\n    x = np.linspace(-5, 5, 100)\n    for j, g in enumerate(gaussians):\n        plt.plot(x, g(x), data[mapping == j], [0] * len(data[mapping == j]), 'x', color=next(colors), markersize=5)\n    final_figure.canvas.draw()\n\n# Perform the initialization.\ndata = load_data('em_normdistdata.txt')\ngaussians = initialize_EM(data, 3)\n\n# Loop until the changes are small enough.\neps = 0.05\nchanges = [float('inf')] * 2\nexpectation = []\nwhile max(changes) > eps:\n    # E-step\n    expectation = expectation_step(gaussians, data)\n    # M-step\n    changes = 
maximization_step(gaussians, data, expectation)\n    # Optional: Calculate the parameters to update the plot and call the function to do it.\n    plot_intermediate_result(gaussians, data, np.argmax(expectation, 1))\n\n# Plot your final result and print the final parameters.\nplot_final(gaussians, data, np.argmax(expectation, 1))\n```\n\n### b) EM and missing values\n\nDescribe in your own words: How does the EM-algorithm deal with the missing value problem?\n\nIn order to deal with missing values, the EM-algorithm estimates the distribution underlying the data and can then fill up the missing values with numbers sampled from that distribution.\n\nTo find the distribution, or rather the correct parameters of that distribution, the algorithm makes an initial guess and then, in each iteration, calculates the probability for each data point to belong to a distribution. That result is used to update each distribution's parameters (in our case $\\mu$ and $\\sigma$) by recalculating them with weighted samples, the weight being each sample's probability of belonging to that distribution.\n\n# Introduction\n\nClassical mechanics is a topic which has been taught intensively over\nseveral centuries. It is, with its many variants and ways of\npresenting the educational material, normally the first **real** physics\ncourse many of us meet, and it lays the foundation for further physics\nstudies. Many of the equations and ways of reasoning about the\nunderlying laws of motion and pertinent forces shape our approaches and understanding\nof the scientific method and discourse, as well as the way we develop our insights\nand deeper understanding about physical systems.\n\nThere is a wealth of\nwell-tested (from both a physics point of view and a pedagogical\nstandpoint) exercises and problems which can be solved\nanalytically. However, many of these problems represent idealized and\nless realistic situations. The large majority of these problems are\nsolved by paper and pencil and are traditionally aimed\nat what we normally refer to as continuous models from which we may find an analytical solution. 
As a consequence,\nwhen teaching mechanics, we can seldom venture beyond an idealized case\nin order to develop our understanding and insights about the\nunderlying forces and laws of motion.\n\nOn the other hand, numerical algorithms call for approximate discrete\nmodels, and much of the development of methods for continuous models\nis nowadays being replaced by methods for discrete models in science and\nindustry, simply because **much larger classes of problems can be addressed** with discrete models, often by simpler and more\ngeneric methodologies.\n\nAs we will see below, when properly scaling the equations at hand,\ndiscrete models open up for more advanced abstractions and the possibility to\nstudy real-life systems, with the added bonus that we can explore and\ndeepen our basic understanding of various physical systems.\n\nAnalytical solutions are as important as before. In addition, such\nsolutions provide us with invaluable benchmarks and tests for our\ndiscrete models. Such benchmarks, as we will see below, allow us\nto discuss possible sources of errors and their behaviors. And\nfinally, since most of our models are based on various algorithms from\nnumerical mathematics, we have a unique opportunity to gain a deeper\nunderstanding of the mathematical approaches we are using.\n\nWith computing and data science as important elements in essentially\nall aspects of a modern society, we could then try to define Computing as\n**solving scientific problems using all possible tools, including\nsymbolic computing, computers and numerical algorithms, and analytical\npaper and pencil solutions**.\nComputing provides us with the tools to develop our own understanding of the scientific method by enhancing algorithmic thinking.\n\nThe way we will teach this course reflects\nthis definition of computing. The course contains both classical paper\nand pencil exercises as well as computational projects and exercises. 
The\nhope is that this will allow you to explore the physics of systems\ngoverned by the degrees of freedom of classical mechanics at a deeper\nlevel, and that these insights about the scientific method will help\nyou to develop a better understanding of how the underlying forces and\nequations of motion impact a given system. Furthermore, by introducing various numerical methods\nvia computational projects and exercises, we aim to develop your competences and skills in these topics.\n\n\nThese competences will enable you to\n\n* understand how algorithms are used to solve mathematical problems,\n\n* derive, verify, and implement algorithms,\n\n* understand what can go wrong with algorithms,\n\n* use these algorithms to construct reproducible scientific outcomes and to engage in science in ethical ways, and\n\n* think algorithmically for the purposes of gaining deeper insights about scientific problems.\n\nAll these elements are central for maturing and gaining a better understanding of the modern scientific process *per se*.\n\nThe power of the scientific method lies in identifying a given problem\nas a special case of an abstract class of problems, identifying\ngeneral solution methods for this class of problems, and applying a\ngeneral method to the specific problem (applying means, in the case of\ncomputing, calculations by pen and paper, symbolic computing, or\nnumerical computing by ready-made and/or self-written software). 
This generic view on problems and methods is particularly important for understanding how to apply available, generic software to solve a particular problem.

*However, verification of algorithms and understanding their limitations requires much of the classical knowledge about continuous models.*



## A well-known example to illustrate many of the above concepts

Before we venture into a reminder on Python and mechanics-relevant applications, let us briefly outline some of the abovementioned topics using an example many of you may have seen before, for example in CMSE201.
A simple algorithm for integration is the trapezoidal rule.
Integration of a function $f(x)$ over an interval $x \in [a,b]$ by the trapezoidal rule is given by the following approximation

$$
\int_a^b f(x)\, dx = \frac{h}{2}\left[f(a)+2f(a+h)+\dots+2f(b-h)+f(b)\right] + O(h^2),
$$

where $h$ is the so-called step size defined in terms of the number of integration points $N$ as $h=(b-a)/N$.
Python offers an extremely versatile programming environment, allowing for the inclusion of analytical studies in a numerical program. Here we show an example code with the **trapezoidal rule**. We also use **SymPy** to evaluate the exact value of the integral $\int_0^1 x^2\, dx = 1/3$ and compute the absolute error of the numerically evaluated integral with respect to it.
The following code for the trapezoidal rule allows you to plot the relative error by comparing with the exact result.
By increasing the number of points towards $10^8$ one arrives at a region where numerical round-off errors start to accumulate.


```python
%matplotlib inline

from math import log10
import numpy as np
from sympy import Symbol, integrate
import matplotlib.pyplot as plt

# trapezoidal rule for integrating f over [a, b] using n points
def Trapez(a, b, f, n):
    h = (b - a)/float(n)
    s = 0
    x = a
    for i in range(1, n, 1):
        x = x + h
        s = s + f(x)
    s = 0.5*(f(a) + f(b)) + s
    return h*s

# the integrand f(x) = x^2
def function(x):
    return x*x

# define integration limits
a = 0.0; b = 1.0
# find the exact result with sympy
# define x as a symbol to be used by sympy
x = Symbol('x')
exact = integrate(function(x), (x, a, b))
# set up the arrays for plotting the relative error
n = np.zeros(7); y = np.zeros(7)
# find the relative error as a function of the number of integration points
for i in range(1, 8, 1):
    npts = 10**i
    result = Trapez(a, b, function, npts)
    RelativeError = abs((exact - result)/exact)
    n[i-1] = log10(npts); y[i-1] = log10(RelativeError)
plt.plot(n, y, 'ro')
plt.xlabel('n')
plt.ylabel('Relative error')
plt.show()
```

This example shows the potential of combining numerical algorithms with symbolic calculations, allowing us to

* validate and verify our algorithms;

* include concepts like unit testing, which gives us the possibility to test several or all parts of the code;

* include validation and verification *naturally*, so that one can develop a better attitude towards what is meant by an ethically sound scientific approach;

* let the student test the mathematical error of the trapezoidal-rule algorithm by changing the number of integration points; the students get **trained from day one to think error analysis**;

* keep exploring similar examples with a Jupyter notebook and turn them in as your own notebooks.

In this process we can easily bake in
1. How to structure a code in terms of functions

2. How to make a module

3. How to read input data flexibly from the command line

4. How to create graphical/web user interfaces

5. How to write unit tests (test functions or doctests)

6. How to refactor code in terms of classes (instead of functions only)

7. How to conduct and automate large-scale numerical experiments

8. How to write scientific reports in various formats (LaTeX, HTML)

The conventions and techniques outlined here will save you a lot of time when you incrementally extend software over time from simpler to more complicated problems. In particular, you will benefit from many good habits:
1. New code is added in a modular fashion to a library (modules)

2. Programs are run through convenient user interfaces

3. It takes one quick command to let all your code undergo heavy testing

4. Tedious manual work with running programs is automated

5. Your scientific investigations are reproducible, and scientific reports with top-quality typesetting are produced both for paper and electronic devices.
```python
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)

# Toggle cell visibility

from IPython.display import HTML
tag = HTML('''
Toggle cell visibility here.''')
display(tag)

# Hide the code completely

# from IPython.display import HTML
# tag = HTML('''''')
# display(tag)
```

## Internal stability

The concept of stability captures the behavior of the evolution of a system's state when the system is pushed out of an equilibrium state: stability describes whether or not the state evolution that follows a perturbation diverges from the equilibrium point.

### Definition
Given a time-invariant dynamical system described by the state vector $x(t)\in \mathbb{R}^n$, an equilibrium point $x_e$, an initial state $x_0$ and an initial time $t_0$, if

$$
\forall \, \epsilon \in \mathbb{R}, \, \epsilon > 0 \quad \exists \delta \in \mathbb{R}, \, \delta > 0 : \quad ||x_0-x_e|| < \delta \, \Rightarrow \, ||x(t)-x_e|| < \epsilon \quad \forall t \ge t_0
$$

holds, this could be interpreted as: if there exists a sufficiently small initial perturbation $\delta$ from the equilibrium point such that the state evolution $x(t)$ started from the perturbed point does not stray too far (more than $\epsilon$) from the equilibrium itself, then the equilibrium point is stable.

If, in addition, $\lim_{t\to\infty}||x(t)-x_e|| = 0$, which can be interpreted as: the state evolution returns to the equilibrium point, then the equilibrium is said to be asymptotically stable.

In the case of linear time-invariant systems:

$$
\begin{cases}
\dot{x} = Ax + Bu \\
y = Cx + Du,
\end{cases}
$$

it is possible to prove that the stability of one equilibrium point implies the stability of all equilibrium points, so we can speak of the stability of the system even though, in general, the stability property is associated with an equilibrium point. This peculiarity of linear systems is a consequence of the fact that the evolution of this kind of system is strictly related to the eigenvalues of the dynamics matrix $A$, which are invariant with respect to rotation, translation, initial conditions and time.

Recall what was explained in the example on modal analysis:

> The (closed-form) solution of the differential equation, from the initial time $t_0$, with initial conditions $x(t_0)$, is
$$
x(t) = e^{A(t-t_0)}x(t_0).
$$ The matrix $e^{A(t-t_0)}$ consists of linear combinations of functions of time $t$, each of the type: $$e^{\lambda t},$$ where the $\lambda$s are the eigenvalues of matrix $A$; these functions are the modes of the system.

Therefore:
- a linear dynamical system is stable if and only if none of its modes is divergent,
- a linear dynamical system is asymptotically stable if and only if all of its modes are convergent,
- a linear dynamical system is unstable if it has at least one divergent mode.

In terms of the eigenvalues of the dynamics matrix, these three cases occur, respectively, when:
- all eigenvalues of matrix $A$ belong to the closed left half of the complex plane (i.e. their real part is negative or zero) and, for those with zero real part, the algebraic multiplicity equals the geometric multiplicity or, equivalently, they have scalar blocks in the Jordan form;
- all eigenvalues belong to the open left half of the complex plane, i.e. they have strictly negative real parts;
- at least one eigenvalue has a positive real part, or there exist eigenvalues with zero real part and non-scalar Jordan blocks.


This interactive example presents an editable dynamics matrix $A$ and shows the free response of the system together with the corresponding eigenvalues.



### How to use this interactive example?
- Try changing the eigenvalues and the initial condition $x_0$ and observe how the response changes.


```python
%matplotlib inline
#%matplotlib notebook
import control as control
import numpy
import sympy as sym
from IPython.display import display, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt


# print a matrix latex-like
def bmatrix(a):
    """Returns a LaTeX bmatrix - by Damir Arbula (ICCT project)

    :a: numpy array
    :returns: LaTeX bmatrix as a string
    """
    if len(a.shape) > 2:
        raise ValueError('bmatrix can at most display two dimensions')
    lines = str(a).replace('[', '').replace(']', '').splitlines()
    rv = [r'\begin{bmatrix}']
    rv += ['  ' + ' & '.join(l.split()) + r'\\' for l in lines]
    rv += [r'\end{bmatrix}']
    return '\n'.join(rv)


# Display formatted matrix:
def vmatrix(a):
    if len(a.shape) > 2:
        raise ValueError('bmatrix can at most display two dimensions')
    lines = str(a).replace('[', '').replace(']', '').splitlines()
    rv = [r'\begin{vmatrix}']
    rv += ['  ' + ' & '.join(l.split()) + r'\\' for l in lines]
    rv += [r'\end{vmatrix}']
    return '\n'.join(rv)


# matrixWidget is a matrix-looking widget built with a VBox of HBox(es)
# that returns a numPy array as value!
class matrixWidget(widgets.VBox):
    def updateM(self, change):
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.M_[irow, icol] = self.children[irow].children[icol].value
        self.value = self.M_

    def dummychangecallback(self, change):
        pass

    def __init__(self, n, m):
        self.n = n
        self.m = m
        self.M_ = numpy.matrix(numpy.zeros((self.n, self.m)))
        self.value = self.M_
        widgets.VBox.__init__(self,
            children=[
                widgets.HBox(children=
                    [widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)]
                )
                for j in range(n)
            ])

        # fill in widgets and tell interact to call updateM each time a child changes value
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.children[irow].children[icol].value = self.M_[irow, icol]
                self.children[irow].children[icol].observe(self.updateM, names='value')
        self.observe(self.updateM, names='value', type='All')

    def setM(self, newM):
        # disable callbacks, change values, and reenable
        self.unobserve(self.updateM, names='value', type='All')
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.children[irow].children[icol].unobserve(self.updateM, names='value')
        self.M_ = newM
        self.value = self.M_
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.children[irow].children[icol].value = self.M_[irow, icol]
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.children[irow].children[icol].observe(self.updateM, names='value')
        self.observe(self.updateM, names='value', type='All')


# overload class for state space systems that DO NOT remove "useless" states
# (what "professor" of automatic control would do this?)
class sss(control.StateSpace):
    def __init__(self, *args):
        # call base class init constructor
        control.StateSpace.__init__(self, *args)
    # disable function below in base class
    def _remove_useless_states(self):
        pass
```


```python
# Preparatory cell

A = numpy.matrix([[0, 1], [-2/5, -1/5]])
X0 = numpy.matrix('5; 3')

Aw = matrixWidget(2, 2)
Aw.setM(A)
X0w = matrixWidget(2, 1)
X0w.setM(X0)
```


```python
# Misc

# create dummy widget
DW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))

# create button widget
START = widgets.Button(
    description='Test',
    disabled=False,
    button_style='',  # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Test',
    icon='check'
)

def on_start_button_clicked(b):
    # This is a workaround to have interactive_output call the callback:
    # force the value of the dummy widget to change
    if DW.value > 0:
        DW.value = -1
    else:
        DW.value = 1
    pass
START.on_click(on_start_button_clicked)
```


```python
# Main cell

def main_callback(A, X0, DW):
    sols = numpy.linalg.eig(A)
    sys = sss(A, [[1], [0]], [0, 1], 0)
    pole = control.pole(sys)
    if numpy.real(pole[0]) != 0:
        p1r = abs(numpy.real(pole[0]))
    else:
        p1r = 1
    if numpy.real(pole[1]) != 0:
        p2r = abs(numpy.real(pole[1]))
    else:
        p2r = 1
    if numpy.imag(pole[0]) != 0:
        p1i = abs(numpy.imag(pole[0]))
    else:
        p1i = 1
    if numpy.imag(pole[1]) != 0:
        p2i = abs(numpy.imag(pole[1]))
    else:
        p2i = 1

    print('The eigenvalues of matrix A are:', round(sols[0][0], 4), 'and', round(sols[0][1], 4))

    #T = numpy.linspace(0, 60, 1000)
    T, yout, xout = control.initial_response(sys, X0=X0, return_x=True)

    fig = plt.figure("Eigenvalues of A", figsize=(16, 16))
    ax = fig.add_subplot(311, title='Poles (Re vs Im)')
    # Move left y-axis and bottom x-axis to centre, passing through (0,0)
    # Eliminate upper and right axes
    ax.spines['left'].set_position(('data', 0.0))
    ax.spines['bottom'].set_position(('data', 0.0))
    ax.spines['right'].set_color('none')
    ax.spines['top'].set_color('none')
    ax.set_xlim(-max([p1r+p1r/3, p2r+p2r/3]),
                max([p1r+p1r/3, p2r+p2r/3]))
    ax.set_ylim(-max([p1i+p1i/3, p2i+p2i/3]),
                max([p1i+p1i/3, p2i+p2i/3]))

    plt.plot([numpy.real(pole[0]), numpy.real(pole[1])], [numpy.imag(pole[0]), numpy.imag(pole[1])], 'o')
    plt.grid()

    ax1 = fig.add_subplot(312, title='Free response')
    plt.plot(T, xout[0])
    plt.grid()
    ax1.set_xlabel('time [s]')
    ax1.set_ylabel('$x_1$')
    ax1.axvline(x=0, color='black', linewidth='0.8')
    ax1.axhline(y=0, color='black', linewidth='0.8')
    ax2 = fig.add_subplot(313)
    plt.plot(T, xout[1])
    plt.grid()
    ax2.set_xlabel('time [s]')
    ax2.set_ylabel('$x_2$')
    ax2.axvline(x=0, color='black', linewidth='0.8')
    ax2.axhline(y=0, color='black', linewidth='0.8')

    #plt.show()


alltogether = widgets.HBox([widgets.VBox([widgets.Label('$A$:', border=3),
                                          Aw]),
                            widgets.Label(' ', border=3),
                            widgets.VBox([widgets.Label('$X_0$:', border=3),
                                          X0w]),
                            START])
out = widgets.interactive_output(main_callback, {'A': Aw, 'X0': X0w, 'DW': DW})
out.layout.height = '1000px'
display(out, alltogether)
```
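The eigenvalue criteria above can also be checked without the interactive widgets. The sketch below is not part of the original notebook: it assumes only NumPy, and the helper `classify_stability` is a hypothetical name that simplifies the Jordan-block condition to a repeated-eigenvalue test on the imaginary axis (sufficient for the diagonalizable cases discussed here). It classifies the default matrix $A$ used in the example.

```python
import numpy as np

def classify_stability(A, tol=1e-9):
    """Classify the LTI system x' = Ax from the eigenvalues of A.

    Simplification: eigenvalues on the imaginary axis are required to be
    simple (distinct), which stands in for the scalar-Jordan-block
    condition discussed in the text.
    """
    eigvals = np.linalg.eigvals(A)
    re = eigvals.real
    if np.all(re < -tol):
        return 'asymptotically stable'
    if np.any(re > tol):
        return 'unstable'
    # eigenvalues with zero real part must be simple for stability
    boundary = eigvals[np.abs(re) <= tol]
    if len(np.unique(np.round(boundary, 6))) == len(boundary):
        return 'stable (not asymptotically)'
    return 'possibly unstable (repeated eigenvalues on the imaginary axis)'

A = np.array([[0.0, 1.0], [-2/5, -1/5]])   # default matrix from the example
print(np.linalg.eigvals(A))                # complex pair with real part -0.1
print(classify_stability(A))
```

Since both eigenvalues of the default matrix have real part $-0.1 < 0$, the free response plotted by the widget decays to the origin, matching the `asymptotically stable` classification.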
```python
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)

# Toggle cell visibility

from IPython.display import HTML
tag = HTML('''
Toggle cell visibility here.''')
display(tag)

# Hide the code completely

# from IPython.display import HTML
# tag = HTML('''''')
# display(tag)
```

## Infinitely many equilibrium points 3

This interactive example presents a $2\times2$ system that has infinitely many equilibrium points lying on the line $x_1=-x_2$ (for the underlying theory, see the interactive lesson on equilibrium points).

For the line $x_1=-x_2$ to be the set of equilibrium points, the following must hold:
$$
A\bar{x}=0 \quad \forall \, \bar{x}\in\begin{bmatrix} \alpha \\ -\alpha\end{bmatrix} \, \text{with} \, \alpha\in\mathbb{R},
$$
therefore $\begin{bmatrix} \alpha \\ -\alpha\end{bmatrix}$ must belong to the nullspace of matrix $A$.

### How to use this interactive example?
- Modify matrix $A$ directly to see how the equilibrium points change.
- Try to modify $A$ so that the equilibrium points lie on the given set.


```python
# Preparatory Cell

import control
import numpy
from IPython.display import display, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt
import sympy as sym

# print a matrix latex-like
def bmatrix(a):
    """Returns a LaTeX bmatrix - by Damir Arbula (ICCT project)

    :a: numpy array
    :returns: LaTeX bmatrix as a string
    """
    if len(a.shape) > 2:
        raise ValueError('bmatrix can at most display two dimensions')
    lines = str(a).replace('[', '').replace(']', '').splitlines()
    rv = [r'\begin{bmatrix}']
    rv += ['  ' + ' & '.join(l.split()) + r'\\' for l in lines]
    rv += [r'\end{bmatrix}']
    return '\n'.join(rv)


# Display formatted matrix:
def vmatrix(a):
    if len(a.shape) > 2:
        raise ValueError('bmatrix can at most display two dimensions')
    lines = str(a).replace('[', '').replace(']', '').splitlines()
    rv = [r'\begin{vmatrix}']
    rv += ['  ' + ' & '.join(l.split()) + r'\\' for l in lines]
    rv += [r'\end{vmatrix}']
    return '\n'.join(rv)


# create a NxM matrix widget
def createMatrixWidget(n, m):
    M = widgets.GridBox(children=[widgets.FloatText(layout=widgets.Layout(width='100px', height='40px'),
                                                    value=0.0, disabled=False, label=i) for i in range(n*m)],
                        layout=widgets.Layout(
                            #width='50%',
                            grid_template_columns=''.join(['100px ' for i in range(m)]),
                            #grid_template_rows='80px 80px 80px',
                            grid_row_gap='0px',
                            track_size='0px')
                        )
    return M


# extract matrix from widgets and convert to numpy matrix
def getNumpyMatFromWidget(M, n, m):
    # get W gridbox dims
    M_ = numpy.matrix(numpy.zeros((n, m)))
    for irow in range(0, n):
        for icol in range(0, m):
            M_[irow, icol] = M.children[irow*3+icol].value


# this is a simple derived class from FloatText used to experiment with interact
class floatWidget(widgets.FloatText):
    def __init__(self, **kwargs):
        #self.n = n
        self.value = 30.0
        widgets.FloatText.__init__(self, **kwargs)

#    def value(self):
#        return 0  #self.FloatText.value

from traitlets import Unicode
from ipywidgets import register


# matrixWidget is a matrix-looking widget built with a VBox of HBox(es)
# that returns a numPy array as value!
class matrixWidget(widgets.VBox):
    def updateM(self, change):
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.M_[irow, icol] = self.children[irow].children[icol].value
        self.value = self.M_

    def dummychangecallback(self, change):
        pass

    def __init__(self, n, m):
        self.n = n
        self.m = m
        self.M_ = numpy.matrix(numpy.zeros((self.n, self.m)))
        self.value = self.M_
        widgets.VBox.__init__(self,
            children=[
                widgets.HBox(children=
                    [widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)]
                )
                for j in range(n)
            ])

        # fill in widgets and tell interact to call updateM each time a child changes value
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.children[irow].children[icol].value = self.M_[irow, icol]
                self.children[irow].children[icol].observe(self.updateM, names='value')
        self.observe(self.updateM, names='value', type='All')

    def setM(self, newM):
        # disable callbacks, change values, and reenable
        self.unobserve(self.updateM, names='value', type='All')
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.children[irow].children[icol].unobserve(self.updateM, names='value')
        self.M_ = newM
        self.value = self.M_
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.children[irow].children[icol].value = self.M_[irow, icol]
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.children[irow].children[icol].observe(self.updateM, names='value')
        self.observe(self.updateM, names='value', type='All')


# overload class for state space systems that DO NOT remove "useless" states
# (what "professor" of automatic control would do this?)
class sss(control.StateSpace):
    def __init__(self, *args):
        # call base class init constructor
        control.StateSpace.__init__(self, *args)
    # disable function below in base class
    def _remove_useless_states(self):
        pass
```


```python
# define the matrices
A = matrixWidget(2, 2)
A.setM(numpy.matrix('1. 0.; 0. 1.'))

def main_callback(matA, DW):

    As = sym.Matrix(matA)
    NAs = As.nullspace()

    t = numpy.linspace(-10, 10, 1000)
    if len(NAs) == 1:
        eq1 = [t[i]*numpy.matrix(NAs[0]) for i in range(0, len(t))]
        x1 = [eq1[i][0, 0] for i in range(0, len(t))]
        x2 = [eq1[i][1, 0] for i in range(0, len(t))]

    fig = plt.figure(figsize=(6, 6))
    if len(NAs) == 0:
        plt.plot(0, 0, 'bo')
    if len(NAs) == 1:
        plt.plot(x1, x2)
    if len(NAs) == 2:
        plt.fill((-5, -5, 5, 5), (-5, 5, 5, -5), alpha=0.5)
    plt.xlim(left=-5, right=5)
    plt.ylim(top=5, bottom=-5)
    plt.grid()
    plt.xlabel('$x_1$')
    plt.ylabel('$x_2$')
    print('A basis for the nullspace of A (by rows) is %s. \nThe eigenvalues are %s' % (str(numpy.array(NAs)),
                                                                                       str(numpy.linalg.eig(matA)[0])))


# create dummy widget
DW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))

# create button widget
START = widgets.Button(
    description='Test',
    disabled=False,
    button_style='',  # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Test',
    icon='check'
)

def on_start_button_clicked(b):
    # This is a workaround to have interactive_output call the callback:
    # force the value of the dummy widget to change
    if DW.value > 0:
        DW.value = -1
    else:
        DW.value = 1
    pass
START.on_click(on_start_button_clicked)

out = widgets.interactive_output(main_callback, {'matA': A, 'DW': DW})
out1 = widgets.HBox([out,
                     widgets.VBox([widgets.Label(''), widgets.Label(''), widgets.Label(''), widgets.Label('$\qquad \qquad A=$')]),
                     widgets.VBox([widgets.Label(''), widgets.Label(''), widgets.Label(''), A, START])])
out.layout.height = '450px'
display(out1)
```


```python
# create dummy widget 2
DW2 = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))
DW2.value = -1

# create button widget
START2 = widgets.Button(
    description='Show answer',
    disabled=False,
    button_style='',  # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Click to show the answer',
    icon='check'
)

def on_start_button_clicked2(b):
    # This is a workaround to have interactive_output call the callback:
    # force the value of the dummy widget to change
    if DW2.value > 0:
        DW2.value = -1
    else:
        DW2.value = 1
    pass
START2.on_click(on_start_button_clicked2)

def main_callback2(DW2):
    if DW2 > 0:
        display(Markdown(r'''Answer:
To construct the matrix, one can choose row vectors that are orthogonal to the nullspace.
A possible matrix is therefore:

$$
A=\begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix}.
$$'''))
    else:
        display(Markdown(''))

# create a graphic structure to hold all widgets
alltogether2 = widgets.VBox([START2])

out2 = widgets.interactive_output(main_callback2, {'DW2': DW2})
#out.layout.height = '300px'
display(out2, alltogether2)
```
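The answer can also be verified directly with SymPy, as the notebook's own callback does. This short check is not part of the original notebook; it uses the suggested answer matrix and the symbol `alpha` to confirm that every point on the line $x_1=-x_2$ is an equilibrium.

```python
import sympy as sym

# the answer matrix suggested above: rows orthogonal to the nullspace direction
A = sym.Matrix([[1, 1], [2, 2]])

# every point on the line x1 = -x2 should be an equilibrium: A*(alpha, -alpha) = 0
alpha = sym.symbols('alpha')
xbar = sym.Matrix([alpha, -alpha])
print(A * xbar)        # expect the zero vector

# the nullspace basis returned by sympy should be a multiple of (1, -1)
print(A.nullspace())
```

Since both rows of $A$ are multiples of $(1, 1)$, any vector of the form $(\alpha, -\alpha)$ is mapped to zero, which is exactly the condition stated above.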
NO\n2. NO", "lm_q1_score": 0.4649015713733885, "lm_q2_score": 0.2658804672827599, "lm_q1q2_score": 0.12360824703724588}} {"text": "\n\n\n\n# Half-Life and Radioactive Decay\n\nThe half-life of an unstable atom is the amount of time required on average for half of a population of that atom to decay to a different element. The half-life can range from yottoseconds ($10^{24}$ seconds!) to numbers so large that it's hard to consider the atom unstable (for example Tellerium-128 has a half-life of approximately $10^{24}$ (yotta) years). A short list of select atomic half-lives is available from Wikipedia [here](https://en.wikipedia.org/wiki/List_of_radioactive_isotopes_by_half-life). \n\nAs an example, suppose you have a collection of one thousand atoms which have a half-life of twenty minutes. After twenty minutes your collection of those atoms will have shrunk to approximately five hundred atoms. However, that does not mean that your atoms have disappeared. Those unstable atoms have **decayed**, meaning that the atom has lost energy by emitting radiation and changed either its number of neutrons, protons, or both!\u00a0Most commonly atoms decay by emitting a electrons, neutrons, positrons, or alpha particles (which are helium nuclei, He$^{2+}$). What happens in each of these decay processes is outlined below. \n\n## Modes of decay\n\n### Alpha ($\\alpha$) Decay\n\nAlpha decay is a decay process in which an atom releases an $\\alpha$ particle. An $\\alpha$ particle is simply another name for an ionized (no electrons) helium atom $\\left(^4_2He\\right)$, so if an element undergoes alpha decay, the atom reduces its number of neutrons $N$, and the number of protons $Z$. 
In general, for an atom X with atomic number $A$, the sum of neutrons and protons, decaying into an atom $Y$ via $\\alpha$ decay has the following form:\n\n\n\\begin{equation}\n^A_ZX^N \\rightarrow ^{A-4}_{Z-2}Y^{N-2} + ^4_2\\alpha^2 \n\\end{equation}\n\nHere $A$ is the **atomic number** of an atom, or the sum of its number of neutrons $N$ and protons $Z$.\n\n\\begin{equation}\nA = Z + N\n\\end{equation}\n\nAlpha decay, written in terms of a concrete example, the $\\alpha$ decay of Uranium-235 would appear as\n\n\\begin{equation}\n_{92} ^{235} \\text{U}^{143} \\rightarrow _{90} ^{231}\\text{Th}^{141} + _2 ^4\\alpha^{2}\n\\end{equation}\n\nWhere we see that uranium-235 decays to Thorium-231 by emitting an $\\alpha$ particle, it alpha decays. Notice however, that the total number of neutrons and protons are equal on both the right hand side and the left hand side of the reaction. Generally speaking however, the number of neutrons is not typically written explicitly, and calculated by subtracting $Z$ from $A$. \n\n### Beta $\\beta$ Decay\n\n$\\beta$ decay comes in two flavors: beta positive ($\\beta^+$) decay and beta minus ($\\beta^-$) decay. $\\beta^-$ decay emits an electron, and $\\beta^+$ decay emits the anti-particle of the electron: the positron (a particle with the same mass, but opposite charge of an electron). In the case of $\\beta^+$ decay, the element emits a positron from a proton, therefore the atom 'loses' a proton and 'gains' a neutron. In the case of $\\beta^-$ decay, a neutron emits an electron meaning the atom 'loses' a neutron and 'gains' a proton. \n\nThey also emit mysterious and common particle known as a neutrino\n\n\n## The Neutrino\n\n### Beta Minus decay\n\nGenerally speaking $\\beta^-$ decay has the following form\n\n\\begin{equation}\n^A _Z\\text{X}^N \\rightarrow _{Z+1}^A\\text{Y}^{N-1} + e^- + \\bar{\\nu}\n\\end{equation}\n\nwhere $\\bar{\\nu}$ is an anti-neutrino. Notice that charge and mass is conserved. 
From the above formula, we see that a neutron emits an electron and an anti-neutrino and then becomes a proton. Notice how both charge and mass are conserved: we gained a positive charge with the proton, but also gained a negative charge in the process with the electron. \n\n### Beta plus decay\n\n\begin{equation}\n^A _Z\text{X}^{N} \rightarrow _{Z-1}^A\text{Y}^{N+1} + e^+ + \nu\n\end{equation}\n\nwhere $\nu$ is a neutrino. Notice that charge and mass are again conserved. From the above formula, we see that a proton emits a positron and a neutrino, and then becomes a neutron. \n\n### Electron Capture\nA closely related process is **electron capture**, in which a proton-rich nucleus absorbs one of its own inner electrons, converting a proton into a neutron and emitting a neutrino:\n\n\begin{equation}\n^A _Z\text{X}^{N} + e^- \rightarrow _{Z-1}^A\text{Y}^{N+1} + \nu\n\end{equation}\n\nThe net effect on $N$ and $Z$ is the same as in $\beta^+$ decay. Keeping this process in mind will also make the neutron capture discussion later in this notebook easier to follow.\n\n\n## Some Mathematical Background\nLet's build on our earlier discussion and start introducing some mathematical rigor to our discussion of half life. The half-life of an atom is denoted as $\tau_{1/2}$, measured in seconds, where $\tau$ is the Greek letter tau. Using the half life of an atom, we can calculate the number of atoms as a function of time $N(t)$ with the following relationship\n\n\begin{equation}\nN(t) = N_0 \; 2.718^{-\lambda t} \n\label{eq:decay}\n\tag{1}\n\end{equation}\n\nWhere $N(t)$ is the _number_ of atoms you have at time $t$, $N_0$ is the original number of atoms at $t=0$, $2.718$ is an approximation of Euler's number $e$, and $\lambda$ is known as the "half life constant" which has units of inverse seconds. 
The half life constant is a bit of an abstraction from the more intuitive half life, however $\lambda$ is related to the half life by the following simple relationship (note that $0.693 \approx \ln 2$)\n\n\begin{equation}\n\lambda = \frac{0.693}{\tau_{1/2}}\n\label{eq:lambda}\n\tag{2}\n\end{equation}\n\nThe above two equations make it possible to calculate the number of atoms remaining at any given time, provided the half life and the original number of atoms are known. Another, more subtle, fact about equation \ref{eq:decay} that is worth pointing out is that the exponential part ($2.718^{-\lambda t}$) of that equation is directly related to the _probability_ that an atom has not yet decayed at a given time. Intuitively this makes sense, which we illustrate with an example. Suppose we started with one thousand atoms, and after three seconds we find that we only have 60% of our atoms remaining. Using equation \ref{eq:decay} we can write this scenario as follows with $N_0 = 1000$\n\n\begin{equation}\nN(t= 3 \text{ s}) = 1000 \times 2.718^{-\lambda \; 3 \text{s}} = 600\n\tag{3}\n\label{eq:probrelat}\n\end{equation} \n\nDividing both sides of this equation by $N_0 = 1000$ shows\n\n\begin{equation}\n\frac{N(t = 3 \text{ s})}{N_0} = 2.718^{-\lambda \; 3 \text{s}} = \frac{600}{1000} = 0.6 = 60 \%\n\end{equation}\n\nso the exponential factor in equation \ref{eq:probrelat} is exactly the fraction of atoms remaining, which behaves like a probability. In other words, the expression\n\n\begin{equation}\n2.718^{-\lambda t}\n\end{equation}\n\nis the probability that an atom **will not** have decayed by a given time $t$, or\n\n\begin{equation}\nP_{exists}(t) = 2.718^{-\lambda t}\n\label{eq:exist}\n\tag{4}\n\end{equation}\n\nWhere $P_{exists}(t)$ is the probability that an atom will still exist at time $t$.\n\n\nNow that we have an expression for the probability that an atom will **not** decay, we can now calculate the probability that an atom **does** decay. 
The probability that an atom **does** decay is simply one minus the probability of it not decaying, or\n\n\begin{equation}\nP_{decayed}(t) = 1 - P_{exists}(t) = 1 - 2.718^{-\lambda t}\n\label{eq:decayed}\n\tag{5}\n\end{equation}\n\nWhere $P_{decayed}(t)$ is the probability that an atom has decayed by time $t$. \n\nThis means that the expression in equation \ref{eq:decay} is a "smooth idealization". Radioactive decay is a statistical process, and actual data will only ever _approach_ the solution of equation \ref{eq:decay}. In reality, our observations of radioactive decay will randomly fluctuate around that solution. To visualize this, we can use Python to get an idea of what we may measure if we were to observe an actual population of atoms decay.\n\n\n## Python Example\n\nUsing the relationships defined in equations \ref{eq:exist} and \ref{eq:decayed}, it is possible to construct a simple model to visualize how a population of atoms may be observed to decay. This works because we know the probability that an atom will decay or not at any given time step. If we know the probability of decay, we simply need to compare this probability to some metric in order to decide if our atom decays or not.\n\n\nSounds simple, but choosing that metric is not obvious. How do you decide which atoms decay? Suppose we have an atom that has a 40% probability of decay; how exactly do we decide if this atom decays at a given time or not? Do we simply consider a large group of these atoms, choose 40% of them, and say they have decayed? This is certainly an option; however, if we do it this way we won't be observing the interesting random statistical fluctuations, as we would simply be reproducing the definition of equation \ref{eq:decay}. 
What methodology should we use in order to view actual fluctuations?\n\n\n### Radioactive decay as an analogy to tossing a biased coin \n\nSuppose you were sifting through a bag of atoms and deciding which ones decay by throwing them away according to their half life. Let's suppose that at your current instance in time, each atom has a 50% probability of decay. Doing this manually, conceivably you could pick an atom and flip a coin to decide its fate. If the coin showed heads you could keep the atom (the atom does not decay), and if the coin showed tails you could throw it away (the atom decays). Once you've gone through the entire set of atoms, you will have thrown away those atoms randomly according to their probability of decay. This is a reasonable methodology: you're still removing half of the atoms randomly, and the act of removing one atom is completely independent from the act of removing another. Is there some way to generalize this process and apply it to what we've learned about nuclear decay? \n \nLuckily, Python allows us to draw random numbers which are uniformly distributed between 0 and 1 algorithmically. In other words, there is a Python function which allows us to randomly draw a percentage between 0 and 100%, where each percentage is equally probable to draw. Using this function, we can compare our new random percentage against our probability of decay to decide if our atom decays or not! This is exactly the same as flipping a coin for each atom, except now we're using a "biased coin" which can favor heads over tails (or in our case not decaying over decaying). If this still seems abstract, see [coin flipping](../Math/FlippingCoins.ipynb) for a brief introduction to simulating a biased coin toss. 
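The biased coin is straightforward to simulate. The sketch below (all names are our own, not from this notebook's later code) flips a coin whose "tails" probability equals the single-step decay probability from equation 5; over many flips, the fraction of tails approaches that probability.

```python
import math
import random

# Per-step decay probability from equation 5, using carbon-15's half-life
half_life = 2.449               # seconds
lamb = math.log(2) / half_life  # the "half life constant"
time_step = 0.1                 # seconds
p_decay = 1 - math.exp(-lamb * time_step)

# Flip the biased coin many times: tails (True) means "the atom decays"
random.seed(0)
flips = [random.uniform(0, 1) < p_decay for _ in range(100_000)]
fraction_tails = sum(flips) / len(flips)

print(round(p_decay, 4))  # 0.0279
print(round(fraction_tails, 4))  # close to p_decay
```

Each flip is independent, which is exactly the property the simulation in the next section relies on.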
Of course, if a deeper understanding of how the simulation of nuclear decay works is not of interest to you, feel free to skip the next section.\n\n\n\n\n\n### Simulating nuclear decay \nThe way in which we look through our "bag" of atoms can be depicted as a flow chart as seen below\n>\n> Here we see the flowchart for the process of simulating nuclear decay. For every instance in time $t_i$ we have some number of atoms $N$. At that instance in time, we compare the probability of decay for each atom to a unique random number $r$. If the random number is less than our probability of decay, then the atom is said to have decayed. We then remove that atom from the bag. Once we have checked every atom at a particular instance in time, we move to our next instance in time with our new number of atoms $N$ and repeat the process\n\nThe pseudo-code below shows how you would write this process in code. Here we're looking through our "bag" of atoms, and the comparison between the random number and our calculated probability of decay acts as a (biased) coin toss to decide if the atom decays or not. \n\n\n```python\n\nfor each instance_in_time in time:\n    probability_of_decay = 1 - exp(-lamb * time_step)\n    for each atom in atoms:\n        # This is a random number between 0 and 1\n        random_percent = random_number()\n        if random_percent < probability_of_decay:\n            atom.decays()\n```\n \nWhere here `time` is a list of equally spaced instances in time (steps of one second, for example), and we're looping over many time steps. At each time step, we then calculate the probability that an atom decays during that step using equation 5, with $t$ set to the step length `time_step`. We then loop over all atoms, generate a random number between 0 and 1 (a random percentage between 0 and 100) and use that to decide if an atom decays on an atom to atom basis. Then, we do it again at the next time step. 
In terms of the coin toss analogy, if `random_percent` is less than `probability_of_decay`, our coin toss is tails and the atom decays. Should `random_percent` be greater than `probability_of_decay`, our coin is heads and the atom does not decay. \n\nTo illustrate this with an example, if `probability_of_decay` is 76% and the random number `random_percent` was 45%, our atom would decay; however, if `random_percent` was greater than 76%, the atom would survive. As the random numbers are _uniformly_ distributed, each percentage between 0 and 100 is equally probable.\n \nAn important thing to notice is that we calculate the probability of decay **before** we enter the loop where we see if an atom decays or not. We only need to calculate it once per time step because, at a single instant of time, the probability of decay is the same for every atom, and the decays are **independent** events: if one atom decays, this does not affect whether any of the others also decay. \n\nWhat to keep in mind here is that the basis behind this simulation is deceptively simple: we're simply deciding **if** each atom decays by flipping a biased coin at each step in time. If you were to do this "by hand" you would quite literally just be flipping coins and counting atoms. And that's precisely what our simulation is doing, just faster than we can. \n\nBelow, we see an example of this simulation behavior for carbon-15, which has a half-life of 2.449 seconds. 
\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\nimport random\nimport pandas as pd\n\ndef HalfLifeEquation(N_0, lamb, t):\n    return N_0 * np.exp(-lamb * t)\n\nt = np.linspace(0, 4*2.449, 500) # Half-life of carbon-15 is 2.449 seconds\nplt.figure(figsize=(3, 2), dpi=175, facecolor='w', edgecolor='k')\nplt.plot(t, HalfLifeEquation(1000, np.log(2)/2.449, t), label=\"Predicted\")\nplt.ylabel(\"Number of atoms\")\nplt.xlabel(\"t (seconds)\")\nplt.title(\"Carbon 15 Decay\")\n\ndef Decay(N_0, lamb, max_time, steps=100):\n    time_atom_pairs = []\n    N = N_0\n    time_step = max_time/steps\n    t = 0\n    for i in range(steps):\n        # the equation from earlier, divided by N to reuse the function\n        p = 1 - HalfLifeEquation(N, lamb, time_step)/N\n        for _ in range(N):\n            # a 'random' number between 0 and 1 to compare to our decay probability\n            r = random.uniform(0, 1)\n            if r < p: # if the random number is below the decay probability, that atom decays\n                N = N - 1\n        if N == 0: # if we run out of atoms we should stop\n            break\n        # We have now moved forward in time once again!\n        t = t + time_step\n        # Store how many atoms we have at a given time t.\n        time_atom_pairs.append([N, t])\n    return time_atom_pairs\n\ndecays = Decay(1000, np.log(2)/2.449, 10, steps=500)\n\ny, x = np.array(decays).T\nplt.plot(x, y, label=\"Simulated\")\nplt.legend()\nplt.show()\n```\n\nFrom the plot above we see that the simulated curve (orange) follows the predicted one (blue), but does not follow it exactly. This is a result of atomic decay being a statistical process.\n\n# Simulations \n\nBelow you have a widget where you can compare how quickly different atoms decay, and view a simulation of how that process would look if you were to observe these counts yourself. 
The data is pulled automatically from the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/pml/radionuclide-half-life-measurements/radionuclide-half-life-measurements-data).\n\nYou can choose which isotopes to watch decay using the element drop-down menus, and you can run the simulation for more or less time with the `Time_Scale` slider. Move the slider at the bottom to advance the decay simulation forward or backward in time. \n\n## Using the data\n\n\n\n\n```python\nurl = 'https://www.nist.gov/pml/radionuclide-half-life-measurements/radionuclide-half-life-measurements-data'\ndf = pd.read_html(url)[0]\n\n# Rename the columns\ndf.columns = [\"Radionuclide\", \n              \"NumberOfSources\", \n              \"HalfLifes_Followed\", \n              \"HalfLife\", \n              \"StandardUncertainty\", \n              \"OtherUncertainty\", \n              \"ref\"]\ndf = df.drop(['ref', 'OtherUncertainty'], axis=1)\ndf\n```\n\nOnce the data is tabulated, we can use it in combination with equation 1 to create the theoretical traces, and in combination with equation 5 to generate our \"observed\" decay path. These are shown graphically with the widget below. \n\nFeel free to simulate decay over a longer time frame by adjusting `Time_Scale` and changing the decay isotope using the drop down menus. 
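Before wiring the table into a widget, it helps to see how a half-life entry can be converted to seconds. The helper below is our own sketch; it assumes each entry has the form `"<value> <plus-minus> <uncertainty> <unit>"` (for example `"12.701 ± 0.002 h"`), which is also how the widget code later in this notebook reads the column.

```python
def halflife_to_seconds(entry):
    # Assumed entry format: "<value> <plus-minus> <uncertainty> <unit>",
    # e.g. "12.701 ± 0.002 h" -- only the value and the unit are needed here.
    to_seconds = {"s": 1.0, "min": 60.0, "h": 3600.0, "d": 86400.0}
    value, _pm, _uncertainty, unit = entry.split()
    return float(value) * to_seconds[unit]

print(round(halflife_to_seconds("12.701 ± 0.002 h"), 1))  # 45723.6
```

The uncertainty is discarded here; a fuller version could return it alongside the value.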
\n\n\n\n```python\nfrom ipywidgets import interactive\nimport plotly as py \nimport plotly.graph_objs as go\npy.offline.init_notebook_mode()\n\n# Read and calculate the half-life of each element in the data frame.\ndef get_halflife(Element):\n    multiplies = {'s':1., \"min\":60., \"h\":3600., \"d\":3600.*24.}\n    index = np.where(df[\"Radionuclide\"] == Element)[0][0]\n    data = df[\"HalfLife\"].iloc[index]\n    half_life, pm, d_time, unit = data.split()\n    scale = multiplies[unit]\n    HalfLife = float(scale) * float(half_life)\n    return HalfLife, unit\n\n# calculate and animate the decay of two atoms on the same graph\ndef DecayRace2(Element1, Element2, Time_Scale=1):\n    N_0 = 1000\n    Decay_Points = 75 \n    # Grab the half-lives\n    HalfLife_Element_1, unit1 = get_halflife(Element1)\n    HalfLife_Element_2, unit2 = get_halflife(Element2)\n    time_length = max(HalfLife_Element_1, HalfLife_Element_2)\n    # Points for animation of decay\n    Points_Element_1 = Decay(N_0, np.log(2)/(float(HalfLife_Element_1)), \n                             max_time= Time_Scale * float(time_length), steps=Decay_Points)\n    Points_Element_2 = Decay(N_0, np.log(2)/(float(HalfLife_Element_2)), \n                             max_time= Time_Scale * float(time_length), steps=Decay_Points)\n    # Make Points\n    y1,x1 = np.array(Points_Element_1).T\n    y2,x2 = np.array(Points_Element_2).T\n    # For the predicted traces\n    t = np.linspace(0, Time_Scale * float(time_length), Decay_Points)\n    y3 = HalfLifeEquation(N_0, np.log(2)/(float(HalfLife_Element_1)), t)\n    y4 = HalfLifeEquation(N_0, np.log(2)/(float(HalfLife_Element_2)), t)\n    Predicted_Element_1 = [dict(type = 'scatter',\n                                visible = False, # hide until selected with the slider\n                                name = Element1, mode = 'lines',\n                                line = dict(color = 'rgb(116,130,143)'), x = t, y = y3) for step in range(len(y3))]\n    # Make the first one visible by default\n    Predicted_Element_1[0]['visible'] = True\n    # Same as above with different data\n    Predicted_Element_2 = [dict(type='scatter', visible = False, mode = 'lines', name = Element2,\n                                line = dict(color = 'rgb(150,192,206)'), x = t, y = y4) for step in range(len(y4))]\n    Predicted_Element_2[0]['visible'] = True\n    data1 = [dict(type='scatter', visible = False, name = \" \".join([\"Simulated\", Element1]),\n                  marker = dict(color = 'rgb(194,91,86)', line = dict(color = 'rgb(194,91,86)')),\n                  mode = 'markers+lines', x = x1[0:step], y = y1[0:step]) for step in range(len(x1))]\n    data2 = [dict(type='scatter', visible = False, name = \" \".join([\"Simulated\", Element2]),\n                  mode = 'markers+lines',\n                  marker = dict(color = 'rgb(190,185,191)', line = dict(color = 'rgb(190,185,191)')),\n                  x = x2[0:step], y = y2[0:step]) for step in range(len(x2))]\n    steps = []\n    for i in range(len(data1)):\n        step = dict(method = 'restyle', args = ['visible', [False] * len(data1)])\n        step['args'][1][i] = True\n        steps.append(step)\n    # Add those arrays together for slider plots. \n    # As long as they're the same size, this works perfectly\n    data = data1 + data2 + Predicted_Element_1 + Predicted_Element_2\n    # Create our slider for control of the graph\n    sliders = [dict(active = 1, currentvalue = {\"prefix\": \"Time Step: \"},\n                    pad = {\"t\":50}, steps = steps)]\n    # Update title to display the half-lives of our elements ('<br>' is a line break in a Plotly title)\n    hlt = \"\".join(['Half life of ', Element1, ': ', '{:0.3e}'.format(HalfLife_Element_1), ' ', unit1,\n                   '<br>',\n                   'Half life of ', Element2, ': ', '{:0.3e}'.format(HalfLife_Element_2), ' ', unit2])\n    # Make a nice graph\n    layout = dict(sliders=sliders, title = hlt,\n                  xaxis = {\"title\":\"Time (seconds)\"},\n                  yaxis = {\"title\":\"Number of Atoms\"})\n    fig = py.graph_objs.Figure(data=data, layout=layout)\n    py.offline.iplot(fig)\n\n# Suppress warnings from IPython that we don't care about. \nimport warnings\nimport sys\nif not sys.warnoptions:\n    warnings.simplefilter(\"ignore\")\n# This gives us the drop down menu functionality \ninteractive_plot = interactive(DecayRace2, Element1=df[\"Radionuclide\"], Element2 = df[\"Radionuclide\"], continuous_update=True)\ninteractive_plot\n```\n\n## Questions\n1. Drag the slider to animate the decay of the same element twice. Do the traces look the same? If so, why? If not, why not?\n2. Compare two isotopes with very different half-lives. What are the major differences you notice between their simulated and predicted traces?\n\n# Fusion Processes and the Origin of the Heavy Elements\n\nIt is well accepted that every star in the observable universe is primarily powered by the fusion of hydrogen into helium. However, stars also gain energy from the fusion of heavier elements up until iron-56 ($^{56}$Fe). This is because fusing elements past this point actually [requires more energy than it produces](https://en.wikipedia.org/wiki/Nuclear_binding_energy). Typically, when a star begins creating iron, the star is at the end of its life. However, if stars stop producing elements at iron-56, how do we get the rest of the periodic table? Well, stellar nucleosynthesis! 
\n\n## Stellar Nucleosynthesis\n\n[Stellar nucleosynthesis](https://en.wikipedia.org/wiki/Stellar_nucleosynthesis) is the creation of atomic nuclides during high-energy astrophysical events such as [supernovae](https://en.wikipedia.org/wiki/Supernova_nucleosynthesis), [neutron star mergers](https://physics.aps.org/articles/v10/114), or [quark novae](http://www.quarknova.ca/_include/files/feature_quark_stars.pdf). During these high energy events, it is theorized that there are an untold number of free neutrons which can be bound to various nuclei. Essentially, an atom moving through a great number of neutrons would pick up many neutrons rapidly, creating very heavy, very unstable isotopes. \nIf a nucleus picks up a free neutron, this process is known as neutron capture, which has the general formula for an atom $X$ and a free neutron $n$\n\n\begin{equation}\n^{A}_ZX^N + n \rightarrow ^{A+1}_Z X^{N+1}\n\end{equation}\n\nWhere this has formed a different **isotope** of the atom $X$.\nIn a high energy astrophysical event, the process of nuclei rapidly gathering neutrons is known as the [**rapid neutron capture process**](https://en.wikipedia.org/wiki/R-process), or **$r$-process**, and it is responsible for the creation of many elements heavier than iron. There is also another process known as the [**slow neutron capture process**](https://en.wikipedia.org/wiki/S-process), or the **$s$-process**. The difference between the $r$ and $s$ processes is, for our purposes, minor: the two differ only in how quickly they happen. \n\nA natural question to ask is "How does this create the rich variety of elements? This only tells me that there should be some incredibly neutron rich isotopes!". This is very true; however, the other half of the $r$ and $s$ processes is that they create incredibly _unstable_ neutron rich isotopes. And what do unstable isotopes do? Well, they $\beta$ decay of course! 
In this case, our neutrons will $\beta$ decay by spitting out an electron and turning into a proton -- thus creating a new element! Of course, if this happens too much, there is also a (much rarer) possibility that a proton will emit a positron thereby becoming a neutron. These processes are described for arbitrary atoms $X$, $Y$, and $W$ below, first $\beta^-$ decay\n\n\begin{equation}\n^{A}_ZX^N \rightarrow ^{A}_{Z+1} Y^{N-1} + e^- + \bar{\nu}\n\end{equation}\n\nand then $\beta^+$ decay\n\n\begin{equation}\n^{A}_ZX^N \rightarrow ^{A}_{Z-1} W^{N+1} + e^+ + \nu\n\end{equation}\n\n\nSo essentially, the idea of stellar nucleosynthesis is that nuclei gather excess neutrons and then $\beta$ decay to stability. During an actual stellar nucleosynthesis event, there are many other processes happening simultaneously that complicate the picture, but these main events in the $r$ and $s$ processes are sufficient for the following demonstration. \n\n## Question\n1. What other potential processes have we ignored in this description? \n\n\n## A (very) Simplified Model\n\nTo get a basic understanding of the life cycle of a nucleon trapped in the middle of the $r$ and $s$ processes, we're going to construct a toy model[1](#footnote1) of stellar nucleosynthesis that should produce some interesting visuals.\n\n\n\n\nFirst, we'll start with a generic atom (any old atom will do) and specify an arbitrary number of neutrons $N$ and an arbitrary number of protons $Z$. Once the atom is specified, we're going to assume it is trapped inside the $r$ or $s$ process, and then we will toss a coin to decide whether it captures a neutron or beta decays. If it captures a neutron we'll increase the neutron count by one; if not, we'll beta decay, losing a neutron and gaining a proton, as outlined in the flowchart below\n\n>\n\n> The image above depicts the calculation process required in order to create our toy model of nucleosynthesis. 
The green box outlines the calculation process when protons are not allowed to decay, and the gray box outlines the process when protons are allowed to decay. To begin we have an initial number of neutrons $N$ and an initial number of protons $Z$. We then draw a random number $r$ uniformly from the range $[0,1]$, and compare that to our neutron capture probability $p_{NC}$. If $r$ is greater than the probability of neutron capture, we gain a neutron; if not, we beta decay, thereby losing a neutron and gaining a proton. Alternatively, if we're allowing proton decay and our random number has dictated that we beta decay, we then draw a second random number $r_2$ uniformly from $[0,1]$ and compare it to our probability of proton decay $p_{PD}$. This comparison then decides which mode of decay our atom will undergo. Note, however, that the exit condition in the widget you'll soon see is slightly different: it prevents cases where reaching $Z/N > 1$ may take a long time, and it is changed to $N/Z > 1$ if there are more protons than neutrons at the beginning of the simulation. \n\n\n-----\n\n1: Here, a toy model is understood to mean "wrong, but informative". \n\n## Visualization\n\n\nWith the figure below you can visualize a nuclide's pathway to becoming a heavy element. We do note, however, that while this simulation will give you a basic idea of the process, the model described here is _far_ from realistic. 
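The two update rules at the heart of this toy model are simple bookkeeping on the pair $(N, Z)$. The sketch below (function names are ours, not from the widget code) shows one neutron capture followed by one $\beta^-$ decay:

```python
# Track a nuclide as a (neutrons, protons) pair.
def neutron_capture(N, Z):
    # n + X -> X': one more neutron, same number of protons
    return N + 1, Z

def beta_minus(N, Z):
    # a neutron becomes a proton (an electron and an antineutrino are emitted)
    return N - 1, Z + 1

state = (10, 8)                  # an arbitrary starting nuclide
state = neutron_capture(*state)  # -> (11, 8): a heavier isotope
state = beta_minus(*state)       # -> (10, 9): a new element, one step up in Z
print(state)  # (10, 9)
```

The full simulation below just repeats these two moves at random until the nuclide crosses the $Z = N$ line.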
\n\n\n\n```python\n# This is the exit condition of our loop, scaled to the original number of neutrons and protons\ndef Condition(Z,N, Z_0, N_0):\n if Z_0 >= N_0:\n return N/Z\n if N_0 > Z_0:\n return Z/N\n# This is our toy model \ndef SimulateNewElement(N,Z, BetaNegative=False, ProbNegative = 0.1):\n # Initialize some lists and counters\n count = 0\n NArray = []\n ZArray = []\n min_iterations = 20\n N_0 = N\n Z_0 = Z\n # An infinite loop, relying on the break function to stop based on the conditions we've decided on\n while True:\n # Count how many times we've cycled and draw a random number\n count += 1\n r = random.uniform(0,1)\n # neutron capture\n if r >= 0.5:\n N = N + 1\n NArray.append(N)\n ZArray.append(Z)\n # beta decay\n elif r < 0.5:\n # draw a second random number to decide what kind of beta decay\n r2 = random.uniform(0,1)\n # Can only beta decay if we have a neutron and a proton\n if N > 0 and Z > 0:\n # If we're allowing proton decay\n if BetaNegative:\n # Beta minus decay\n if r2 < 1 - ProbNegative:\n N = N - 1 \n Z = Z + 1\n # beta plus decay\n else:\n N = N + 1\n Z = Z - 1\n else:\n # beta minus decay\n N = N - 1\n Z = Z + 1\n # save our counts at each iteration\n NArray.append(N)\n ZArray.append(Z)\n # Decide if we should leave\n if Condition(Z,N,Z_0,N_0) > 1.01 and count > min_iterations:\n if count > min_iterations:\n print(\"Reached Z = N line and passed minimum iterations, simulation finished\")\n else:\n print(\"Ran minimum number of times and passed Z=N line.\")\n break \n if count > 200:\n print(\"Taking too long, exiting early after 200 iterations\")\n print(\"Try lowering the probability of negative beta decay\")\n print(\"Or just simply running this simulation again\")\n break\n return np.array(NArray), np.array(ZArray), np.linspace(0,count,count)\ndef TwoDAnimation(N,Z,ProtonDecayAllowed = True, ProbabilityOfProtonDecay = 0.0):\n N,Z,count = SimulateNewElement(N,\n Z, \n BetaNegative=ProtonDecayAllowed, \n 
ProbNegative=ProbabilityOfProtonDecay)\n steps = []\n data1 = [dict(type='scatter', visible = False, name = \"Nuclide Path\",\n marker = dict(color = 'rgb(194,91,86)', line = dict( color = 'rgb(194,91,86)')),\n mode = 'markers+lines', x = N[0:step], y = Z[0:step]) for step in range(len(N))]\n LinePoints = np.linspace(min(Z[0],N[0]), max(N[-1],Z[-1]), len(N))\n data2 = [dict(type='scatter', visible = False, name = \"N = Z line\",\n marker = dict(color = 'rgb(190,185,191)', line = dict( color = 'rgb(190,185,191)')),\n mode = 'lines', x = LinePoints, y = LinePoints) for step in range(len(N))]\n data2[1]['visible'] = True\n data = data1 + data2 \n frames = data\n for i in range(len(data1)):\n step = dict(method = 'restyle', args = ['visible' ,[False] * len(data1)])\n step['args'][1][i] = True\n steps.append(step)\n sliders = [dict(active = 0, pad = {\"t\":50}, steps = steps,\n currentvalue = {'font': {'size': 20}, 'prefix': 'Nuclide:',\n 'visible': True, 'xanchor': 'right'},\n transition= {'duration': 300, 'easing': 'cubic-in-out'})]\n layout = dict(sliders=sliders, #updatemenus = updatemenus,\n title = \"Toy Model of Nucleosynthesis\",\n xaxis={\"title\":\"Number of Neutrons\"},\n yaxis = {\"title\":\"Number of Protons\"})\n fig = py.graph_objs.Figure(data=data, layout=layout)\n py.offline.iplot(fig) \nProtonList = [i for i in range(5,100)]\nNeutronList = [i for i in range(10, 105)]\ninteractive_plot2 = interactive(TwoDAnimation, Z=ProtonList, N=NeutronList, continuous_update=True)\ninteractive_plot2\n```\n\nIn the figure above you can use the drop down menus to adjust the number of neutrons $N$ and the number of protons $Z$. By clicking `ProtonDecayAllowed` (enabled by default), you are allowing your protons to decay, with probability as selected by the `Probability...` slider. You can then view the nucleon's path through nucleosynthesis with the slider on the bottom. Also note by clicking and dragging you can zoom in on any point of the plot to your liking. 
\n\n### Questions\n1. Which way does the nucleon move on the path for neutron capture? $\beta^+$ decay? $\beta^-$ decay? \n2. What does the path look like when you increase the probability of proton decay to fifty percent or more? Why? Be sure that `ProtonDecayAllowed` is ticked, and that you have at least 40 protons.\n\n### Conclusion\n\nIn conclusion, we covered the idea of radioactive decay as well as several modes of decay, including the general formulas for alpha and both flavors of beta decay. Using these ideas, we created a model which simulated the counts we might actually observe in nuclear decay and compared them to the theoretical curve for many unstable elements. This showed us that while the theory predicts a smooth, idealized curve, actual observations may be a little messier. This simulation can be used as preparation for actual experiments and to help understand what you would observe.\n\nWe also introduced the idea of neutron capture and the origin of the heavy elements. 
In order to help understand how the heavy elements are created, we created a toy model similar to the decay of an atom, except this time we simulated how an atom may capture neutrons and beta decay in order to create a heavier element.\n\n[License](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)\n
\n\n# Lecture 1: What is Sound?\nAudio Processing, MED4, Aalborg University, 2021\n \n- Jesper Kjær Nielsen (jkn@create.aau.dk), Audio Analysis Lab, Aalborg University, and\n- Cumhur Erkut (cer@create.aau.dk), Multisensory Experience Lab, Aalborg University\n\n


\n\n\n## Sound and simple vibrations\nIn the next 20 minutes, you will learn\n- What sound is and how it propagates\n- How humans perceive sound\n- What a sinusoid is\n- that striking a bar creates a sinusoidal sound\n\n### What is sound?\n\n\n```\nfrom IPython.display import display, YouTubeVideo, Image\nYouTubeVideo('GkNJvZINSEY')\n```\n\n\n\n\n\n\n\n\n\n\nSound is a **vibration** which travels through a medium as\n- **longitudinal** waves: gasses (e.g., air), liquids (e.g., water), and solids (e.g., concrete)\n- **transversal** sound waves: solids (e.g., concrete)\n\nThe **speed of sound** depends on the medium. \n\nIn air at room temperature, for example, it is approximately 343 m/s.\n\n\n```\nImage('apLecture1_files/wave.png')\n```\n\nSound is normally divided into three types:\n1. *Infrasound*: Sound with frequencies up to 20 Hz\n2. *Audible sound*: Sound with frequencies in range 20 Hz - 20 kHz (*audio*)\n3. *Ultrasound*: Sound with frequencies above 20 kHz\n\n\n### Human hearing\n\n\n```\nYouTubeVideo('eQEaiZ2j9oc') #2:27\n```\n\n\n\n\n\n\n\n\n\n\nThe human ear consists of the following parts:\n- **Outer ear**: Everything on the outside of the ear drum, including the pinna\n- **Middle ear**: The three bones (Malleus, Incus, Stapes). What do they do?\n- **Inner ear**: The cochlea (Latin for what?) tube. What does the basilar membrane do inside? Hair cells?\n
\n\n
The human ear
- does not hear all frequencies equally well
- is most sensitive to frequencies around 4 kHz
- is tuned to speech
- has a very large dynamic range of up to ~120 dB (i.e., we can hear sound intensities up to ~10^12 times that of the quietest audible sounds)
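The ~120 dB figure follows directly from the definition of the sound intensity level, $L = 10\log_{10}(I/I_0)$; a quick check in Python:

```python
import numpy as np

# An intensity ratio of 10^12 expressed as a level in decibels:
# L (dB) = 10 * log10(I / I_0)
level_dB = 10 * np.log10(1e12)
print(level_dB)  # 120.0
```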
\n\n
### Sinusoids
A sinusoid (or a sine wave) is given by
$$
 x(t) = A \cos(\Omega t + \Phi)
$$
where
- $A\geq0$ is the **amplitude**
- $\Omega$ is the **frequency** measured in radians per second (SI symbol **rad/s**). It is related to the frequency $f$ measured in cycles per second (SI symbol **Hz**) via $\Omega = 2\pi f$.
- $t$ is the **time** measured in seconds (SI symbol **s**)
- $\Phi$ is the **initial phase** measured in radians (SI symbol **rad**)

The above form of the sinusoid is often referred to as the **polar form**. By using the angle addition formula for a cosine, i.e.,

$$
 \cos(\theta+\phi) = \cos(\theta)\cos(\phi)-\sin(\theta)\sin(\phi)\ ,
$$

a sinusoid can also be written in a **rectangular form** as

$$
 x(t) = a\cos(\Omega t) + b\sin(\Omega t)
$$

where $a$ and $b$ are scalars given by
\begin{align}
 a &= A\cos(\Phi)\\
 b &= -A\sin(\Phi)\ .
\end{align}

#### Numpy example: A sinusoid

```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

samplingFreq = 44100  # Hz
nData = 2000
time = np.arange(nData)/samplingFreq  # s

# Generate a sinusoid
amp = 1
freq = 100  # Hz
initPhase = np.pi/2  # rad
sinusoid = amp*np.cos(2*np.pi*freq*time + initPhase)

# Plot the sinusoid
plt.plot(time, sinusoid, linewidth=2)
plt.xlim((time[0], time[nData-1]))
plt.ylim((-1, 1))
plt.xlabel('Time [s]')
plt.ylabel('Amplitude [.]')
plt.grid(True)
```

#### Example: Generation of a sinusoid from a vibrating bar
\n
\n\n
Assume that the act of striking a bar is modelled as **compressing a spring** in one dimension.

From Hooke's law, this compression can be written as
$$
 F(t) = -k x(t)
$$
where
- $F(t)$ is the **restoring force** measured in newtons (SI unit **N**)
- $x(t)$ is the **displacement** measured in meters (SI unit **m**) of the bar from its resting position
- $k$ is the **spring constant** measured in N/m

From **Newton's second law**, the force can also be expressed as

$$
 F(t) = ma(t)
$$

where
- $m$ is the **mass** of the bar measured in kilograms (SI unit **kg**)
- $a(t)$ is the **acceleration** measured in m/s^2.

The acceleration is related to the displacement $x(t)$ as
$$
 a(t) = \frac{dv(t)}{dt} = \frac{d^2 x(t)}{d t^2}
$$

where $v(t)$ is the **velocity** measured in m/s.

Combining these three equations gives
$$
 -k x(t) = F(t) = ma(t) = m \frac{d^2 x(t)}{dt^2}
$$
which can be rewritten as
$$
 \frac{d^2 x(t)}{dt^2} = -\frac{k}{m} x(t)\ .
$$
This is a constant-coefficient second-order differential equation.

Let us check if our sinusoid
$$
 x(t) = A\cos(\Omega t + \Phi)
$$
is a solution to the above differential equation. Since
\begin{align}
 \frac{dx(t)}{dt} &= -\Omega A\sin(\Omega t + \Phi)\\
 \frac{d^2 x(t)}{d t^2} &= -\Omega^2 A\cos(\Omega t + \Phi) = -\Omega^2 x(t)\ ,
\end{align}
we obtain
$$
 -\Omega^2x(t) = -\frac{k}{m} x(t)\ .
$$
Thus, striking a bar will make it vibrate sinusoidally with the frequency
$$
 \Omega = \sqrt{k/m}\ .
$$
This frequency can be changed by changing the spring constant and mass.

### Summary
- Sound is a vibration travelling through a medium.
- Sound waves are longitudinal waves (and also transversal waves when travelling through a solid).
- The human ear converts pressure variations in the air to
 1. mechanical movement (interface is the eardrum)
 2. vibrations in a liquid (interface is the oval window)
 3. 
electrical signal to the brain (interface is the hair cells attached to the basilar membrane)
- A sinusoid (or sine wave) is given by
$$
 x(t) = A\cos(\Omega t + \Phi)\ ,
$$
and it is an extremely important building block (or atom) in analysing and manipulating sound.
- Assuming that striking a bar can be modelled as compressing a spring, the bar will vibrate sinusoidally.

## Complex numbers
In the next 20 minutes, you will learn
- that the equation
$$
 x^2+1=0
$$
has two solutions
- what a complex number is
- how you add and multiply complex numbers

### The need for complex numbers
While the **linear** equation
$$
 x + 1 = 0
$$
can easily be solved, the simple **quadratic** equation
$$
 x^2 + 1 = 0
$$
was in high school said to have **no** solution since its discriminant was negative.

```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

nData = 100
x = np.linspace(-2, 2, nData)
y = x**2 + 1
plt.plot(x, y, linewidth=2)
plt.xlabel('x')
plt.ylabel('y')
plt.ylim((-1, 5))
plt.grid(True);
```

However, the quadratic equation can in fact be solved by using **complex numbers**.

```
YouTubeVideo('T647CGsuOVU') #5:47
```

Rearranging our simple **quadratic** equation gives
$$
 x^2 = -1
$$
which allows us to write the solution as
$$
 x = \pm\sqrt{-1} = \pm j
$$
where
$$
 j = \sqrt{-1}
$$
is the **imaginary unit**. 
This unit also satisfies that\n$$\n j^2 = \\sqrt{-1}^2 = -1\\ .\n$$\n\nNote that\n- **engineers** normally use the symbol $j$ for the imaginary unit\n- **mathematicians** normally use the symbol $i$ for the imaginary unit.\n\nLet us now consider the quadratic equation\n$$\n x^2 + 2x + 5 = 0\\ .\n$$\n\nWe know from high school that the solutions to the general quadratic\n$$\n ax^2 + bx + c = 0\\ ,\\qquad\\text{for }a\\neq0\n$$\nhave the form\n$$\n x = \\frac{-b\\pm\\sqrt{d}}{2a}\n$$\nwhere $d$ is the **discriminant** given by\n$$\n d = b^2-4ac\\ .\n$$\n\nWe obtain\n$$\n d = 4-20 = -16\n$$\nso that\n$$\n x = \\frac{-2\\pm\\sqrt{-16}}{2} = -1\\pm\\frac{1}{2}\\sqrt{-1\\cdot 4^2} = -1\\pm 2\\sqrt{-1} = -1\\pm 2j\\ .\n$$\nThus, the **complex numbers** $-1+2j$ and $-1-2j$ are the solutions.\n\n### The complex number\nA **complex number** can be written as\n$$\n z = a + jb\n$$\nwhere\n- $a = \\text{Re}\\{z\\}$ is the **real** part\n- $b = \\text{Im}\\{z\\}$ is the **imaginary** part.\n\nA complex number can be depicted in the **complex plane** which is a 2D coordinate system.\n
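The roots $-1\pm 2j$ found above are easy to check numerically; `numpy.roots` returns the roots of a polynomial given its coefficients (a small verification sketch):

```python
import numpy as np

# Roots of x^2 + 2x + 5 = 0; coefficients in decreasing powers of x
roots = np.roots([1, 2, 5])
print(roots)

# Substitute a root back into the polynomial: the result is zero
z = -1 + 2j
print(z**2 + 2*z + 5)  # 0j
```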
\n\n
#### The complex conjugate
The **complex conjugate** of a complex number $z$ is
$$
 z^* = a - jb\ .
$$
Thus, the conjugation operator ${}^*$ changes the sign of the imaginary part, but not the real part.

#### Addition of complex numbers
Assume we have the two complex numbers
\begin{align}
 z_1 &= a_1+jb_1\\
 z_2 &= a_2+jb_2\ .
\end{align}

The **sum** of these two numbers is then
$$
 z = z_1 + z_2 = a_1+jb_1 + a_2+jb_2 = (a_1+a_2) + j(b_1+b_2).
$$

Thus, the real and imaginary parts of $z=a+jb$ are simply
\begin{align}
 a &= a_1 + a_2\\
 b &= b_1 + b_2\ .
\end{align}

Note that
\begin{align}
 z_1 + z_1^* &= 2a_1 + 0j = 2\text{Re}(z_1)\\
 z_1 - z_1^* &= 0 + 2jb_1 = 2j\,\text{Im}(z_1)\ .
\end{align}

#### Multiplication of complex numbers
Assume we have the two complex numbers
\begin{align}
 z_1 &= a_1+jb_1\\
 z_2 &= a_2+jb_2\ .
\end{align}

The **product** of these two numbers is then
$$
 z = z_1z_2 = (a_1+jb_1)(a_2+jb_2) = (a_1a_2-b_1b_2) + j(a_1b_2+b_1a_2).
$$

Thus, the real and imaginary parts of $z=a+jb$ are
\begin{align}
 a &= (a_1a_2-b_1b_2)\\
 b &= (a_1b_2+b_1a_2)\ .
\end{align}

Note that
$$
 z_1z_1^* = (a_1a_1-b_1(-b_1)) + j(-a_1b_1+b_1a_1) = a_1^2+b_1^2 = \text{Re}(z_1)^2+\text{Im}(z_1)^2
$$

### Summary
- Complex numbers were originally invented to solve algebraic equations (e.g., the cubic equation)
- The imaginary unit is $j=\sqrt{-1}$
- A **complex number** $z$ consists of a real part $a$ and imaginary part $b$, and is written as
$$
 z = a+jb\ .
$$
- The **complex conjugate** of $z$ is
$$
 z^* = a-jb\ .
$$
- It is much easier to add two complex numbers than it is to multiply them.

### Additional information on complex numbers
If you want to know more about complex numbers (e.g., their history), you can find some nice videos here:

https://www.youtube.com/playlist?list=PLiaHhY2iBX9g6KIvZ_703G3KJXapKkNaF

### Exit Question
Let
\begin{align}
 
z_1 &= a_1+jb_1 = 2+3j\\\\\n z_2 &= a_2+jb_2 = -1-2j\\ .\n\\end{align}\n\nBy hand, please calculate\n\\begin{align}\n z_1 + z_2 &= \\\\\n z_1 - z_2 &= \\\\\n z_1 + z_1^* &= \\\\\n z_2 - z_2^*+2z_1 &= \\\\\n z_1z_2^* &=\\\\\n z_1^2+z_2^*z_1 &=\n\\end{align}\nCheck the results with your neighbours.\n\n---\n**Tip:** Use the rules\n\\begin{align}\n z_1 + z_2 &= (a_1+a_2) + j(b_1+b_2)\\\\\n z_1z_2 &= (a_1a_2-b_1b_2) + j(a_1b_2+b_1a_2)\\ .\n\\end{align}\n\n## Phasors\nIn the next 20 minutes, you will learn\n- how a complex number can be written in a **polar form**\n- why the polar form makes multiplications much easier\n- what a **phasor** is\n- how a phasor is related to a **real sinusoid**\n\n### The polar (or exponential) form of a complex number\nAs for 2D vectors, we can also write a complex number in terms of its **magnitude** $r$ and **angle** $\\psi$. We have\n\\begin{align}\n a &= r\\cos\\psi\\\\\n b &= r\\sin\\psi\\ .\n\\end{align}\nThus,\n$$\n z = a + jb = r\\left(\\cos\\psi + j\\sin\\psi\\right) = r\\mathrm{e}^{j\\psi}\n$$\nwhere the last equality follows from **Euler's formula**.\n
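Python's built-in complex type supports the polar form directly: `abs` gives the magnitude $r$ and `cmath.phase` the angle $\psi$ (the value of $z$ below is chosen for illustration):

```python
import cmath

z = 3 + 4j
r, psi = abs(z), cmath.phase(z)    # magnitude and angle
print(r)                           # 5.0
print(cmath.rect(r, psi))          # back to rectangular form, ~(3+4j)

# cmath.polar does both conversions at once
print(cmath.polar(z) == (r, psi))  # True
```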
\n\n
\n\n#### Euler's formula\nGiven by\n$$\n \\mathrm{e}^{j\\psi} = \\cos\\psi + j\\sin\\psi\\ .\n$$\n- A very important formula used everywhere in science and engineering\n- Simplifies notation and mathematical manipulations\n- Its real and imaginary parts are a cosine and a sine, respectively, i.e.,\n\\begin{align}\n \\text{Re}(\\mathrm{e}^{j\\psi}) &= \\cos\\psi\\\\\n \\text{Im}(\\mathrm{e}^{j\\psi}) &= \\sin\\psi\\ .\n\\end{align}\n
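Euler's formula can be verified numerically for any angle with the `cmath` module (the angle below is arbitrary):

```python
import cmath
import math

psi = 0.7  # arbitrary angle in radians
lhs = cmath.exp(1j * psi)                       # e^{j psi}
rhs = complex(math.cos(psi), math.sin(psi))     # cos(psi) + j sin(psi)
print(abs(lhs - rhs) < 1e-15)  # True
```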
\n\n
#### The complex conjugate
The **complex conjugate** of a complex number
$$
 z=r \mathrm{e}^{j\psi}
$$
is
$$
 z^* = r \mathrm{e}^{-j\psi}\ .
$$
Thus, the conjugation operator ${}^*$ changes the sign of the angle, but not the magnitude.

#### Multiplication of complex numbers
Multiplication of complex numbers is much easier when the polar form is used. Let
\begin{align}
 z_1 &= a_1+jb_1 = r_1 \mathrm{e}^{j\psi_1}\\
 z_2 &= a_2+jb_2 = r_2 \mathrm{e}^{j\psi_2}\ .
\end{align}

The **product** of these two numbers is then
$$
 z = z_1z_2 = r_1 \mathrm{e}^{j\psi_1}r_2 \mathrm{e}^{j\psi_2} = r_1 r_2 \mathrm{e}^{j\psi_1}\mathrm{e}^{j\psi_2} = r_1 r_2 \mathrm{e}^{j(\psi_1+\psi_2)}
$$
where we used $a^na^m = a^{n+m}$ to get the last equality.

Thus, to multiply two complex numbers we
- multiply their magnitudes
- add their angles

Note that **divisions** can be calculated as multiplications since
$$
 \frac{z_1}{z_2} = z_1\frac{1}{z_2} = z_1 z_2^{-1}
$$
and
$$
 z_2^{-1} = \frac{1}{r_2}\mathrm{e}^{-j\psi_2}\ .
$$

#### Converting between the rectangular and polar forms
We have seen that a complex number $z$ can be written as
$$
 z = a+jb = r\mathrm{e}^{j\psi}\ .
$$

We can convert from the polar coordinates $(r,\psi)$ to the rectangular coordinates $(a,b)$ via
\begin{align}
 a &= r\cos\psi\\
 b &= r\sin\psi\ .
\end{align}

We can convert from the rectangular coordinates $(a,b)$ to the polar coordinates $(r,\psi)$ via
\begin{align}
 r &= \sqrt{a^2+b^2}\\
 \psi &= \mathrm{arctan2}(b,a)\ .
\end{align}

### The phasor
We have previously looked at the sinusoid
$$
 x(t) = A\cos(\Omega t + \Phi)\ .
$$

Based on what we know about Euler's formula and complex numbers, we can now also write $x(t)$ as
$$
 x(t) = \text{Re}\left[A\exp(j(\Omega t +\Psi))\right]
$$
since (from Euler's formula)
$$
 A\exp(j(\Omega t +\Psi)) = A\cos(\Omega t 
+\\Psi)+jA\\sin(\\Omega t +\\Psi)\\ .\n$$\nThis time-varying complex number is called a **phasor** or a **complex sinusoid**.\n\nNote that\n- using the phasor instead of the real sinusoid makes life much easier (you will see this later in the course)\n- even though we work with the phasor, we can always come back to the real sinusoid by taking the real part of the phasor\n\n
\n\n
\n\n
\n\n
### Summary
- The **polar form** of a complex number $z=a+jb$ is
$$
 z = r\mathrm{e}^{j\psi}
$$
where the magnitude $r$ and angle $\psi$ are given by
\begin{align}
 r &= \sqrt{a^2+b^2}\\
 \psi &= \mathrm{arctan2}(b,a)\ .
\end{align}
- Multiplications (and divisions) are much easier when using the polar form.
- A **phasor** is a complex sinusoid given by
$$
 z(t) = A\exp(j(\Omega t +\Psi))\ ,
$$
and its real part is a real sinusoid, i.e.,
$$
 x(t) = \text{Re}(z(t)) = A\cos(\Omega t +\Psi)\ .
$$

```python
from mayavi import mlab
mlab.init_notebook()
mlab.test_plot3d()
```

# Applications of vibration models
The following text derives some of the most well-known physical
problems that lead to second-order ODE models of the type addressed in
this document. We consider a simple spring-mass system; thereafter
extended with nonlinear spring, damping, and external excitation; a
spring-mass system with sliding friction; a simple and a physical
(classical) pendulum; and an elastic pendulum.

## Oscillating mass attached to a spring
\n\n\n\n
\n\n

Simple oscillating mass.

\n\n\n\n\n\nThe most fundamental mechanical vibration system is depicted in [Figure](#vib:app:mass_spring:fig). A body with mass $m$ is attached to a\nspring and can move horizontally without friction (in the wheels). The\nposition of the body is given by the vector $\\rpos(t) = u(t)\\ii$, where\n$\\ii$ is a unit vector in $x$ direction.\nThere is\nonly one force acting on the body: a spring force $\\F_s =-ku\\ii$, where\n$k$ is a constant. The point $x=0$, where $u=0$, must therefore\ncorrespond to the body's position\nwhere the spring is neither extended nor compressed, so the force\nvanishes.\n\nThe basic physical principle that governs the motion of the body is\nNewton's second law of motion: $\\F=m\\acc$, where\n$\\F$ is the sum of forces on the body, $m$ is its mass, and $\\acc=\\ddot\\rpos$\nis the acceleration. We use the dot for differentiation with respect\nto time, which is\nusual in mechanics. Newton's second law simplifies here\nto $-\\F_s=m\\ddot u\\ii$, which translates to\n\n$$\n-ku = m\\ddot u\\thinspace .\n$$\n\nTwo initial conditions are needed: $u(0)=I$, $\\dot u(0)=V$.\nThe ODE problem is normally written as\n\n\n
\n\n$$\n\\begin{equation}\nm\\ddot u + ku = 0,\\quad u(0)=I,\\ \\dot u(0)=V\\thinspace .\n\\label{vib:app:mass_spring:eqx} \\tag{1}\n\\end{equation}\n$$\n\nmathcal{I}_t is\nnot uncommon to divide by $m$\nand introduce the frequency $\\omega = \\sqrt{k/m}$:\n\n\n
\n\n$$\n\\begin{equation}\n\\ddot u + \\omega^2 u = 0,\\quad u(0)=I,\\ \\dot u(0)=V\\thinspace .\n\\label{vib:app:mass_spring:equ} \\tag{2}\n\\end{equation}\n$$\n\nThis is the model problem in the first part of this chapter, with the\nsmall difference that we write the time derivative of $u$ with a dot\nabove, while we used $u^{\\prime}$ and $u^{\\prime\\prime}$ in previous\nparts of the ${DOCUMENT}.\n\n\nSince only one scalar mathematical quantity, $u(t)$, describes the\ncomplete motion, we say that the mechanical system has one degree of freedom\n(DOF).\n\n### Scaling\n\nFor numerical simulations it is very convenient to scale\n([2](#vib:app:mass_spring:equ)) and thereby get rid of the problem of\nfinding relevant values for all the parameters $m$, $k$, $I$, and $V$.\nSince the amplitude of the oscillations are dictated by $I$ and $V$\n(or more precisely, $V/\\omega$), we scale $u$ by $I$ (or $V/\\omega$ if\n$I=0$):\n\n$$\n\\bar u = \\frac{u}{I},\\quad \\bar t = \\frac{t}{t_c}\\thinspace .\n$$\n\nThe time scale $t_c$ is normally chosen as the inverse period $2\\pi/\\omega$ or\nangular frequency $1/\\omega$, most often as $t_c=1/\\omega$.\nInserting the dimensionless quantities $\\bar u$ and $\\bar t$ in\n([2](#vib:app:mass_spring:equ)) results in the scaled problem\n\n$$\n\\frac{d^2\\bar u}{d\\bar t^2} + \\bar u = 0,\\quad \\bar u(0)=1,\\ \\frac{\\bar u}{\\bar t}(0)=\\beta = \\frac{V}{I\\omega},\n$$\n\nwhere $\\beta$ is a dimensionless number. Any motion that starts from rest\n($V=0$) is free of parameters in the scaled model!\n\n### The physics\n\nThe typical physics of the system in [Figure](#vib:app:mass_spring:fig) can be described as follows. Initially,\nwe displace the body to some position $I$, say at rest ($V=0$). After\nreleasing the body, the spring, which is extended, will act with a\nforce $-kI\\ii$ and pull the body to the left. This force causes an\nacceleration and therefore increases velocity. 
The body passes the
point $x=0$, where $u=0$, and the spring will then be compressed and
act with a force $kx\ii$ against the motion and cause retardation. At
some point, the motion stops and the velocity is zero, before the
spring force $kx\ii$ has worked long enough to push the body in the
positive direction. The result is that the body accelerates back and
forth. As long as there are no friction forces to damp the motion, the
oscillations will continue forever.

## General mechanical vibrating system
\n\n\n\n
\n\n

General oscillating system.

\n\n\n\n\n\nThe mechanical system in [Figure](#vib:app:mass_spring:fig) can easily be\nextended to the more general system in [Figure](#vib:app:mass_gen:fig),\nwhere the body is attached to a spring and a dashpot, and also subject\nto an environmental force $F(t)\\ii$. The system has still only one\ndegree of freedom since the body can only move back and forth parallel to\nthe $x$ axis. The spring force was linear, $\\F_s=-ku\\ii$,\nin the section [Oscillating mass attached to a spring](#vib:app:mass_spring), but in more general cases it can\ndepend nonlinearly on the position. We therefore set $\\F_s=s(u)\\ii$.\nThe dashpot, which acts\nas a damper, results in a force $\\F_d$ that depends on the body's\nvelocity $\\dot u$ and that always acts against the motion.\nThe mathematical model of the force is written $\\F_d =f(\\dot u)\\ii$.\nA positive $\\dot u$ must result in a force acting in the positive $x$\ndirection.\nFinally, we have the external environmental force $\\F_e = F(t)\\ii$.\n\nNewton's second law of motion now involves three forces:\n\n$$\nF(t)\\ii - f(\\dot u)\\ii - s(u)\\ii = m\\ddot u \\ii\\thinspace .\n$$\n\nThe common mathematical form of the ODE problem is\n\n\n
\n\n$$\n\\begin{equation}\nm\\ddot u + f(\\dot u) + s(u) = F(t),\\quad u(0)=I,\\ \\dot u(0)=V\\thinspace .\n\\label{vib:app:mass_gen:equ} \\tag{3}\n\\end{equation}\n$$\n\nThis is the generalized problem treated in the last part of the\npresent chapter, but with prime denoting the derivative instead of the dot.\n\nThe most common models for the spring and dashpot are linear: $f(\\dot u)\n=b\\dot u$ with a constant $b\\geq 0$, and $s(u)=ku$ for a constant $k$.\n\n### Scaling\n\nA specific scaling requires specific choices of $f$, $s$, and $F$.\nSuppose we have\n\n$$\nf(\\dot u) = b|\\dot u|\\dot u,\\quad s(u)=ku,\\quad F(t)=A\\sin(\\phi t)\\thinspace .\n$$\n\nWe introduce dimensionless variables as usual, $\\bar u = u/u_c$ and\n$\\bar t = t/t_c$. The scale $u_c$ depends both on the initial conditions\nand $F$, but as time grows, the effect of the initial conditions die out\nand $F$ will drive the motion. Inserting $\\bar u$ and $\\bar t$ in the\nODE gives\n\n$$\nm\\frac{u_c}{t_c^2}\\frac{d^2\\bar u}{d\\bar t^2}\n+ b\\frac{u_c^2}{t_c^2}\\left\\vert\\frac{d\\bar u}{d\\bar t}\\right\\vert\n\\frac{d\\bar u}{d\\bar t} + ku_c\\bar u = A\\sin(\\phi t_c\\bar t)\\thinspace .\n$$\n\nWe divide by $u_c/t_c^2$ and demand the coefficients of the\n$\\bar u$ and the forcing term from $F(t)$ to have unit coefficients.\nThis leads to the scales\n\n$$\nt_c = \\sqrt{\\frac{m}{k}},\\quad u_c = \\frac{A}{k}\\thinspace .\n$$\n\nThe scaled ODE becomes\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{d^2\\bar u}{d\\bar t^2}\n+ 2\\beta\\left\\vert\\frac{d\\bar u}{d\\bar t}\\right\\vert\n\\frac{d\\bar u}{d\\bar t} + \\bar u = \\sin(\\gamma\\bar t),\n\\label{vib:app:mass_gen:scaled} \\tag{4}\n\\end{equation}\n$$\n\nwhere there are two dimensionless numbers:\n\n$$\n\\beta = \\frac{Ab}{2mk},\\quad\\gamma =\\phi\\sqrt{\\frac{m}{k}}\\thinspace .\n$$\n\nThe $\\beta$ number measures the size of the damping term (relative to unity)\nand is assumed to be small, basically because $b$ is small. The $\\phi$\nnumber is the ratio of the time scale of free vibrations and the time scale\nof the forcing.\nThe scaled initial conditions have two other dimensionless numbers\nas values:\n\n$$\n\\bar u(0) = \\frac{Ik}{A},\\quad \\frac{d\\bar u}{d\\bar t}=\\frac{t_c}{u_c}V = \\frac{V}{A}\\sqrt{mk}\\thinspace .\n$$\n\n## A sliding mass attached to a spring\n
\n\nConsider a variant of the oscillating body in the section [Oscillating mass attached to a spring](#vib:app:mass_spring)\nand [Figure](#vib:app:mass_spring:fig): the body rests on a flat\nsurface, and there is sliding friction between the body and the surface.\n[Figure](#vib:app:mass_sliding:fig) depicts the problem.\n\n\n\n
\n\n

Sketch of a body sliding on a surface.

\n\n\n\n\n\nThe body is attached to a spring with spring force $-s(u)\\ii$.\nThe friction force is proportional to the normal force on the surface,\n$-mg\\jj$, and given by $-f(\\dot u)\\ii$, where\n\n$$\nf(\\dot u) = \\left\\lbrace\\begin{array}{ll}\n-\\mu mg,& \\dot u < 0,\\\\ \n\\mu mg, & \\dot u > 0,\\\\ \n0, & \\dot u=0\n\\end{array}\\right.\n$$\n\nHere, $\\mu$ is a friction coefficient. With the signum function\n\n$$\n\\mbox{sign(x)} = \\left\\lbrace\\begin{array}{ll}\n-1,& x < 0,\\\\ \n1, & x > 0,\\\\ \n0, & x=0\n\\end{array}\\right.\n$$\n\nwe can simply write $f(\\dot u) = \\mu mg\\,\\hbox{sign}(\\dot u)$\n(the sign function is implemented by `numpy.sign`).\n\nThe equation of motion becomes\n\n\n
\n\n$$\n\\begin{equation}\nm\\ddot u + \\mu mg\\hbox{sign}(\\dot u) + s(u) = 0,\\quad u(0)=I,\\ \\dot u(0)=V\\thinspace .\n\\label{vib:app:mass_sliding:equ} \\tag{5}\n\\end{equation}\n$$\n\n## A jumping washing machine\n
A washing machine is placed on four springs with efficient dampers.
If the machine contains just a few clothes, the circular motion of
the machine induces a sinusoidal external force from the floor, and the machine will
jump up and down if the frequency of the external force is close to
the natural frequency of the machine and its spring-damper system.
This is a good example of resonance.

## Motion of a pendulum
\n\n### Simple pendulum\n\nA classical problem in mechanics is the motion of a pendulum. We first\nconsider a [simplified pendulum](https://en.wikipedia.org/wiki/Pendulum) (sometimes also called a\nmathematical pendulum): a small body of mass $m$ is\nattached to a massless wire and can oscillate back and forth in the\ngravity field. [Figure](#vib:app:pendulum:fig_problem) shows a sketch\nof the problem.\n\n\n\n
\n\n

Sketch of a simple pendulum.

\n\n\n\n\n\nThe motion is governed by Newton's 2nd law, so we need to find\nexpressions for the forces and the acceleration. Three forces on the\nbody are considered: an unknown force $S$ from the wire, the gravity\nforce $mg$, and an air resistance force, $\\frac{1}{2}C_D\\varrho A|v|v$,\nhereafter called the drag force, directed against the velocity\nof the body. Here, $C_D$ is a drag coefficient, $\\varrho$ is the\ndensity of air, $A$ is the cross section area of the body, and $v$ is\nthe magnitude of the velocity.\n\nWe introduce a coordinate system with polar coordinates and unit\nvectors $\\ir$ and $\\ith$ as shown in [Figure](#vib:app:pendulum:fig_forces). The position of the center of mass\nof the body is\n\n$$\n\\rpos(t) = x_0\\ii + y_0\\jj + L\\ir,\n$$\n\nwhere $\\ii$ and $\\jj$ are unit vectors in the corresponding Cartesian\ncoordinate system in the $x$ and $y$ directions, respectively. We have\nthat $\\ir = \\cos\\theta\\ii +\\sin\\theta\\jj$.\n\n\n\n
\n\n

Forces acting on a simple pendulum.

\n\n\n\n\n\nThe forces are now expressed as follows.\n\n * Wire force: $-S\\ir$\n\n * Gravity force: $-mg\\jj = mg(-\\sin\\theta\\,\\ith + \\cos\\theta\\,\\ir)$\n\n * Drag force: $-\\frac{1}{2}C_D\\varrho A |v|v\\,\\ith$\n\nSince a positive velocity means movement in the direction of $\\ith$,\nthe drag force must be directed along $-\\ith$ so it works against the\nmotion. We assume motion in air so that the added mass effect can\nbe neglected (for a spherical body, the added mass is $\\frac{1}{2}\\varrho V$,\nwhere $V$ is the volume of the body). Also the buoyancy effect\ncan be neglected for motion in the air when the density difference\nbetween the fluid and the body is so significant.\n\nThe velocity of the body is found from $\\rpos$:\n\n$$\n\\v(t) = \\dot\\rpos (t) = \\frac{d}{d\\theta}(x_0\\ii + y_0\\jj + L\\ir)\\frac{d\\theta}{dt} = L\\dot\\theta\\ith,\n$$\n\nsince $\\frac{d}{d\\theta}\\ir = \\ith$. mathcal{I}_t follows that $v=|\\v|=L\\dot\\theta$.\nThe acceleration is\n\n$$\n\\acc(t) = \\dot\\v(r) = \\frac{d}{dt}(L\\dot\\theta\\ith)\n= L\\ddot\\theta\\ith + L\\dot\\theta\\frac{d\\ith}{d\\theta}\\dot\\theta =\n= L\\ddot\\theta\\ith - L\\dot\\theta^2\\ir,\n$$\n\nsince $\\frac{d}{d\\theta}\\ith = -\\ir$.\n\nNewton's 2nd law of motion becomes\n\n$$\n-S\\ir + mg(-\\sin\\theta\\,\\ith + \\cos\\theta\\,\\ir) -\n\\frac{1}{2}C_D\\varrho AL^2|\\dot\\theta|\\dot\\theta\\,\\ith\n= mL\\ddot\\theta\\dot\\theta\\,\\ith - L\\dot\\theta^2\\ir,\n$$\n\nleading to two component equations\n\n\n
\n\n$$\n\\begin{equation}\n-S + mg\\cos\\theta = -L\\dot\\theta^2,\n\\label{vib:app:pendulum:ir} \\tag{6}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n-mg\\sin\\theta - \\frac{1}{2}C_D\\varrho AL^2|\\dot\\theta|\\dot\\theta\n= mL\\ddot\\theta\\thinspace .\n\\label{vib:app:pendulum:ith} \\tag{7}\n\\end{equation}\n$$\n\nFrom ([6](#vib:app:pendulum:ir)) we get an expression for\n$S=mg\\cos\\theta + L\\dot\\theta^2$, and from ([7](#vib:app:pendulum:ith))\nwe get a differential equation for the angle $\\theta(t)$. This latter\nequation is ordered as\n\n\n
\n\n$$\n\\begin{equation}\nm\\ddot\\theta + \\frac{1}{2}C_D\\varrho AL|\\dot\\theta|\\dot\\theta\n+ \\frac{mg}{L}\\sin\\theta = 0\\thinspace .\n\\label{vib:app:pendulum:thetaeq} \\tag{8}\n\\end{equation}\n$$\n\nTwo initial conditions are needed: $\\theta=\\Theta$ and $\\dot\\theta = \\Omega$.\nNormally, the pendulum motion is started from rest, which means $\\Omega =0$.\n\nEquation ([8](#vib:app:pendulum:thetaeq)) fits the general model\nused in ([vib:ode2](#vib:ode2)) in the section [vib:model2](#vib:model2) if we define\n$u=\\theta$, $f(u^{\\prime}) = \\frac{1}{2}C_D\\varrho AL|\\dot u|\\dot u$,\n$s(u) = L^{-1}mg\\sin u$, and $F=0$.\nIf the body is a sphere with radius $R$, we can take $C_D=0.4$ and $A=\\pi R^2$.\n[Exercise 4: Simulate a simple pendulum](#vib:exer:pendulum_simple) asks you to scale the equations\nand carry out specific simulations with this model.\n\n### Physical pendulum\n\nThe motion of a compound or physical pendulum where the wire is a rod with\nmass, can be modeled very similarly. The governing equation is\n$I\\acc = \\boldsymbol{T}$ where $I$ is the moment of inertia of the entire body about\nthe point $(x_0,y_0)$, and $\\boldsymbol{T}$ is the sum of moments of the forces\nwith respect to $(x_0,y_0)$. The vector equation reads\n\n$$\n\\rpos\\times(-S\\ir + mg(-\\sin\\theta\\ith + \\cos\\theta\\ir) -\n\\frac{1}{2}C_D\\varrho AL^2|\\dot\\theta|\\dot\\theta\\ith)\n= I(L\\ddot\\theta\\dot\\theta\\ith - L\\dot\\theta^2\\ir)\\thinspace .\n$$\n\nThe component equation in $\\ith$ direction gives the equation of motion\nfor $\\theta(t)$:\n\n\n
\n\n$$\n\\begin{equation}\nI\\ddot\\theta + \\frac{1}{2}C_D\\varrho AL^3|\\dot\\theta|\\dot\\theta\n+ mgL\\sin\\theta = 0\\thinspace .\n\\label{vib:app:pendulum:thetaeq_physical} \\tag{9}\n\\end{equation}\n$$\n\n## Dynamic free body diagram during pendulum motion\n
Usually one plots the mathematical quantities as functions of time to
visualize the solution of ODE models. [Exercise 4: Simulate a simple pendulum](#vib:exer:pendulum_simple) asks you to do this for the motion of a
pendulum in the previous section. However, sometimes it is more
instructive to look at other types of visualizations. For example, we
have the pendulum and the free body diagram in Figures
[vib:app:pendulum:fig_problem](#vib:app:pendulum:fig_problem) and
[vib:app:pendulum:fig_forces](#vib:app:pendulum:fig_forces). We may think of these figures as
animations in time instead. Especially the free body diagram will show both the
motion of the pendulum *and* the size of the forces during the motion.
The present section exemplifies how to make such a dynamic free body
diagram.
Two typical snapshots of free body diagrams are displayed below
(the drag force is magnified 5 times to become more visual!).
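Such snapshots are generated from a numerical solution of the pendulum model. The solver presented later in this section is based on Odespy; as a sketch of the same computation with currently maintained tooling, the scaled pendulum model can also be integrated with `scipy.integrate.solve_ivp` (the values of the drag parameter `alpha` and the initial angle `Theta` below are assumed for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 0.4        # assumed dimensionless drag parameter
Theta = np.pi / 6  # assumed initial angle (rad), released from rest

def rhs(t, u):
    """Scaled pendulum model: u = [omega, theta]."""
    omega, theta = u
    return [-alpha * abs(omega) * omega - np.sin(theta), omega]

sol = solve_ivp(rhs, (0, 20), [0, Theta], max_step=0.01)
omega, theta = sol.y

S = omega**2 + np.cos(theta)            # dimensionless wire force
drag = -alpha * np.abs(omega) * omega   # dimensionless drag force
```

With quadratic drag present, the amplitude of `theta` decays over time, which is exactly what the animated free body diagram is meant to visualize.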

\n\n\n\n\n\n% else:\n\n\n\n\n```python\nfrom IPython.display import HTML\n_s = \"\"\"\n
\n\n
\n

The drag force is magnified 5 times!

\n\n\n\n\n\"\"\"\nHTML(_s)\n```\n\n\n\n\nDynamic physical sketches, coupled to the numerical solution of\ndifferential equations, requires a program to produce a sketch for\nthe situation at each time level.\n[Pysketcher](https://github.com/hplgit/pysketcher) is such a tool.\nIn fact (and not surprising!) Figures [vib:app:pendulum:fig_problem](#vib:app:pendulum:fig_problem) and\n[vib:app:pendulum:fig_forces](#vib:app:pendulum:fig_forces) were drawn using Pysketcher.\nThe details of the drawings are explained in the\n[Pysketcher tutorial](http://hplgit.github.io/pysketcher/doc/web/index.html).\nHere, we outline how this type of sketch can be used to create an animated\nfree body diagram during the motion of a pendulum.\n\nPysketcher is actually a layer of useful abstractions on top of\nstandard plotting packages. This means that we in fact apply Matplotlib\nto make the animated free body diagram, but instead of dealing with a wealth\nof detailed Matplotlib commands, we can express the drawing in terms of\nmore high-level objects, e.g., objects for the wire, angle $\\theta$,\nbody with mass $m$, arrows for forces, etc. When the position of these\nobjects are given through variables, we can just couple those variables\nto the dynamic solution of our ODE and thereby make a unique drawing\nfor each $\\theta$ value in a simulation.\n\n### Writing the solver\n\nLet us start with the most familiar part of the current problem:\nwriting the solver function. We use Odespy for this purpose.\nWe also work with dimensionless equations. Since $\\theta$ can be\nviewed as dimensionless, we only need to introduce a dimensionless time,\nhere taken as $\\bar t = t/\\sqrt{L/g}$.\nThe resulting dimensionless mathematical model for $\\theta$,\nthe dimensionless angular velocity $\\omega$, the\ndimensionless wire force $\\bar S$, and the dimensionless\ndrag force $\\bar D$ is then\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{d\\omega}{d\\bar t} = - \\alpha|\\omega|\\omega - \\sin\\theta,\n\\label{vib:app:pendulum_bodydia:eqth} \\tag{10}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\frac{d\\theta}{d\\bar t} = \\omega,\n\\label{vib:app:pendulum_bodydia:eqomega} \\tag{11}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\bar S = \\omega^2 + \\cos\\theta,\n\\label{vib:app:pendulum_bodydia:eqS} \\tag{12}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\bar D = -\\alpha |\\omega|\\omega,\n\\label{vib:app:pendulum_bodydia:eqD} \\tag{13}\n\\end{equation}\n$$\n\nwith\n\n$$\n\\alpha = \\frac{C_D\\varrho\\pi R^2L}{2m}\\thinspace .\n$$\n\nas a dimensionless parameter expressing the ratio of the drag force and\nthe gravity force. The dimensionless $\\omega$ is made non-dimensional\nby the time, so $\\omega\\sqrt{L/g}$ is the corresponding angular\nfrequency with dimensions.\n\n\n\nA suitable function for computing\n([10](#vib:app:pendulum_bodydia:eqth))-([13](#vib:app:pendulum_bodydia:eqD))\nis listed below.\n\n\n```python\ndef simulate(alpha, Theta, dt, T):\n import odespy\n\n def f(u, t, alpha):\n omega, theta = u\n return [-alpha*omega*abs(omega) - sin(theta),\n omega]\n\n import numpy as np\n Nt = int(round(T/float(dt)))\n t = np.linspace(0, Nt*dt, Nt+1)\n solver = odespy.RK4(f, f_args=[alpha])\n solver.set_initial_condition([0, Theta])\n u, t = solver.solve(\n t, terminate=lambda u, t, n: abs(u[n,1]) < 1E-3)\n omega = u[:,0]\n theta = u[:,1]\n S = omega**2 + np.cos(theta)\n drag = -alpha*np.abs(omega)*omega\n return t, theta, omega, S, drag\n```\n\n### Drawing the free body diagram\n\nThe `sketch` function below applies Pysketcher objects to build\na diagram like that in [Figure](#vib:app:pendulum:fig_forces),\nexcept that we have removed the rotation point $(x_0,y_0)$ and\nthe unit vectors in polar coordinates as these objects are not\nimportant for an animated free body diagram.\n\n\n```python\nimport sys\ntry:\n from pysketcher import *\nexcept ImportError:\n print 'Pysketcher must be installed from'\n print 'https://github.com/hplgit/pysketcher'\n sys.exit(1)\n\n# Overall dimensions of sketch\nH = 15.\nW = 17.\n\ndrawing_tool.set_coordinate_system(\n xmin=0, xmax=W, ymin=0, ymax=H,\n axis=False)\n\ndef sketch(theta, S, mg, drag, t, time_level):\n \"\"\"\n Draw pendulum sketch with body forces at a time level\n corresponding to time t. 
The drag force is in
    drag[time_level], the force in the wire is S[time_level],
    the angle is theta[time_level].
    """
    import math
    a = math.degrees(theta[time_level])  # angle in degrees
    L = 0.4*H                            # length of pendulum
    P = (W/2, 0.8*H)                     # fixed rotation point

    path = Arc(P, L, -90, a)             # arc along the pendulum's path
    mass_pt = path.geometric_features()['end']
    rod = Line(P, mass_pt)

    mass = Circle(center=mass_pt, radius=L/20.)
    mass.set_filled_curves(color='blue')
    rod_vec = rod.geometric_features()['end'] - \
              rod.geometric_features()['start']
    unit_rod_vec = unit_vec(rod_vec)
    mass_symbol = Text('$m$', mass_pt + L/10*unit_rod_vec)

    rod_start = rod.geometric_features()['start']  # Point P
    vertical = Line(rod_start, rod_start + point(0,-L/3))

    def set_dashed_thin_blackline(*objects):
        """Set linestyle of objects to dashed, black, width=1."""
        for obj in objects:
            obj.set_linestyle('dashed')
            obj.set_linecolor('black')
            obj.set_linewidth(1)

    set_dashed_thin_blackline(vertical)
    set_dashed_thin_blackline(rod)
    angle = Arc_wText(r'$\theta$', rod_start, L/6, -90, a,
                      text_spacing=1/30.)

    magnitude = 1.2*L/2     # length of a unit force in figure
    force = mg[time_level]  # constant (scaled eq: about 1)
    force *= magnitude
    mg_force = Force(mass_pt, mass_pt + force*point(0,-1),
                     '', text_pos='end')
    force = S[time_level]
    force *= magnitude
    rod_force = Force(mass_pt, mass_pt - force*unit_vec(rod_vec),
                      '', text_pos='end',
                      text_spacing=(0.03, 0.01))
    force = drag[time_level]
    force *= magnitude
    air_force = Force(mass_pt, mass_pt -
                      force*unit_vec((rod_vec[1], -rod_vec[0])),
                      '', text_pos='end',
                      text_spacing=(0.04,0.005))

    body_diagram = Composition(
        {'mg': mg_force, 'S': rod_force, 'air': air_force,
         'rod': rod, 'body': mass,
         'vertical': vertical, 'theta': angle})

    body_diagram.draw(verbose=0)
    drawing_tool.savefig('tmp_%04d.png' % time_level, crop=False)
    # (No cropping: otherwise movies will be very strange!)
```

### Making the animated free body 
diagram

It now remains to couple the `simulate` and `sketch` functions.
We first run `simulate`:


```python
from math import pi, radians, degrees
import numpy as np
alpha = 0.4
period = 2*pi    # Use small theta approximation
T = 12*period    # Simulate for 12 periods
dt = period/40   # 40 time steps per period
a = 70           # Initial amplitude in degrees
Theta = radians(a)

t, theta, omega, S, drag = simulate(alpha, Theta, dt, T)
```

The next step is to run through the time levels in the simulation and
make a sketch at each level:


```python
mg = np.ones(S.size)  # scaled gravity force (about unity)
for time_level, t_ in enumerate(t):
    sketch(theta, S, mg, drag, t_, time_level)
```

The individual sketches are (by the `sketch` function) saved in files
with names `tmp_%04d.png`. These can be combined into videos using
(e.g.) `ffmpeg`. A complete function `animate` for running the
simulation and creating video files is
listed below.


```python
def animate():
    # Clean up old plot files
    import os, glob
    for filename in glob.glob('tmp_*.png') + glob.glob('movie.*'):
        os.remove(filename)
    # Solve problem
    from math import pi, radians, degrees
    import numpy as np
    alpha = 0.4
    period = 2*pi    # Use small theta approximation
    T = 12*period    # Simulate for 12 periods
    dt = period/40   # 40 time steps per period
    a = 70           # Initial amplitude in degrees
    Theta = radians(a)

    t, theta, omega, S, drag = simulate(alpha, Theta, dt, T)

    # Visualize drag force 5 times as large
    drag *= 5
    mg = np.ones(S.size)  # Gravity force (needed in sketch)

    # Draw animation
    import time
    for time_level, t_ in enumerate(t):
        sketch(theta, S, mg, drag, t_, time_level)
        time.sleep(0.2)  # Pause between each frame on the screen

    # Make videos
    prog = 'ffmpeg'
    filename = 'tmp_%04d.png'
    fps = 6
    codecs = {'flv': 'flv', 'mp4': 'libx264',
              'webm': 'libvpx', 'ogg': 'libtheora'}
    for ext in codecs:
        lib = codecs[ext]
        cmd = '%(prog)s -i %(filename)s -r %(fps)s ' % vars()
        cmd += '-vcodec 
%(lib)s movie.%(ext)s' % vars()\n print(cmd)\n os.system(cmd)\n```\n\n## Motion of an elastic pendulum\n
Consider a pendulum as in [Figure](#vib:app:pendulum:fig_problem), but
this time the wire is elastic. The length of the wire when it is not
stretched is $L_0$, while $L(t)$ is the stretched
length at time $t$ during the motion.

Stretching the elastic wire a distance $\Delta L$ gives rise to a
spring force $k\Delta L$ in the opposite direction of the
stretching. Let $\boldsymbol{n}$ be a unit normal vector along the wire
from the point $\rpos_0=(x_0,y_0)$ and in the direction of $\ith$, see
[Figure](#vib:app:pendulum:fig_forces) for definition of $(x_0,y_0)$
and $\ith$. Obviously, we have $\boldsymbol{n}=\ith$, but in this modeling
of an elastic pendulum we do not need polar coordinates. Instead, it
is more straightforward to develop the equation in Cartesian
coordinates.

A mathematical expression for $\boldsymbol{n}$ is

$$
\boldsymbol{n} = \frac{\rpos-\rpos_0}{L(t)},
$$

where $L(t)=||\rpos-\rpos_0||$ is the current length of the elastic wire.
The position vector $\rpos$ in Cartesian coordinates reads
$\rpos(t) = x(t)\ii + y(t)\jj$, where $\ii$ and $\jj$ are unit vectors
in the $x$ and $y$ directions, respectively.
It is convenient to introduce the Cartesian components $n_x$ and $n_y$
of the normal vector:

$$
\boldsymbol{n} = \frac{\rpos-\rpos_0}{L(t)} = \frac{x(t)-x_0}{L(t)}\ii + \frac{y(t)-y_0}{L(t)}\jj = n_x\ii + n_y\jj\thinspace .
$$

The stretch $\Delta L$ in the wire is

$$
\Delta L = L(t) - L_0\thinspace .
$$

The force in the wire is then $-S\boldsymbol{n}=-k\Delta L\boldsymbol{n}$.

The other forces are the gravity and the air resistance, just as in
[Figure](#vib:app:pendulum:fig_forces). For motion in air we can
neglect the added mass and buoyancy effects. The main difference is
that we have a *model* for $S$ in terms of the motion (as soon as we
have expressed $\Delta L$ by $\rpos$). 
For simplicity, we drop the air\nresistance term (but [Exercise 6: Simulate an elastic pendulum with air resistance](#vib:exer:pendulum_elastic_drag) asks\nyou to include it).\n\nNewton's second law of motion applied to the body now results in\n\n\n
\n\n$$\n\\begin{equation}\nm\\ddot\\rpos = -k(L-L_0)\\boldsymbol{n} - mg\\jj\n\\label{vib:app:pendulum_elastic:eq1} \\tag{14}\n\\end{equation}\n$$\n\nThe two components of\n([14](#vib:app:pendulum_elastic:eq1)) are\n\n\n
\n\n$$\n\\begin{equation}\n\\ddot x = -\\frac{k}{m}(L-L_0)n_x,\n\\label{_auto1} \\tag{15}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\label{vib:app:pendulum_elastic:eq2a} \\tag{16} \n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\ddot y = - \\frac{k}{m}(L-L_0)n_y - g\n\\label{vib:app:pendulum_elastic:eq2b} \\tag{17}\\thinspace .\n\\end{equation}\n$$\n\n### Remarks about an elastic vs a non-elastic pendulum\n\nNote that the derivation of the ODEs for an elastic pendulum is more\nstraightforward than for a classical, non-elastic pendulum,\nsince we avoid the details\nwith polar coordinates, but instead work with Newton's second law\ndirectly in Cartesian coordinates. The reason why we can do this is that\nthe elastic pendulum undergoes a general two-dimensional motion where\nall the forces are known or expressed as functions of $x(t)$ and $y(t)$,\nsuch that we get two ordinary differential equations.\nThe motion of the non-elastic pendulum, on the other hand, is constrained:\nthe body has to move along a circular path, and the force $S$ in the\nwire is unknown.\n\nThe non-elastic pendulum therefore leads to\na *differential-algebraic* equation, i.e., ODEs for $x(t)$ and $y(t)$\ncombined with an extra constraint $(x-x_0)^2 + (y-y_0)^2 = L^2$\nensuring that the motion takes place along a circular path.\nThe extra constraint (equation) is compensated by an extra unknown force\n$-S\\boldsymbol{n}$. Differential-algebraic equations are normally hard\nto solve, especially with pen and paper.\nFortunately, for the non-elastic pendulum we can do a\ntrick: in polar coordinates the unknown force $S$ appears only in the\nradial component of Newton's second law, while the unknown\ndegree of freedom for describing the motion, the angle $\\theta(t)$,\nis completely governed by the asimuthal component. This allows us to\ndecouple the unknowns $S$ and $\\theta$. But this is a kind of trick and\nnot a widely applicable method. With an elastic pendulum we use straightforward\nreasoning with Newton's 2nd law and arrive at a standard ODE problem that\n(after scaling) is easy to solve on a computer.\n\n### Initial conditions\n\nWhat is the initial position of the body? 
We imagine that first the
pendulum hangs in equilibrium in its vertical position, and then it is
displaced an angle $\Theta$. The equilibrium position is governed
by the ODEs with the accelerations set to zero.
The $x$ component leads to $x(t)=x_0$, while the $y$ component gives

$$
0 = - \frac{k}{m}(L-L_0)n_y - g = \frac{k}{m}(L(0)-L_0) - g\quad\Rightarrow\quad
L(0) = L_0 + mg/k,
$$

since $n_y=-1$ in this position. The corresponding $y$ value is then
from $n_y=-1$:

$$
y(t) = y_0 - L(0) = y_0 - (L_0 + mg/k)\thinspace .
$$

Let us now choose $(x_0,y_0)$ such that the body is at the origin
in the equilibrium position:

$$
x_0 =0,\quad y_0 = L_0 + mg/k\thinspace .
$$

Displacing the body an angle $\Theta$ to the right leads to the
initial position

$$
x(0)=(L_0+mg/k)\sin\Theta,\quad y(0)=(L_0+mg/k)(1-\cos\Theta)\thinspace .
$$

The initial velocities can be set to zero: $x'(0)=y'(0)=0$.

### The complete ODE problem

We can summarize all the equations as follows:

$$
\begin{align*}
\ddot x &= -\frac{k}{m}(L-L_0)n_x,
\\ 
\ddot y &= -\frac{k}{m}(L-L_0)n_y - g,
\\ 
L &= \sqrt{(x-x_0)^2 + (y-y_0)^2},
\\ 
n_x &= \frac{x-x_0}{L},
\\ 
n_y &= \frac{y-y_0}{L},
\\ 
x(0) &= (L_0+mg/k)\sin\Theta,
\\ 
x'(0) &= 0,
\\ 
y(0) & =(L_0+mg/k)(1-\cos\Theta),
\\ 
y'(0) &= 0\thinspace .
\end{align*}
$$

We insert $n_x$ and $n_y$ in the ODEs:

\n\n$$\n\\begin{equation}\n\\ddot x = -\\frac{k}{m}\\left(1 -\\frac{L_0}{L}\\right)(x-x_0),\n\\label{vib:app:pendulum_elastic:x} \\tag{18}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\ddot y = -\\frac{k}{m}\\left(1 -\\frac{L_0}{L}\\right)(y-y_0) - g,\n\\label{vib:app:pendulum_elastic:y} \\tag{19}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \nL = \\sqrt{(x-x_0)^2 + (y-y_0)^2},\n\\label{vib:app:pendulum_elastic:L} \\tag{20}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \nx(0) = (L_0+mg/k)\\sin\\Theta,\n\\label{vib:app:pendulum_elastic:x0} \\tag{21}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \nx'(0) = 0,\n\\label{vib:app:pendulum_elastic:vx0} \\tag{22}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \ny(0) =(L_0+mg/k)(1-\\cos\\Theta),\n\\label{vib:app:pendulum_elastic:y0} \\tag{23}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \ny'(0) = 0\\thinspace .\n\\label{vib:app:pendulum_elastic:vy0} \\tag{24}\n\\end{equation}\n$$\n\n### Scaling\n\nThe elastic pendulum model can be used to study both an elastic pendulum\nand a classic, non-elastic pendulum. The latter problem is obtained\nby letting $k\\rightarrow\\infty$. Unfortunately,\na serious problem with the ODEs\n([18](#vib:app:pendulum_elastic:x))-([19](#vib:app:pendulum_elastic:y)) is that for large $k$, we have a very large factor $k/m$ multiplied by a\nvery small number $1-L_0/L$, since for large $k$, $L\\approx L_0$ (very\nsmall deformations of the wire). The product is subject to\nsignificant round-off errors for many relevant physical values of\nthe parameters. To circumvent the problem, we introduce a scaling. This\nwill also remove physical parameters from the problem such that we end\nup with only one dimensionless parameter,\nclosely related to the elasticity of the wire. Simulations can then be\ndone by setting just this dimensionless parameter.\n\nThe characteristic length can be taken such that in equilibrium, the\nscaled length is unity, i.e., the characteristic length is $L_0+mg/k$:\n\n$$\n\\bar x = \\frac{x}{L_0+mg/k},\\quad \\bar y = \\frac{y}{L_0+mg/k}\\thinspace .\n$$\n\nWe must then also work with the scaled length $\\bar L = L/(L_0+mg/k)$.\n\nIntroducing $\\bar t=t/t_c$, where $t_c$ is a characteristic time we\nhave to decide upon later, one gets\n\n$$\n\\begin{align*}\n\\frac{d^2\\bar x}{d\\bar t^2} &=\n-t_c^2\\frac{k}{m}\\left(1 -\\frac{L_0}{L_0+mg/k}\\frac{1}{\\bar L}\\right)\\bar x,\\\\ \n\\frac{d^2\\bar y}{d\\bar t^2} &=\n-t_c^2\\frac{k}{m}\\left(1 -\\frac{L_0}{L_0+mg/k}\\frac{1}{\\bar L}\\right)(\\bar y-1)\n-t_c^2\\frac{g}{L_0 + mg/k},\\\\ \n\\bar L &= \\sqrt{\\bar x^2 + (\\bar y-1)^2},\\\\ \n\\bar x(0) &= \\sin\\Theta,\\\\ \n\\bar x'(0) &= 0,\\\\ \n\\bar y(0) & = 1 - \\cos\\Theta,\\\\ \n\\bar y'(0) &= 0\\thinspace .\n\\end{align*}\n$$\n\nFor a non-elastic pendulum with small angles, we 
know that the
frequency of the oscillations is $\omega = \sqrt{g/L}$. It is therefore
natural to choose a similar expression here, either the length in
the equilibrium position,

$$
t_c^2 = \frac{L_0+mg/k}{g}\thinspace .
$$

or simply the unstretched length,

$$
t_c^2 = \frac{L_0}{g}\thinspace .
$$

These quantities are not very different (since the elastic model
is valid only for quite small elongations), so we take the latter as it is
the simplest one.

The ODEs become

$$
\begin{align*}
\frac{d^2\bar x}{d\bar t^2} &=
-\frac{L_0k}{mg}\left(1 -\frac{L_0}{L_0+mg/k}\frac{1}{\bar L}\right)\bar x,\\ 
\frac{d^2\bar y}{d\bar t^2} &=
-\frac{L_0k}{mg}\left(1 -\frac{L_0}{L_0+mg/k}\frac{1}{\bar L}\right)(\bar y-1)
-\frac{L_0}{L_0 + mg/k},\\ 
\bar L &= \sqrt{\bar x^2 + (\bar y-1)^2}\thinspace .
\end{align*}
$$

We can now identify a dimensionless number

$$
\beta = \frac{L_0}{L_0 + mg/k} = \frac{1}{1+\frac{mg}{L_0k}},
$$

which is the ratio of the unstretched length and the
stretched length in equilibrium. The non-elastic pendulum will have
$\beta =1$ ($k\rightarrow\infty$).
With $\beta$ the ODEs read

\n\n$$\n\\begin{equation}\n\\frac{d^2\\bar x}{d\\bar t^2} =\n-\\frac{\\beta}{1-\\beta}\\left(1- \\frac{\\beta}{\\bar L}\\right)\\bar x,\n\\label{vib:app:pendulum_elastic:x:s} \\tag{25}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\frac{d^2\\bar y}{d\\bar t^2} =\n-\\frac{\\beta}{1-\\beta}\\left(1- \\frac{\\beta}{\\bar L}\\right)(\\bar y-1)\n-\\beta,\n\\label{vib:app:pendulum_elastic:y:s} \\tag{26}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\bar L = \\sqrt{\\bar x^2 + (\\bar y-1)^2},\n\\label{vib:app:pendulum_elastic:L:s} \\tag{27}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\bar x(0) = (1+\\epsilon)\\sin\\Theta,\n\\label{vib:app:pendulum_elastic:x0:s} \\tag{28}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\frac{d\\bar x}{d\\bar t}(0) = 0,\n\\label{vib:app:pendulum_elastic:vx0:s} \\tag{29}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\bar y(0) = 1 - (1+\\epsilon)\\cos\\Theta,\n\\label{vib:app:pendulum_elastic:y0:s} \\tag{30}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\frac{d\\bar y}{d\\bar t}(0) = 0,\n\\label{vib:app:pendulum_elastic:vy0:s} \\tag{31}\n\\end{equation}\n$$\n\nWe have here added a parameter $\\epsilon$, which is an additional\ndownward stretch of the wire at $t=0$. This parameter makes it possible\nto do a desired test: vertical oscillations of the pendulum. Without\n$\\epsilon$, starting the motion from $(0,0)$ with zero velocity will\nresult in $x=y=0$ for all times (also a good test!), but with\nan initial stretch so the body's position is $(0,\\epsilon)$, we\nwill have oscillatory vertical motion with amplitude $\\epsilon$ (see\n[Exercise 5: Simulate an elastic pendulum](#vib:exer:pendulum_elastic)).\n\n### Remark on the non-elastic limit\n\nWe immediately see that as $k\\rightarrow\\infty$ (i.e., we obtain a non-elastic\npendulum), $\\beta\\rightarrow 1$, $\\bar L\\rightarrow 1$, and we have\nvery small values $1-\\beta\\bar L^{-1}$ divided by very small values\n$1-\\beta$ in the ODEs. However, it turns out that we can set $\\beta$\nvery close to one and obtain a path of the body that within the visual\naccuracy of a plot does not show any elastic oscillations.\n(Should the division of very small values become a problem, one can\nstudy the limit by L'Hospital's rule:\n\n$$\n\\lim_{\\beta\\rightarrow 1}\\frac{1 - \\beta \\bar L^{-1}}{1-\\beta}\n= \\frac{1}{\\bar L},\n$$\n\nand use the limit $\\bar L^{-1}$ in the ODEs for $\\beta$ values very\nclose to 1.)\n\n## Vehicle on a bumpy road\n
\n\n\n\n
\n\n

*(Figure: Sketch of one-wheel vehicle on a bumpy road.)*

\n\n\n\n\n\nWe consider a very simplistic vehicle, on one wheel, rolling along a\nbumpy road. The oscillatory nature of the road will induce an external\nforcing on the spring system in the vehicle and cause vibrations.\n[Figure](#vib:app:bumpy:fig:sketch) outlines the situation.\n\nTo derive the equation that governs the motion, we must first establish\nthe position vector of the black mass at the top of the spring.\nSuppose the spring has length $L$ without any elongation or compression,\nsuppose the radius of the wheel is $R$, and suppose the height of the\nblack mass at the top is $H$. With the aid of the $\\rpos_0$ vector\nin [Figure](#vib:app:bumpy:fig:sketch), the position $\\rpos$ of\nthe center point of the mass is\n\n\n
\n\n$$\n\\begin{equation}\n\\rpos = \\rpos_0 + 2R\\jj + L\\jj + u\\jj + \\frac{1}{2} H\\jj,\\ \n\\label{_auto2} \\tag{32}\n\\end{equation}\n$$\n\nwhere $u$ is the elongation or compression in the spring according to\nthe (unknown and to be computed) vertical displacement $u$ relative to the\nroad. If the vehicle travels\nwith constant horizontal velocity $v$ and $h(x)$ is the shape of the\nroad, then the vector $\\rpos_0$ is\n\n$$\n\\rpos_0 = vt\\ii + h(vt)\\jj,\n$$\n\nif the motion starts from $x=0$ at time $t=0$.\n\nThe forces on the mass is the gravity, the spring force, and an optional\ndamping force that is proportional to the vertical velocity $\\dot u$. Newton's\nsecond law of motion then tells that\n\n$$\nm\\ddot\\rpos = -mg\\jj - s(u) - b\\dot u\\jj\\thinspace .\n$$\n\nThis leads to\n\n$$\nm\\ddot u = - s(u) - b\\dot u - mg -mh''(vt)v^2\n$$\n\nTo simplify a little bit, we omit the gravity force $mg$ in comparison with\nthe other terms. Introducing $u'$ for $\\dot u$ then gives a standard\ndamped, vibration equation with external forcing:\n\n\n
\n\n$$\n\\begin{equation}\nmu'' + bu' + s(u) = -mh''(vt)v^2\\thinspace .\n\\label{_auto3} \\tag{33}\n\\end{equation}\n$$\n\nSince the road is normally known just as a set of array values, $h''$ must\nbe computed by finite differences. Let $\\Delta x$ be the spacing between\nmeasured values $h_i= h(i\\Delta x)$ on the road. The discrete second-order\nderivative $h''$ reads\n\n$$\nq_i = \\frac{h_{i-1} - 2h_i + h_{i+1}}{\\Delta x^2}, \\quad i=1,\\ldots,N_x-1\\thinspace .\n$$\n\nWe may for maximum simplicity set\nthe end points as $q_0=q_1$ and $q_{N_x}=q_{N_x-1}$.\nThe term $-mh''(vt)v^2$ corresponds to a force with discrete time values\n\n$$\nF^n = -mq_n v^2,\\quad \\Delta t = v^{-1}\\Delta x\\thinspace .\n$$\n\nThis force can be directly used in a numerical model\n\n$$\n[mD_tD_t u + bD_{2t} u + s(u) = F]^n\\thinspace .\n$$\n\nSoftware for computing $u$ and also making an animated sketch of\nthe motion like we did in the section [Dynamic free body diagram during pendulum motion](#vib:app:pendulum_bodydia)\nis found in a separate project on the web:\n. You may start looking at the\n\"tutorial\":\n% if FORMAT == 'pdflatex':\n\"http://hplgit.github.io/bumpy/doc/pub/bumpy.pdf\".\n% else:\n\"http://hplgit.github.io/bumpy/doc/pub/bumpy.html\".\n% endif\n\n## Bouncing ball\n
A bouncing ball is a ball in free vertical fall until it impacts the
ground. During the impact, some kinetic energy is lost, and a new
motion upwards with reduced velocity starts. When this upward motion
has been retarded, a new free fall starts, and the process is repeated.
At some point the
velocity close to the ground is so small that the ball is considered
to be finally at rest.

The motion of the ball falling in air is governed by Newton's second
law $F=ma$, where $a$ is the acceleration of the body, $m$ is the mass,
and $F$ is the sum of all forces. Here, we neglect the air resistance
so that gravity $-mg$ is the only force. The height of the ball is
denoted by $h$ and $v$ is the velocity. The relations between $h$, $v$, and
$a$,

$$
h'(t)= v(t),\quad v'(t) = a(t),
$$

combined with Newton's second law gives the ODE model

\n\n$$\n\\begin{equation}\nh^{\\prime\\prime}(t) = -g,\n\\label{vib:app:bouncing:ball:h2eq} \\tag{34}\n\\end{equation}\n$$\n\nor expressed alternatively as a system of first-order equations:\n\n\n
\n\n$$\n\\begin{equation}\nv'(t) = -g,\n\\label{vib:app:bouncing:ball:veq} \\tag{35} \n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \nh'(t) = v(t)\\thinspace .\n\\label{vib:app:bouncing:ball:heq} \\tag{36}\n\\end{equation}\n$$\n\nThese equations govern the motion as long as the ball is away from\nthe ground by a small distance $\\epsilon_h > 0$. When $h<\\epsilon_h$,\nwe have two cases.\n\n1. The ball impacts the ground, recognized by a sufficiently large negative\n velocity ($v<-\\epsilon_v$). The velocity then changes sign and is\n reduced by a factor $C_R$, known as the [coefficient of restitution](http://en.wikipedia.org/wiki/Coefficient_of_restitution).\n For plotting purposes, one may set $h=0$.\n\n2. The motion stops, recognized by a sufficiently small velocity\n ($|v|<\\epsilon_v$) close to the ground.\n\n## Two-body gravitational problem\n
\n\nConsider two astronomical objects $A$ and $B$ that attract each other\nby gravitational forces. $A$ and $B$ could be two stars in a binary\nsystem, a planet orbiting a star, or a moon orbiting a planet.\nEach object is acted upon by the\ngravitational force due to the other object. Consider motion in a plane\n(for simplicity) and let $(x_A,y_A)$ and $(x_B,y_B)$ be the\npositions of object $A$ and $B$, respectively.\n\n### The governing equations\n\nNewton's second law of motion applied to each object is all we need\nto set up a mathematical model for this physical problem:\n\n\n
\n\n$$\n\\begin{equation}\nm_A\\ddot\\x_A = \\F,\n\\label{_auto4} \\tag{37}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \nm_B\\ddot\\x_B = -\\F,\n\\label{_auto5} \\tag{38}\n\\end{equation}\n$$\n\nwhere $F$ is the gravitational force\n\n$$\n\\F = \\frac{Gm_Am_B}{||\\rpos||^3}\\rpos,\n$$\n\nwhere\n\n$$\n\\rpos(t) = \\x_B(t) - \\x_A(t),\n$$\n\nand $G$ is the gravitational constant:\n$G=6.674\\cdot 10^{-11}\\hbox{ Nm}^2/\\hbox{kg}^2$.\n\n### Scaling\n\nA problem with these equations is that the parameters are very large\n($m_A$, $m_B$, $||\\rpos||$) or very small ($G$). The rotation time\nfor binary stars can be very small and large as well. mathcal{I}_t is therefore\nadvantageous to scale the equations.\nA natural length scale could be the initial distance between the objects:\n$L=\\rpos(0)$. We write the dimensionless quantities as\n\n$$\n\\bar\\x_A = \\frac{\\x_A}{L},\\quad\\bar\\x_B = \\frac{\\x_B}{L},\\quad\n\\bar t = \\frac{t}{t_c}\\thinspace .\n$$\n\nThe gravity force is transformed to\n\n$$\n\\F = \\frac{Gm_Am_B}{L^2||\\bar\\rpos||^3}\\bar\\rpos,\\quad \\bar\\rpos = \\bar\\x_B - \\bar\\x_A,\n$$\n\nso the first ODE for $\\x_A$ becomes\n\n$$\n\\frac{d^2 \\bar\\x_A}{d\\bar t^2} =\n\\frac{Gm_Bt_c^2}{L^3}\\frac{\\bar\\rpos}{||\\bar\\rpos||^3}\\thinspace .\n$$\n\nAssuming that quantities with a bar and their derivatives are around unity\nin size, it is natural to choose $t_c$ such that the fraction $Gm_Bt_c/L^2=1$:\n\n$$\nt_c = \\sqrt{\\frac{L^3}{Gm_B}}\\thinspace .\n$$\n\nFrom the other equation for $\\x_B$ we get another candidate for $t_c$ with\n$m_A$ instead of $m_B$. Which mass we choose play a role if $m_A\\ll m_B$ or\n$m_B\\ll m_A$. 
One solution is to use the sum of the masses:\n\n$$\nt_c = \\sqrt{\\frac{L^3}{G(m_A+m_B)}}\\thinspace .\n$$\n\nTaking a look at [Kepler's laws](https://en.wikipedia.org/wiki/Kepler%27s_laws_of_planetary_motion) of planetary motion, the orbital period for a planet around the star is given by the $t_c$ above, except for a missing factor of $2\\pi$,\nbut that means that $t_c^{-1}$ is just the angular frequency of the motion.\nOur characteristic time $t_c$ is therefore highly relevant.\nIntroducing the dimensionless number\n\n$$\n\\alpha = \\frac{m_A}{m_B},\n$$\n\nwe can write the dimensionless ODE as\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{d^2 \\bar\\x_A}{d\\bar t^2} =\n\\frac{1}{1+\\alpha}\\frac{\\bar\\rpos}{||\\bar\\rpos||^3},\n\\label{_auto6} \\tag{39}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\frac{d^2 \\bar\\x_B}{d\\bar t^2} =\n\\frac{1}{1+\\alpha^{-1}}\\frac{\\bar\\rpos}{||\\bar\\rpos||^3}\\thinspace .\n\\label{_auto7} \\tag{40}\n\\end{equation}\n$$\n\nIn the limit $m_A\\ll m_B$, i.e., $\\alpha\\ll 1$,\nobject B stands still, say $\\bar\\x_B=0$, and object\nA orbits according to\n\n$$\n\\frac{d^2 \\bar\\x_A}{d\\bar t^2} = -\\frac{\\bar\\x_A}{||\\bar \\x_A||^3}\\thinspace .\n$$\n\n### Solution in a special case: planet orbiting a star\n\nTo better see the motion, and that our scaling is reasonable,\nwe introduce polar coordinates $r$ and $\\theta$:\n\n$$\n\\bar\\x_A = r\\cos\\theta\\ii + r\\sin\\theta\\jj,\n$$\n\nwhich means $\\bar\\x_A$ can be written as $\\bar\\x_A =r\\ir$. Since\n\n$$\n\\frac{d}{dt}\\ir = \\dot\\theta\\ith,\\quad \\frac{d}{dt}\\ith = -\\dot\\theta\\ir,\n$$\n\nwe have\n\n$$\n\\frac{d^2 \\bar\\x_A}{d\\bar t^2} =\n(\\ddot r - r\\dot\\theta^2)\\ir + (r\\ddot\\theta + 2\\dot r\\dot\\theta)\\ith\\thinspace .\n$$\n\nThe equation of motion for mass A is then\n\n$$\n\\begin{align*}\n\\ddot r - r\\dot\\theta^2 &= -\\frac{1}{r^2},\\\\ \nr\\ddot\\theta + 2\\dot r\\dot\\theta &= 0\\thinspace .\n\\end{align*}\n$$\n\nThe special case of circular motion, $r=1$, fulfills the equations, since\nthe latter equation then gives $\\dot\\theta =\\hbox{const}$ and\nthe former then gives $\\dot\\theta = 1$, i.e., the motion is\n$r(t)=1$, $\\theta(t)=t$, with unit angular frequency as expected and\nperiod $2\\pi$ as expected.\n\n\n## Electric circuits\n\nAlthough the term \"mechanical vibrations\" is used in the present\nbook, we must mention that the same type of equations arise\nwhen modeling electric circuits.\nThe current $I(t)$ in a\ncircuit with an inductor with inductance $L$, a capacitor with\ncapacitance $C$, and overall resistance $R$, is governed by\n\n\n
\n\n$$\n\\begin{equation}\n\\ddot I + \\frac{R}{L}\\dot I + \\frac{1}{LC}I = \\dot V(t),\n\\label{_auto8} \\tag{41}\n\\end{equation}\n$$\n\nwhere $V(t)$ is the voltage source powering the circuit.\nThis equation has the same form as the general model considered in\nthe section [vib:model2](#vib:model2) if we set $u=I$, $f(u^{\\prime})=bu^{\\prime}$\nand define $b=R/L$, $s(u) = L^{-1}C^{-1}u$, and $F(t)=\\dot V(t)$.\n\n\n# Exercises\n\n\n\n\n\n## Exercise 1: Simulate resonance\n
We consider the scaled ODE model
([4](#vib:app:mass_gen:scaled)) from the section [General mechanical vibrating system](#vib:app:mass_gen).
After scaling, the amplitude of $u$ will have a size about unity
as time grows and the effect of the initial conditions dies out due
to damping. However, as $\gamma\rightarrow 1$, the amplitude of $u$
increases, especially if $\beta$ is small. This effect is called
*resonance*. The purpose of this exercise is to explore resonance.


**a)**
Figure out how the `solver` function in `vib.py` can be called
for the scaled ODE ([4](#vib:app:mass_gen:scaled)).



**Solution.**
Comparing the scaled ODE ([4](#vib:app:mass_gen:scaled))
with the ODE ([3](#vib:app:mass_gen:equ)) with dimensions, we
realize that the parameters in the latter must be set as

 * $m=1$

 * $f(\dot u) = 2\beta |\dot u|\dot u$

 * $s(u)=ku$

 * $F(t)=\sin(\gamma t)$

 * $I=Ik/A$

 * $V=\sqrt{mk}V/A$

The expected period is $2\pi$, so simulating for $N$ periods means
$T=2\pi N$. Having $m$ time steps per period means $\Delta t = 2\pi/m$.

Suppose we just choose $I=1$ and $V=0$. 
Simulating for 20 periods with
60 time steps per period implies the following
`solver` call to run the scaled model:


```python
u, t = solver(I=1, V=0, m=1, b=2*beta, s=lambda u: u,
              F=lambda t: sin(gamma*t), dt=2*pi/60,
              T=2*pi*20, damping='quadratic')
```



**b)**
Run $\gamma =5, 1.5, 1.1, 1$ for $\beta=0.005, 0.05, 0.2$.
For each $\beta$ value, present an image with plots of $u(t)$ for
the four $\gamma$ values.



**Solution.**
An appropriate program is


```python
import os
from vib import solver, visualize, plt
from math import pi, sin
import numpy as np

beta_values = [0.005, 0.05, 0.2]
#beta_values = [0.00005]  # extreme case with very small damping
gamma_values = [5, 1.5, 1.1, 1]
for i, beta in enumerate(beta_values):
    for gamma in gamma_values:
        u, t = solver(I=1, V=0, m=1, b=2*beta, s=lambda u: u,
                      F=lambda t: sin(gamma*t), dt=2*pi/60,
                      T=2*pi*20, damping='quadratic')
        visualize(u, t, title='gamma=%g' %
                  gamma, filename='tmp_%s' % gamma)
        print(gamma, 'max u amplitude:', np.abs(u).max())
    for ext in 'png', 'pdf':
        cmd = 'doconce combine_images '
        cmd += ' '.join(['tmp_%s.' % gamma + ext
                         for gamma in gamma_values])
        cmd += ' resonance%d.' % (i+1) + ext
        os.system(cmd)
input()  # keep plot windows on the screen
```

For $\beta = 0.2$ we see that the amplitude is not far from unity:



\n\n\n\n\n\nFor $\\beta =0.05$ we see that as $\\gamma\\rightarrow 1$, the amplitude grows:\n\n\n\n\n

\n\n\n\n\n\nFinally, a small damping ($\\beta = 0.005$) amplifies the amplitude significantly (by a factor of 10) for $\\gamma=1$:\n\n\n\n\n

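These amplitude trends can also be reproduced without `vib.py`, using a short, self-contained Euler-Cromer loop for the scaled model $\ddot u + 2\beta|\dot u|\dot u + u = \sin(\gamma\bar t)$. This is only a sketch; the function name `simulate_resonance` is invented here and is not part of `vib.py`:

```python
import numpy as np

def simulate_resonance(beta, gamma, num_periods=20, steps_per_period=60):
    """Euler-Cromer scheme for u'' + 2*beta*|u'|*u' + u = sin(gamma*t),
    u(0)=1, u'(0)=0, over num_periods periods of length 2*pi."""
    dt = 2*np.pi/steps_per_period
    N = num_periods*steps_per_period
    t = np.linspace(0, N*dt, N+1)
    u = np.zeros(N+1)
    v = np.zeros(N+1)
    u[0] = 1.0
    for n in range(N):
        a = np.sin(gamma*t[n]) - 2*beta*abs(v[n])*v[n] - u[n]
        v[n+1] = v[n] + dt*a
        u[n+1] = u[n] + dt*v[n+1]
    return u, t

# Maximum amplitude grows as gamma -> 1, the more so the smaller beta is
for beta in 0.2, 0.05, 0.005:
    amps = [np.abs(simulate_resonance(beta, gamma)[0]).max()
            for gamma in (5, 1.5, 1.1, 1)]
    print(beta, amps)
```

The printed maxima should reproduce the trend in the figures: modest for $\beta=0.2$, and roughly a factor of ten larger for $\beta=0.005$ at $\gamma=1$.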
\n\n\n\n\n\nFor a very small $\\beta=0.00005$, the amplitude grows linearly up to\nabout 60 for $\\bar t\\in [0,120]$.\n\n\n\nFilename: `resonance`.\n\n\n\n\n\n\n\n\n## Exercise 2: Simulate oscillations of a sliding box\n
\n\nConsider a sliding box on a flat surface as modeled in the section [A sliding mass attached to a spring](#vib:app:mass_sliding). As spring force we choose the nonlinear\nformula\n\n$$\ns(u) = \frac{k}{\alpha}\tanh(\alpha u) = ku - \frac{1}{3}\alpha^2 ku^3 + \frac{2}{15}\alpha^4 k u^5 + \mathcal{O}(u^7)\thinspace .\n$$\n\n**a)**\nPlot $g(u)=\alpha^{-1}\tanh(\alpha u)$ for various values of $\alpha$.\nAssume $u\in [-1,1]$.\n\n\n\n**Solution.**\nHere is a function that does the plotting:\n\n\n```python\n%matplotlib inline\n\nimport scitools.std as plt\nimport numpy as np\n\ndef plot_spring():\n    alpha_values = [1, 2, 3, 10]\n    u = np.linspace(-1, 1, 1001)\n    for alpha in alpha_values:\n        s = lambda u: 1.0/alpha*np.tanh(alpha*u)  # scaled spring force\n        plt.plot(u, s(u))\n        plt.hold('on')\n    plt.legend([r'$\alpha=%g$' % alpha for alpha in alpha_values])\n    plt.xlabel('u'); plt.ylabel('Spring response $s(u)$')\n    plt.savefig('tmp_s.png'); plt.savefig('tmp_s.pdf')\n```\n\n\n\n\n

\n\n\n\n\n\n\n\n**b)**\nScale the equations using $I$ as scale for $u$ and $\sqrt{m/k}$ as\ntime scale.\n\n\n\n**Solution.**\nInserting the dimensionless dependent and independent variables,\n\n$$\n\bar u = \frac{u}{I},\quad \bar t = \frac{t}{\sqrt{m/k}},\n$$\n\nin the problem\n\n$$\nm\ddot u + \mu mg\hbox{sign}(\dot u) + s(u) = 0,\quad u(0)=I,\ \dot u(0)=V,\n$$\n\ngives\n\n$$\n\frac{d^2\bar u}{d\bar t^2} + \frac{\mu mg}{kI}\hbox{sign}\left(\n\frac{d\bar u}{d\bar t}\right) + \frac{1}{\alpha I}\tanh(\alpha I\bar u)\n= 0,\quad \bar u(0)=1,\ \frac{d\bar u}{d\bar t}(0)=\frac{V\sqrt{mk}}{kI}\thinspace .\n$$\n\nWe can now identify three dimensionless parameters,\n\n$$\n\beta = \frac{\mu mg}{kI},\quad\n\gamma = \alpha I,\quad \delta = \frac{V\sqrt{mk}}{kI}\thinspace .\n$$\n\nThe scaled problem can then be written\n\n$$\n\frac{d^2\bar u}{d\bar t^2} + \beta\hbox{sign}\left(\n\frac{d\bar u}{d\bar t}\right) + \gamma^{-1}\tanh(\gamma \bar u)\n= 0,\quad \bar u(0)=1,\ \frac{d\bar u}{d\bar t}(0)=\delta\thinspace .\n$$\n\nThe initial set of 7 parameters $(\mu, m, g, k, \alpha, I, V)$ is\nreduced to 3 dimensionless combinations.\n\n\n\n**c)**\nImplement the scaled model in b). Run it for some values of\nthe dimensionless parameters.\n\n\n\n**Solution.**\nWe use Odespy to solve the ODE, which requires rewriting the ODE as a\nsystem of two first-order ODEs:\n\n$$\n\begin{align*}\nv' &= - \beta\hbox{sign}(v) - \gamma^{-1}\tanh(\gamma u),\\ \nu' &= v,\n\end{align*}\n$$\n\nwith initial conditions $v(0)=\delta$ and $u(0)=1$. Here, $u(t)$ corresponds\nto the previous $\bar u(\bar t)$, while $v(t)$ corresponds to\n$d\bar u/d\bar t (\bar t)$. 
The code can be like this:\n\n\n```python\ndef simulate(beta, gamma, delta=0,\n num_periods=8, time_steps_per_period=60):\n # Use oscillations without friction to set dt and T\n P = 2*np.pi\n dt = P/time_steps_per_period\n T = num_periods*P\n t = np.linspace(0, T, time_steps_per_period*num_periods+1)\n import odespy\n def f(u, t, beta, gamma):\n # Note the sequence of unknowns: v, u (v=du/dt)\n v, u = u\n return [-beta*np.sign(v) - 1.0/gamma*np.tanh(gamma*u), v]\n #return [-beta*np.sign(v) - u, v]\n\n solver = odespy.RK4(f, f_args=(beta, gamma))\n solver.set_initial_condition([delta, 1]) # sequence must match f\n uv, t = solver.solve(t)\n u = uv[:,1] # recall sequence in f: v, u\n v = uv[:,0]\n return u, t\n```\n\nWe simulate for an almost linear spring in the regime of $\\bar u$ (recall\nthat $\\bar u\\in [0,1]$ since $u$ is scaled with $I$), which corresponds\nto $\\alpha = 1$ in a) and therefore $\\gamma =1$. Then we can try a\nspring whose force quickly flattens out like $\\alpha=5$ in a), which\ncorresponds to $\\gamma = 5$ in the scaled model. A third option is\nto have a truly linear spring, e.g., $\\gamma =0.1$. After some\nexperimentation we realize that $\\beta=0,0.05, 0.1$ are relevant values.\n\n\n\n\n
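If Odespy is unavailable, the scaled model can also be integrated by a hand-written Euler-Cromer loop. The sketch below (with a made-up function name `simulate_box`) shows the hallmark of Coulomb friction: the amplitude decays roughly linearly, not exponentially, and the box eventually sticks:

```python
import numpy as np

def simulate_box(beta, gamma, delta=0.0, num_periods=8, steps_per_period=60):
    """Euler-Cromer scheme for the scaled sliding-box model
    u'' + beta*sign(u') + (1/gamma)*tanh(gamma*u) = 0,
    u(0)=1, u'(0)=delta."""
    dt = 2*np.pi/steps_per_period
    N = num_periods*steps_per_period
    t = np.linspace(0, N*dt, N+1)
    u = np.zeros(N+1)
    v = np.zeros(N+1)
    u[0], v[0] = 1.0, delta
    for n in range(N):
        a = -beta*np.sign(v[n]) - np.tanh(gamma*u[n])/gamma
        v[n+1] = v[n] + dt*a
        u[n+1] = u[n] + dt*v[n+1]
    return u, t

u, t = simulate_box(beta=0.1, gamma=1)
```

For a nearly linear spring ($\gamma$ around 1), the amplitude drops by approximately $4\beta$ per period until the spring force can no longer overcome the friction force, so with $\beta=0.1$ the motion stops after a few periods.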

\n\n\n\n\n\n\n\n\n

\n\n\n\n\n\n\n\n\n

\n\n\n\n\n\n\n\nFilename: `sliding_box`.\n\n\n\n\n\n\n\n\n## Exercise 3: Simulate a bouncing ball\n
\n\nThe section [Bouncing ball](#vib:app:bouncing_ball) presents a model for a bouncing\nball.\nChoose one of the two ODE formulations, ([34](#vib:app:bouncing:ball:h2eq)) or\n([35](#vib:app:bouncing:ball:veq))-([36](#vib:app:bouncing:ball:heq)),\nand simulate the motion of a bouncing ball. Plot $h(t)$. Think about how to\nplot $v(t)$.\n\n\n\n**Hint.**\nA naive implementation may get stuck in repeated impacts for large time\nstep sizes. To avoid this situation, one can introduce a state\nvariable that holds the mode of the motion: free fall, impact, or rest.\nTwo consecutive impacts imply that the motion has stopped.\n\n\n\n\n\n**Solution.**\nA tailored `solver` function and some plotting statements go like\n\n\n```python\nimport numpy as np\n\ndef solver(H, C_R, dt, T, eps_v=0.01, eps_h=0.01):\n    \"\"\"\n    Simulate bouncing ball until it comes to rest. Time step dt.\n    h(0)=H (initial height). T: maximum simulation time.\n    Method: Euler-Cromer.\n    \"\"\"\n    dt = float(dt)\n    Nt = int(round(T/dt))\n    h = np.zeros(Nt+1)\n    v = np.zeros(Nt+1)\n    t = np.linspace(0, Nt*dt, Nt+1)\n    g = 9.81\n\n    v[0] = 0\n    h[0] = H\n    mode = 'free fall'\n    for n in range(Nt):\n        v[n+1] = v[n] - dt*g\n        h[n+1] = h[n] + dt*v[n+1]\n\n        if h[n+1] < eps_h:\n            #if abs(v[n+1]) > eps_v:  # handles large dt, but is wrong\n            if v[n+1] < -eps_v:\n                # Impact\n                v[n+1] = -C_R*v[n+1]\n                h[n+1] = 0\n                if mode == 'impact':\n                    # Two impacts in a row: the motion has stopped\n                    return h[:n+2], v[:n+2], t[:n+2]\n                mode = 'impact'\n            elif abs(v[n+1]) < eps_v:\n                mode = 'rest'\n                v[n+1] = 0\n                h[n+1] = 0\n                return h[:n+2], v[:n+2], t[:n+2]\n            else:\n                mode = 'free fall'\n        else:\n            mode = 'free fall'\n        #print('%4d v=%8.5f h=%8.5f %s' % (n, v[n+1], h[n+1], mode))\n    raise ValueError('T=%g is too short simulation time' % T)\n\nimport matplotlib.pyplot as plt\nh, v, t = solver(\n    H=1, C_R=0.8, T=100, dt=0.0001, eps_v=0.01, eps_h=0.01)\nplt.plot(t, h)\nplt.legend(['h'])\nplt.savefig('tmp_h.png'); plt.savefig('tmp_h.pdf')\nplt.figure()\nplt.plot(t, 
v)\nplt.legend(['v'])\nplt.savefig('tmp_v.png'); plt.savefig('tmp_v.pdf')\nplt.show()\n```\n\n\n\n\n
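A simple sanity check of the scheme above: with restitution coefficient $C_R$, energy conservation between bounces gives an apex of $C_R^2H$ after the first impact. The self-contained sketch below (function name invented here) reproduces this without the plotting machinery:

```python
def first_rebound_apex(H=1.0, C_R=0.8, g=9.81, dt=1e-4):
    """Euler-Cromer free fall from height H; at the first impact the
    velocity is reversed and scaled by C_R; return the next apex."""
    h, v = H, 0.0
    while h > 0.0:            # free fall down to the ground
        v -= dt*g
        h += dt*v
    v = -C_R*v                # impact: reverse and damp the velocity
    h = 0.0
    apex = 0.0
    while v > 0.0:            # rise until the velocity changes sign
        v -= dt*g
        h += dt*v
        apex = max(apex, h)
    return apex

apex = first_rebound_apex()   # analytically: C_R**2*H = 0.64
```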

\n\n\n\n\n\n\nFilename: `bouncing_ball`.\n\n\n\n\n\n\n\n\n## Exercise 4: Simulate a simple pendulum\n
\n\nSimulation of a simple pendulum can be carried out by using\nthe mathematical model derived in the section [Motion of a pendulum](#vib:app:pendulum)\nand calling up functionality in the [`vib.py`](${src_vib}/vib.py)\nfile (i.e., solve the second-order ODE by centered finite differences).\n\n\n**a)**\nScale the model. Set up the dimensionless governing equation for $\theta$\nand expressions for dimensionless drag and wire forces.\n\n\n\n**Solution.**\nThe angle is measured in radians so we may think of this quantity as\ndimensionless, or we may scale it by the initial condition to obtain\na primary unknown that lies in $[-1,1]$. We go for the former strategy here.\n\nDimensionless time $\bar t$ is introduced as $t/t_c$ for some suitable\ntime scale $t_c$.\n\nInserted in the two governing equations\n([8](#vib:app:pendulum:thetaeq)) and ([6](#vib:app:pendulum:ir)),\nfor the\ntwo unknowns $\theta$ and $S$, respectively, we obtain\n\n$$\n\begin{align*}\n-S + mg\cos\theta &= -\frac{m}{t_c^2}L\left(\frac{d\theta}{d\bar t}\right)^2,\\ \n\frac{1}{t_c^2}m\frac{d^2\theta}{d\bar t^2} +\n\frac{1}{2}C_D\varrho AL \frac{1}{t_c^2}\left\vert\n\frac{d\theta}{d\bar t}\right\vert\n\frac{d\theta}{d\bar t}\n+ \frac{mg}{L}\sin\theta &= 0\thinspace .\n\end{align*}\n$$\n\nWe multiply the latter equation by $t_c^2/m$ to make each term\ndimensionless:\n\n$$\n\frac{d^2\theta}{d\bar t^2} +\n\frac{1}{2m}C_D\varrho AL \left\vert\n\frac{d\theta}{d\bar t}\right\vert\n\frac{d\theta}{d\bar t}\n+ \frac{t_c^2g}{L}\sin\theta = 0\thinspace .\n$$\n\nAssuming that the acceleration term and the\ngravity term are the dominating terms, these should balance, so\n$t_c^2g/L=1$, giving $t_c = \sqrt{L/g}$. With $A=\pi R^2$ we get the\ndimensionless ODEs\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{d^2\\theta}{d\\bar t^2} +\n\\alpha\\left\\vert\\frac{d\\theta}{d\\bar t}\\right\\vert\\frac{d\\theta}{d\\bar t} +\n\\sin\\theta = 0,\n\\label{vib:exer:pendulum_simple:eq:ith:s} \\tag{42}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\begin{equation} \n\frac{S}{mg} = \left(\frac{d\theta}{d\bar t}\right)^2 + \cos\theta,\n\label{vib:exer:pendulum_simple:eq:ir:s} \tag{43}\n\end{equation}\n$$\n\nwhere $\alpha$ is a dimensionless drag coefficient\n\n$$\n\alpha = \frac{C_D\varrho\pi R^2L}{2m}\thinspace .\n$$\n\nNote that in ([43](#vib:exer:pendulum_simple:eq:ir:s)) we have divided by\n$mg$, which is in fact a force scale, making the gravity force unity\nand also $S/mg=1$ in the equilibrium position $\theta=0$. We may\nintroduce\n\n$$\n\bar S = S/mg\n$$\n\nas a dimensionless wire force.\n\nThe parameter $\alpha$ roughly measures\nthe ratio of the drag force to the gravity force:\n\n$$\n\frac{|\frac{1}{2} C_D\varrho \pi R^2 |v|v|}{|mg|}\sim\n\frac{C_D\varrho \pi R^2 L^2 t_c^{-2}}{2mg}\n\left|\frac{d\bar\theta}{d\bar t}\right|\frac{d\bar\theta}{d\bar t}\n\sim \frac{C_D\varrho \pi R^2 L}{2m}\Theta^2 = \alpha \Theta^2\thinspace .\n$$\n\n(We have that $\theta(t)/\Theta$ is in $[-1,1]$, so we expect\n$\Theta^{-1}d\bar\theta/d\bar t$ to be around unity. Here,\n$\Theta=\theta(0)$.)\n\nLet us introduce $\omega$ for the dimensionless angular velocity,\n\n$$\n\omega = \frac{d\theta}{d\bar t}\thinspace .\n$$\n\nWhen $\theta$ is computed, the dimensionless wire and drag forces\nare computed by\n\n$$\n\begin{align*}\n\bar S &= \omega^2 + \cos\theta,\\ \n\bar D &= -\alpha |\omega|\omega\thinspace .\n\end{align*}\n$$\n\n\n\n**b)**\nWrite a function for computing\n$\theta$ and the dimensionless drag force and the force in the wire,\nusing the `solver` function in\nthe `vib.py` file. Plot these three quantities\nbelow each other (in subplots) so the graphs can be compared.\nRun two cases, first one in the limit of $\Theta$ small and\nno drag, and then a second one with $\Theta=40$ degrees and $\alpha=0.8$.\n\n\n\n**Solution.**\nThe first step is to realize how to utilize the `solver` function for\nour dimensionless model. 
Introducing `Theta` for $\Theta$, the\narguments to `solver` must be set as\n\n\n```python\nI = Theta\nV = 0\nm = 1\nb = alpha\ns = lambda u: sin(u)\nF = lambda t: 0\ndamping = 'quadratic'\n```\n\nAfter computing $\theta$, we need to find $\omega$ by finite differences:\n\n$$\n\omega^n = \frac{\theta^{n+1}-\theta^{n-1}}{2\Delta t},\n\ n=1,\ldots,N_t-1,\quad \omega^0=\frac{\theta^1-\theta^0}{\Delta t},\n\ \omega^{N_t}=\frac{\theta^{N_t}-\theta^{N_t-1}}{\Delta t}\thinspace .\n$$\n\nThe duration of the simulation and the time step can be computed on the\nbasis of the analytical insight we have for small $\theta$\n($\theta\approx \Theta\cos(t)$). A complete function then reads\n\n\n```python\ndef simulate(Theta, alpha, num_periods=10):\n    # Dimensionless model requires the following parameters:\n    from math import sin, pi\n\n    I = Theta\n    V = 0\n    m = 1\n    b = alpha\n    s = lambda u: sin(u)\n    F = lambda t: 0\n    damping = 'quadratic'\n\n    # Estimate T and dt from the small angle solution\n    P = 2*pi   # one period (theta small, no drag)\n    dt = P/40  # 40 intervals per period\n    T = num_periods*P\n\n    theta, t = solver(I, V, m, b, s, F, dt, T, damping)\n    omega = np.zeros(theta.size)\n    omega[1:-1] = (theta[2:] - theta[:-2])/(2*dt)\n    omega[0] = (theta[1] - theta[0])/dt\n    omega[-1] = (theta[-1] - theta[-2])/dt\n\n    S = omega**2 + np.cos(theta)\n    D = -alpha*np.abs(omega)*omega  # bar D as defined above\n    return t, theta, S, D\n```\n\nAssuming imports like\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\nthe following function visualizes $\theta$, $\bar S$, and $\bar D$\nwith three subplots:\n\n\n```python\ndef visualize(t, theta, S, D, filename='tmp'):\n    f, (ax1, ax2, ax3) = plt.subplots(3, sharex=True, sharey=False)\n    ax1.plot(t, theta)\n    ax1.set_title(r'$\theta(t)$')\n    ax2.plot(t, S)\n    ax2.set_title(r'Dimensionless force in the wire')\n    ax3.plot(t, D)\n    ax3.set_title(r'Dimensionless drag force')\n    plt.savefig('%s.png' % filename)\n    plt.savefig('%s.pdf' % 
filename)\n```\n\nA suitable main program is\n\n\n```python\nimport math\n# Rough verification that small theta and no drag gives cos(t)\nTheta = 1.0\nalpha = 0\nt, theta, S, D = simulate(Theta, alpha, num_periods=4)\n# Scale theta by Theta (easier to compare with cos(t))\ntheta /= Theta\nvisualize(t, theta, S, D, filename='pendulum_verify')\n\nTheta = math.radians(40)\nalpha = 0.8\nt, theta, S, D = simulate(Theta, alpha)\nvisualize(t, theta, S, D, filename='pendulum_alpha0.8_Theta40')\nplt.show()\n```\n\nThe \"verification\" case looks good (at least when the `solver` function\nhas been thoroughly verified in other circumstances):\n\n\n\n\n
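The same verification can be carried out independently of `vib.py` with a standard centered-difference scheme for the scaled, drag-free pendulum $\ddot\theta + \sin\theta = 0$. This is a sketch; `pendulum_theta` is an ad hoc name introduced here:

```python
import numpy as np

def pendulum_theta(Theta, num_periods=4, steps_per_period=600):
    """Centered differences for theta'' = -sin(theta),
    theta(0)=Theta, theta'(0)=0 (scaled, drag-free pendulum)."""
    dt = 2*np.pi/steps_per_period
    N = num_periods*steps_per_period
    t = np.linspace(0, N*dt, N+1)
    theta = np.zeros(N+1)
    theta[0] = Theta
    theta[1] = Theta - 0.5*dt**2*np.sin(Theta)  # special first step
    for n in range(1, N):
        theta[n+1] = 2*theta[n] - theta[n-1] - dt**2*np.sin(theta[n])
    return theta, t

Theta = 0.05  # radians, small enough for theta(t) ~ Theta*cos(t)
theta, t = pendulum_theta(Theta)
err = np.abs(theta - Theta*np.cos(t)).max()
```

For $\Theta=0.05$ the deviation from $\Theta\cos\bar t$ stays well below one percent of $\Theta$ over four periods.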

\n\n\n\n\n\nThe \"real case\" shows how quickly the drag force is reduced, even when\nwe set $\\alpha$ to a significant value (0.8):\n\n\n\n\n

\n\n\n\n\n\n\n\n\n\n\nFilename: `simple_pendulum`.\n\n\n\n\n\n\n\n\n## Exercise 5: Simulate an elastic pendulum\n
\n\nThe section [Motion of an elastic pendulum](#vib:app:pendulum_elastic) describes a model for an elastic\npendulum, resulting in a system of two ODEs. The purpose of this\nexercise is to implement the scaled model, test the software, and\ngeneralize the model.\n\n\n**a)**\nWrite a function `simulate`\nthat can simulate an elastic pendulum using the scaled model.\nThe function should have the following arguments:\n\n\n```python\ndef simulate(\n beta=0.9, # dimensionless parameter\n Theta=30, # initial angle in degrees\n epsilon=0, # initial stretch of wire\n num_periods=6, # simulate for num_periods\n time_steps_per_period=60, # time step resolution\n plot=True, # make plots or not\n ):\n```\n\nTo set the total simulation time and the time step, we\nuse our knowledge of the scaled, classical, non-elastic pendulum:\n$u^{\\prime\\prime} + u = 0$, with solution\n$u = \\Theta\\cos \\bar t$.\nThe period of these oscillations is $P=2\\pi$\nand the frequency is unity. The time\nfor simulation is taken as `num_periods` times $P$. The time step\nis set as $P$ divided by `time_steps_per_period`.\n\nThe `simulate` function should return the arrays of\n$x$, $y$, $\\theta$, and $t$, where $\\theta = \\tan^{-1}(x/(1-y))$ is\nthe angular displacement of the elastic pendulum corresponding to the\nposition $(x,y)$.\n\nIf `plot` is `True`, make a plot of $\\bar y(\\bar t)$\nversus $\\bar x(\\bar t)$, i.e., the physical motion\nof the mass at $(\\bar x,\\bar y)$. Use the equal aspect ratio on the axis\nsuch that we get a physically correct picture of the motion. 
Also\nmake a plot of $\\theta(\\bar t)$, where $\\theta$ is measured in degrees.\nIf $\\Theta < 10$ degrees, add a plot that compares the solutions of\nthe scaled, classical, non-elastic pendulum and the elastic pendulum\n($\\theta(t)$).\n\nAlthough the mathematics here employs a bar over scaled quantities, the\ncode should feature plain names `x` for $\\bar x$, `y` for $\\bar y$, and\n`t` for $\\bar t$ (rather than `x_bar`, etc.). These variable names make\nthe code easier to read and compare with the mathematics.\n\n\n\n**Hint 1.**\nEqual aspect ratio is set by `plt.gca().set_aspect('equal')` in\nMatplotlib (`import matplotlib.pyplot as plt`)\nand in SciTools by the command\n`plt.plot(..., daspect=[1,1,1], daspectmode='equal')`\n(provided you have done `import scitools.std as plt`).\n\n\n\n\n\n**Hint 2.**\nIf you want to use Odespy to solve the equations, order the ODEs\nlike $\\dot \\bar x, \\bar x, \\dot\\bar y,\\bar y$ such that\n`odespy.EulerCromer` can be applied.\n\n\n\n\n\n**Solution.**\nHere is a suggested `simulate` function:\n\n\n```python\nimport odespy\nimport numpy as np\nimport scitools.std as plt\n\ndef simulate(\n beta=0.9, # dimensionless parameter\n Theta=30, # initial angle in degrees\n epsilon=0, # initial stretch of wire\n num_periods=6, # simulate for num_periods\n time_steps_per_period=60, # time step resolution\n plot=True, # make plots or not\n ):\n from math import sin, cos, pi\n Theta = Theta*np.pi/180 # convert to radians\n # Initial position and velocity\n # (we order the equations such that Euler-Cromer in odespy\n # can be used, i.e., vx, x, vy, y)\n ic = [0, # x'=vx\n (1 + epsilon)*sin(Theta), # x\n 0, # y'=vy\n 1 - (1 + epsilon)*cos(Theta), # y\n ]\n\n def f(u, t, beta):\n vx, x, vy, y = u\n L = np.sqrt(x**2 + (y-1)**2)\n h = beta/(1-beta)*(1 - beta/L) # help factor\n return [-h*x, vx, -h*(y-1) - beta, vy]\n\n # Non-elastic pendulum (scaled similarly in the limit beta=1)\n # solution Theta*cos(t)\n P = 2*pi\n dt = 
P/time_steps_per_period\n T = num_periods*P\n omega = 2*pi/P\n\n time_points = np.linspace(\n 0, T, num_periods*time_steps_per_period+1)\n\n solver = odespy.EulerCromer(f, f_args=(beta,))\n solver.set_initial_condition(ic)\n u, t = solver.solve(time_points)\n x = u[:,1]\n y = u[:,3]\n theta = np.arctan(x/(1-y))\n\n if plot:\n plt.figure()\n plt.plot(x, y, 'b-', title='Pendulum motion',\n daspect=[1,1,1], daspectmode='equal',\n axis=[x.min(), x.max(), 1.3*y.min(), 1])\n plt.savefig('tmp_xy.png')\n plt.savefig('tmp_xy.pdf')\n # Plot theta in degrees\n plt.figure()\n plt.plot(t, theta*180/np.pi, 'b-',\n title='Angular displacement in degrees')\n plt.savefig('tmp_theta.png')\n plt.savefig('tmp_theta.pdf')\n if abs(Theta) < 10*pi/180:\n # Compare theta and theta_e for small angles (<10 degrees)\n theta_e = Theta*np.cos(omega*t) # non-elastic scaled sol.\n plt.figure()\n plt.plot(t, theta, t, theta_e,\n legend=['theta elastic', 'theta non-elastic'],\n title='Elastic vs non-elastic pendulum, '\\\n 'beta=%g' % beta)\n plt.savefig('tmp_compare.png')\n plt.savefig('tmp_compare.pdf')\n # Plot y vs x (the real physical motion)\n return x, y, theta, t\n```\n\n\n\n**b)**\nWrite a test function for testing that $\\Theta=0$ and $\\epsilon=0$\ngives $x=y=0$ for all times.\n\n\n\n**Solution.**\nHere is the code:\n\n\n```python\ndef test_equilibrium():\n \"\"\"Test that starting from rest makes x=y=theta=0.\"\"\"\n x, y, theta, t = simulate(\n beta=0.9, Theta=0, epsilon=0,\n num_periods=6, time_steps_per_period=10, plot=False)\n tol = 1E-14\n assert np.abs(x.max()) < tol\n assert np.abs(y.max()) < tol\n assert np.abs(theta.max()) < tol\n```\n\n\n\n**c)**\nWrite another test function for checking that the pure vertical\nmotion of the elastic pendulum is correct.\nStart with simplifying the ODEs for pure vertical motion and show that\n$\\bar y(\\bar t)$ fulfills a vibration equation with\nfrequency $\\sqrt{\\beta/(1-\\beta)}$. 
Set up the exact solution.\n\nWrite a test function that\nuses this special case to verify the `simulate` function. There will\nbe numerical approximation errors present in the results from\n`simulate`, so you have to anticipate which results are correct and set a\n(low) tolerance that corresponds to the observed maximum error.\nUse a small $\Delta t$ to obtain a small numerical approximation error.\n\n\n\n**Solution.**\nFor purely vertical motion, the ODEs reduce to $\ddot x = 0$ and\n\n$$\n\frac{d^2\bar y}{d\bar t^2} = -\frac{\beta}{1-\beta}\left(1-\frac{\beta}{\sqrt{(\bar y - 1)^2}}\right)(\bar y-1) - \beta = -\frac{\beta}{1-\beta}(\bar y-1 + \beta) - \beta\thinspace .\n$$\n\nWe have here used that $(\bar y -1)/\sqrt{(\bar y -1)^2}=-1$ since\n$\bar y$ cannot exceed 1 (the pendulum's wire is fixed at the scaled\npoint $(0,1)$). In fact, $\bar y$ will be around zero.\n(As a consistency check, we realize that in equilibrium, $\ddot{\bar y} =0$,\nand multiplying by $(1-\beta)/\beta$ leads to the expected $\bar y=0$.)\nFurther calculations easily lead to\n\n$$\n\frac{d^2\bar y}{d\bar t^2} = -\frac{\beta}{1-\beta}\bar y = -\omega^2\bar y,\n$$\n\nwhere we have introduced the frequency\n$\omega = \sqrt{\beta/(1-\beta)}$.\nSolving this standard ODE with the initial stretching\n$\bar y(0)=-\epsilon$ (the initial conditions in `simulate` give\n$\bar y(0) = 1 - (1+\epsilon)\cos 0 = -\epsilon$) and no velocity results in\n\n$$\n\bar y(\bar t) = -\epsilon\cos(\omega\bar t)\thinspace .\n$$\n\nNote that the oscillations we describe here are very different from\nthe oscillations used to set the period and time step in the function\n`simulate`. The latter type of oscillation is due to gravity when\na classical, non-elastic pendulum oscillates back and forth, while\n$\bar y(\bar t)$ above refers to vertical *elastic* oscillations in the wire\naround the equilibrium point in the gravity field. The angular frequency\nof the vertical oscillations is given by $\omega$ and the corresponding\nperiod is $\hat P = 2\pi/\omega$. 
Suppose we want to simulate for\n$T=N\hat P = N2\pi/\omega$ and use $n$ time steps per period,\n$\Delta\bar t = \hat P/n$. The `simulate` function operates with\na simulation time of `num_periods` times $2\pi$. This means that we must set\n`num_periods=N/omega` if we want to simulate to time $T=N\hat P$.\nThe parameter `time_steps_per_period` must be set to $\omega n$\nsince `simulate` has $\Delta t$ as $2\pi$ divided by `time_steps_per_period`\nand we want $\Delta t = 2\pi\omega^{-1}n^{-1}$.\n\nThe corresponding test function can be written as follows.\n\n\n```python\ndef test_vertical_motion():\n    beta = 0.9\n    omega = np.sqrt(beta/(1-beta))\n    # Find num_periods. Recall that P=2*pi for scaled pendulum\n    # oscillations, while here we don't have gravity driven\n    # oscillations, but elastic oscillations with frequency omega.\n    period = 2*np.pi/omega\n    # We want T = N*period\n    N = 5\n    # simulate function has T = 2*pi*num_periods\n    num_periods = N/omega\n    n = 600\n    time_steps_per_period = omega*n\n\n    y_exact = lambda t: -0.1*np.cos(omega*t)\n    x, y, theta, t = simulate(\n        beta=beta, Theta=0, epsilon=0.1,\n        num_periods=num_periods,\n        time_steps_per_period=time_steps_per_period,\n        plot=False)\n\n    tol = 0.00055  # ok tolerance for the above resolution\n    # No motion in x direction is expected\n    assert np.abs(x.max()) < tol\n    # Check motion in y direction\n    y_e = y_exact(t)\n    diff = np.abs(y_e - y).max()\n    if diff > tol:  # plot\n        plt.plot(t, y, t, y_e, legend=['y', 'exact'])\n        input('Error in test_vertical_motion; type CR:')\n    assert diff < tol, 'diff=%g' % diff\n```\n\n\n\n**d)**\nMake a function `demo(beta, Theta)` for simulating an elastic pendulum with a\ngiven $\beta$ parameter and initial angle $\Theta$. 
Use 600 time steps\nper period to get very accurate results, and simulate for 3 periods.\n\n\n\n**Solution.**\nThe `demo` function is just\n\n\n```python\ndef demo(beta=0.999, Theta=40, num_periods=3):\n    x, y, theta, t = simulate(\n        beta=beta, Theta=Theta, epsilon=0,\n        num_periods=num_periods, time_steps_per_period=600,\n        plot=True)\n```\n\nBelow are plots corresponding to $\beta = 0.999$ (3 periods) and\n$\beta = 0.93$ (one period):\n\n\n\n\n
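For readers without Odespy, the same physics can be reproduced by a hand-written Euler-Cromer loop for the scaled system (the right-hand side matches the function `f` in `simulate`; the function name below is made up). Pure vertical motion then recovers the elastic oscillations $\bar y = -\epsilon\cos(\omega\bar t)$ with $\omega=\sqrt{\beta/(1-\beta)}$ used in the test of c):

```python
import numpy as np

def elastic_pendulum_ec(beta=0.9, Theta=0.0, epsilon=0.1,
                        num_elastic_periods=3, steps_per_period=2000):
    """Euler-Cromer scheme for the scaled elastic pendulum; with
    Theta=0 the motion is purely vertical: y(t) = -epsilon*cos(omega*t),
    omega = sqrt(beta/(1-beta))."""
    omega = np.sqrt(beta/(1 - beta))
    dt = (2*np.pi/omega)/steps_per_period
    N = num_elastic_periods*steps_per_period
    t = np.linspace(0, N*dt, N+1)
    x = np.zeros(N+1)
    y = np.zeros(N+1)
    vx = vy = 0.0
    x[0] = (1 + epsilon)*np.sin(Theta)
    y[0] = 1 - (1 + epsilon)*np.cos(Theta)
    for n in range(N):
        L = np.sqrt(x[n]**2 + (y[n] - 1)**2)
        h = beta/(1 - beta)*(1 - beta/L)   # same help factor as in f
        vx += -dt*h*x[n]
        vy += dt*(-h*(y[n] - 1) - beta)
        x[n+1] = x[n] + dt*vx
        y[n+1] = y[n] + dt*vy
    return x, y, t, omega

x, y, t, omega = elastic_pendulum_ec()
err = np.abs(y - (-0.1*np.cos(omega*t))).max()
```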

\n\n\n\n\n\n\n\n\n

\n\n\n\n\n\n\n\n\n

\n\n\n\n\n\n\n\n\n

\n\n\n\n\n\n\n\nFilename: `elastic_pendulum`.\n\n\n\n\n\n\n\n\n## Exercise 6: Simulate an elastic pendulum with air resistance\n
\n\nThis is a continuation of [Exercise 5: Simulate an elastic pendulum](#vib:exer:pendulum_elastic).\nAir resistance on the body with mass $m$ can be modeled by the\nforce $-\frac{1}{2}\varrho C_D A|\boldsymbol{v}|\boldsymbol{v}$,\nwhere $C_D$ is a drag coefficient (0.2 for a sphere), $\varrho$\nis the density of air (1.2 $\hbox{kg }\,{\hbox{m}}^{-3}$), $A$ is the\ncross section area ($A=\pi R^2$ for a sphere, where $R$ is the radius),\nand $\boldsymbol{v}$ is the velocity of the body.\nInclude air resistance in the original model, scale the model,\nwrite a function `simulate_drag` that is a copy of the `simulate`\nfunction from [Exercise 5: Simulate an elastic pendulum](#vib:exer:pendulum_elastic), but with the\nnew ODEs included, and show plots of how air resistance\ninfluences the motion.\n\n\n\n**Solution.**\nWe start with the model\n([18](#vib:app:pendulum_elastic:x))-([24](#vib:app:pendulum_elastic:vy0)).\nSince $\boldsymbol{v} = \dot x\boldsymbol{i} + \dot y\boldsymbol{j}$, the air resistance term\ncan be written\n\n$$\n-q(\dot x\boldsymbol{i} + \dot y\boldsymbol{j}),\quad q=\frac{1}{2}\varrho C_D A\sqrt{\dot x^2 + \dot y^2}\thinspace .\n$$\n\nNote that for positive velocities, the pendulum is moving to the right\nand the air resistance works against the motion, i.e., in the direction of\n$-\boldsymbol{v} = -\dot x\boldsymbol{i} - \dot y\boldsymbol{j}$.\n\nWe can easily include the terms in the ODEs:\n\n\n
\n\n$$\n\\begin{equation}\n\\ddot x = -\\frac{q}{m}\\dot x -\\frac{k}{m}\\left(1 -\\frac{L_0}{L}\\right)(x-x_0),\n\\label{vib:app:pendulum_elastic_drag:x} \\tag{44}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\ddot y = -\\frac{q}{m}\\dot y -\\frac{k}{m}\\left(1 -\\frac{L_0}{L}\\right)(y-y_0) - g,\n\\label{vib:app:pendulum_elastic_drag:y} \\tag{45}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \nL = \\sqrt{(x-x_0)^2 + (y-y_0)^2},\n\\label{vib:app:pendulum_elastic_drag:L} \\tag{46}\n\\end{equation}\n$$\n\n\n
\n\nThe initial conditions are not affected.\n\nThe next step is to scale the model. We use the same scales as in\n[Exercise 5: Simulate an elastic pendulum](#vib:exer:pendulum_elastic), introduce $\beta$, and $A=\pi R^2$\nto simplify the $-q\dot x/m$ term to\n\n$$\n\frac{\pi L_0}{2m}\varrho C_D R^2\beta^{-1}\n\sqrt{\left(\frac{d\bar x}{d\bar t}\right)^2 +\n\left(\frac{d\bar y}{d\bar t}\right)^2}\n= \gamma \beta^{-1}\n\sqrt{\left(\frac{d\bar x}{d\bar t}\right)^2 +\n\left(\frac{d\bar y}{d\bar t}\right)^2},\n$$\n\nwhere $\gamma$ is a second dimensionless parameter:\n\n$$\n\gamma = \frac{\pi L_0}{2m}\varrho C_D R^2\thinspace .\n$$\n\nThe final set of scaled equations is then\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{d^2\\bar x}{d\\bar t^2} = -\\gamma\\beta^{-1}\n\\sqrt{\\left(\\frac{d\\bar x}{d\\bar t}\\right)^2 +\n\\left(\\frac{d\\bar y}{d\\bar t}\\right)^2}\\frac{d\\bar x}{d\\bar t}\n-\\frac{\\beta}{1-\\beta}\\left(1- \\frac{\\beta}{\\bar L}\\right)\\bar x,\n\\label{vib:app:pendulum_elastic_drag:x:s} \\tag{48}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\frac{d^2\\bar y}{d\\bar t^2} =\n-\\gamma\\beta^{-1}\n\\sqrt{\\left(\\frac{d\\bar x}{d\\bar t}\\right)^2 +\n\\left(\\frac{d\\bar y}{d\\bar t}\\right)^2}\\frac{d\\bar y}{d\\bar t}\n-\\frac{\\beta}{1-\\beta}\\left(1- \\frac{\\beta}{\\bar L}\\right)(\\bar y-1)\n-\\beta,\n\\label{vib:app:pendulum_elastic_drag:y:s} \\tag{49}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\bar L = \\sqrt{\\bar x^2 + (\\bar y-1)^2},\n\\label{vib:app:pendulum_elastic_drag:L:s} \\tag{50}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\bar x(0) = (1+\\epsilon)\\sin\\Theta,\n\\label{vib:app:pendulum_elastic_drag:x0:s} \\tag{51}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\frac{d\\bar x}{d\\bar t}(0) = 0,\n\\label{vib:app:pendulum_elastic_drag:vx0:s} \\tag{52}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\bar y(0) = 1 - (1+\\epsilon)\\cos\\Theta,\n\\label{vib:app:pendulum_elastic_drag:y0:s} \\tag{53}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\frac{d\\bar y}{d\\bar t}(0) = 0,\n\\label{vib:app:pendulum_elastic_drag:vy0:s} \\tag{54}\n\\end{equation}\n$$\n\nThe new `simulate_drag` function is implemented below.\n\n\n```python\ndef simulate_drag(\n beta=0.9, # dimensionless elasticity parameter\n gamma=0, # dimensionless drag parameter\n Theta=30, # initial angle in degrees\n epsilon=0, # initial stretch of wire\n num_periods=6, # simulate for num_periods\n time_steps_per_period=60, # time step resolution\n plot=True, # make plots or not\n ):\n from math import sin, cos, pi\n Theta = Theta*np.pi/180 # convert to radians\n # Initial position and velocity\n # (we order the equations such that Euler-Cromer in odespy\n # can be used, i.e., vx, x, vy, y)\n ic = [0, # x'=vx\n (1 + epsilon)*sin(Theta), # x\n 0, # y'=vy\n 1 - (1 + epsilon)*cos(Theta), # y\n ]\n\n def f(u, t, beta, gamma):\n vx, x, vy, y = u\n L = np.sqrt(x**2 + (y-1)**2)\n v = np.sqrt(vx**2 + vy**2)\n h1 = beta/(1-beta)*(1 - beta/L) # help factor\n h2 = gamma/beta*v\n return [-h2*vx - h1*x, vx, -h2*vy - h1*(y-1) - beta, vy]\n\n # Non-elastic pendulum (scaled similarly in the limit beta=1)\n # solution Theta*cos(t)\n P = 2*pi\n dt = P/time_steps_per_period\n T = num_periods*P\n omega = 2*pi/P\n\n time_points = np.linspace(\n 0, T, num_periods*time_steps_per_period+1)\n\n solver = odespy.EulerCromer(f, f_args=(beta, gamma))\n solver.set_initial_condition(ic)\n u, t = solver.solve(time_points)\n x = u[:,1]\n y = u[:,3]\n theta = np.arctan(x/(1-y))\n\n if plot:\n plt.figure()\n plt.plot(x, y, 'b-', title='Pendulum motion',\n daspect=[1,1,1], daspectmode='equal',\n axis=[x.min(), x.max(), 1.3*y.min(), 1])\n plt.savefig('tmp_xy.png')\n plt.savefig('tmp_xy.pdf')\n # Plot theta in degrees\n plt.figure()\n plt.plot(t, theta*180/np.pi, 'b-',\n title='Angular displacement in degrees')\n plt.savefig('tmp_theta.png')\n plt.savefig('tmp_theta.pdf')\n if abs(Theta) < 10*pi/180:\n # Compare theta and theta_e for small angles (<10 
degrees)\n theta_e = Theta*np.cos(omega*t) # non-elastic scaled sol.\n plt.figure()\n plt.plot(t, theta, t, theta_e,\n legend=['theta elastic', 'theta non-elastic'],\n title='Elastic vs non-elastic pendulum, '\\\n 'beta=%g' % beta)\n plt.savefig('tmp_compare.png')\n plt.savefig('tmp_compare.pdf')\n # Plot y vs x (the real physical motion)\n return x, y, theta, t\n```\n\nThe plot of $\\theta$ shows the damping ($\\beta = 0.999$):\n\n\n\n\n
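As a rough consistency check of the scaled drag term, the sketch below (a hand-written Euler-Cromer loop with a made-up function name, using the same right-hand side as in `simulate_drag`) compares the swing amplitude after six pendulum periods with and without drag; $\gamma>0$ must reduce it:

```python
import numpy as np

def last_period_amplitude(beta=0.999, gamma=0.0, Theta_deg=40,
                          num_periods=6, steps_per_period=600):
    """Euler-Cromer scheme for the scaled elastic pendulum with air
    drag; returns max |theta| (in radians) over the last period."""
    Theta = np.radians(Theta_deg)
    dt = 2*np.pi/steps_per_period
    N = num_periods*steps_per_period
    x, y = np.sin(Theta), 1 - np.cos(Theta)   # epsilon = 0
    vx = vy = 0.0
    theta = np.zeros(N+1)
    theta[0] = Theta
    for n in range(N):
        L = np.sqrt(x**2 + (y - 1)**2)
        v = np.sqrt(vx**2 + vy**2)
        h1 = beta/(1 - beta)*(1 - beta/L)  # elastic (spring) factor
        h2 = gamma/beta*v                  # drag factor
        vx += dt*(-h2*vx - h1*x)
        vy += dt*(-h2*vy - h1*(y - 1) - beta)
        x += dt*vx
        y += dt*vy
        theta[n+1] = np.arctan(x/(1 - y))
    return np.abs(theta[-(steps_per_period + 1):]).max()

amp_nodrag = last_period_amplitude(gamma=0.0)
amp_drag = last_period_amplitude(gamma=0.1)
```

With $\gamma=0.1$ the final amplitude comes out clearly smaller than the drag-free one.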

\n\n\n\n\n\nTest functions for equilibrium and vertical motion are also included. These\nare as in [Exercise 5: Simulate an elastic pendulum](#vib:exer:pendulum_elastic), except that\nthey call `simulate_drag` instead of `simulate`.\n\n\nFilename: `elastic_pendulum_drag`.\n\n\n\n### Remarks\n\nTest functions are challenging to construct for the problem with\nair resistance. You can reuse the tests from\n[Exercise 5: Simulate an elastic pendulum](#vib:exer:pendulum_elastic) for `simulate_drag`,\nbut these tests do not verify the new terms arising from air\nresistance.\n\n\n\n\n\n\n\n\n\n## Exercise 7: Implement the PEFRL algorithm\n
\n\nWe consider the motion of a planet around a star (the section [Two-body gravitational problem](#vib:app:gravitation)).\nThe simplified case, where one\nmass is much bigger than the other and the heavy object is at rest,\nresults in the scaled ODE model\n\n$$\n\begin{align*}\n\ddot x + (x^2 + y^2)^{-3/2}x & = 0,\\ \n\ddot y + (x^2 + y^2)^{-3/2}y & = 0\thinspace .\n\end{align*}\n$$\n\n**a)**\nIt is easy to show that $x(t)$ and $y(t)$ go like sine and cosine\nfunctions. Use this idea to derive the exact solution.\n\n\n\n**Solution.**\nWe may assume $x=C_x\cos(\omega t)$ and $y=C_y\sin(\omega t)$ for\nconstants $C_x$, $C_y$, and $\omega$. Inserted in the equations, we\nsee that $\omega =1$. The initial conditions determine the other\nconstants, which we may choose as $C_x=C_y=1$ (the object starts\nat $(1,0)$ with a velocity $(0,1)$). The motion is a perfect circle,\nwhich should last forever.\n\n\n\n**b)**\nOne believes that a planet may orbit a star for billions of years.\nWe are now interested\nin how accurate our methods actually need to be for such calculations.\nA first task is to determine what the time interval of interest is in\nscaled units. Take the earth and sun as typical objects and find\nthe characteristic time used in the scaling of the equations\n($t_c = \sqrt{L^3/(mG)}$), where $m$ is the mass of the sun, $L$ is the\ndistance between the sun and the earth, and $G$ is the gravitational\nconstant. Find the scaled time interval corresponding to one billion years.\n\n\n\n**Solution.**\nAccording to [Wikipedia](https://en.wikipedia.org/wiki/Solar_mass),\nthe mass of the sun is approximately $2\cdot 10^{30}$ kg. This\nis 332946 times the mass of the earth, implying that the\ndimensionless constant $\alpha \approx 3\cdot 10^{-6}$. 
With\n$G=6.674\cdot 10^{-11}\hbox{ Nm}^2/\hbox{kg}^2$, and the\n[sun-earth distance](https://en.wikipedia.org/wiki/Astronomical_unit)\nas (approximately) 150 million km, we have $t_c \approx 5 028 388$ s.\nThis is about 58 days, which is the characteristic time, chosen as the\ninverse of the angular frequency of the oscillations. To get the period of one orbit we therefore must multiply by $2\pi$. This gives about 1 year (and demonstrates the\nfact mentioned about the scaling: the natural time scale is consistent with\nKepler's law about the period).\n\nThus, one billion years corresponds to about $6.3\cdot 10^{9}$ time units (dividing\none billion years by $t_c$), i.e., about $10^9$ scaled orbital periods of\n$2\pi$ time units each.\n\n\n\n**c)**\nSolve the equations using 4th-order Runge-Kutta and the Euler-Cromer\nmethods. You may benefit from applying Odespy for this purpose. With\neach solver, simulate 10000 orbits and print the maximum position\nerror and CPU time as a function of time step. Note that the maximum\nposition error does not necessarily occur at the end of the\nsimulation. The position error achieved with each solver will depend\nheavily on the size of the time step. Let the time step correspond to\n200, 400, 800 and 1600 steps per orbit, respectively. Are the results\nas expected? Explain briefly. When you develop your program, keep in\nmind that it will be extended with an implementation of the other\nalgorithms (as requested in d) and e) later) and experiments with this\nalgorithm as well.\n\n\n\n**Solution.**\nThe first task is to implement the right-hand side function for the\nsystem of ODEs such that we can call up Odespy solvers (or make use of\nother types of ODE software, e.g., from SciPy). The system of two\nsecond-order ODEs must be expressed as a system of four first-order\nODEs. We have three different cases of right-hand sides:\n\n1. Common numbering of unknowns: $x$, $v_x$, $y$, $v_y$\n\n2. 
Numbering required by Euler-Cromer: $v_x$, $x$, $v_y$, $y$\n\n3. Numbering required by the PEFRL method: same as Euler-Cromer\n\nMost Odespy solvers can handle any convention for numbering of the unknowns.\nThe important point is that initial conditions and new values at the end of\nthe time step are filled in the right positions of a one-dimensional array\ncontaining the unknowns.\nUsing Odespy to solve the system by the Euler-Cromer method, however, requires\nthe unknowns to appear as velocity 1st degree-of-freedom, displacement\n1st degree-of-freedom, velocity 2nd degree-of-freedom, displacement\n2nd degree-of-freedom, and so forth. Two alternative right-hand side\nfunctions `f(u, t)` for Odespy solvers are then\n\n\n```python\ndef f_EC(u, t):\n '''\n Return derivatives for the 1st order system as\n required by Euler-Cromer.\n '''\n vx, x, vy, y = u # u: array holding vx, x, vy, y\n d = -(x**2 + y**2)**(-3.0/2)\n return [d*x, vx, d*y, vy ]\n\ndef f_RK4(u, t):\n '''\n Return derivatives for the 1st order system as\n required by ordinary solvers in Odespy.\n '''\n x, vx, y, vy = u # u: array holding x, vx, y, vy\n d = -(x**2 + y**2)**(-3.0/2)\n return [vx, d*x, vy, d*y ]\n```\n\nIn addition, we shall later in d) implement the PEFRL method and just\ngive the $g$ function as input to a system of the form $dv_x = g_x$,\n$dv_y = g_y$, and $g$ becomes the vector $(g_x,g_y)$:\n\n\n```python\ndef g(u, v):\n '''\n Return derivatives for the 1st order system as\n required by PEFRL.\n '''\n d = -(u[0]**2 + u[1]**2)**(-3.0/2)\n return np.array([d*u[0], d*u[1]])\n```\n\nSome prefer to number the unknowns differently; with the RK4 method we\nare free to use any numbering, as long as the initial conditions and the\nright-hand side function follow the same convention.\n\nThe standard way of solving the ODE by Odespy is then\n\n\n```python\ndef u_exact(t):\n \"\"\"Return exact solution at time t.\"\"\"\n return np.array([np.cos(t), np.sin(t)])\n\nu_e = u_exact(time_points).transpose()\n\nsolver = odespy.RK4(f_RK4)\nsolver.set_initial_condition(A)\nui, ti = solver.solve(time_points)\n\n# Find error (correct final pos: x=1, y=0)\norbit_error = np.sqrt(\n (ui[:,0]-u_e[:,0])**2 + (ui[:,2]-u_e[:,1])**2).max()\n```\n\nWe develop functions for computing errors and plotting results where we\ncan compare different methods. These functions are shown in the solution to\nitem f).\n\nRunning the code, the time step sizes become\n\n dt_values: [0.031415926535897934, 0.015707963267948967,\n 0.007853981633974483, 0.003926990816987242]\n\n\nCorresponding maximum errors (per cent) and CPU values (hours) for the 4th-order Runge-Kutta method are given in the table below.\n\n
Quantity $\\Delta t_1$ $\\Delta t_2$ $\\Delta t_3$ $\\Delta t_4$
$\\Delta t$ 0.03 0.02 0.008 0.004
Error 1.9039 0.0787 0.0025 7.7e-05
CPU (h) 0.03 0.06 0.12 0.23
\nFor Euler-Cromer we get these results:\n
| Quantity | $\Delta t_1$ | $\Delta t_2$ | $\Delta t_3$ | $\Delta t_4$ |
|------------|------|------|-------|-------|
| $\Delta t$ | 0.03 | 0.02 | 0.008 | 0.004 |
| Error | 2.0162 | 2.0078 | 1.9634 | 0.6730 |
| CPU (h) | 0.01 | 0.02 | 0.05 | 0.09 |
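The two tables invite a quick consistency check: since $\Delta t$ is halved from one column to the next, the observed convergence rate between neighbouring columns is $r = \ln(E_i/E_{i+1})/\ln 2$. A small sketch of this check (the error values are copied from the tables above):

```python
import math

def observed_rates(errors):
    # dt halves between successive columns, so log(dt ratio) = log(2)
    return [math.log(e0/e1)/math.log(2.0) for e0, e1 in zip(errors, errors[1:])]

rk4_rates = observed_rates([1.9039, 0.0787, 0.0025, 7.7e-05])
ec_rates = observed_rates([2.0162, 2.0078, 1.9634, 0.6730])
print(rk4_rates)  # around 4 or above, consistent with a 4th-order method
print(ec_rates)   # far below the asymptotic rate at these step sizes
```

The RK4 rates come out at 4 or slightly above (the maximum position error over many orbits is not a perfectly clean asymptotic error measure, so the rates need not equal 4 exactly), while the Euler-Cromer errors barely decrease until the smallest step: its error is still dominated by slowly accumulating phase drift at these step sizes.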
\n\nThese results are as expected. The Runge-Kutta implementation is much more accurate than Euler-Cromer, but since it requires more computations, more CPU time is needed. For both methods, the accuracy improves and the CPU time grows as\nthe step size is reduced, but the gain in accuracy is much more pronounced for\nthe 4th-order Runge-Kutta method.\n\n\n\n**d)**\nImplement a solver based on the PEFRL method from\nthe section [vib:ode2:PEFRL](#vib:ode2:PEFRL). Verify its 4th-order convergence\nusing the equation $u'' + u = 0$.\n\n\n\n**Solution.**\nHere is a solver function:\n\n\n```python\nimport numpy as np\nimport time\n\ndef solver_PEFRL(I, V, g, dt, T):\n \"\"\"\n Solve v' = - g(u,v), u'=v for t in (0,T], u(0)=I and v(0)=V,\n by the PEFRL method.\n \"\"\"\n dt = float(dt)\n Nt = int(round(T/dt))\n u = np.zeros((Nt+1, len(I)))\n v = np.zeros((Nt+1, len(I)))\n t = np.linspace(0, Nt*dt, Nt+1)\n\n # these values are from eq (20), ref to paper below\n xi = 0.1786178958448091\n lambda_ = -0.2123418310626054\n chi = -0.06626458266981849\n\n v[0] = V\n u[0] = I\n # Compare with eq 22 in http://arxiv.org/pdf/cond-mat/0110585.pdf\n for n in range(0, Nt):\n u_ = u[n] + xi*dt*v[n]\n v_ = v[n] + 0.5*(1-2*lambda_)*dt*g(u_, v[n])\n u_ = u_ + chi*dt*v_\n v_ = v_ + lambda_*dt*g(u_, v_)\n u_ = u_ + (1-2*(chi+xi))*dt*v_\n v_ = v_ + lambda_*dt*g(u_, v_)\n u_ = u_ + chi*dt*v_\n v[n+1] = v_ + 0.5*(1-2*lambda_)*dt*g(u_, v_)\n u[n+1] = u_ + xi*dt*v[n+1]\n #print 'v[%d]=%g, u[%d]=%g' % (n+1,v[n+1],n+1,u[n+1])\n return u, v, t\n```\n\nA proper test function for verification reads\n\n\n```python\ndef test_solver_PEFRL():\n \"\"\"Check 4th order convergence rate, using u'' + u = 0,\n I = 3.0, V = 0, which has the exact solution u_e = 3*cos(t)\"\"\"\n def g(u, v):\n return np.array([-u])\n def u_exact(t):\n return np.array([3*np.cos(t)]).transpose()\n I = u_exact(0)\n V = np.array([0])\n print 'V:', V, 'I:', I\n\n # Numerical parameters\n w = 1\n P = 2*np.pi/w\n dt_values = [P/20, P/40, P/80, P/160, P/320]\n T = 
8*P\n error_vs_dt = []\n for n, dt in enumerate(dt_values):\n u, v, t = solver_PEFRL(I, V, g, dt, T)\n error = np.abs(u - u_exact(t)).max()\n print 'error:', error\n if n > 0:\n error_vs_dt.append(error/dt**4)\n for i in range(1, len(error_vs_dt)):\n #print abs(error_vs_dt[i]- error_vs_dt[0])\n assert abs(error_vs_dt[i]-\n error_vs_dt[0]) < 0.1\n```\n\n\n\n**e)**\nThe simulations done previously with the 4th-order Runge-Kutta and\nEuler-Cromer are now to be repeated with the PEFRL solver, so the\ncode must be extended accordingly. Then run the simulations and comment\non the performance of PEFRL compared to the other two.\n\n\n\n**Solution.**\nWith the PEFRL algorithm, we get\n\n E max with dt...: [0.0010452575786173163, 6.5310955829464402e-05,\n 4.0475768394248492e-06, 2.9391302503251016e-07]\n cpu_values with dt...: [0.01873611111111106, 0.037422222222222294,\n 0.07511666666666655, 0.14985]\n\n\n\n\n\n\n\n\n\n\n\n
| Quantity | $\Delta t_1$ | $\Delta t_2$ | $\Delta t_3$ | $\Delta t_4$ |
|------------|------|------|-------|-------|
| $\Delta t$ | 0.03 | 0.02 | 0.008 | 0.004 |
| Error | 1.04e-03 | 6.53e-05 | 4.05e-06 | 2.94e-07 |
| CPU (h) | 0.02 | 0.04 | 0.08 | 0.15 |
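A useful perspective on these numbers: PEFRL is a symplectic method, so its energy error stays bounded for all time, whereas RK4 loses a little energy at every step, and that loss accumulates over many orbits. The sketch below illustrates the effect on the verification problem $u'' + u = 0$; it is an illustration only, not part of the exercise code (the PEFRL coefficients are copied from `solver_PEFRL` above, and the RK4 stepper is a standard hand-written one):

```python
import numpy as np

# PEFRL coefficients, as in solver_PEFRL
xi, lam, chi = 0.1786178958448091, -0.2123418310626054, -0.06626458266981849

def a(u):                       # acceleration for u'' + u = 0
    return -u

def pefrl_step(u, v, dt):
    # One PEFRL step, same update sequence as in solver_PEFRL
    u = u + xi*dt*v
    v = v + 0.5*(1 - 2*lam)*dt*a(u)
    u = u + chi*dt*v
    v = v + lam*dt*a(u)
    u = u + (1 - 2*(chi + xi))*dt*v
    v = v + lam*dt*a(u)
    u = u + chi*dt*v
    v = v + 0.5*(1 - 2*lam)*dt*a(u)
    u = u + xi*dt*v
    return u, v

def rk4_step(u, v, dt):
    # Classical RK4 for the first-order system y' = (v, -u)
    def f(y):
        return np.array([y[1], -y[0]])
    y = np.array([u, v])
    k1 = f(y)
    k2 = f(y + 0.5*dt*k1)
    k3 = f(y + 0.5*dt*k2)
    k4 = f(y + dt*k3)
    y = y + dt/6.0*(k1 + 2*k2 + 2*k3 + k4)
    return y[0], y[1]

def max_energy_drift(step, dt, n_periods):
    # Track the maximum deviation of E = (v**2 + u**2)/2 from its initial value
    u, v = 1.0, 0.0
    E0 = 0.5*(u**2 + v**2)
    drift = 0.0
    for _ in range(int(round(2*np.pi*n_periods/dt))):
        u, v = step(u, v, dt)
        drift = max(drift, abs(0.5*(u**2 + v**2) - E0))
    return drift

dt, n_periods = 0.2, 2000
pefrl_drift = max_energy_drift(pefrl_step, dt, n_periods)
rk4_drift = max_energy_drift(rk4_step, dt, n_periods)
print('PEFRL energy drift:', pefrl_drift)
print('RK4 energy drift:  ', rk4_drift)
```

Both methods are 4th-order accurate per step, but only PEFRL keeps the energy error from growing with time, which is the usual explanation for why it tracks the orbit so much better over 10000 periods.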
\n\nThe accuracy is now dramatically improved compared to 4th-order Runge-Kutta (and Euler-Cromer).\nWith 1600 steps per orbit, the PEFRL maximum error is just below $3.0e-07$ per cent, while\nthe corresponding error with Runge-Kutta was about $7.7e-05$ per cent! This is striking,\nconsidering the fact that the 4th-order Runge-Kutta and the PEFRL schemes are both 4th-order accurate.\n\n\n\n**f)**\nUse the PEFRL solver to simulate 100000 orbits with a fixed time step\ncorresponding to 1600 steps per period. Record the maximum error\nwithin each subsequent group of 1000 orbits. Plot these errors and fit\n(least squares) a mathematical function to the data. Print also the\ntotal CPU time spent for all 100000 orbits.\n\nNow, predict the error and required CPU time for a simulation of 1\nbillion years (orbits). Is it feasible on today's computers to\nsimulate the planetary motion for one billion years?\n\n\n\n**Solution.**\nThe complete code (which also produces the printouts given previously) reads:\n\n\n```python\nimport scitools.std as plt\nimport sys\nimport odespy\nimport numpy as np\nimport time\n\ndef solver_PEFRL(I, V, g, dt, T):\n \"\"\"\n Solve v' = - g(u,v), u'=v for t in (0,T], u(0)=I and v(0)=V,\n by the PEFRL method.\n \"\"\"\n dt = float(dt)\n Nt = int(round(T/dt))\n u = np.zeros((Nt+1, len(I)))\n v = np.zeros((Nt+1, len(I)))\n t = np.linspace(0, Nt*dt, Nt+1)\n\n # these values are from eq (20), ref to paper below\n xi = 0.1786178958448091\n lambda_ = -0.2123418310626054\n chi = -0.06626458266981849\n\n v[0] = V\n u[0] = I\n # Compare with eq 22 in http://arxiv.org/pdf/cond-mat/0110585.pdf\n for n in range(0, Nt):\n u_ = u[n] + xi*dt*v[n]\n v_ = v[n] + 0.5*(1-2*lambda_)*dt*g(u_, v[n])\n u_ = u_ + chi*dt*v_\n v_ = v_ + lambda_*dt*g(u_, v_)\n u_ = u_ + (1-2*(chi+xi))*dt*v_\n v_ = v_ + lambda_*dt*g(u_, v_)\n u_ = u_ + chi*dt*v_\n v[n+1] = v_ + 0.5*(1-2*lambda_)*dt*g(u_, v_)\n u[n+1] = u_ + xi*dt*v[n+1]\n #print 'v[%d]=%g, u[%d]=%g' % 
(n+1,v[n+1],n+1,u[n+1])\n return u, v, t\n\ndef test_solver_PEFRL():\n \"\"\"Check 4th order convergence rate, using u'' + u = 0,\n I = 3.0, V = 0, which has the exact solution u_e = 3*cos(t)\"\"\"\n def g(u, v):\n return np.array([-u])\n def u_exact(t):\n return np.array([3*np.cos(t)]).transpose()\n I = u_exact(0)\n V = np.array([0])\n print 'V:', V, 'I:', I\n\n # Numerical parameters\n w = 1\n P = 2*np.pi/w\n dt_values = [P/20, P/40, P/80, P/160, P/320]\n T = 8*P\n error_vs_dt = []\n for n, dt in enumerate(dt_values):\n u, v, t = solver_PEFRL(I, V, g, dt, T)\n error = np.abs(u - u_exact(t)).max()\n print 'error:', error\n if n > 0:\n error_vs_dt.append(error/dt**4)\n for i in range(1, len(error_vs_dt)):\n #print abs(error_vs_dt[i]- error_vs_dt[0])\n assert abs(error_vs_dt[i]-\n error_vs_dt[0]) < 0.1\n\n\nclass PEFRL(odespy.Solver):\n \"\"\"Class wrapper for Odespy.\"\"\" # Not used!\n quick_description = \"Explicit 4th-order method for v'=-f, u=v.\"\n\n def advance(self):\n u, f, n, t = self.u, self.f, self.n, self.t\n dt = t[n+1] - t[n]\n I = np.array([u[1], u[3]])\n V = np.array([u[0], u[2]])\n u, v, t = solver_PEFRL(I, V, f, dt, t+dt)\n return np.array([v[-1], u[-1]])\n\ndef compute_orbit_and_error(\n f,\n solver_ID,\n timesteps_per_period=20,\n N_orbit_groups=1000,\n orbit_group_size=10):\n '''\n For one particular solver:\n Calculate the orbits for a multiple of grouped orbits, i.e.\n number of orbits = orbit_group_size*N_orbit_groups.\n Returns: time step dt, and, for each N_orbit_groups cycle,\n the 2D position error and cpu time (as lists).\n '''\n def u_exact(t):\n return np.array([np.cos(t), np.sin(t)])\n\n w = 1\n P = 2*np.pi/w # scaled period (1 year becomes 2*pi)\n dt = P/timesteps_per_period\n Nt = orbit_group_size*N_orbit_groups*timesteps_per_period\n T = Nt*dt\n t_mesh = np.linspace(0, T, Nt+1)\n E_orbit = []\n\n #print ' dt:', dt\n T_interval = P*orbit_group_size\n N = int(round(T_interval/dt))\n\n # set initial conditions\n if solver_ID == 
'EC':\n A = [0,1,1,0]\n elif solver_ID == 'PEFRL':\n I = np.array([1, 0])\n V = np.array([0, 1])\n else:\n A = [1,0,0,1]\n\n t1 = time.clock()\n for i in range(N_orbit_groups):\n time_points = np.linspace(i*T_interval, (i+1)*T_interval,N+1)\n u_e = u_exact(time_points).transpose()\n if solver_ID == 'EC':\n solver = odespy.EulerCromer(f)\n solver.set_initial_condition(A)\n ui, ti = solver.solve(time_points)\n # Find error (correct final pos: x=1, y=0)\n orbit_error = np.sqrt(\n (ui[:,1]-u_e[:,0])**2 + (ui[:,3]-u_e[:,1])**2).max()\n elif solver_ID == 'PEFRL':\n # Note: every T_interval is here counted from time 0\n ui, vi, ti = solver_PEFRL(I, V, f, dt, T_interval)\n # Find error (correct final pos: x=1, y=0)\n orbit_error = np.sqrt(\n (ui[:,0]-u_e[:,0])**2 + (ui[:,1]-u_e[:,1])**2).max()\n else:\n solver = eval('odespy.' + solver_ID)(f)\n solver.set_initial_condition(A)\n ui, ti = solver.solve(time_points)\n # Find error (correct final pos: x=1, y=0)\n orbit_error = np.sqrt(\n (ui[:,0]-u_e[:,0])**2 + (ui[:,2]-u_e[:,1])**2).max()\n\n print ' Orbit no. %d, max error (per cent): %g' % \\\n ((i+1)*orbit_group_size, orbit_error)\n\n E_orbit.append(orbit_error)\n\n # set init. cond. 
for next time interval\n if solver_ID == 'EC':\n A = [ui[-1,0], ui[-1,1], ui[-1,2], ui[-1,3]]\n elif solver_ID == 'PEFRL':\n I = [ui[-1,0], ui[-1,1]]\n V = [vi[-1,0], vi[-1,1]]\n else: # RK4, adaptive rules, etc.\n A = [ui[-1,0], ui[-1,1], ui[-1,2], ui[-1,3]]\n\n t2 = time.clock()\n CPU_time = (t2 - t1)/(60.0*60.0) # in hours\n return dt, E_orbit, CPU_time\n\ndef orbit_error_vs_dt(\n f_EC, f_RK4, g, solvers,\n N_orbit_groups=1000,\n orbit_group_size=10):\n '''\n With each solver in list \"solvers\": Simulate\n orbit_group_size*N_orbit_groups orbits with different dt values.\n Collect final 2D position error for each dt and plot all errors.\n '''\n\n for solver_ID in solvers:\n print 'Computing orbit with solver:', solver_ID\n E_values = []\n dt_values = []\n cpu_values = []\n for timesteps_per_period in 200, 400, 800, 1600:\n print '.......time steps per period: ', \\\n timesteps_per_period\n if solver_ID == 'EC':\n dt, E, cpu_time = compute_orbit_and_error(\n f_EC,\n solver_ID,\n timesteps_per_period,\n N_orbit_groups,\n orbit_group_size)\n elif solver_ID == 'PEFRL':\n dt, E, cpu_time = compute_orbit_and_error(\n g,\n solver_ID,\n timesteps_per_period,\n N_orbit_groups,\n orbit_group_size)\n else:\n dt, E, cpu_time = compute_orbit_and_error(\n f_RK4,\n solver_ID,\n timesteps_per_period,\n N_orbit_groups,\n orbit_group_size)\n\n dt_values.append(dt)\n E_values.append(np.array(E).max())\n cpu_values.append(cpu_time)\n print 'dt_values:', dt_values\n print 'E max with dt...:', E_values\n print 'cpu_values with dt...:', cpu_values\n\n\ndef orbit_error_vs_years(\n f_EC, f_RK4, g, solvers,\n N_orbit_groups=1000,\n orbit_group_size=100,\n N_time_steps = 1000):\n '''\n For each solver in the list solvers:\n simulate orbit_group_size*N_orbit_groups orbits with a fixed\n dt corresponding to N_time_steps steps per year.\n Collect max 2D position errors for each N_time_steps'th run,\n plot these errors and CPU. 
Finally, make an empirical\n formula for error and CPU as functions of a number\n of cycles.\n '''\n timesteps_per_period = N_time_steps # fixed for all runs\n\n for solver_ID in solvers:\n print 'Computing orbit with solver:', solver_ID\n if solver_ID == 'EC':\n dt, E, cpu_time = compute_orbit_and_error(\n f_EC,\n solver_ID,\n timesteps_per_period,\n N_orbit_groups,\n orbit_group_size)\n elif solver_ID == 'PEFRL':\n dt, E, cpu_time = compute_orbit_and_error(\n g,\n solver_ID,\n timesteps_per_period,\n N_orbit_groups,\n orbit_group_size)\n else:\n dt, E, cpu_time = compute_orbit_and_error(\n f_RK4,\n solver_ID,\n timesteps_per_period,\n N_orbit_groups,\n orbit_group_size)\n\n # E and cpu_time are for every N_orbit_groups cycle\n print 'E_values (fixed dt, changing no of years):', E\n print 'CPU (hours):', cpu_time\n years = np.arange(\n 0,\n N_orbit_groups*orbit_group_size,\n orbit_group_size)\n\n # Now make empirical formula\n\n def E_of_years(x, *coeff):\n return sum(coeff[i]*x**float((len(coeff)-1)-i) \\\n for i in range(len(coeff)))\n E = np.array(E)\n degree = 4\n # note index: polyfit finds p[0]*x**4 + p[1]*x**3 ...etc.\n p = np.polyfit(years, E, degree)\n p_str = map(str, p)\n formula = ' + '.join([p_str[i] + '*x**' + \\\n str(degree-i) for i in range(degree+1)])\n\n print 'Empirical formula (error with years): ', formula\n plt.figure()\n plt.plot(years,\n E, 'b-',\n years,\n E_of_years(years, *p), 'r--')\n plt.xlabel('Number of years')\n plt.ylabel('Orbit error')\n plt.title(solver_ID)\n filename = solver_ID + 'tmp_E_with_years'\n plt.savefig(filename + '.png')\n plt.savefig(filename + '.pdf')\n plt.show()\n\n print 'Predicted CPU time in hours (1 billion years):', \\\n cpu_time*10000\n print 'Predicted max error (1 billion years):', \\\n E_of_years(1E9, *p)\n\ndef compute_orbit_error_and_CPU():\n '''\n Orbit error and associated CPU times are computed with\n solvers: RK4, Euler-Cromer, PEFRL.'''\n\n def f_EC(u, t):\n '''\n Return derivatives for the 1st 
order system as\n required by Euler-Cromer.\n '''\n vx, x, vy, y = u # u: array holding vx, x, vy, y\n d = -(x**2 + y**2)**(-3.0/2)\n return [d*x, vx, d*y, vy ]\n\n def f_RK4(u, t):\n '''\n Return derivatives for the 1st order system as\n required by ordinary solvers in Odespy.\n '''\n x, vx, y, vy = u # u: array holding x, vx, y, vy\n d = -(x**2 + y**2)**(-3.0/2)\n return [vx, d*x, vy, d*y ]\n\n def g(u, v):\n '''\n Return derivatives for the 1st order system as\n required by PEFRL.\n '''\n d = -(u[0]**2 + u[1]**2)**(-3.0/2)\n return np.array([d*u[0], d*u[1]])\n\n print 'Find orbit error as fu. of dt...(10000 orbits)'\n solvers = ['RK4', 'EC', 'PEFRL']\n N_orbit_groups=1\n orbit_group_size=10000\n orbit_error_vs_dt(\n f_EC, f_RK4, g, solvers,\n N_orbit_groups=N_orbit_groups,\n orbit_group_size=orbit_group_size)\n\n print 'Compute orbit error as fu. of no of years (fixed dt)...'\n solvers = ['PEFRL']\n N_orbit_groups=100\n orbit_group_size=1000\n N_time_steps = 1600 # no of steps per orbit cycle\n orbit_error_vs_years(\n f_EC, f_RK4, g, solvers,\n N_orbit_groups=N_orbit_groups,\n orbit_group_size=orbit_group_size,\n N_time_steps = N_time_steps)\n\nif __name__ == '__main__':\n test_solver_PEFRL()\n compute_orbit_error_and_CPU()\n```\n\nThe maximum error develops with number of orbits as seen in the following plot,\nwhere the red dashed curve is from the mathematical model:\n\n\n\n\n

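The 1-billion-year error prediction quoted in the printout below is pure polynomial extrapolation, and at $x = 10^9$ years the quartic term dominates completely. This can be checked with a few lines of arithmetic (the coefficients are copied from the printed empirical formula; no rerun of the simulation is needed):

```python
# Coefficients of the fitted degree-4 polynomial, highest power first,
# copied from the printed empirical formula
p = [3.15992325978e-26, -6.1772567063e-21, 1.87983349496e-16,
     2.32924158693e-11, 5.46989368301e-08]

def E_of_years(x):
    # Horner evaluation of p[0]*x**4 + p[1]*x**3 + ... + p[4]
    result = 0.0
    for c in p:
        result = result*x + c
    return result

print('%.4e' % E_of_years(1E9))  # about 3.1593e10 per cent
```

The leading term alone, $3.15992\cdot 10^{-26}\cdot (10^9)^4 \approx 3.16\cdot 10^{10}$, already gives essentially the whole answer, which underlines how sensitive the prediction is to that single fitted coefficient.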
\n\n\n\n\n\nWe note that the maximum error achieved during the first 100000 orbits is only\nabout $1.2e-06$ per cent. Not bad!\n\nFor the printed CPU and empirical formula, we get:\n\n CPU (hours): 1.51591388889\n Empirical formula (E with years):\n 3.15992325978e-26*x**4 + -6.1772567063e-21*x**3 +\n 1.87983349496e-16*x**2 + 2.32924158693e-11*x**1 +\n 5.46989368301e-08*x**0\n\n\nSince the CPU time grows linearly with the number of orbits, the CPU time for 100000 orbits can just be multiplied by 10000 to get the\nestimated CPU time required for 1 billion years. This gives 15159 CPU hours (631 days), which is also printed.\n\nWith the derived empirical formula, the estimated orbit error after 1 billion years becomes 31593055529 per cent.\n\n[sl 2: Can we really use the plot and the function to predict max E during 1 billion years? Seems hard.]\n\n\n\nFilename: `vib_PEFRL`.\n\n\n\n### Remarks\n\nThis exercise investigates whether it is feasible to predict\nplanetary motion for the lifetime of a solar system.\n[hpl 3: Is it???]\n\n\n\n
###### Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2019 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi\n\n\n```python\n# Execute this cell to load the notebook's style sheet, then ignore it\nfrom IPython.core.display import HTML\ncss_file = '../style/custom.css'\nHTML(open(css_file, \"r\").read())\n```\n\n# Exercise: How to sail without wind \n\nImagine, the BSc students of the \"Differential Equations in the Earth System\" course are organizing a sailing trip in the Kiel Bay area and the Baltic Sea. Unfortunately, the strong wind gusts predicted by the meteorologists never materialize, not even as a small breeze. Sometimes even physicists are not able to predict the future. We will learn why in the next lecture.\n\nFortunately, the oceanographers can deliver sea current data of the specified area. So how can the students sail without wind and stay on course? 
By letting their thoughts and boat drift and solving the simplest, uncoupled ordinary differential equation, I can imagine.\n\n## Governing equations\n\nThe velocity vector field ${\bf{V}} = (v_x,v_y)^T$ is componentwise related to the spatial coordinates ${\bf{x}} = (x,y)^T$ by \n\n\begin{equation}\nv_x = \frac{dx}{dt},\; v_y = \frac{dy}{dt}\n\end{equation}\n\nTo estimate the drift or **streamline** of our boat in the velocity vector field $\bf{V}$, starting from an initial position ${\bf{x_0}} = (x_0,y_0)^T$, we have to solve the uncoupled ordinary differential equations using the finite difference method introduced at the beginning of this class.\n\nApproximating the temporal derivatives in eqs. (1) using the **backward FD operator**\n\n\begin{equation}\n\frac{df}{dt} \approx \frac{f(t)-f(t-dt)}{dt} \notag\n\end{equation}\n\nwith the time sample interval $dt$ leads to \n\n\begin{equation}\n\begin{split}\nv_x &= \frac{x(t)-x(t-dt)}{dt}\\\nv_y &= \frac{y(t)-y(t-dt)}{dt}\\\n\end{split}\n\notag\n\end{equation}\n\nAfter solving for $x(t), y(t)$, we get the **explicit time integration scheme**:\n\n\begin{equation}\n\begin{split}\nx(t) &= x(t-dt) + dt\; v_x\\\ny(t) &= y(t-dt) + dt\; v_y\\\n\end{split}\n\notag\n\end{equation}\n\nand by introducing a temporal discretization $t^n = n * dt$ with $n \in [0,1,...,nt]$, where $nt$ denotes the maximum number of time steps, the final FD code becomes:\n\n\begin{equation}\n\begin{split}\nx^n &= x^{n-1} + dt\; v_x^{n-1}\\\ny^n &= y^{n-1} + dt\; v_y^{n-1}\\\n\end{split}\n\end{equation}\n\nThese equations simply state that we can extrapolate the next position of our boat $(x^{(n)},y^{(n)})^T$ in the velocity vector field based on the position at a previous time step $(x^{(n-1)},y^{(n-1)})^T$, the velocity field at this previous position $(v_x^{(n-1)},v_y^{(n-1)})^T$ and a predefined time step $dt$. 
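To make the update rule of eq. (2) concrete, here is a single FD step computed with made-up numbers (the position, velocity and $dt$ values below are purely illustrative):

```python
# One explicit time-integration step of eq. (2), with illustrative numbers
dt = 0.5                      # time step [s]
x_prev, y_prev = 10.0, 5.0    # position at time step n-1 [m]
vx, vy = 2.0, -1.0            # velocity at that position [m/s]

x_new = x_prev + dt*vx        # x^n = x^(n-1) + dt * vx^(n-1)
y_new = y_prev + dt*vy        # y^n = y^(n-1) + dt * vy^(n-1)
print(x_new, y_new)           # 11.0 4.5
```

Each step is nothing more than this pair of multiply-and-add operations; the whole trajectory is just the step repeated $nt$ times with the velocity re-evaluated at each new position.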
Before implementing the FD scheme in Python, let's try to find a simple velocity vector field ...\n\n## Boring velocity vector field \n\nWe should start with a simple, boring velocity vector field, where we can easily predict the drift of the boat. Let's take this:\n\n\begin{equation}\n{\bf{V}} = (y,-x)^T \notag\n\end{equation}\n\nand visualize it with Matplotlib using a `Streamplot`. First, we load all required libraries ...\n\n\n```python\n# Import Libraries \n# ----------------\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom pylab import rcParams\n\n# Ignore Warning Messages\n# -----------------------\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n```\n\n... and define the coordinates for the `Streamplot`:\n\n\n```python\ndh = 50.\nx1 = -1000.\nx2 = 1000.\nX, Y = np.meshgrid(np.arange(x1, x2, dh), np.arange(x1, x2, dh))\n```\n\nFor more flexibility, and to avoid code redundancy later on, we write a short function that evaluates the velocity components $(v_x,v_y)^T$ at a given position $(x,y)^T$:\n\n\n```python\n# compute velocity components V = (vx,vy)^T at position x,y\ndef vel_xy(x,y):\n \n vx = y / 1000.\n vy = -x / 1000.\n \n return vx, vy\n```\n\nAfter these preparations, we can plot the velocity vector field\n\n\n```python\n# Define figure size\nrcParams['figure.figsize'] = 8, 8\n\nfig1, ax1 = plt.subplots()\n\n# Define vector field components for coordinates X,Y\nVX,VY = vel_xy(X,Y)\n\nax1.set_title(r'Plot of velocity vector field $V=(y,-x)^T$')\nplt.axis('equal')\nQ = ax1.streamplot(X,Y,VX,VY)\n\nplt.xlabel('x [m]')\nplt.ylabel('y [m]')\n\nplt.savefig('Plot_vector_field_V_boring.pdf', bbox_inches='tight', format='pdf')\nplt.show()\n```\n\nSo the velocity vector field ${\bf{V}} = (y,-x)^T$ is simply a large vortex with zero velocity at the origin and linearly increasing velocities with distance.\n\n### Sailing in the boring vector field $V =(y,-x)^T$\n\nNext, we want to predict our sailing course in this large 
vortex. Even though it is unrealistic, we assume that such a large vortex exists in the [Kiel Fjord](https://en.wikipedia.org/wiki/Kieler_F%C3%B6rde#/media/File:Kiel_Luftaufnahme.JPG), maybe related to some suspicious, top secret activity in the Kiel military harbor.\n\n##### Exercise 1\n\nComplete the following Python code `sailing_boring` to predict the sailing course in the boring velocity vector field $V =(y,-x)^T$. Most of the code is already implemented; you only have to add the FD solution of the uncoupled, ordinary differential equations (2):\n\n\n```python\ndef sailing_boring(tmax, dt, x0, y0):\n    \n    # Compute number of time steps based on tmax and dt\n    nt = (int)(tmax/dt)\n    \n    # vectors for storage of x, y positions\n    x = np.zeros(nt + 1)\n    y = np.zeros(nt + 1)\n    \n    # define initial position\n    x[0] = x0\n    y[0] = y0\n    \n    # start time stepping over time samples n\n    for n in range(1,nt + 1):\n        \n        # compute velocity components at current position\n        vx, vy = vel_xy(x[n-1],y[n-1])\n        \n        # compute new position using FD approximation of time derivative\n        # ADD FD SOLUTION OF THE UNCOUPLED, ORDINARY DIFFERENTIAL EQUATIONS (2) HERE!\n        x[n] = \n        y[n] =\n    \n    # Define figure size\n    rcParams['figure.figsize'] = 8, 8\n\n    fig1, ax1 = plt.subplots()\n    \n    # Define vector field components for Streamplot\n    VX,VY = vel_xy(X,Y)\n    \n    ax1.set_title(r'Streamplot of vector field $V=(y,-x)^T$')\n    plt.axis('equal')\n    Q = ax1.streamplot(X,Y,VX,VY)\n    plt.plot(x,y,'r-',linewidth=3)\n    \n    # mark initial and final position\n    plt.plot(x[0],y[0],'ro')\n    plt.plot(x[nt],y[nt],'go')\n\n    plt.xlabel('x [m]')\n    plt.ylabel('y [m]')\n\n    plt.savefig('sailing_boring.pdf', bbox_inches='tight', format='pdf')\n    plt.show() \n```\n\n##### Exercise 2\n\nAfter completing the FD code `sailing_boring`, we can define some basic modelling parameters. How long do you want to sail, defined by the parameter $tmax$ [s]? What time step $dt$ do you want to use? 
$dt=1.\;s$ should work for the first test of your FD code. To solve the problem you also have to define the initial position of your boat. Let's assume that ${\bf{x_{0}}}=(-900,0)^T$ is the location of some jetty on the western shore of the Kiel Fjord.\n\nBy executing the cell below (`SHIFT+ENTER`), the FD code `sailing_boring` should compute the course of the boat and plot it as a red line on top of the `Streamplot`. Initial and final positions are marked by a red and a green dot, respectively. \n\nWhat course would you expect, based on the `Streamplot`? Is it confirmed by your FD code solution? If not, there might be an error in your FD implementation.\n\n\n```python\n# How long do you want to sail [s] ? \ntmax = 1000\n\n# Define time step dt\ndt = 1.\n\n# Define initial position\nx0 = -900.\ny0 = 0.\n\n# Sail for tmax s in the boring vector field\nsailing_boring(tmax, dt, x0, y0)\n```\n\n##### Exercise 3\n\nAt this point you might get an idea why the code is called `sailing_boring`. We start at the western shore of the Kiel Fjord, follow a closed streamline to the eastern shore and travel back to the initial position of the jetty - it's a boring Kiel harbor tour. \n\nHow long will the boring tour actually take? Vary $tmax$ until the green dot of the final position coincides with the red dot of the initial position.\n\nYou also might think: why should I invest so much computation time into this boring tour? \nCopy the cell above to below this text box and increase the time step $dt$ to 20 s. How does the new FD solution differ from the one above with $dt=1\; s$? 
Give a possible explanation.\n\n### Sailing in the more exciting vector field $V=(\cos((x+y)/500),\sin((x-y)/500))^T$\n\nTime to sail in a more complex and exciting velocity vector field, like this one:\n\n\begin{equation}\nV=(\cos((x+y)/500),\sin((x-y)/500))^T \notag\n\end{equation}\n\nAs in the case of the boring vector field, we define a function to compute the velocity components for a given ${\bf{x}} = (x,y)^T$:\n\n\n```python\n# define new vector field \ndef vel_xy_1(x,y):\n    \n    vx = np.cos((x+y)/500)\n    vy = np.sin((x-y)/500)  \n    \n    return vx, vy\n```\n\nFor the visualization of this more complex vector field, I recommend using a `Quiver` plot instead of the `Streamplot`:\n\n\n```python\n# Define figure size\nrcParams['figure.figsize'] = 8, 8\n\nfig1, ax1 = plt.subplots()\n\n# Define vector field components for coordinates X,Y\nVX,VY = vel_xy_1(X,Y)\n\nax1.set_title(r'Plot of vector field $V=(cos((x+y)/500),sin((x-y)/500))^T$')\nplt.axis('equal')\nQ = ax1.quiver(X,Y,VX,VY)\nplt.plot(392,392,'ro')\n\nplt.xlabel('x [m]')\nplt.ylabel('y [m]')\n\n#plt.savefig('Plot_vector_field_V_exciting.pdf', bbox_inches='tight', format='pdf')\nplt.show()\n```\n\n##### Exercise 4\n\nNow, this velocity vector field looks more exciting than the previous one. The red dot at ${\bf{x_{island}}}=(392,392)^T$ marks the location of an island you want to reach. To compute the course, we can recycle most parts of the `sailing_boring` code. 
\n\n- Rename the code below from `sailing_boring` to `sailing_exciting`\n- Add the FD solution of the uncoupled, ordinary differential equations (2) to the code\n- In the new `sailing_exciting` code, replace the function calls of the boring velocity field `vel_xy` with the new exciting velocity field `vel_xy_1`\n- In `sailing_exciting`, replace the `Streamplot` with a `Quiver` plot.\n- Mark the position of the island with a red dot by inserting \n```python\nplt.plot(392,392,'ro')\n```\nbelow the `Quiver` plot in `sailing_exciting` \n\n\n```python\ndef sailing_boring(tmax, dt, x0, y0):\n    \n    # Compute number of time steps\n    nt = (int)(tmax/dt)\n    \n    # vectors for storage of x, y positions\n    x = np.zeros(nt + 1)\n    y = np.zeros(nt + 1)\n    \n    # define initial position\n    x[0] = x0\n    y[0] = y0\n    \n    # start time stepping\n    for n in range(1,nt + 1):\n        \n        # compute velocity components at current position\n        vx, vy = vel_xy(x[n-1],y[n-1])\n        \n        # compute new position using FD approximation of time derivative\n        # ADD FD SOLUTION OF THE UNCOUPLED, ORDINARY DIFFERENTIAL EQUATIONS (2) HERE!\n        x[n] =\n        y[n] =\n    \n    # Define figure size\n    rcParams['figure.figsize'] = 8, 8\n\n    fig1, ax1 = plt.subplots()\n\n    # Define vector field components for quiver plot\n    VX,VY = vel_xy(X,Y)\n\n    ax1.set_title(r'Plot of vector field $V=(cos((x+y)/500),sin((x-y)/500))^T$')\n    plt.axis('equal')\n    Q = ax1.streamplot(X,Y,VX,VY)\n    plt.plot(x,y,'r-',linewidth=3)\n    \n    # mark initial and final position\n    plt.plot(x[0],y[0],'ro')\n    plt.plot(x[nt],y[nt],'go')\n    \n    print(x[nt],y[nt])\n\n    plt.xlabel('x [m]')\n    plt.ylabel('y [m]')\n\n    plt.savefig('sailing_exciting.pdf', bbox_inches='tight', format='pdf')\n    plt.show() \n```\n\n##### Exercise 5\n\nTime to sail to the island. To make the problem more interesting, you have to find a course to the island from the north, south, east and west boundaries. In each of the four cells below, one coordinate of the initial position on the given boundary is already defined. 
You only have to add and change the missing coordinate vector component until you reach the island. You might also have to modify $tmax$.\n\n**Approach from the northern boundary**\n\n\n```python\n# How long do you want to sail [s] ? \ntmax = 1000\n\n# Define time step dt\ndt = 2.\n\n# DEFINE INITIAL POSITION AT NORTHERN BOUNDARY HERE!\nx0 = \ny0 = 950.\n\n# Sail for tmax s in the exciting vector field\nsailing_exciting(tmax, dt, x0, y0)\n```\n\n**Approach from the southern boundary**\n\n\n```python\n# How long do you want to sail [s] ? \ntmax = 1000\n\n# Define time step dt\ndt = 2.\n\n# DEFINE INITIAL POSITION AT SOUTHERN BOUNDARY HERE!\nx0 = \ny0 = -980.\n\n# Sail for tmax s in the exciting vector field\nsailing_exciting(tmax, dt, x0, y0)\n```\n\n**Approach from the western boundary**\n\n\n```python\n# How long do you want to sail [s] ? \ntmax = 1000\n\n# Define time step dt\ndt = 2.\n\n# DEFINE INITIAL POSITION AT WESTERN BOUNDARY HERE!\nx0 = -950.\ny0 = \n\n# Sail for tmax s in the exciting vector field\nsailing_exciting(tmax, dt, x0, y0)\n```\n\n**Approach from the eastern boundary**\n\n\n```python\n# How long do you want to sail [s] ? 
\ntmax = 1000\n\n# Define time step dt\ndt = 2.\n\n# DEFINE INITIAL POSITION AT EASTERN BOUNDARY HERE!\nx0 = 990.\ny0 = \n\n# Sail for tmax s in the exciting vector field\nsailing_exciting(tmax, dt, x0, y0)\n```\n\n##### Bonus Exercise \n\nHow do you reach the blue island in the vector plot below?\n\n\n```python\n# Define figure size\nrcParams['figure.figsize'] = 8, 8\n\nfig1, ax1 = plt.subplots()\n\n# Define vector field components for coordinates X,Y\nVX,VY = vel_xy_1(X,Y)\n\nax1.set_title(r'Plot of vector field $V=(cos((x+y)/500),sin((x-y)/500))^T$')\nplt.axis('equal')\nQ = ax1.quiver(X,Y,VX,VY)\nplt.plot(-392,-392,'bo')\n\nplt.xlabel('x [m]')\nplt.ylabel('y [m]')\n\nplt.show()\n```\n\n## What we learned\n\n- How to solve a simple system of ordinary differential equations by an explicit time integration scheme\n\n- The long-term impact of small inaccuracies in time integration schemes when choosing too large a time step $dt$\n\n- The solution to a problem is not only defined by a differential equation, but also by an initial condition\n\n- How to sail without wind by using flow data and numerical solutions of ordinary differential equations\n
\n***\n## Title\n\nProgramming for Data Analysis - Assignment 2020. \nSubmitted by ***Jack Caffrey***\n***\n\n## Introduction\n\nThe following assignment was undertaken as part of the Higher Diploma in Science - Data Analytics course through the Galway-Mayo Institute of Technology, for the module Programming for Data Analysis. \n\n***Please note the following:***\n1. The assignment criteria will be outlined in the below ***Problem Statement*** section of the assignment. \n2. All references will be indicated using [#]. These references will be listed in the ***References*** section of this assignment.\n3. Markdown formatting references are as follows. \n    * Basic Syntax. Matt Cone. https://www.markdownguide.org/basic-syntax/\n    * Extended Syntax. Matt Cone. https://www.markdownguide.org/extended-syntax/\n    * Motivating Examples. Jupyter Team. https://jupyternotebook.readthedocs.io/en/stable/examples/Notebook/Typesetting%20Equations.html\n\n### Problem Statement\n\nUsing a Jupyter notebook explain the use of the ***numpy.random*** package in Python. 
\nThis explanation must include the following: \n\n1. The **purpose** of the package. \n2. The use of the **\"Simple random data\"** and **\"Permutations\"** functions. \n3. The use and purpose of at least five **\"Distributions\"** functions. \n4. Explain the use of **\"seeds\"** in generating pseudorandom numbers. \n\n***Note:*** \nThe problem statement was defined using the criteria outlined in the Programming for Data Analysis Assignment [1].\n\n***\n\n\n\n## 1. History of NumPy. \n\nBefore exploring the purpose of the ***numpy.random*** package it is important to provide a brief history of the development of the ***NumPy*** package as a whole. \n \nThe ***NumPy*** package developed from the Python programming language extensions ***Numeric*** and ***Numarray*** [2]. Numeric was largely developed by Jim Hugunin, a software programmer, in 1995, with contributions from many people including Jim Fulton, David Ascher, Paul DuBois and Konrad Hinsen [3]. \nNumeric was originally developed with maximum performance as its main aim. A consequence of focusing solely on maximum performance was a set of design choices that meant Numeric was not extremely efficient for very large data sets [4]. (A data set is \"*a collection of related sets of information that is composed of separate elements but can be manipulated as a unit by a computer*\") [5]. \n \nAs a result of this poor efficiency when dealing with large data sets, the Numarray package was developed. Numarray was developed to have faster operating speeds for larger data sets when compared to Numeric, but this led to Numarray having slower operating speeds for smaller data sets versus Numeric. This led to both packages being used simultaneously depending on the desired output [6]. 
\n\nAt the beginning of 2005, Travis Oliphant, an American data scientist and businessman [7], wanted to develop a single array package (an array \"*is a data structure consisting of a collection of elements, each identified by at least one array index or key*\") [8]. This single package was called *SciPy core*, with the intention to implement it as part of the bigger scientific package *SciPy*. This approach led to confusion about how the package operated, resulting in the name change to *numerix*. The *numerix* name was already trademarked by another organisation; as a result of this trademarking, another name change was required and the package ***NumPy*** was born [9]. \n\n### What is NumPy?\n\nNumPy is described as \"the fundamental package for scientific computing in Python\". It is through this Python library that it is possible to perform fast operations on arrays, including but not limited to the following: \n1. Mathematical. \n2. Logical.\n3. Shape Manipulation. \n4. Sorting.\n5. Random Simulation [10].\n\nNote: For the purpose of this assignment, random simulation (sampling) via the ***numpy.random*** package will be the main area of focus.\n\n\n### Random Sampling & the *numpy.random* Package\n\nRandom sampling is a method used to select a sample of data from a larger data population. \nIn random sampling each sample of the data population has an equal chance of being selected. This selection is meant to be a neutral and unbiased portrayal of the larger data population it is selected from. If the selection does not represent a neutral and unbiased portrayal of the larger data population, this is known as \"*sampling error*\" [11]. \n \nRandom sampling is the best method of selecting samples from the data population you are interested in. The sample selected should portray the data population you are investigating and remove sampling bias [12]. \n\nA package used by Python to generate these random values is ***numpy.random***. 
This ***numpy.random*** package is used to supplement the Python ***random*** module, with functions for efficiently generating whole arrays of sample values from many kinds of probability distributions [13]. \n \nIn order to generate the required pseudorandom numbers (samples) from a population, a combination of a BitGenerator and a Generator is used. \n* **BitGenerator** - is used to create sequences.\n* **Generator** - uses the created sequences to sample from the required statistical distribution [14].\n* **Pseudorandom Numbers** - \"A set of values or elements that is statistically random, but it is derived from a known starting point and is typically repeated over and over. The algorithm can repeat the sequence, therefore the numbers are not entirely random\" [15]. \n\nToday by default, Generator uses bits provided by PCG64 (Permuted Congruential Generator (64-bit)), which has replaced the use of MT19937 (Mersenne Twister 19937). \n\nMT19937 is a legacy pseudorandom number generator. **RandomState** is used to provide access to this generator. It is best practice to use this class only when it is essential to have random values identical to those produced by previous versions of NumPy, as Generator cannot reproduce these exact streams for the normal distribution or any other distribution. \n \nPCG64 provides bits to the **Generator** and has more efficient statistical properties when compared to **RandomState**.\n\n### PCG64 vs MT19937 Quick Comparison\n\nAll comparison information is provided from https://numpy.org/doc/stable/reference/random/new-or-different.html#new-or-different\n \n| Feature | Older Equivalent | Notes |\n|:--------:|:----------------:|:-----|\n|Generator | RandomState |Generator requires a stream source, called a BitGenerator. A number of these are provided. RandomState uses the Mersenne Twister MT19937 by default, but can also be instantiated with any BitGenerator. 
|\n|random |random_sample, |Access the values in a BitGenerator, convert them to float64 in the interval ``[0.0, 1.0)``. In addition to the size kwarg, now supports dtype='d' or dtype='f', and an out kwarg to fill a user-supplied array.| \n|integers |randint, random_integers | Many other distributions are also supported. Use the endpoint kwarg to adjust the inclusion or exclusion of the high interval endpoint |\n\nFor a more detailed comparison please see: https://numpy.org/doc/stable/reference/random/new-or-different.html#new-or-different\n\nThe **Seed** plays a vital role which enables both PCG64 and MT19937. \n\nMT19937 uses a ***Seed*** to initialise the pseudorandom number generator. Values can be supplied as an integer or an array of integers. If no value is provided, the BitGenerator reads data from /dev/urandom (or the Windows analogue) if available, or seeds from the clock otherwise [16]. \n\nPCG64 supports an advance method for moving the state of the pseudorandom number generator forward. The state is represented by two 128-bit unsigned integers: the first is the state of the PRNG (pseudorandom number generator), which is advanced by an LCG (Linear Congruential Generator); the second is a fixed odd increment used in the LCG [17]. \n***\n\n### Simple Random Data Functions\n***\n\nSimple Random Data is a population of values where each value of the population has an equal probability of being selected. This random sample is meant to be an unbiased representation of the population [18]. \n\nThe following are Simple Random Data Functions used by the numpy.random package. Examples are generated per numpy.org documentation [20]:\n * numpy.random.Generator.integers\n * numpy.random.Generator.random\n * numpy.random.Generator.choice\n * numpy.random.Generator.bytes \n\n#### 1. Random Number Generator Integers\n\nMakes use of the low, high and size parameters and the optional parameters dtype and endpoint to produce an array of random integers from a distribution or population of data. 
\n\nNote: To return low inclusive to high inclusive random integers, set endpoint=True.\n\n\n```python\n# numpy.random.Generator.integers\n\nimport numpy as np #importing numpy to access the function. \n\nx = np.random.default_rng() \n\nx.integers(5, size=10, endpoint=True) # produces 10 random integers between 0 and 5 inclusive (endpoint=True)\n```\n\n\n\n\n    array([3, 0, 2, 5, 1, 0, 1, 1, 0, 3], dtype=int64)\n\n\n\n#### 2. Random Number Generator Random \n\nMakes use of a size parameter, in the form of an int or tuple of ints, and the optional parameters dtype and out to produce an array of random floats of a set shape. \nNote: if size = None then a single float is returned. \n\n\n```python\n# numpy.random.Generator.random\n\nimport numpy as np #importing numpy to access the function. \n\nrng = np.random.default_rng() \n\n2 * rng.random((2,2)) - 5 # Generates a random 2x2 array of floats in the interval [-5, -3)\n```\n\n\n\n\n    array([[-3.39989006, -4.42317879],\n           [-3.06481884, -4.61347249]])\n\n\n\n#### 3. Random Number Generator Choice\n\nMakes use of the parameter a and the optional parameters size, replace, p, axis and shuffle to generate a random sample from a given 1-D array.\n\n\n```python\n# Random Number Generator Choice \n\nimport numpy as np #importing numpy to access the function. \n\nrng.choice(4, 10, p=[0.4, 0.1,0.2,0.3,]) # Generates 10 values from 0 up to (but not including) 4 in a 1-D array, with the probability of each entry defined by p.\n```\n\n\n\n\n    array([3, 3, 3, 2, 2, 2, 0, 2, 3, 0], dtype=int64)\n\n\n\n#### 4. numpy.random.Generator.bytes \n\nMakes use of the length parameter to return random bytes. \n\n\n```python\n# numpy.random.Generator.bytes \n\nimport numpy as np #importing numpy to access the function. \n\nnp.random.default_rng().bytes(15) # returns 15 random bytes 
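# (Illustrative addition, not part of the numpy.org example.) Seeding the
# generator makes the byte stream reproducible: the same seed always yields
# the same bytes, while the unseeded call above differs from run to run.
rng1 = np.random.default_rng(seed=123)
rng2 = np.random.default_rng(seed=123)
rng1.bytes(8) == rng2.bytes(8)  # True: identical seeds give identical bytes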
\n```\n\n\n\n\n    b'\xca\xe5Zi\xad\xee\xf2\xebD\xb11\xb0\xfc\x1ak'\n\n\n\n***\n### Permutation Functions\n*** \n\nA Permutation function is an ordered arrangement of values from a population without any value being repeated [19]. \n\nThe following are Permutation Functions used by the numpy.random package. Examples are generated per numpy.org documentation [20]:\n * numpy.random.Generator.shuffle\n * numpy.random.Generator.permutation\n\n### 1. numpy.random.Generator.shuffle\n\nMakes use of the parameter \"x\" and an optional axis parameter to shuffle the contents of an array along the given axis. \n \nNote: By using the shuffle method, changes will be made to the original array. \n\n\n```python\nimport numpy as np #importing the numpy package\n\nrng = np.random.default_rng() # defining rng\narr = np.arange(9).reshape((3, 3)) # reshapes array to 3x3 array. \nrng.shuffle(arr) # Shuffles the rows of the array (axis 0) in place\narr # displays the shuffled array\n```\n\n\n\n\n    array([[6, 7, 8],\n           [3, 4, 5],\n           [0, 1, 2]])\n\n\n\n### 2. numpy.random.Generator.permutation\n\nMakes use of the parameter \"x\" and an optional axis parameter to generate a permuted sequence or array sequence. \n\nNote: By using the permutation method, a re-arranged array is returned but the original array remains unchanged. \n\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng()\n\narr = np.arange(9).reshape((3, 3)) # reshapes array to 3x3 array.\nrng.permutation(arr) # displays the re-arranged array\n\n```\n\n\n\n\n    array([[6, 7, 8],\n           [0, 1, 2],\n           [3, 4, 5]])\n\n\n\n### Distribution Functions\n*** \n\n1. Uniform Distribution \n2. Pareto Distribution\n3. Normal Distribution\n4. Exponential Distribution\n5. Poisson Distribution \n\n***\n### Uniform Distribution \n\n\begin{align}\n\ p(x) & = \frac{1}{b-a}\n\end{align}\n\n#### 1. Uniform Distribution\n\nThe uniform distribution describes a probability where every event has an equal chance of occurring. \n\nIt has three parameters: \n\n'a' lower bound - default 0.0. 
\n'b' upper bound - default 1.0. \n'size' - The shape of the returned array [21]. \n\nThe uniform distribution has no mode and no skewness. Because of this, the mean and median coincide. \n\nUniform distributions can take two forms: \n * 1 Discrete Random Variables - are variables that can only take a countable number of values. [22]\n * 2 Continuous Random Variables - are variables where the data can take infinitely many values [23]\n\n\n```python\n# Example ref: https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.uniform.html#numpy.random.Generator.uniform\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ns = np.random.default_rng().uniform(-1,0,1000)\n\nnp.all(s >= -1) # Verify all values are within the given interval\nnp.all(s < 0)\n\ncount, bins, ignored = plt.hist(s, 15, density=True)\nplt.plot(bins, np.ones_like(bins), linewidth=2, color='r')\nplt.show()\n```\n\n***\n### Pareto Distribution \n\n\begin{align}\n\ p(x) & = \frac{am^a}{x^{a+1}}\n\end{align}\n\n#### 2. Pareto Distribution\n\nThe Pareto distribution is a standard quality mechanism used to help identify the most frequent occurrence of any factors you can count and categorize [24].\n\nIt has two parameters:\n\n'a' - shape parameter.\n'size' - The shape of the returned array. \n\nPareto's law, the 80-20 distribution, works from the principle that 20% of factors cause 80% of outcomes [25].\n\n\n```python\n# Example ref: https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.pareto.html#numpy.random.Generator.pareto\nimport numpy as np\nimport matplotlib.pyplot as plt\n\na, m = 3., 2. 
# shape and mode\ns = (np.random.default_rng().pareto(a, 1000) + 1) * m \n\ncount, bins, _ = plt.hist(s, 100, density=True) # Formatting the graph\nfit = a*m**a / bins**(a+1)\nplt.plot(bins, max(count)*fit/max(fit), linewidth=2, color='r')\nplt.show()\n```\n\n***\n### Normal Distribution\n\n\begin{align}\n\ p(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}} \\\n\end{align}\n\n#### 3. Normal Distribution\n\nThe normal distribution is a probability function that explains how the values contained in a data set or population are distributed. Extreme values at both tails of the distribution are unlikely [26].\n\nIt has three parameters:\n\n'loc' - Mean, where the peak of the bell exists. \n'scale' - Standard deviation, controlling how flat or peaked the distribution is. \n'size' - The shape of the returned array. \n\nThe normal distribution is regarded as one of the most important, if not the most important, probability distributions in statistics. \n\n\n```python\n# Example https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.normal.html#numpy.random.Generator.normal\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nmu, sigma = 0, 0.1 # mean and standard deviation\ns = np.random.default_rng().normal(mu, sigma, 1000)\n\nabs(mu - np.mean(s))\nabs(sigma - np.std(s, ddof=1))\n\ncount, bins, ignored = plt.hist(s, 30, density=True)\nplt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *np.exp( - (bins - mu)**2 / (2 * sigma**2) ),linewidth=2, color='r')\nplt.show()\n\n```\n\n***\n### Exponential Distribution\n\n\begin{align}\n\ f(x;\frac{1}{\beta}) = \frac{1}{\beta}\exp(-\frac{x}{\beta}) \\\n\end{align}\n\n#### 4. Exponential Distribution\n\n\nThe Exponential Distribution is a continuous probability distribution used to model the time required to wait before a given event will occur [27].\n\nIt has two parameters: \n\n'scale' - inverse of rate - defaults to 1.0. 
\n'size' - The shape of the returned array.\n\n\n```python\n# Example ref: https://www.w3schools.com/python/numpy_random_exponential.asp\n# Note: sns.distplot is deprecated in newer seaborn releases;\n# sns.histplot(..., kde=True) is the modern equivalent.\n\nimport numpy as np \nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nsns.distplot(np.random.exponential(size=1000), hist=True,color='b')\n\nplt.show()\n```\n\n***\n### Poisson Distribution\n\n\begin{align}\n\ f(k; {\lambda}) = \frac{\lambda^ke^{-\lambda}}{k!}\\\n\end{align}\n\n#### 5. Poisson Distribution\n\nThe Poisson Distribution is a discrete probability distribution that estimates how many times an event can happen in a predetermined timeframe [28].\n\nIt has two parameters: \n'lam' - rate or known number of occurrences. \n'size' - The shape of the returned array.\n\n\n```python\n# Example ref: https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.poisson.html#numpy.random.Generator.poisson\n\nimport numpy as np\nimport matplotlib.pyplot as plt \n\nrng = np.random.default_rng()\ns = rng.poisson(5, 10000)\n\ncount, bins, ignored = plt.hist(s, 14, density=True)\nplt.show()\n```\n\n***\n### References\n[1] ProgDA_Assignment. GMIT. \n[2] People / Jim Hugunin. Peoplepill. https://peoplepill.com/people/jim-hugunin/ \n[3] The birth of Numeric. SciPy History_of_SciPy. https://scipy.github.io/old-wiki/pages/History_of_SciPy \n[4] Python Numeric. History. http://people.csail.mit.edu/jrennie/python/numeric/ \n[5] Definitions from Oxford Languages. Dictionary. https://www.google.com/search?client=firefox-b-d&q=what+is+a+data+set \n[6] NumPy. History. https://en.wikipedia.org/wiki/NumPy \n[7] People / Travis Oliphant. Peoplepill. https://peoplepill.com/people/travis-oliphant/ \n[8] Array data structure. Wikipedia. https://en.wikipedia.org/wiki/Array_data_structure \n[9] The reunion, aka the birth of NumPy. SciPy History_of_SciPy. https://scipy.github.io/old-wiki/pages/History_of_SciPy \n[10] What is NumPy? NumPy. 
https://numpy.org/doc/stable/user/whatisnumpy.html# \n[11] The Economic Times. Definition of 'Random Sampling'. https://economictimes.indiatimes.com/definition/Random-Sampling \n[12] Saul Mcleod. Random Sampling. https://www.simplypsychology.org/sampling.html#:~:text=Random%20samples%20are%20the%20best,time%2C%20effort%20and%20money \n[13] lmiguelvargasf. Differences between numpy.random and random.random in Python. https://stackoverflow.com/questions/7029993/differences-between-numpy-random-and-random-random-in-python#:~:text=From%20Python%20for%20Data%20Analysis,many%20kinds%20of%20probability%20distributions \n[14] NumPy. Random sampling (numpy.random). https://numpy.org/doc/stable/reference/random/index.html \n[15] PCMag Encyclopedia. Pseudo-random numbers. https://www.pcmag.com/encyclopedia/term/pseudo-random-numbers \n[16] NumPy. Parameters. https://numpy.org/doc/stable/reference/random/legacy \n[17] NumPy. State and Seeding. https://numpy.org/doc/stable/reference/random/bit_generators/pcg64.html#numpy.random.PCG64 \n[18] Adam Hayes. Simple Random Sample. https://www.investopedia.com/terms/s/simple-random-sample.asp#:~:text=A%20simple%20random%20sample%20is,equal%20probability%20of%20being%20chosen.&text=In%20this%20case%2C%20the%20population,equal%20chance%20of%20being%20chosen. \n[19] Permutations function. Minitab® 18 Support. 
https://support.minitab.com/en-us/minitab/18/help-and-how-to/calculations-data-generation-and-matrices/calculator/calculator-functions/arithmetic-calculator-functions/permutations-function/#:~:text=A%20permutation%20is%20an%20ordered,from%20a%20group%20without%20repetitions.&text=Use%20the%20Permutation%20function%20to,possible%20outcomes%20(binomial%20experiment) \n[20] Numpy, Random Generator, https://numpy.org/doc/stable/reference/random/generator.html \n[21] w3schools, Uniform Distribution, https://www.w3schools.com/python/numpy_random_uniform.asp \n[22] Revision maths, Discrete Random Variables, https://revisionmaths.com/advanced-level-maths-revision/statistics/discrete-random-variables \n[23] Revision maths, Continuous Random Variables, https://revisionmaths.com/advanced-level-maths-revision/statistics/continuous-random-variables \n[24] Minitab, When to Use a Pareto Chart, https://blog.minitab.com/blog/understanding-statistics/when-to-use-a-pareto-chart
\n[25] w3schools, Pareto Distribution, https://www.w3schools.com/python/numpy_random_pareto.asp \n[26] Jim Frost, Statistics By Jim, https://statisticsbyjim.com/basics/normal-distribution/ \n[27] Marco Taboga, PhD, Exponential distribution, https://www.statlect.com/probability-distributions/exponential-distribution
\n[28] w3schools, Poisson Distribution, https://www.w3schools.com/python/numpy_random_poisson.asp
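As a final, runnable check of the seeding behaviour discussed above (the seed value 42 here is arbitrary), the following sketch shows that equal seeds reproduce equal streams for both the modern Generator and the legacy RandomState interfaces:

```python
import numpy as np

# Modern interface: default_rng(seed) returns a PCG64-backed Generator.
rng_a = np.random.default_rng(42)
rng_b = np.random.default_rng(42)
print(rng_a.random(3))
print(rng_b.random(3))  # identical to rng_a's values

# Legacy interface: RandomState is backed by the Mersenne Twister (MT19937).
rs_a = np.random.RandomState(42)
rs_b = np.random.RandomState(42)
print(rs_a.random_sample(3))
print(rs_b.random_sample(3))  # identical to rs_a's values

# An unseeded generator pulls fresh entropy from the OS, so its values differ per run.
print(np.random.default_rng().random(3))
```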
\n***\n## End\n", "meta": {"hexsha": "763cb29640f007ed454efe013f86a1b90e453ff3", "size": 72835, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Assignment-2020 Programming for Data Analysis.ipynb", "max_stars_repo_name": "JackCaff/Prog-DA-Assignment", "max_stars_repo_head_hexsha": "5b87aa597cd9c5da4e6050fcc94edcd5b943cb41", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Assignment-2020 Programming for Data Analysis.ipynb", "max_issues_repo_name": "JackCaff/Prog-DA-Assignment", "max_issues_repo_head_hexsha": "5b87aa597cd9c5da4e6050fcc94edcd5b943cb41", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Assignment-2020 Programming for Data Analysis.ipynb", "max_forks_repo_name": "JackCaff/Prog-DA-Assignment", "max_forks_repo_head_hexsha": "5b87aa597cd9c5da4e6050fcc94edcd5b943cb41", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 99.3656207367, "max_line_length": 13072, "alphanum_fraction": 0.8297796389, "converted": true, "num_tokens": 5108, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
```python
import sys
!{sys.executable} -m pip install numpy
!{sys.executable} -m pip install matplotlib
!{sys.executable} -m pip install pandas
!{sys.executable} -m pip install scipy
```

    Requirement already satisfied: numpy in /usr/local/Cellar/jupyterlab/3.0.10/libexec/lib/python3.9/site-packages (1.20.1)
    Requirement already satisfied: matplotlib in /usr/local/Cellar/jupyterlab/3.0.10/libexec/lib/python3.9/site-packages (3.3.4)
    Requirement already satisfied: pandas in /usr/local/Cellar/jupyterlab/3.0.10/libexec/lib/python3.9/site-packages (1.3.4)
    Requirement already satisfied: scipy in /usr/local/Cellar/jupyterlab/3.0.10/libexec/lib/python3.9/site-packages (1.7.3)


# *Computational Methods / Numerical Methods*
## *Lab 2*
## *Solving nonlinear equations*

## Table of contents

* [Introduction](#wstep)
* [Example 1. Solving nonlinear equations](#rrn)
* [Example 2. A short note on pandas](#pandas)
* [Exercise 1](#cw1)
* [Bisection method](#bisect)
* [Exercise 2](#cw2)
* [Regula falsi](#rfalsi)
* [Exercise 3](#cw3)
* [Secant method](#secant)
* [Exercise 4](#cw4)
* [Newton's method](#newton)
* [Exercise 5](#cw5)
* [Newton's method for systems of nonlinear equations](#newton_nles)
* [Example 3. Computing partial derivatives](#pcz)
* [Exercise 6](#cw6)
* [Exercise 7](#cw7)
* [Exercise 8](#cw8)
* [Exercise 9](#cw9)
* [Exercise 10](#cw10)

## Introduction
A nonlinear equation is an equation in which one or more variables appear nonlinearly, e.g. through trigonometric functions or a third-degree polynomial. Solving such an equation (function) means finding an x for which f(x) = 0. This lab discusses iterative methods for solving such equations, i.e. methods that come closer to the solution with every iteration (step). The first step of an iterative solution is making sure the equation actually has a root. The next step is choosing a suitable algorithm. Then a stopping criterion has to be chosen, i.e. a rule deciding which approximation is satisfactory so the iteration can stop. There are three common ways to do this:


* coarse stopping on the OY axis:

\begin{equation}
 |f(x_n)| < tol_{1}
\end{equation}


* coarse stopping on the OX axis:

\begin{equation}
|x_n - x_{n-1}| < tol_{2}
\end{equation}


* the second criterion combined with checking the change between two consecutive approximations:

\begin{equation}
|x_n - x_{n-1}| < tol_{2} \wedge |x_{n} - x_{n-1}| \leq |x_{n-1} - x_{n-2}|
\end{equation}
\n
The first, and most naive, of these checks whether the function value at the current approximation is within a tolerance very close to zero. The second compares the distance between two consecutive approximations of the root, while the third extends the second condition: additionally, the difference between the two previous approximations is checked as well. The third criterion can only be used with recursive methods. The exercises use the first criterion, since it can be applied to all of the algorithms discussed.
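The three criteria above can be written directly as small predicates. A sketch (the helper names are our own, not part of the lab's API):

```python
def stop_on_y(f, xn, tol1):
    # criterion 1: |f(x_n)| < tol_1
    return abs(f(xn)) < tol1

def stop_on_x(xn, xn_prev, tol2):
    # criterion 2: |x_n - x_{n-1}| < tol_2
    return abs(xn - xn_prev) < tol2

def stop_on_x_monotone(xn, xn_prev, xn_prev2, tol2):
    # criterion 3: criterion 2 plus non-increasing step lengths
    return (abs(xn - xn_prev) < tol2
            and abs(xn - xn_prev) <= abs(xn_prev - xn_prev2))
```

Note that criterion 3 rejects a small step if the step before it was even smaller, which is why it only makes sense for methods that keep a history of iterates.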



## Example 1. Solving nonlinear equations

Nonlinear equations can be solved with (among others) the SciPy and SymPy packages. The first is meant for numerical computation, the second for symbolic computation. The most popular equation-solving function in SciPy is fsolve. As arguments it takes a callable implementing the equation and an array of initial guesses. It can be used for single equations as well as for systems of equations. The function is a wrapper around the hybrd and hybrj algorithms defined in the MINPACK package. An example of its use is shown below.
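Since fsolve also handles systems, here is a minimal sketch for a hypothetical two-equation system (a line and an ellipse); the single-equation example follows below. The function simply returns a list of residuals, one per equation:

```python
import numpy as np
from scipy.optimize import fsolve

# hypothetical system: x + 2y - 2 = 0 and x^2 + 4y^2 - 4 = 0
def system(v):
    x, y = v
    return [x + 2 * y - 2, x ** 2 + 4 * y ** 2 - 4]

# one initial guess per unknown
solution = fsolve(system, [1.0, 1.0])

# residuals at the solution should be close to (0, 0)
residual = system(solution)
```

Which of the system's roots fsolve returns depends on the initial guess, just as in the scalar case.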


```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from numpy import sign
from scipy.optimize import fsolve 

# solving a single quadratic equation x^2 + x - 5 = 0,
# which has two roots
fun = lambda x: x ** 2 + x - 5

# array of initial guesses
xguess = [-4, 2]

roots = fsolve(fun, xguess)
y = fun(roots)

print(f"x = {roots} \ny = {y}")

# check that the y values of the computed roots are close to 0
print(np.isclose(y, [0.0, 0.0]))
```

    x = [-2.79128785  1.79128785] 
    y = [0.0000000e+00 8.8817842e-16]
    [ True  True]


## Example 2. A short note on pandas

pandas is a library widely used across data science. It provides a set of tools for analyzing and manipulating data, the best known of which is the DataFrame class, a two-dimensional table for storing data.


* the Series class
can be compared to a single column of a table

```python
import pandas as pd

pdk = [2, 1, 3, 7]

# created from a list, with the default index (rows numbered from 0)
s = pd.Series(pdk)

print(s)

# 0    2
# 1    1
# 2    3
# 3    7
# dtype: int64

d = {'jeden': 1, 'dwa' : 2}

# created from a dict, with the keys as the index
s = pd.Series(d)

print(s)

# jeden    1
# dwa      2
# dtype: int64
```


* the DataFrame class
can be described as a cross between an Excel sheet and a database table

```python
import pandas as pd

dane = {'Imie': ['Andrzej', 'Karol', 'Marta'],
        'Rok urodzenia': [1998, 2001, 1995]}

# create a DataFrame object with the default index
df = pd.DataFrame(dane)

print(df)

#       Imie  Rok urodzenia
# 0  Andrzej           1998
# 1    Karol           2001
# 2    Marta           1995


# columns are accessed by their name
print(df['Imie'])

# 0    Andrzej
# 1      Karol
# 2      Marta
# Name: Imie, dtype: object


# rows are accessed by index using the loc function
print(df.loc[1])

# Imie             Karol
# Rok urodzenia     2001
# Name: 1, dtype: object

# multiple rows can also be accessed at once
print(df.loc[[0, 2]])

#       Imie  Rok urodzenia
# 0  Andrzej           1998
# 2    Marta           1995

# for an initial look at a data set the head, tail and info functions are useful;
# they show the first 5 records, the last 5 records, and information about the data, respectively

df.head()
df.tail()
df.info()

```

## Exercise 1

Write a function $fun1$ implementing the formula
\begin{equation}
f(x) = x^3+x^2-3x-3
\tag{1}
\end{equation}


You can use the exponentiation operator **\*\*** here, and define the formula either as an ordinary function or as a lambda expression.


```python
# Interval containing the rough location of the roots
x = np.arange(-3, 3, 0.01) 

# Define the function
# TODO
# fun1 = ...
# or
# def fun1(x): ...

# Function values on the interval x
y = fun1(x)

# Plot allowing a more precise estimate of the roots
plt.plot(x, y)
plt.grid(True)
plt.show()
```

## Bisection method
This method iteratively determines smaller and smaller intervals that are guaranteed to contain a zero of the function. Assume we are given a function $f(x)$ that is continuous on the interval $[a,b]$ and takes values of opposite signs at its endpoints ($f(a)f(b)<0$). In that case the interval $[a,b]$ must contain a root of the function. As the approximate solution we can take the midpoint of the interval $[a,b]$, i.e.:

\begin{equation}
c = \frac{a+b}{2}
\tag{2}
\end{equation}
\n
\n
If $f(c)=0$, the root has been found. Usually, however, $f(c) \neq 0$ and the search has to continue. The zero lies either in the interval $[a,c]$ or in the interval $[c,b]$. To pick the right one, it suffices to check the value of the function at the point $c$. If $f(a)f(c)<0$, we continue the process in the interval $[a,c]$; if $f(c)f(b)<0$, then in the next iteration the value $c$ replaces $a$. The iterations can be stopped once $|f(c)| < tol$, where $tol$ is a value we choose.
\n
\n
The bisection method converges slowly (the convergence is linear), since it makes no use at all of the information carried by the shape of the function $f(x)$. Its undeniable advantage is that it is reliable. Bisection performs well where other (faster) methods run into trouble. In numerical practice it is rarely used on its own; instead it is often used to narrow down the initial interval containing the root, after which faster methods take over.
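For comparison, the method is available ready-made as `scipy.optimize.bisect`. A quick check on $f(x) = x^2 + x - 3$, whose positive root is $(-1 + \sqrt{13})/2$ (this does not spoil the implementation exercise below, since SciPy hides the loop):

```python
import numpy as np
from scipy.optimize import bisect as scipy_bisect

f = lambda x: x**2 + x - 3

# f(1) = -1 and f(3) = 9 have opposite signs, so [1, 3] brackets a root
root = scipy_bisect(f, 1, 3, xtol=1e-12)
```

The `xtol` argument plays the role of the interval-width tolerance; SciPy's version stops on the OX-axis criterion rather than on $|f(c)|$.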
Pseudocode for the bisection method:

    Input:
      fun   ← function whose root is sought
      a, b  ← endpoints of the interval containing the root
      tol   ← required accuracy
      i_max ← maximum number of iterations

    Output:
      i  → number of iterations
      c  → computed approximation of the root
      fc → function value at the point c

    Algorithm:
      fa ← fun(a)
      fb ← fun(b)
      if sign(fa) == sign(fb) then
        return error
      end if
      for i ← 1 to i_max do
        c  ← (a + b) / 2
        fc ← fun(c)
        e  ← (b - a) / 2
        if |fc| < tol then
          return i, c, fc
        end if
        if sign(fc) != sign(fa) then
          b  ← c
          fb ← fc
        else
          a  ← c
          fa ← fc
        end if
      end for
\n


## Exercise 2

Complete the code of the function implementing the bisection method. It can be found in the cell below.

It is called as follows:
```python
[c, yc, df] = bisect(fun, a, b, i_max, dok)
```

where its parameters are:

*  fun ← function whose root is sought
*  a, b ← endpoints of the interval containing the root
*  dok ← required accuracy
*  i_max ← maximum number of iterations

and the returned values are:

*  c → computed approximation of the root
*  yc → function value at the computed approximation of the root
*  df → DataFrame object with the data from each iteration


```python
def bisect(fun, a, b, i_max, dok):
    """
    :param fun: function
    :param a: start of the interval
    :param b: end of the interval
    :param i_max: maximum number of iterations
    :param dok: required accuracy
    :return: (approximate root, function value at the approximation, table with data from each iteration)
    """

    if i_max < 1:
        raise ValueError("i_max must be greater than 0")

    # Function values at the endpoints of the given interval
    ya = fun(a)
    yb = fun(b)

    # Check that valid endpoint values were given
    if ya * yb > 0:
        raise ValueError("Function values do not have opposite signs at the start and end of the interval")

    df = pd.DataFrame(columns=["Krok", "a", "b", "c", "yc", "błąd"])

    for i in range(i_max):
        # TODO
        # c = ...
        yc = fun(c)
        err = (b - a) / 2

        df = df.append(pd.Series(data={"Krok": i + 1, "a": a, "b": b, "c": c, "yc": yc, "błąd": err}),
                       ignore_index=True)

        # Convergence check
        if abs(yc) < dok:
            print(f"Bisection method converged after {i + 1} iterations")
            break

        if sign(ya) == sign(yc):
            a = c
            ya = yc
        else:
            b = c
            yb = yc

    else:
        print("The function could not find an approximation with the required accuracy in the given number of iterations")

    df = df.astype({"Krok": "int64"})
    df = df.set_index("Krok")

    return c, yc, df

c, yc, df = bisect(lambda x: x*x + x - 3, 3, 1, 100, 0.1)
df
```

    Bisection method converged after 5 iterations

             a     b     c    yc  błąd
    Krok
    1     3.00  1.00  2.00  3.00 -1.00
    2     2.00  1.00  1.50  0.75 -0.50
    3     1.50  1.00  1.25 -0.19 -0.25
    4     1.50  1.25  1.38  0.27 -0.12
    5     1.38  1.25  1.31  0.04 -0.06

\n\n\n\n## Regula falsi \n\n
The first way to use information about the shape of the function to speed up the search for a zero is to approximate the function $f(x)$ with a straight line. In the regula falsi method, as in bisection, we start from an interval $[a,b]$ on which the condition $f(a)f(b)<0$ holds. In each subsequent step the approximate value of the root is the intersection of the OX axis with the line passing through the points $(a,f(a))$ and $(b,f(b))$, i.e.:
\n\n\\begin{equation}\nc = b - \\frac{b-a}{f(b)-f(a)}f(b)\n \\tag{3}\n\\end{equation}\n
\n
If the zero lies in the interval $[a,c]$, the variable $a$ is left unchanged and we set $b = c$; otherwise we assign $a=c$, while $b$ stays the same.
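A single application of formula (3) to $f(x) = x^2 + x - 5$ on $[-6, 1]$ (the same data as in the exercise below) can be checked by hand:

```python
f = lambda x: x**2 + x - 5

a, b = -6.0, 1.0
fa, fb = f(a), f(b)                 # 25.0 and -3.0: opposite signs
c = b - (b - a) / (fb - fa) * fb    # formula (3)

# c = 0.25 and f(c) = -4.6875 < 0, so the root lies in [a, c]:
# the next iteration keeps a and sets b = c
```

This matches the first row of the iteration table produced in Exercise 3.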

Pseudocode for the regula falsi method:

    Input:
      fun   ← function whose root is sought
      a, b  ← endpoints of the interval containing the root
      tol   ← required accuracy
      i_max ← maximum number of iterations

    Output:
      i  → number of iterations
      c  → computed approximation of the root
      fc → function value at the point c

    Algorithm:
      fa ← fun(a)
      fb ← fun(b)
      if sign(fa) == sign(fb) then
        return error
      end if
      for i ← 1 to i_max do
        c  ← b - (b - a) / (fb - fa) * fb
        fc ← fun(c)
        if |fc| < tol then
          return i, c, fc
        end if
        if sign(fc) != sign(fa) then
          b  ← c
          fb ← fc
        else
          a  ← c
          fa ← fc
        end if
      end for
\n


## Exercise 3

Complete the code of the function implementing the regula falsi method.


```python
def rfalsi(fun, a, b, i_max, dok):
    """
    :param fun: function
    :param a: start of the interval
    :param b: end of the interval
    :param i_max: maximum number of iterations
    :param dok: required accuracy
    :return: (approximate root, function value at the approximation, table with data from each iteration)
    """

    if i_max < 1:
        raise ValueError("i_max must be greater than 0")

    # Function values at the endpoints of the given interval
    ya = fun(a)
    yb = fun(b)

    # Check that valid endpoint values were given
    if ya * yb > 0:
        raise ValueError("Function values do not have opposite signs at the start and end of the interval")

    df = pd.DataFrame(columns=["Krok", "a", "b", "c", "yc"])

    for i in range(i_max):
        # TODO
        # c = ...
        yc = fun(c)

        df = df.append(pd.Series(data={"Krok": i + 1, "a": a, "b": b, "c": c, "yc": yc}),
                       ignore_index=True)

        # Convergence check
        if abs(yc) < dok:
            print(f"Regula falsi converged after {i + 1} steps")
            break

        if sign(ya) == sign(yc):
            a = c
            ya = yc
        else:
            b = c
            yb = yc

    else:
        print("The function could not find an approximation with the required accuracy in the given number of iterations")

    df = df.astype({"Krok": "int64"})
    df = df.set_index("Krok")

    return c, yc, df

c, yc, df = rfalsi(lambda x: x*x + x - 5, -6, 1, 100, 0.1)
df
```

    Regula falsi converged after 8 steps

            a         b         c        yc
    Krok
    1    -6.0  1.000000  0.250000 -4.687500
    2    -6.0  0.250000 -0.736842 -5.193906
    3    -6.0 -0.736842 -1.642202 -3.945375
    4    -6.0 -1.642202 -2.236188 -2.235652
    5    -6.0 -2.236188 -2.545142 -1.067393
    6    -6.0 -2.545142 -2.686610 -0.468737
    7    -6.0 -2.686610 -2.747591 -0.198335
    8    -6.0 -2.747591 -2.773190 -0.082605




## Secant method

As in regula falsi, the function $f(x)$ is approximated by a straight line. However, the secant method does not require checking, at every iteration, which subinterval contains the root. To start the algorithm two approximations of the root ($x_0,x_{1}$) are required, and the condition $f(x_0)f(x_{1})<0$ does not have to hold. Each subsequent approximation of the root is found (as in the method above) as the intersection of the OX axis with the line passing through the two most recent approximations of the root, $(x_{k-1},f(x_{k-1}))$ and $(x_{k},f(x_{k}))$.
\n
\n\\begin{equation}\nx_{k+1} = x_k - \\frac{x_k-x_{k-1}}{f(x_k)-f(x_{k-1})}f(x_k)\n \\tag{4}\n\\end{equation}\n
\n
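`scipy.optimize.newton` called without a derivative runs exactly this secant iteration. A quick check on $f(x) = x^2 + x - 5$, whose positive root is $(-1 + \sqrt{21})/2 \approx 1.7913$, matching the fsolve output in Example 1:

```python
import numpy as np
from scipy.optimize import newton as scipy_newton

f = lambda x: x**2 + x - 5

# no fprime argument -> SciPy falls back to the secant method;
# the second starting point is generated internally from x0
root = scipy_newton(f, x0=1.0)
```

A second starting point can also be passed explicitly via the `x1` parameter.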
Pseudocode for the secant method:

    Input:
      fun      ← function whose root is sought
      xn0, xn1 ← first two approximations of the root
      tol      ← required accuracy
      i_max    ← maximum number of iterations

    Output:
      i   → number of iterations
      xn2 → computed approximation of the root
      fn2 → function value at the point xn2

    Algorithm:
      for i ← 1 to i_max do
        fn0 ← fun(xn0)
        fn1 ← fun(xn1)
        xn2 ← xn1 - (xn1 - xn0) / (fn1 - fn0) * fn1
        fn2 ← fun(xn2)
        xn0 ← xn1
        xn1 ← xn2
        if |fn2| < tol then
          return i, xn2, fn2
        end if
      end for
\n


## Exercise 4

Complete the code of the function implementing the secant method.


```python
def secant(fun, xn0, xn1, i_max, dok):
    """
    :param fun: function
    :param xn0: first approximation of the root
    :param xn1: second approximation of the root
    :param i_max: maximum number of iterations
    :param dok: required accuracy
    :return: (approximate root, function value at the approximation, table with data from each iteration)
    """

    if i_max < 1:
        raise ValueError("i_max must be greater than 0")

    df = pd.DataFrame(columns=["Krok", "x_n", "x_n+1", "x_n+2", "f(x_n+2)"])

    for i in range(i_max):

        # Function values at the two current approximations
        yn0 = fun(xn0)
        yn1 = fun(xn1)

        # TODO
        # xn2 = ...

        yn2 = fun(xn2)

        df = df.append(pd.Series(data={"Krok": i + 1, "x_n": xn0, "x_n+1": xn1, "x_n+2": xn2, "f(x_n+2)": yn2}),
                       ignore_index=True)

        # Convergence check
        if abs(yn2) < dok:
            print(f"Secant method converged after {i + 1} steps")
            break

        xn0, xn1 = xn1, xn2

    else:
        print("The function could not find an approximation with the required accuracy in the given number of iterations")

    df = df.astype({"Krok": "int64"})
    df = df.set_index("Krok")


    return xn2, yn2, df

xn2, yn2, df = secant(lambda x: x*x + x - 5, -6, 1, 100, 0.01)
print(df.to_latex())
```

    Secant method converged after 5 steps
    \begin{tabular}{lrrrr}
    \toprule
    {} &       x\_n &     x\_n+1 &     x\_n+2 &  f(x\_n+2) \\
    Krok &           &           &           &           \\
    \midrule
    1    & -6.000000 &  1.000000 &  0.250000 & -4.687500 \\
    2    &  1.000000 &  0.250000 &  2.333333 &  2.777778 \\
    3    &  0.250000 &  2.333333 &  1.558140 & -1.014062 \\
    4    &  2.333333 &  1.558140 &  1.765452 & -0.117729 \\
    5    &  1.558140 &  1.765452 &  1.792681 &  0.006386 \\
    \bottomrule
    \end{tabular}


## 
Newton's method

In Newton's method the function $f(x)$ is approximated by its tangent at the point $x_k$. The next approximation of the root is the intersection of the tangent with the OX axis. The drawback of this approach is that the derivative of the given function must also be known. The method does not guarantee convergence either. It is, however, the fastest of the basic methods, which is why it is often the first one reached for. To compute the $(k+1)$-th approximation of the root of the function $f(x)$, apply the formula:
\n
\begin{equation}
x_{k+1} = x_k - \frac{f(x_{k})}{f'(x_k)}
 \tag{5}
\end{equation}

Pseudocode for Newton's method:

    Input:
      fun   ← function whose root is sought
      dfun  ← derivative of the function fun
      xn0   ← first approximation of the root
      tol   ← required accuracy
      i_max ← maximum number of iterations

    Output:
      i   → number of iterations
      xn1 → computed approximation of the root
      fn1 → function value at the point xn1

    Algorithm:
      for i ← 1 to i_max do
        xn1 ← xn0 - fun(xn0) / dfun(xn0)
        fn1 ← fun(xn1)
        if |fn1| < tol then
          return i, xn1, fn1
        end if
        xn0 ← xn1
      end for
\n
\n
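The same iteration is available ready-made as `scipy.optimize.newton` once a derivative is supplied. A quick check on $f(x) = x^2 + x - 5$ with $f'(x) = 2x + 1$:

```python
from scipy.optimize import newton as scipy_newton

f = lambda x: x**2 + x - 5
dfdx = lambda x: 2 * x + 1

# with fprime given, SciPy runs the Newton iteration of formula (5)
root = scipy_newton(f, x0=1.0, fprime=dfdx)
```

Starting from $x_0 = 1$ this converges to the positive root $\approx 1.79128785$ in a handful of steps, noticeably faster than the secant run above.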


## Exercise 5

Complete the code of the function implementing Newton's method.


```python
def newton(fun, dfun, xn0, i_max, dok):
    """
    :param fun: function
    :param dfun: derivative of the function fun
    :param xn0: approximation of the root
    :param i_max: maximum number of iterations
    :param dok: required accuracy
    :return: (approximate root, function value at the approximation, table with data from each iteration)
    """

    if i_max < 1:
        raise ValueError("i_max must be greater than 0")

    df = pd.DataFrame(columns=["Krok", "x_n", "x_n+1", "f(x_n+1)"])

    for i in range(i_max):

        # TODO
        # xn1 = ...
        yn1 = fun(xn1)

        df = df.append(pd.Series(data={"Krok": i + 1, "x_n": xn0, "x_n+1": xn1, "f(x_n+1)": yn1}),
                       ignore_index=True)

        if abs(yn1) < dok:
            print(f"Newton's method converged after {i + 1} steps")
            break

        xn0 = xn1
    else:
        print("The function could not find an approximation with the required accuracy in the given number of iterations")

    df = df.astype({"Krok": "int64"})
    df = df.set_index("Krok")

    return xn1, yn1, df
```

## Newton's method for systems of nonlinear equations
Newton's method for systems of nonlinear equations is a generalization of the single-equation method. The algorithm is identical; only the basic formula has to be extended to matrix form. For a system of n equations, the function takes a vector of n initial approximations of the roots, a system of equations in place of a single equation, and, instead of the derivative of the function, the matrix of partial derivatives of the system, i.e. the so-called Jacobian. Recalling formula 5 from Newton's method
\n
\n\\begin{equation}\nx_{k+1} = x_k - \\frac{f(x_{k})}{f'(x_k)}\n\\end{equation}\n
for a system of equations it can be rewritten in the form

\begin{equation}
X_{k+1} - X_{k} = - {J(X_k)}^{-1} Y(X_{k})
 \tag{6}
\end{equation}

where $Y(X_k)$ is the vector of function values and $J(X_k)$ is the Jacobian at $X_k$.
\n
Treating the left-hand side of the equation as an increment, the correction dX is computed with matrix arithmetic


\begin{equation}
dX = {J(X_k)}^{-1} Y(X_{k})
 \tag{7}
\end{equation}

In Python this equation can be solved with the numpy library as follows:

```python
import numpy as np


J = ...
Y = ...

dX = np.linalg.inv(J).dot(Y)
# or, the numerically preferable way
dX = np.linalg.solve(J, Y)
```

Note the sign convention: with dX defined this way, it must be subtracted in the update.

Substituting equation 7 into equation 6 gives the formula for the next approximation of the root vector

\begin{equation}
X_{k+1} = X_{k} - dX
 \tag{8}
\end{equation}

The stopping criterion for the system algorithm can be, for example, the norm of the vector dX, which can also be computed with the numpy library.

```python
import numpy as np


norm = np.linalg.norm(dX)
```

## Example 3. Computing partial derivatives of equations

Take the following system of equations as an example

\begin{cases} 
 f_{1} = x^{2} + y^{2} - 25 \\ 
 f_{2} = x + y - 7
\end{cases}
\n
It consists of two equations in two unknowns. This determines the shape of the Jacobian matrix, which will have size 2x2. The first row of the matrix consists of the partial derivative of the first equation with respect to the variable x, followed by its partial derivative with respect to the variable y. The second row is built the same way, but from the second equation.
\n
\n\\begin{bmatrix}\n{\\frac{\\partial f_{1}}{\\partial x}} & {\\frac{\\partial f_{1}}{\\partial y}}\\\\\n{\\frac{\\partial f_{2}}{\\partial x}} & {\\frac{\\partial f_{2}}{\\partial y}}\n\\end{bmatrix}\n
\n
Treating the function as a polynomial, the partial derivative with respect to $x$ is the sum of the derivatives of the terms in which that variable appears. If other variables occur in a term, they remain in it. For example, for the term $x^{2}y$ the partial derivative with respect to $x$ is $2xy$. The same procedure applies to the remaining variables.
\n
The Jacobian of this system is:
\begin{bmatrix}
2x & 2y \\
1 & 1
\end{bmatrix} 

## Exercise 6

Complete the code of the function implementing Newton's method for systems of equations.


```python
from numpy.linalg import inv, norm


def newton_nles(fun: np.array, jacobian: np.array, Xn0, i_max, dok):
    """
    :param fun: system of equations
    :param jacobian: Jacobian of the system
    :param Xn0: array of first approximations
    :param i_max: maximum number of iterations
    :param dok: required accuracy
    :return: (approximate root vector, table with data from each iteration)
    """

    if i_max < 1:
        raise ValueError("i_max must be greater than 0")

    df = pd.DataFrame(columns=["Krok", "X_n", "f(X_n)"])

    for i in range(int(i_max)):
        J = jacobian(*Xn0)
        Y = fun(*Xn0)
        # TODO
        # dX = ...
        Xn1 = Xn0 - dX
        row = pd.Series(data={"Krok": i + 1, "X_n": Xn1, "f(X_n)": Y})
        df = df.append(row, ignore_index=True)

        if np.linalg.norm(dX) < dok:
            print(f"Newton's method for the system of nonlinear equations converged after {i + 1} steps")
            break

        Xn0 = Xn1

    df = df.astype({"Krok": "uint32"})
    df = df.set_index("Krok")

    return Xn1, df
```

## Exercise 7

Compare the convergence of the individual methods by searching for a root with the same parameters.


```python

```

## Exercise 8

Use the function discussed in [Example 1](#rrn) to compute the roots of the equation

\begin{equation}
x^3+x^2-3x-3=0
\tag{9}
\end{equation}


```python

```

## Exercise 9
Propose input data for the tangent (Newton's) / secant method such that, for a function that has a zero near the initial approximation(s), the method is not convergent. What features of the function caused the divergence?
This task can be done by modifying the initial approximation for the function fun1 or by defining another function (or functions).
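One possible answer (an illustrative sketch, not the only one): Newton's method applied to $f(x)=\sqrt[3]{x}$ diverges from *any* nonzero starting point, because each step reduces to $x_{n+1} = x_n - f(x_n)/f'(x_n) = -2x_n$, so the iterates double in magnitude even though the zero $x=0$ is arbitrarily close.

```python
import math

def f(x):
    # cube root, keeping the sign for negative arguments
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

def df(x):
    # f'(x) = 1 / (3 x^(2/3)) -- unbounded as x -> 0
    return 1.0 / (3.0 * abs(x) ** (2.0 / 3.0))

x = 0.1          # start very close to the root at x = 0
history = [x]
for _ in range(10):
    x = x - f(x) / df(x)    # algebraically this step equals -2*x
    history.append(x)

# the iterates double in magnitude every step instead of converging
assert abs(history[-1]) > 100 * abs(history[0])
```

The divergence here is caused by the unbounded derivative at the root (the tangent is vertical there), one of the classic failure modes of the tangent method.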
```python

```

## Exercise 10

The equation from [Exercise 8](#cw8) can be transformed into the form $x=\Phi(x)$ required by the fixed-point (simple) iteration method in several ways, e.g.:


$x = \frac{x^3+x^2-3}{3}$;   $x = \sqrt{-x^3+3x+3}$;


$x = \frac{-x^3+3}{x-3}$;   $x = (-x^2+3x+3)^{\frac{1}{3}}$

Investigate the convergence of the fixed-point iteration for these formulas, trying different initial approximations of the root, e.g. $x_0 = 1$ or $x_0 = 0.1$.


```python

```
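As a sketch of what such an investigation might look like (with the assumed starting point $x_0 = 1$): the last form converges rapidly to the root $\sqrt{3}$ of equation (9).

```python
# Fixed-point iteration x_{n+1} = Phi(x_n) with Phi(x) = (-x^2 + 3x + 3)^(1/3)
x = 1.0
for i in range(50):
    x_new = (-x**2 + 3*x + 3) ** (1.0 / 3.0)
    converged = abs(x_new - x) < 1e-12
    x = x_new
    if converged:
        break

# converges to sqrt(3), one of the roots of x^3 + x^2 - 3x - 3 = 0
assert abs(x - 3 ** 0.5) < 1e-9
```

The iteration converges here because $|\Phi'(x)| < 1$ near the root; the other rearrangements may diverge for the same $x_0$.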
```
%pylab inline
figsize( 12.5, 4)
```

    Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline].
    For more information, type 'help(pylab)'.


#Chapter 6

This chapter focuses on the most discussed part of Bayesian methodologies: how to choose a prior distribution. We also present a very interesting connection between priors and linear regression.

## Getting our priorities straight


Up until now, we have mostly ignored our priors. This is unfortunate, as we can be very expressive with our priors, but we must also be careful about choosing them. 

###Subjective vs Objective priors

Bayesian priors can be classified into two classes: *objective* priors, which aim to let the data speak the most, and *subjective* priors, which allow the practitioner to express his or her views in the prior. 

What is an example of an objective prior? We have seen some already, including the *flat* prior (which is a uniform distribution over the entire range of the unknown). This implies we give each possible value an equal chance. Choosing this type of prior invokes what is called "The Principle of Indifference."

Conversely, if we add more probability to certain areas of the prior, and less elsewhere, we are biasing our results towards the former area. 
This is a subjective, or *informative*, prior. A subjective prior does not always reflect the practitioner's opinion: more often the subjective prior was once a posterior to a previous problem, and now the practitioner is updating this posterior with new data.


```
import scipy.stats as stats
figsize(12.5,2.5)
x = np.linspace(0,1)

y1, y2 = stats.beta.pdf(x, 1,1), stats.beta.pdf(x, 10,10)

plt.plot( x, y1, 
    label='Objective \n( uninformative, \n"Principle of Indifference" )' )
plt.plot( x, y2,
    label = "Subjective \n( informative)" )

leg = plt.legend()
leg.get_frame().set_alpha(0.4)
plt.title("Comparing objective vs. subjective priors" );
```

The choice, either *objective* or *subjective*, mostly depends on the problem being solved, but there are a few cases where one is preferred over the other. In instances of scientific research, the choice of an objective prior is obvious. This eliminates any biases in the results, and two researchers who might have differing prior opinions would feel an objective prior is fair. Consider a more extreme situation:

> A tobacco company publishes a report with a Bayesian methodology that refuted 60 years of medical research on tobacco use. Would you believe them? No, probably because they chose a subjective prior that too strongly biased results in their favour.

Unfortunately, choosing an objective prior is not obvious, and even today the problem is still not completely solved. Gelman *et al.* [3] suggest that using a uniform distribution with a large margin is probably the best choice.

We must remember that choosing a prior, whether subjective or objective, is still part of the modelling process. To quote Gelman [3]:

>...after the model has been fit, one should look at the posterior distribution
and see if it makes sense. 
If the posterior distribution does not make sense, this implies
that additional prior knowledge is available that has not been included in the model,
and that contradicts the assumptions of the prior distribution that has been used. It is
then appropriate to go back and alter the prior distribution to be more consistent with
this external knowledge.

If the posterior does not make sense, then clearly one had an idea what the posterior *should* look like (not what one *hopes* it looks like), implying that the current prior does not contain all the prior information and should be updated. At this point, we can discard the objective prior and choose a more reflective one.

One should be wary, though, about using uniform objective priors with large margins. Do you really think the unknown could be incredibly large? Often quantities are naturally biased towards 0. A Normal random variable with large variance (small precision) is often a better choice. 

Either way, if using a subjective prior, **it is important to explain the rationale**, else you are no better than the tobacco company's guilty parties. 

We next introduce the Beta distribution.

### The Beta distribution

You may have seen the term `beta` in previous code in this book. Often, I was implementing a Beta distribution. The Beta distribution is very useful in Bayesian statistics. A random variable $X$ has a $\text{Beta}$ distribution, with parameters $(\alpha, \beta)$, if its density function is:

$$f_X(x | \; \alpha, \beta ) = \frac{ x^{(\alpha - 1)}(1-x)^{ (\beta - 1) } }{B(\alpha, \beta) }$$

where $B$ is the [Beta function](http://en.wikipedia.org/wiki/Beta_function) (hence the name). The random variable $X$ is only allowed in [0,1], making the Beta distribution a popular distribution for decimal values, probabilities and proportions. The values of $\alpha$ and $\beta$, both positive, provide great flexibility in the shape of the distribution. 
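As a quick numerical sanity check of this density (a standard-library sketch: $B(\alpha,\beta) = \Gamma(\alpha)\Gamma(\beta)/\Gamma(\alpha+\beta)$, computed via `math.gamma`), the density integrates to 1 and its mean is $\alpha/(\alpha+\beta)$:

```python
import math

a, b = 2.0, 5.0
B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)   # Beta function B(a, b)

def beta_pdf(x):
    return x ** (a - 1) * (1 - x) ** (b - 1) / B

n = 20000
xs = [(i + 0.5) / n for i in range(n)]      # midpoint rule on (0, 1)
total = sum(beta_pdf(x) for x in xs) / n
mean = sum(x * beta_pdf(x) for x in xs) / n

assert abs(total - 1.0) < 1e-4              # density integrates to 1
assert abs(mean - a / (a + b)) < 1e-4       # mean of Beta(a, b) is a/(a+b)
```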
Below we plot some of these distributions:


```
figsize( 12.5, 5 )
import scipy.stats as stats

params = [ (1,1), (2,5), (0.5, 0.5), ( 5, 5), (20, 4), (5, 1) ]
colors = ["#348ABD", "#A60628", "#7A68A6", "#467821", 
          "#CF4457", "#188487", "#E24A33" ]
x = np.linspace( 0, 1, 100 )
beta = stats.beta
for c, (a, b) in zip(colors, params):
    y = beta.pdf( x, a, b )
    plt.plot( x, y, label = "(%.1f,%.1f)"%(a,b) )
    plt.fill_between( x, 0, y, alpha = 0.2, color = c )

plt.ylim(0)
plt.legend(loc = 'upper left');
```

One thing I'd like the reader to notice is the flat distribution above, specified by parameters $(1,1)$. This is the Uniform distribution. Hence the Beta distribution is a generalization of the Uniform distribution, something we will revisit many times.

There is an interesting connection between the Beta distribution and the Binomial distribution. Suppose we are interested in some unknown proportion or probability $p$. We assign a $\text{Beta}(\alpha, \beta)$ prior to $p$. We observe some data generated by a Binomial process, say $X \sim \text{Binomial}(N, p)$. Then our posterior *is a Beta distribution*, i.e. $p | X \sim \text{Beta}( \alpha + X, \beta + N -X )$. Succinctly, one can relate the two by "a Beta prior with Binomial observations creates a Beta posterior". This is a very useful property, both computationally and heuristically.

In light of the above two paragraphs, if we start with a $\text{Beta}(1,1)$ prior on $p$ (which is a Uniform), and observe data $X \sim \text{Binomial}(N, p)$, then our posterior is $\text{Beta}(1 + X, 1 + N - X)$. 


#####Example: Bayesian Multi-Armed Bandits
*Adapted from an example by Ted Dunning of MapR Technologies*

> Suppose you are faced with $N$ slot machines (colourfully called multi-armed bandits). Each bandit has an unknown probability of distributing a prize (assume for now the prizes are the same for each bandit, only the probabilities differ). 
Some bandits are very generous, others not so much. Of course, you don't know what these probabilities are. By choosing only one bandit per round, our task is to devise a strategy to maximize our winnings.

Of course, if we knew the bandit with the largest probability, then always picking this bandit would yield the maximum winnings. So our task can be phrased as "Find the best bandit, and as quickly as possible". 

The task is complicated by the stochastic nature of the bandits. A suboptimal bandit can return many winnings, purely by chance, which would make us believe that it is a very profitable bandit. Similarly, the best bandit can return many duds. Should we keep trying losers then, or give up? 

A more troublesome problem is: if we have found a bandit that returns *pretty good* results, do we keep drawing from it to maintain our *pretty good score*, or do we try other bandits in hopes of finding an *even-better* bandit? This is the exploration vs. exploitation dilemma.

### Applications


The Multi-Armed Bandit problem at first seems very artificial, something only a mathematician would love, but that is only before we address some applications:

1. Internet display advertising: companies have a suite of potential ads they can display to visitors, but the company is not sure which ad strategy to follow to maximize sales. This is similar to A/B testing, but has the added advantage of naturally minimizing strategies that do not work (and generalizes to A/B/C/D... strategies)
2. Ecology: animals have a finite amount of energy to expend, and following certain behaviours has uncertain rewards. How does the animal maximize its fitness?
3. Finance: which stock option gives the highest return, under time-varying return profiles.
4. Clinical trials: a researcher would like to find the best treatment, out of many possible treatments, while minimizing losses. 
5. Psychology: how do punishment and reward affect our behaviour? 
How do humans learn?

Many of the questions above are fundamental to the application's field.

It turns out the *optimal solution* is incredibly difficult to obtain, and it took decades for an overall solution to develop. There are also many approximately-optimal solutions which are quite good. The one I wish to discuss is one of the few solutions that can scale incredibly well. The solution is known as *Bayesian Bandits*.


### A Proposed Solution


Any proposed strategy is called an *online algorithm* (not in the internet sense, but in the continuously-being-updated sense), and more specifically a reinforcement learning algorithm. The algorithm starts in an ignorant state, where it knows nothing, and begins to acquire data by testing the system. As it acquires data and results, it learns what the best and worst behaviours are (in this case, it learns which bandit is the best). With this in mind, perhaps we can add an additional application of the Multi-Armed Bandit problem:

- Psychology: how do punishment and reward affect our behaviour? How do humans learn?


The Bayesian solution begins by assuming priors on the probability of winning for each bandit. In our vignette we assumed complete ignorance of these probabilities, so a very natural prior is the flat prior over 0 to 1. The algorithm proceeds as follows:

For each round:

1. Sample a random variable $X_b$ from the prior of bandit $b$, for all $b$.
2. Select the bandit with the largest sample, i.e. select $B = \text{argmax}\;\; X_b$.
3. Observe the result of pulling bandit $B$, and update your prior on bandit $B$.
4. Return to 1.

That's it. Computationally, the algorithm involves sampling from $N$ distributions. 
Since the initial priors are $\text{Beta}(\alpha=1,\beta=1)$ (a uniform distribution), and the observed result $X$ (a win or loss, encoded 1 and 0 respectively) is Binomial, the posterior is a $\text{Beta}(\alpha=1+X,\beta=1+1-X)$.

To answer our question from before, this algorithm suggests that we should not discard losers, but we should pick them at a decreasing rate as we gather confidence that there exist *better* bandits. This follows because there is always a non-zero chance that a loser will achieve the status of $B$, but the probability of this event decreases as we play more rounds (see figure below).

Below we implement Bayesian Bandits using two classes: `Bandits`, which defines the slot machines, and `BayesianStrategy`, which implements the above learning strategy.


```
from pymc import rbeta

rand = np.random.rand

class Bandits(object):
    """
    This class represents N bandit machines.

    parameters:
        p_array: a (n,) Numpy array of probabilities >0, <1.

    methods:
        pull( i ): return the results, 0 or 1, of pulling 
                   the ith bandit.
    """
    def __init__(self, p_array):
        self.p = p_array
        self.optimal = np.argmax(p_array)
        
    def pull( self, i ):
        #i is which arm to pull
        return rand() < self.p[i]
    
    def __len__(self):
        return len(self.p)

    
class BayesianStrategy( object ):
    """
    Implements an online, learning strategy to solve
    the Multi-Armed Bandit problem.
    
    parameters:
        bandits: a Bandits class with .pull method
    
    methods:
        sample_bandits(n): sample and train on n pulls.

    attributes:
        N: the cumulative number of samples
        choices: the historical choices as a (N,) array
        bb_score: the historical score as a (N,) array

    """
    
    def __init__(self, bandits):
        
        self.bandits = bandits
        n_bandits = len( self.bandits )
        self.wins = np.zeros( n_bandits )
        self.trials = np.zeros( n_bandits )
        self.N = 0
        self.choices = []
        self.bb_score = []

    
    def sample_bandits( self, n=1 ):
        
        bb_score = np.zeros( n )
        choices = np.zeros( n )
        
        for k in range(n):
            #sample from the bandits' priors, and select the largest sample
            choice = np.argmax( rbeta( 1 + self.wins, 1 + self.trials - self.wins) )
            
            #sample the chosen bandit
            result = self.bandits.pull( choice )
            
            #update priors and score
            self.wins[ choice ] += result
            self.trials[ choice ] += 1
            bb_score[ k ] = result 
            self.N += 1
            choices[ k ] = choice
            
        self.bb_score = np.r_[ self.bb_score, bb_score ]
        self.choices = np.r_[ self.choices, choices ]
        return 
```

Below we visualize the learning of the Bayesian Bandit solution.


```
import scipy.stats as stats
figsize( 11.0, 10)

beta = stats.beta
x = np.linspace(0,1,200)

def plot_priors(bayesian_strategy, prob):
    ## plotting function
    wins = bayesian_strategy.wins
    trials = bayesian_strategy.trials
    for i in range( prob.shape[0] ):
        y = beta( 1+wins[i], 1 + trials[i] - wins[i] )
        p = plt.plot( x, y.pdf(x) )
        c = p[0].get_markeredgecolor()
        plt.fill_between(x, y.pdf(x), 0, color = c, alpha = 0.3, 
                         label="underlying probability: %.2f"%prob[i])
        plt.vlines( prob[i], 0, y.pdf(prob[i]),
                    colors = c, linestyles = "--" )
    plt.title("Posteriors After %d pull"%bayesian_strategy.N +\
              "s"*(bayesian_strategy.N > 1) )
    return


hidden_prob = np.array([0.85, 0.60, 0.75] )
bandits = Bandits( hidden_prob )
bayesian_strat = BayesianStrategy( bandits )

for j,i in enumerate([1, 1, 3, 10, 10, 25, 50, 100, 200, 600 ]):
    subplot( 5, 2, j+1) 
    bayesian_strat.sample_bandits(i)
    plot_priors( bayesian_strat, hidden_prob )
    #plt.legend()

plt.tight_layout()

```

Note that we don't really care how accurate we become about inference of the hidden probabilities -- for this problem we are more interested in choosing the best bandit (or, more accurately, becoming *more confident* in choosing the best bandit). 
For this reason, the distribution of the red bandit is very wide (representing ignorance about what that hidden probability might be), but we are reasonably confident that it is not the best, so the algorithm chooses to ignore it.


### A Measure of *Good*

We need a metric to calculate how well we are doing. Recall that the absolute *best* we can do is to always pick the bandit with the largest probability of winning. Denote this best bandit's probability by $w^*$. Our score should be relative to how well we would have done had we chosen the best bandit from the beginning. This motivates the *total regret* of a strategy, defined:

\begin{align}
R_T & = \sum_{i=1}^{T} \left( w^* - w_{B(i)} \right)\\\\
& = Tw^* - \sum_{i=1}^{T} \; w_{B(i)} 
\end{align}


where $w_{B(i)}$ is the probability of a prize of the chosen bandit in the $i$th round. A total regret of 0 means the strategy is matching the best possible score. This is likely not possible, as initially our algorithm will often make the wrong choice. Ideally, a strategy's total regret should flatten as it learns the best bandit. (Mathematically, we achieve $w_{B(i)}=w^*$ often.)


Below we plot the total regret of this simulation, including the score if we had randomly chosen a bandit.


```
figsize( 12.5, 3 )

def regret( probabilities, choices ):
    w_opt = probabilities.max()
    return ( w_opt - probabilities[choices.astype(int)] ).cumsum()


#train 9000 more times. It's over 9000 now.
bayesian_strat.sample_bandits(9000)

bayesian_strategy_regret = regret( hidden_prob, 
                                   bayesian_strat.choices ) 
random_strategy_regret = regret( hidden_prob, 
                                 np.random.randint( 0, 3, size=10000) )

#plot it
plt.plot(bayesian_strategy_regret, label = "Bayesian Bandit Strategy" )
plt.plot( random_strategy_regret, label = "Random guessing" )
plt.title("Total Regret of Bayesian Bandits Strategy vs. Random guessing" )
plt.xlabel("Number of pulls")
plt.ylabel("Regret after $n$ pulls");
plt.legend(loc = "upper left")


figure()
plt.plot(bayesian_strategy_regret, label = "Bayesian Bandit Strategy" )
plt.title("Total Regret of Bayesian Bandits Strategy" )
plt.xlabel("Number of pulls")
plt.ylabel("Regret after $n$ pulls");
plt.legend(loc = "upper left")
```

Like we wanted, the graph starts to flatten, indicating that we are achieving optimal choices. To be more scientific, and to remove any possible luck in the above simulation, we should instead look at the *expected total regret*:

$$\bar{R_T} = E[ R_T ] $$

It can be shown that any *sub-optimal* strategy's expected total regret is bounded below logarithmically. Formally,

$$ E[R_T] = \Omega \left( \;\log(T)\; \right)$$

Thus, any strategy that matches logarithmic-growing regret is said to "solve" the Multi-Armed Bandit problem [3].

Using the Law of Large Numbers, we can approximate the Bayesian Bandits' expected total regret by performing the same experiment many times (25 times, to be fair):


```
expected_total_regret = np.zeros(10000)
trials = 25

for i in range(trials):
    bayesian_strat = BayesianStrategy( bandits )
    bayesian_strat.sample_bandits(10000)
    bayesian_strategy_regret = regret( hidden_prob, 
                                       bayesian_strat.choices ) 
    expected_total_regret += bayesian_strategy_regret
    
plot(expected_total_regret/trials )
#plt.ylim( 0, expected_total_regret.max() )
plt.title("Expected Total Regret of Bayesian Bandits Strategy" )
plt.xlabel("Number of pulls")
plt.ylabel("Expected Total Regret after $n$ pulls");

figure()
plot(expected_total_regret/trials )
#plt.ylim( 0, expected_total_regret.max() )
plt.title("Expected Total Regret of Bayesian Bandits Strategy, log-scale" )
plt.xlabel("Log Number of pulls")
plt.ylabel("Expected Total Regret \n after $\log{n}$ pulls");
plt.xscale("log")

```

### Extending the algorithm 
Because of the algorithm's simplicity, it is easy to extend it. Some possibilities:

1. If interested in the *minimum* probability (e.g. where prizes are a bad thing), simply choose $B = \text{argmin} \; X_b$ and proceed.

2. Adding learning rates: suppose the underlying environment may change over time. Technically, the standard Bayesian Bandit algorithm would update itself (awesome) by noting that what it thought was the best bandit is starting to fail more often; still, we can motivate the algorithm to learn changing environments quicker. We simply need to add a *rate* term upon updating:

        self.wins[ choice ] = rate*self.wins[ choice ] + result
        self.trials[ choice ] = rate*self.trials[ choice ] + 1

   If `rate < 1`, the algorithm will *forget* its previous wins quicker and there will be a downward pressure towards ignorance. Conversely, setting `rate > 1` implies your algorithm will act more riskily, betting on earlier winners more often and being more resistant to changing environments. 

3. Hierarchical algorithms: we can set up a Bayesian Bandit algorithm on top of smaller bandit algorithms. Suppose we have $N$ Bayesian Bandit models, each varying in some behaviour (for example, different `rate` parameters, representing varying sensitivity to changing environments). On top of these $N$ models is another Bayesian Bandit learner that will select a sub-Bayesian Bandit. The chosen Bayesian Bandit will then make an internal choice as to which machine to pull. The super-Bayesian Bandit updates itself depending on whether the sub-Bayesian Bandit was correct or not. 
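The rate term can be sketched in isolation (a hypothetical helper class, not part of the implementation above): with `rate < 1` the pseudo-counts stay bounded, so the posterior never becomes infinitely confident and old evidence is gradually forgotten.

```python
class DecayingCounts(object):
    """Win/trial counts for one bandit arm with exponential forgetting."""

    def __init__(self, rate=0.9):
        self.rate = rate
        self.wins = 0.0
        self.trials = 0.0

    def update(self, result):
        # discount the old evidence before adding the new observation
        self.wins = self.rate * self.wins + result
        self.trials = self.rate * self.trials + 1

arm = DecayingCounts(rate=0.9)
for _ in range(1000):
    arm.update(1)

# the trial count converges to 1/(1 - rate) = 10 instead of growing
# without bound, so the posterior stays responsive to changes
assert abs(arm.trials - 10.0) < 1e-3
```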
```
figsize( 12.0, 12)
beta = stats.beta
hidden_prob = beta.rvs(1,1, size = 10 )
print hidden_prob
bandits = Bandits( hidden_prob )
bayesian_strat = BayesianStrategy( bandits )

for j,i in enumerate([1, 1, 3, 10, 10, 25, 50, 100, 200, 600 ]):
    subplot( 5, 2, j+1) 
    bayesian_strat.sample_bandits(i)
    plot_priors( bayesian_strat, hidden_prob )
    #plt.legend()

```


```
def regret( probabilities, choices ):
    w_opt = probabilities.max()
    return ( w_opt - probabilities[choices.astype(int)] ).cumsum()


#train 9000 more times. It's over 9000 now.

bayesian_strat.sample_bandits(9000)

bayesian_strategy_regret = regret( hidden_prob, 
                                   bayesian_strat.choices )

figure()
plt.plot(bayesian_strategy_regret, label = "Bayesian Bandit Strategy" )
plt.title("Total Regret of Bayesian Bandits Strategy" )
plt.xlabel("Number of pulls")
plt.ylabel("Regret after $n$ pulls");
plt.legend(loc = "upper left")
plt.xscale("log")

```

##Effect of the prior as $N$ increases

In the first chapter, I proposed that the more observations, or data, we possess, the less the prior matters. This is intuitive. After all, our prior is based on previous information, and eventually enough new information will shadow our previous information's value. The smothering of the prior by enough data is also helpful: if our prior is significantly wrong, then the self-correcting nature of the data will present to us a *less wrong*, and eventually *correct*, posterior. 

We can see this mathematically. First, recall Bayes' Theorem from Chapter 1, which relates the prior to the posterior. 
The following is a sample from [What is the relationship between sample size and the influence of prior on posterior?](http://stats.stackexchange.com/questions/30387/what-is-the-relationship-between-sample-size-and-the-influence-of-prior-on-poste)[1] on CrossValidated.

>The posterior distribution for a parameter $\theta$, given a data set ${\bf X}$, can be written as 

\begin{equation}
p(\theta | {\bf X}) \propto \underbrace{p({\bf X} | \theta)}_{{\rm likelihood}} \cdot \underbrace{ p(\theta)}_{{\rm prior} }
\end{equation}


>or, as is more commonly displayed on the log scale, 

$$ \log( p(\theta | {\bf X}) ) = c + L(\theta;{\bf X}) + \log(p(\theta)) $$

>The log-likelihood, $L(\theta;{\bf X}) = \log \left( p({\bf X}|\theta) \right)$, **scales with the sample size**, since it is a function of the data, while the prior density does not. Therefore, as the sample size increases, the absolute value of $L(\theta;{\bf X})$ gets larger while $\log(p(\theta))$ stays fixed (for a fixed value of $\theta$); thus the sum $L(\theta;{\bf X}) + \log(p(\theta))$ becomes more heavily influenced by $L(\theta;{\bf X})$ as the sample size increases. 

There is an interesting consequence that is not immediately apparent. For large sample sizes, the choice of prior has less influence. Hence inference converges regardless of the chosen prior, so long as the areas to which the priors assign non-zero probability are the same. 

Below we visualize this. We examine the convergence of two posteriors of a Binomial's parameter $\theta$, one with a flat prior and the other with a prior biased towards 0. As the sample size increases, the posteriors, and hence the inference, converge.


```
figsize( 12.5, 15)
import scipy.stats as stats
import pymc as mc

p = 0.6
beta1_params = np.array( [1.,1.] )
beta2_params = np.array( [2,10] )
beta = stats.beta

x = np.linspace(0.00, 1, 125)
data = mc.rbernoulli(p, size=500)

figure()
for i,N in enumerate([0, 4, 8, 32, 64, 128, 500]):
    s = data[:N].sum() 
    subplot(8,1,i+1)
    params1 = beta1_params + np.array( [s, N-s] )
    params2 = beta2_params + np.array( [s, N-s] )
    y1, y2 = beta.pdf( x, *params1), beta.pdf( x, *params2)
    plt.plot( x, y1, label = r"flat prior", lw = 3 )
    plt.plot( x, y2, label = "biased prior", lw = 3 )
    plt.fill_between( x, 0, y1, color = "#348ABD", alpha = 0.15) 
    plt.fill_between( x, 0, y2, color = "#A60628", alpha = 0.15) 
    plt.legend(title = "N=%d"%N)
    plt.vlines( p, 0.0, 7.5, linestyles = "--", linewidth=1)
    #plt.ylim( 0, 10)

```

Keep in mind, not all posteriors will "forget" the prior this quickly. This example was just to show that *eventually* the prior is forgotten. 

### Bayesian perspective of Penalized Linear Regressions

There is a very interesting relationship between penalized least-squares regression and Bayesian priors. We will first describe the probabilistic interpretation of linear regression. Denote our response variable by $Y$, with features contained in the data matrix $X$. The standard linear model is:

\begin{equation}
Y = X\beta + \epsilon
\end{equation}

where $\epsilon \sim \text{Normal}( {\bf 0}, \sigma{\bf I })$. Simply, the observed $Y$ is a linear function of $X$ (with coefficients $\beta$) plus some noise term. Our unknown to be learned is $\beta$. 
We use the following property of Normal random variables:

$$ \mu' + \text{Normal}( \mu, \sigma ) \sim \text{Normal}( \mu' + \mu , \sigma ) $$

to rewrite the above linear model as:

\begin{align}
& Y = X\beta + \text{Normal}( {\bf 0}, \sigma{\bf I }) \\\\
& Y = \text{Normal}( X\beta , \sigma{\bf I }) \\\\
\end{align}

In probabilistic notation, denote by $f_Y(y \; | \; \beta )$ the probability distribution of $Y$, and recall the density function for a Normal random variable:

$$ f_Y( Y \; |\; \beta, X) = L(\beta|\; X,Y)= \frac{1}{\sqrt{ 2\pi\sigma} } \exp \left( -\frac{1}{2\sigma^2} (Y - X\beta)^T(Y - X\beta) \right) $$

This is the likelihood function for $\beta$. Taking the $\log$:

$$ \ell(\beta) = K - c(Y - X\beta)^T(Y - X\beta) $$

where $K$ and $c>0$ are constants. Maximum likelihood techniques wish to maximize this over $\beta$:

$$\hat{ \beta } = \text{argmax}_{\beta} \;\; - (Y - X\beta)^T(Y - X\beta) $$

Equivalently, we can minimize the negative of the above:

$$\hat{ \beta } = \text{argmin}_{\beta} \;\; (Y - X\beta)^T(Y - X\beta) $$

This is probably the familiar least-squares linear regression equation. We have shown that the least-squares solution is the same as the maximum likelihood estimate under Normal noise. Next we extend this to show how we can arrive at penalized linear regression by a suitable choice of priors on $\beta$. 

#### Penalized least-squares

In the above, once we have the likelihood, we can include a prior distribution on $\beta$ to move to the equation for the posterior distribution:

$$P( \beta | Y, X ) \propto L(\beta|\;X,Y)p( \beta )$$

where $p(\beta)$ is a prior on the elements of $\beta$. What are some interesting priors? 

1\. If we include *no explicit* prior term, we are actually including an uninformative prior, $P( \beta ) \propto 1$. 

2\. 
If we have reason to believe the elements of $\beta$ are not too large, we can suppose that, *a priori*:

$$ \beta \sim \text{Normal}({\bf 0 }, \lambda {\bf I } ) $$

The resulting posterior density function for $\beta$ is *proportional to*:

$$ \exp \left( -\frac{1}{2\sigma^2} (Y - X\beta)^T(Y - X\beta) \right) \exp \left( -\frac{1}{2\lambda^2} \beta^T\beta \right) $$

and, taking the $\log$ of this and combining constants, we get:

$$ \ell(\beta) \propto K - (Y - X\beta)^T(Y - X\beta) - \alpha \beta^T\beta $$

We arrive at the function we wish to maximize (recall that the point which maximizes the posterior distribution is the MAP, or *maximum a posteriori*):

$$\hat{ \beta } = \text{argmax}_{\beta} \;\; -(Y - X\beta)^T(Y - X\beta) - \alpha \;\beta^T\beta $$

Equivalently, we can minimize the negative of the above and, rewriting $\beta^T\beta = ||\beta||_2^2$:

$$\hat{ \beta } = \text{argmin}_{\beta} \;\; (Y - X\beta)^T(Y - X\beta) + \alpha \;||\beta||_2^2$$

This is exactly Ridge Regression. Thus we can see that ridge regression corresponds to the MAP of a linear model with Normal errors and a Normal prior on $\beta$.

3\. Similarly, if we assume a *Laplace* prior on $\beta$, i.e. 

$$ f_\beta( \beta) \propto \exp \left(- \lambda ||\beta||_1 \right)$$

and follow the same steps as above, we recover:

$$\hat{ \beta } = \text{argmin}_{\beta} \;\; (Y - X\beta)^T(Y - X\beta) + \alpha \;||\beta||_1$$

which is LASSO regression. Some important notes about this equivalence: the sparsity that results from using LASSO regularization is not a consequence of the prior assigning high probability to sparsity. Quite the opposite, actually. It is the combination of the $|| \cdot ||_1$ function and the use of the MAP that creates sparsity in $\hat{\beta}$: a purely geometric argument. The prior does contribute to an overall shrinking of the coefficients towards 0, though. 
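The ridge estimate has the closed form $\hat\beta = (X^TX + \alpha I)^{-1}X^TY$; a small numerical check on made-up data (a sketch, not from the original text) confirms that this point zeroes the gradient of the penalized objective:

```python
import numpy as np

np.random.seed(0)
n, p = 50, 3
X = np.random.randn(n, p)
Y = X.dot(np.array([1.0, -2.0, 0.5])) + 0.1 * np.random.randn(n)

alpha = 2.0
# ridge / MAP estimate: (X^T X + alpha I)^{-1} X^T Y
beta_hat = np.linalg.solve(X.T.dot(X) + alpha * np.eye(p), X.T.dot(Y))

# gradient of (Y - X b)^T (Y - X b) + alpha * b^T b, evaluated at beta_hat
grad = -2.0 * X.T.dot(Y - X.dot(beta_hat)) + 2.0 * alpha * beta_hat
assert np.allclose(grad, 0.0)
```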
An interesting discussion of this can be found in [2].\n\n##### References\n\n1. http://stats.stackexchange.com/questions/30387/what-is-the-relationship-between-sample-size-and-the-influence-of-prior-on-poste\n\n2. Starck, J.-L., et al. \"Sparsity and the Bayesian Perspective.\" Astronomy & Astrophysics (2013). Print.\n\n3. Kuleshov, Volodymyr, and Doina Precup. \"Algorithms for the multi-armed bandit problem.\" Journal of Machine Learning Research. (2000): 1-49. Print.\n\n\n```\nfrom IPython.core.display import HTML\ndef css_styling():\n    with open(\"../styles/custom.css\", \"r\") as f:\n        styles = f.read()\n    return HTML(styles)\ncss_styling()\n```\n", "meta": {"hexsha": "733abeaa2da4c899f33368514ed6d3a4975e99ef", "size": 873543, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chapter6_Priorities/Priors.ipynb", "max_stars_repo_name": "elyase/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_stars_repo_head_hexsha": "1b587fe9652168553d25fdde7ba46a3d4e08ff0d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-06-22T16:15:11.000Z", "max_stars_repo_stars_event_max_datetime": "2018-06-22T16:15:11.000Z", "max_issues_repo_path": "Chapter6_Priorities/Priors.ipynb", "max_issues_repo_name": "elyase/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_issues_repo_head_hexsha": "1b587fe9652168553d25fdde7ba46a3d4e08ff0d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter6_Priorities/Priors.ipynb", "max_forks_repo_name": "elyase/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_forks_repo_head_hexsha": "1b587fe9652168553d25fdde7ba46a3d4e08ff0d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, 
"avg_line_length": 996.0581527936, "max_line_length": 227604, "alphanum_fraction": 0.93455388, "converted": true, "num_tokens": 8124, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4571367168274948, "lm_q2_score": 0.2658804672827598, "lm_q1q2_score": 0.12154372388220097}} {"text": "```python\nfrom IPython.core.display import display_html\nfrom urllib.request import urlopen\n\ncssurl = 'http://j.mp/1DnuN9M'\ndisplay_html(urlopen(cssurl).read(), raw=True)\n```\n\n\n\n\n\n\n\n\n\n\n# Modelado matem\u00e1tico y simulaci\u00f3n de sistema carro-pendulo\n\n## Problema\n\nDado el sistema de la siguiente figura:\n\n\n\ndescribir matem\u00e1ticamente su comportamiento ante perturbaciones externas.\n\n## Modelado matem\u00e1tico\n\nUtilizaremos el enfoque de Euler-Lagrange, el cual nos dice que el primer paso para conseguir el modelo matem\u00e1tico es calcular el Lagrangiano $L$ del sistema, definido por:\n\n$$\nL = K - U\n$$\n\nen donde $K$ es la energ\u00eda cin\u00e9tica del sistema y $U$ es la energia potencial del sistema.\n\nEl estado del sistema estar\u00e1 descrito por una distancia $x$ del centro del carro a un marco de referencia y el angulo $\\theta$ del pendulo con respecto a la horizontal.\n\n$$\nq = \\begin{pmatrix} x \\\\ \\theta \\end{pmatrix} = \\begin{pmatrix} q_1 \\\\ q_2 \\end{pmatrix} \\implies\n\\dot{q} = \\begin{pmatrix} \\dot{x} \\\\ \\dot{\\theta} \\end{pmatrix} = \\begin{pmatrix} \\dot{q}_1 \\\\ \\dot{q}_2 \\end{pmatrix} \\implies\n\\ddot{q} = \\begin{pmatrix} \\ddot{x} \\\\ \\ddot{\\theta} \\end{pmatrix} = \\begin{pmatrix} \\ddot{q}_1 \\\\ \\ddot{q}_2 \\end{pmatrix}\n$$\n\nPara calcular la energ\u00eda cin\u00e9tica del sistema, obtenemos $K_1$ y $K_2$ asociadas al carro y al pendulo, en donde $K_i = \\frac{1}{2} m_i v_i^2$, por lo que tenemos:\n\n$$\nK_1 = \\frac{1}{2} m_1 v_1^2 = \\frac{1}{2} m_1 \\dot{x}^2 = \\frac{1}{2} m_1 \\dot{q}_1^2\n$$\n\n$$\nK_2 = \\frac{1}{2} m_2 v_2^2 = \\frac{1}{2} m_2 \\left[ \\left( 
\\dot{x} + \\dot{x}_2 \\right)^2 + \\dot{y}_2^2 \\right]\n$$\n\ncon $x_2 = l \\cos{\\theta}$ y $y_2 = l \\sin{\\theta}$, por lo que sus derivadas son $\\dot{x}_2 = -\\dot{\\theta} l \\sin{\\theta}$ y $\\dot{y}_2 = \\dot{\\theta} l \\cos{\\theta}$, por lo que $K_2$ queda:\n\n$$\n\\begin{align}\nK_2 &= \\frac{1}{2} m_2 \\left[ \\left( \\dot{x} -\\dot{\\theta} l \\sin{\\theta} \\right)^2 + \\left( \\dot{\\theta} l \\cos{\\theta} \\right)^2 \\right] \\\\\n&= \\frac{1}{2} m_2 \\left[ \\left( \\dot{x} -\\dot{\\theta} l \\sin{\\theta} \\right)^2 + \\left( \\dot{\\theta} l \\cos{\\theta} \\right)^2 \\right] \\\\\n&= \\frac{1}{2} m_2 \\left[ \\left( \\dot{q}_1 -\\dot{q}_2 l \\sin{q_2} \\right)^2 + \\left( \\dot{q}_2 l \\cos{q_2} \\right)^2 \\right]\n\\end{align}\n$$\n\nConfirmemos estos calculos:\n\n\n```python\nfrom IPython.display import display\n\nfrom sympy import var, simplify, collect, expand, solve, sqrt, sin, cos, Matrix, Integer, diff, Function, Rational\nfrom sympy.physics.mechanics import mlatex, mechanics_printing\nmechanics_printing()\n```\n\n\n```python\nvar(\"l t m1 m2 g\")\n```\n\n\n```python\nq1 = Function(\"q_1\")(t)\nq2 = Function(\"q_2\")(t)\n\nx = Function(\"x\")(t)\n```\n\n\n```python\nx1 = q1\ny1 = Integer(0)\nv1 = sqrt(x1.diff(t)**2 + y1.diff(t)**2)\nv1, v1**2\n```\n\n\n```python\nx2 = q1 + l*cos(q2)\ny2 = l*sin(q2)\nv2 = sqrt(x2.diff(t)**2 + y2.diff(t)**2)\nv2, v2**2\n```\n\n\n```python\nk1 = Rational(1, 2)*m1*v1**2\nk2 = Rational(1, 2)*m2*v2**2\nk1, k2\n```\n\n\n```python\nK = (k1 + k2).expand().trigsimp().factor(q1.diff(t), q2.diff(t))\nK\n```\n\nEntonces la energ\u00eda cin\u00e9tica ser\u00e1:\n\n$$\nK = \\frac{1}{2} \\left[ (m_1 + m_2) \\dot{q}_1^2 + m_2 l^2 \\dot{q}_2^2 - 2 m_2 l \\sin{q_2} \\dot{q}_1 \\dot{q}_2 \\right]\n$$\n\nLo cual puede ser escrito como una forma matricial cuadratica:\n\n$$\nK = \\frac{1}{2}\n\\begin{pmatrix}\n\\dot{q}_1 & \\dot{q}_2\n\\end{pmatrix}\n\\begin{pmatrix}\nm_1 + m_2 & -m_2 l \\sin{q_2} \\\\\n-m_2 l \\sin{q_2} 
& m_2 l^2\n\end{pmatrix}\n\begin{pmatrix}\n\dot{q}_1 \\\n\dot{q}_2\n\end{pmatrix}\n$$\n\n\n```python\nM = Matrix([[m1 + m2, -m2*l*sin(q2)],\n [-m2*l*sin(q2), m2*l**2]])\nqp = Matrix([[q1.diff(t)],\n [q2.diff(t)]])\n\n(Rational(1, 2)*qp.T*M*qp)[0].expand().trigsimp().factor(q1.diff(t), q2.diff(t))\n```\n\nAt this point we introduce an intermediate variable, just to lighten the notation:\n\n$$\n\lambda = l \sin{q_2}\n$$\n\nand we write its time derivative as:\n\n$$\n\dot{\lambda} = \lambda' \dot{q}_2 \implies \lambda' = l \cos{q_2}\n$$\n\nso the kinetic energy becomes:\n\n$$\nK = \frac{1}{2}\n\begin{pmatrix}\n\dot{q}_1 & \dot{q}_2\n\end{pmatrix}\n\begin{pmatrix}\nm_1 + m_2 & -m_2 \lambda \\\n-m_2 \lambda & m_2 l^2\n\end{pmatrix}\n\begin{pmatrix}\n\dot{q}_1 \\\n\dot{q}_2\n\end{pmatrix}\n$$\n\nwhere the matrix term is the mass matrix $M(q)$ and $K(q, \dot{q}) = \frac{1}{2} \dot{q}^T M(q) \dot{q}$.\n\nOn the other hand, to compute the potential energy of the system we have:\n\n$$\nU_1 = m_1 g h_1 = 0\n$$\n\n$$\nU_2 = m_2 g h_2 = m_2 g l \sin{q_2} = m_2 g \lambda\n$$\n\nso the potential energy of the system is:\n\n$$\nU = m_2 g \lambda\n$$\n\n\n```python\nu1 = 0\nu2 = m2*g*l*sin(q2)\nU = u1 + u2\nU\n```\n\nApplying the first-order optimality condition to the Lagrangian $L(t, q(t), \dot{q}(t)) = K(q(t), \dot{q}(t)) - U(q)$ yields the Euler-Lagrange equation, which tells us that:\n\n$$\n\frac{d}{dt} L_{\dot{q}} - L_q = 0\n$$\n\nso we must find the derivatives of the Lagrangian with respect to $q$ and $\dot{q}$, and differentiate the latter with respect to time. 
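Before taking those derivatives, here is a quick numeric cross-check (not part of the original notebook, sample values are arbitrary) that the mass-matrix form $\frac{1}{2}\dot{q}^T M(q) \dot{q}$ reproduces the scalar kinetic-energy expression:

```python
import numpy as np

# Arbitrary sample values, chosen only for this check
m1, m2, l = 1.0, 1.0, 1.0
q2, q1p, q2p = 0.7, 0.3, -1.2

# Scalar expression: K = 1/2 [(m1+m2) q1p^2 + m2 l^2 q2p^2 - 2 m2 l sin(q2) q1p q2p]
K_scalar = 0.5 * ((m1 + m2) * q1p**2 + m2 * l**2 * q2p**2
                  - 2 * m2 * l * np.sin(q2) * q1p * q2p)

# Mass-matrix form: K = 1/2 qdot^T M(q) qdot
M = np.array([[m1 + m2, -m2 * l * np.sin(q2)],
              [-m2 * l * np.sin(q2), m2 * l**2]])
qdot = np.array([q1p, q2p])
K_matrix = 0.5 * qdot @ M @ qdot

print(np.isclose(K_scalar, K_matrix))  # the two forms agree
```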
Let us start with the derivative with respect to $q$:\n\n$$\nL_q = K_q - U_q\n$$\n\nwhere:\n\n$$\n\begin{align}\nK_q &= \frac{\partial}{\partial q} \left\{ \frac{1}{2} \left[ (m_1 + m_2) \dot{q}_1^2 + m_2 l^2 \dot{q}_2^2 - 2 m_2 \lambda \dot{q}_1 \dot{q}_2 \right] \right\} \\\n&= \frac{1}{2}\n\begin{pmatrix}\n\frac{\partial}{\partial q_1} \left\{ \left[ (m_1 + m_2) \dot{q}_1^2 + m_2 l^2 \dot{q}_2^2 - 2 m_2 \lambda \dot{q}_1 \dot{q}_2 \right] \right\} \\\n\frac{\partial}{\partial q_2} \left\{ \left[ (m_1 + m_2) \dot{q}_1^2 + m_2 l^2 \dot{q}_2^2 - 2 m_2 \lambda \dot{q}_1 \dot{q}_2 \right] \right\}\n\end{pmatrix} \\\n&= \frac{1}{2}\n\begin{pmatrix}\n0 \\\n- 2 m_2 \lambda' \dot{q}_1 \dot{q}_2\n\end{pmatrix} = - m_2 \lambda'\n\begin{pmatrix}\n0 & 0 \\\n\dot{q}_2 & 0\n\end{pmatrix}\n\begin{pmatrix}\n\dot{q}_1 \\\n\dot{q}_2\n\end{pmatrix}\n\end{align}\n$$\n\n\n```python\nKq = Matrix([[K.diff(q1)], [K.diff(q2)]])\nKq\n```\n\nand the derivative of the potential energy with respect to $q$:\n\n$$\nU_q = \frac{\partial}{\partial q} \left\{ m_2 g \lambda \right\} =\n\begin{pmatrix}\n\frac{\partial}{\partial q_1} \left\{ m_2 g \lambda \right\} \\\n\frac{\partial}{\partial q_2} \left\{ m_2 g \lambda \right\}\n\end{pmatrix} =\n\begin{pmatrix}\n0 \\\nm_2 g \lambda'\n\end{pmatrix}\n$$\n\n\n```python\nUq = Matrix([[U.diff(q1)], [U.diff(q2)]])\nUq\n```\n\n\n```python\nLq = Kq - Uq\nLq\n```\n\nNow we obtain the derivative with respect to $\dot{q}$:\n\n$$\nL_{\dot{q}} = K_{\dot{q}} - U_{\dot{q}}\n$$\n\nwhere:\n\n$$\nK_{\dot{q}} = \frac{1}{2} \frac{\partial}{\partial \dot{q}} \left\{ \dot{q}^T M(q) \dot{q} \right\} = M(q) \dot{q}\n$$\n\n$$\nU_{\dot{q}} = 0\n$$\n\n\n```python\nKqp = Matrix([[K.diff(q1.diff(t)).simplify()], [K.diff(q2.diff(t)).simplify()]])\nKqp\n```\n\n\n```python\nM*qp\n```\n\n\n```python\nUqp = Matrix([[U.diff(q1.diff(t)).simplify()], 
[U.diff(q2.diff(t)).simplify()]])\nUqp\n```\n\n\n```python\nLqp = Kqp - Uqp\nLqp\n```\n\nDifferentiating these last expressions with respect to time, we obtain:\n\n$$\n\frac{d}{dt} K_{\dot{q}} = \dot{M}(q, \dot{q}) \dot{q} + M(q) \ddot{q}\n$$\n\nwhere\n\n$$\n\begin{align}\n\dot{M}(q, \dot{q}) &= \frac{d}{dt} M(q) = \frac{d}{dt}\n\begin{pmatrix}\nm_1 + m_2 & -m_2 \lambda \\\n-m_2 \lambda & m_2 l^2\n\end{pmatrix} \\\n&=\n\begin{pmatrix}\n0 & -m_2 \lambda' \dot{q}_2 \\\n-m_2 \lambda' \dot{q}_2 & 0\n\end{pmatrix} = -m_2 \lambda'\n\begin{pmatrix}\n0 & \dot{q}_2 \\\n\dot{q}_2 & 0\n\end{pmatrix}\n\end{align}\n$$\n\n\n```python\nM.diff(t)\n```\n\n\n```python\nqpp = qp.diff(t)\nqpp\n```\n\n\n```python\nM*qpp + M.diff(t)*qp\n```\n\n\n```python\nKqp.diff(t) - Uqp.diff(t)\n```\n\nand we now have all the pieces of our Euler-Lagrange equation:\n\n$$\n\begin{align}\n\frac{d}{dt} L_{\dot{q}} - L_q &= 0 \\\nM(q) \ddot{q} + \dot{M}(q, \dot{q}) \dot{q} - K_q + U_q &= 0\n\end{align}\n$$\n\n\n```python\nLqp.diff(t) - Lq\n```\n\nNote that the second and third terms can be combined into a single one:\n\n$$\n\begin{align}\n\dot{M}(q, \dot{q}) \dot{q} - K_q &= -m_2 \lambda'\n\begin{pmatrix}\n0 & \dot{q}_2 \\\n\dot{q}_2 & 0\n\end{pmatrix}\n\begin{pmatrix}\n\dot{q}_1 \\\n\dot{q}_2\n\end{pmatrix} + m_2 \lambda'\n\begin{pmatrix}\n0 & 0 \\\n\dot{q}_2 & 0\n\end{pmatrix}\n\begin{pmatrix}\n\dot{q}_1 \\\n\dot{q}_2\n\end{pmatrix} \\\n&= -m_2 \lambda'\n\begin{pmatrix}\n0 & \dot{q}_2 \\\n0 & 0\n\end{pmatrix}\n\begin{pmatrix}\n\dot{q}_1 \\\n\dot{q}_2\n\end{pmatrix} = C(q, \dot{q})\dot{q}\n\end{align}\n$$\n\nSo we finally have:\n\n$$\nM(q) \ddot{q} + C(q, \dot{q}) \dot{q} + U_q = 0\n$$\n\nwith:\n\n$$\nM(q) =\n\begin{pmatrix}\nm_1 + m_2 & -m_2 \lambda \\\n-m_2 \lambda & m_2 l^2\n\end{pmatrix} \quad\nC(q, \dot{q}) = -m_2 \lambda'\n\begin{pmatrix}\n0 & 
\dot{q}_2 \\\n0 & 0\n\end{pmatrix} \quad\nU_q =\n\begin{pmatrix}\n0 \\\nm_2 g \lambda'\n\end{pmatrix}\n$$\n\n\n```python\nC = Matrix([[0, -m2*l*cos(q2)*q2.diff(t)], [0, 0]])\nM*qpp + C*qp + Uq\n```\n\n## Final remarks on the model\n\nWe know that the total energy of the system does not change, that is:\n\n$$\n\frac{dE}{dt} = 0\n$$\n\nand the total energy of the system is $E = K + U$, which after some algebraic simplification implies that:\n\n$$\n\dot{q}^T \left[ \dot{M}(q, \dot{q}) - 2 C(q, \dot{q}) \right] \dot{q} = 0\n$$\n\nso a relatively simple way to confirm that our calculations are correct is to check that this matrix is skew-symmetric:\n\n\n```python\nM.diff(t) - 2*C\n```\n\n## Simulation\n\nTo simulate this system we can use MATLAB's Simulink, entering this equation as a set of blocks such as matrix operators and integrators. The formula we want to use is:\n\n$$\n\ddot{q} = M^{-1}(q)\left[ -C(q, \dot{q}) \dot{q} - U_q \right]\n$$\n\nThe block diagram looks like this:\n\n\n\nOnce this has been simulated, we can save the simulation data and import it here:\n\n\n```python\n# Import the I/O library and load the simulation data saved from MATLAB\nimport scipy.io\ndatos = scipy.io.loadmat('./MATLAB/carropendulo.mat')\n```\n\n\n```python\n# Move the simulation data into Python variables so we can manipulate it\ndatos_q1 = datos.get(\"q1\")\ndatos_q2 = datos.get(\"q2\")\n```\n\n\n```python\nts, simq1 = zip(*datos_q1)\nts, simq2 = zip(*datos_q2)\nlen(simq1), len(simq2)\n```\n\nOnce the data is imported we can naively plot it, which gives the points through which the two masses pass:\n\n\n```python\nfrom matplotlib.pyplot import plot, style, figure\nstyle.use(\"ggplot\")\n```\n\n\n```python\nfrom numpy import cos, sin, zeros, pi, arange, sqrt, linspace\n```\n\n\n```python\ntau = 
2*pi\n```\n\n\n```python\nf = figure(figsize=(6, 6))\nplot(simq1 + cos(simq2), sin(simq2), \".\")\nplot(simq1, zeros(len(simq1)), \".\")\nax = f.gca()\nax.set_xlim(-0.5, 1.5)\nax.set_ylim(-1.5, 0.5);\nf.savefig('./imagenes/tray.png')\n```\n\n\n\nHowever, it is very hard to attach physical meaning to this plot, since we do not see the trajectory the system follows, so we can do something even better:\n\n\n```python\nfrom matplotlib import animation\nfrom matplotlib.patches import Rectangle, Circle\n```\n\n\n```python\nfig = figure(figsize=(6, 6))\n\nax = fig.add_subplot(111, autoscale_on=False,\n xlim=(-0.5, 1.5), ylim=(-1.5, 0.5))\n\nlinea, = ax.plot([], [], 'o-', lw=1.5, color='gray')\ncarro = Rectangle((10,10), 0.5, 0.25, lw=1.5, fc='white')\n\ndef init():\n linea.set_data([], [])\n carro.set_xy((-0.25, -0.125))\n ax.add_patch(carro)\n return linea, carro\n\ndef animate(i):\n thisy = [0, sin(simq2[i])]\n thisx = [simq1[i], simq1[i] + cos(simq2[i])]\n\n linea.set_data(thisx, thisy)\n carro.set_xy((simq1[i] - 0.25, -0.125))\n return linea, carro\n\nani = animation.FuncAnimation(fig, animate, arange(1, len(simq1)), interval=25,\n blit=True, init_func=init)\n\nani.save('./imagenes/pendulum.gif', writer='imagemagick');\n```\n\n\n\nNow this does look like a pendulum on a cart!\n\nBut we are still not satisfied: we can do this even without MATLAB. All we have to do is augment the dimension of the system to reduce the order of the differential equation.\n\nOur augmented system state will be:\n\n$$\n\begin{pmatrix}\nq_1 \\\nq_2 \\\n\dot{q}_1 \\\n\dot{q}_2\n\end{pmatrix}\n$$\n\nSo now we have to find $f_1$, $f_2$, $f_3$ and $f_4$ such that:\n\n$$\n\begin{align}\n\frac{d}{dt}q_1 &= f_1(q_1, q_2, \dot{q}_1, \dot{q}_2) \\\n\frac{d}{dt}q_2 &= f_2(q_1, q_2, \dot{q}_1, \dot{q}_2) \\\n\frac{d}{dt}\dot{q}_1 &= f_3(q_1, q_2, \dot{q}_1, \dot{q}_2) \\\n\frac{d}{dt}\dot{q}_2 &= 
f_4(q_1, q_2, \dot{q}_1, \dot{q}_2) \\\n\end{align}\n$$\n\nThe first two equations are easy:\n\n$$\n\begin{align}\n\frac{d}{dt}q_1 &= \dot{q}_1 \\\n\frac{d}{dt}q_2 &= \dot{q}_2 \\\n\end{align}\n$$\n\nand the other two are the ones we already had:\n\n$$\n\frac{d}{dt}\n\begin{pmatrix}\n\dot{q}_1 \\\n\dot{q}_2\n\end{pmatrix} =\n\ddot{q} = M^{-1}(q)\left[ -C(q, \dot{q}) \dot{q} - U_q \right]\n$$\n\n\n```python\ndef f(estado, tiempo):\n from numpy import zeros\n from numpy import matrix\n \n m1 = 1\n m2 = 1\n g = 9.81\n l = 1\n \n q1, q2, q1p, q2p = estado\n \n q = matrix([[q1], [q2]])\n qp = matrix([[q1p], [q2p]])\n \n lam = l*sin(q2)\n lamp = l*cos(q2)\n \n M = matrix([[m1 + m2, -m2*lam], [-m2*lam, m2*l**2]])\n C = -m2*lamp*matrix([[0, q2p], [0, 0]])\n U = matrix([[0], [m2*g*lamp]])\n \n qpp = M.I*(-C*qp - U)\n \n dydx = zeros(4)\n \n dydx[0] = q1p\n dydx[1] = q2p\n dydx[2] = qpp[0]\n dydx[3] = qpp[1]\n \n return dydx\n```\n\n\n```python\nfrom scipy.integrate import odeint\n```\n\n\n```python\nts = linspace(0, 2.08, 100)\nestado_inicial = [0, 0, 0, 0]\n```\n\n\n```python\nestados = odeint(f, estado_inicial, ts)\nq1, q2 = estados[:, 0], estados[:, 1]\n```\n\n\n```python\nfig = figure(figsize=(8, 6))\n\nax = fig.add_subplot(111, autoscale_on=False,\n xlim=(-0.8333, 1.8333), ylim=(-1.25, 0.75))\n\nax.axes.get_xaxis().set_visible(False)\nax.axes.get_yaxis().set_visible(False)\n\nax.axes.spines[\"right\"].set_color(\"none\")\nax.axes.spines[\"left\"].set_color(\"none\")\nax.axes.spines[\"top\"].set_color(\"none\")\nax.axes.spines[\"bottom\"].set_color(\"none\")\n\nax.set_axis_bgcolor('#F2F1EC')\n\nlinea, = ax.plot([], [], 'o-', lw=1.5, color='#393F40')\ncarro = Rectangle((10,10), 0.8, 0.35, lw=1.5, fc='#E5895C')\nguia = Rectangle((10, 10), 2.6666, 0.1, lw=1.5, fc='#A4B187')\npendulo = Circle((10, 10), 0.125, lw=1.5, fc='#F3D966')\n\ndef init():\n linea.set_data([], [])\n guia.set_xy((-0.8333, -0.05))\n carro.set_xy((-0.4, -0.175))\n 
pendulo.center = (1, 0)\n ax.add_patch(guia)\n ax.add_patch(carro)\n ax.add_patch(pendulo)\n return linea, carro, pendulo\n\ndef animate(i):\n xs = [q1[i], q1[i] + cos(q2[i])]\n ys = [0, sin(q2[i])]\n\n linea.set_data(xs, ys)\n carro.set_xy((xs[0] - 0.4, ys[0] - 0.175))\n pendulo.center = (xs[1], ys[1])\n return linea, carro, pendulo\n\nani = animation.FuncAnimation(fig, animate, arange(1, len(q1)), interval=25,\n blit=True, init_func=init)\n\nani.save('./imagenes/pendulumpython.gif', writer='imagemagick');\n```\n\n\n\nI hope you had fun with this long explanation and came away knowing one more trick.\n\nIf you want to share this IPython Notebook, use the following address:\n\nhttp://bit.ly/1M2tenc\n\nor the following QR code:\n\n\n\n\n```python\n# Code to generate the QR code :)\nfrom qrcode import make\nimg = make(\"http://bit.ly/1M2tenc\")\nimg.save(\"./codigos/carropendulo.jpg\")\n```\n", "meta": {"hexsha": "91169b2dc4c58bd8601d88782fb25571e937111a", "size": 97662, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "IPythonNotebooks/Control Optimo/Carro Pendulo.ipynb", "max_stars_repo_name": "robblack007/DCA", "max_stars_repo_head_hexsha": "0ea5f8b613e2dabe1127b857c7bfe9be64c52d20", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "IPythonNotebooks/Control Optimo/Carro Pendulo.ipynb", "max_issues_repo_name": "robblack007/DCA", "max_issues_repo_head_hexsha": "0ea5f8b613e2dabe1127b857c7bfe9be64c52d20", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "IPythonNotebooks/Control Optimo/Carro Pendulo.ipynb", "max_forks_repo_name": "robblack007/DCA", "max_forks_repo_head_hexsha": "0ea5f8b613e2dabe1127b857c7bfe9be64c52d20", "max_forks_repo_licenses": ["MIT"], 
"max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-20T12:44:13.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-20T12:44:13.000Z", "avg_line_length": 60.0258143823, "max_line_length": 4366, "alphanum_fraction": 0.7162253487, "converted": true, "num_tokens": 6575, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.37022537869825406, "lm_q2_score": 0.3276682876897044, "lm_q1q2_score": 0.12131111589732928}} {"text": "# [ATM 623: Climate Modeling](../index.ipynb)\n\n[Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany\n\n# Lecture 7: Elementary greenhouse models\n\n### About these notes:\n\nThis document uses the interactive [`Jupyter notebook`](https://jupyter.org) format. The notes can be accessed in several different ways:\n\n- The interactive notebooks are hosted on `github` at https://github.com/brian-rose/ClimateModeling_courseware\n- The latest versions can be viewed as static web pages [rendered on nbviewer](http://nbviewer.ipython.org/github/brian-rose/ClimateModeling_courseware/blob/master/index.ipynb)\n- A complete snapshot of the notes as of May 2017 (end of spring semester) are [available on Brian's website](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2017/Notes/index.html).\n\n[Also here is a legacy version from 2015](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/Notes/index.html).\n\nMany of these notes make use of the `climlab` package, available at https://github.com/brian-rose/climlab\n\n\n```python\n# Ensure compatibility with Python 2 and 3\nfrom __future__ import print_function, division\n```\n\n## Contents\n\n1. [A single layer atmosphere](#section1)\n2. [Introducing the two-layer grey gas model](#section2)\n3. [Tuning the grey gas model to observations](#section3)\n4. [Level of emission](#section4)\n5. [Radiative forcing in the 2-layer grey gas model](#section5)\n6. 
[Radiative equilibrium in the 2-layer grey gas model](#section6)\n7. [Summary](#section7)\n\n____________\n\n\n## 1. A single layer atmosphere\n____________\n\nWe will make our first attempt at quantifying the greenhouse effect in the simplest possible greenhouse model: a single layer of atmosphere that is able to absorb and emit longwave radiation.\n\n\n\n### Assumptions\n\n- Atmosphere is a single layer of air at temperature $T_a$\n- Atmosphere is **completely transparent to shortwave** solar radiation.\n- The **surface** absorbs shortwave radiation $(1-\alpha) Q$\n- Atmosphere is **completely opaque to infrared** radiation\n- Both surface and atmosphere emit radiation as **blackbodies** ($\sigma T_s^4, \sigma T_a^4$)\n- Atmosphere radiates **equally up and down** ($\sigma T_a^4$)\n- There are no other heat transfer mechanisms\n\nWe can now use the concept of energy balance to ask what the temperatures need to be in order to balance the energy budgets at the surface and the atmosphere, i.e. the **radiative equilibrium temperatures**.\n\n\n### Energy balance at the surface\n\n\begin{align}\n\text{energy in} &= \text{energy out} \\\n(1-\alpha) Q + \sigma T_a^4 &= \sigma T_s^4 \\\n\end{align}\n\nThe presence of the atmosphere above means there is an additional source term: downwelling infrared radiation from the atmosphere.\n\nWe call this the **back radiation**.\n\n### Energy balance for the atmosphere\n\n\begin{align}\n\text{energy in} &= \text{energy out} \\\n\sigma T_s^4 &= A\uparrow + A\downarrow = 2 \sigma T_a^4 \\\n\end{align}\n\nwhich means that \n$$ T_s = 2^\frac{1}{4} T_a \approx 1.2 T_a $$\n\nSo we have just determined that, in order to have a pure **radiative equilibrium**, we must have $T_s > T_a$. 
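As a quick numerical aside (not part of the original notes; the insolation and albedo values are the ones quoted later in this lecture), we can check that both energy budgets close simultaneously with this solution:

```python
sigma = 5.67e-8    # Stefan-Boltzmann constant in W/m2/K4
Q = 341.3          # global mean insolation in W/m2
alpha = 101.9 / Q  # observed planetary albedo

# Atmospheric temperature from (1-alpha) Q = sigma Ta^4
Ta = ((1 - alpha) * Q / sigma) ** 0.25
# Surface temperature from Ts = 2**(1/4) Ta
Ts = 2 ** 0.25 * Ta

# Both budgets should balance to within roundoff
surface_balance = (1 - alpha) * Q + sigma * Ta**4 - sigma * Ts**4
atmos_balance = sigma * Ts**4 - 2 * sigma * Ta**4
print(round(Ta, 1), round(Ts, 1))  # Ta is about 255 K, Ts about 303 K
```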
\n\n*The surface must be warmer than the atmosphere.*\n\n### Solve for the radiative equilibrium surface temperature\n\nNow plug this into the surface equation to find\n\n$$ \frac{1}{2} \sigma T_s^4 = (1-\alpha) Q $$\n\nand use the definition of the emission temperature $T_e$ to write\n\n$$ (1-\alpha) Q = \sigma T_e^4 $$\n\n*In fact, in this model, $T_e$ is identical to the atmospheric temperature $T_a$, since all the OLR originates from this layer.*\n\nSolve for the surface temperature:\n$$ T_s = 2^\frac{1}{4} T_e $$\n\nPutting in observed numbers, $T_e = 255$ K gives a surface temperature of \n$$T_s = 303 ~\text{K}$$\n\nThis model is one small step closer to reality: the surface is warmer than the atmosphere, emissions to space are generated in the atmosphere, and the atmosphere is heated from below, helping to keep the surface warm.\n\nBUT our model now overpredicts the surface temperature by about 15\u00baC (or K).\n\nIdeas about why?\n\nBasically we just need to read our **list of assumptions** above and realize that none of them are very good approximations:\n\n- Atmosphere absorbs some solar radiation.\n- Atmosphere is NOT a perfect absorber of longwave radiation\n- Absorption and emission varies strongly with wavelength *(atmosphere does not behave like a blackbody)*.\n- Emissions are not determined by a single temperature $T_a$ but by the detailed *vertical profile* of air temperature.\n- Energy is redistributed in the vertical by a variety of dynamical transport mechanisms (e.g. convection and boundary layer turbulence).\n\n\n\n____________\n\n\n## 2. Introducing the two-layer grey gas model\n____________\n\nLet's generalize the above model just a little bit to build a slightly more realistic model of longwave radiative transfer.\n\nWe will address two shortcomings of our single-layer model:\n1. No vertical structure\n2. 
100% longwave opacity\n\nRelaxing these two assumptions gives us what turns out to be a very useful prototype model for **understanding how the greenhouse effect works**.\n\n### Assumptions\n\n- The atmosphere is **transparent to shortwave radiation** (still)\n- Divide the atmosphere up into **two layers of equal mass** (the dividing line is thus at 500 hPa pressure level)\n- Each layer **absorbs only a fraction $\epsilon$** of whatever longwave radiation is incident upon it.\n- We will call the fraction $\epsilon$ the **absorptivity** of the layer.\n- Assume $\epsilon$ is the same in each layer\n\nThis is called the **grey gas** model, where grey here means the emission and absorption have no spectral dependence.\n\nWe can think of this model informally as a "leaky greenhouse".\n\nNote that the assumption that $\epsilon$ is the same in each layer is appropriate if the absorption is actually carried out by a gas that is **well-mixed** in the atmosphere.\n\nOut of our two most important absorbers:\n\n- CO$_2$ is well mixed\n- H$_2$O is not (mostly confined to lower troposphere due to strong temperature dependence of the saturation vapor pressure).\n\nBut we will ignore this aspect of reality for now.\n\nIn order to build our model, we need to introduce one additional piece of physics known as **Kirchhoff's Law**:\n\n$$ \text{absorptivity} = \text{emissivity} $$\n\nSo if a layer of atmosphere at temperature $T$ absorbs a fraction $\epsilon$ of incident longwave radiation, it must emit\n\n$$ \epsilon ~\sigma ~T^4 $$\n\nboth up and down.\n\n### A sketch of the radiative fluxes in the 2-layer atmosphere\n\n\n\n- Surface temperature is $T_s$\n- Atm. 
temperatures are $T_0, T_1$ where $T_0$ is closest to the surface.\n- absorptivity of atm layers is $\\epsilon$\n- Surface emission is $\\sigma T_s^4$\n- Atm emission is $\\epsilon \\sigma T_0^4, \\epsilon \\sigma T_1^4$ (up and down)\n- Absorptivity = emissivity for atmospheric layers\n- a fraction $(1-\\epsilon)$ of the longwave beam is **transmitted** through each layer\n\n### A fun aside: symbolic math with the `sympy` package\n\nThis two-layer grey gas model is simple enough that we can work out all the details algebraically. There are three temperatures to keep track of $(T_s, T_0, T_1)$, so we will have 3x3 matrix equations.\n\nWe all know how to work these things out with pencil and paper. But it can be tedious and error-prone. \n\nSymbolic math software lets us use the computer to automate a lot of tedious algebra.\n\nThe [sympy](http://www.sympy.org/en/index.html) package is a powerful open-source symbolic math library that is well-integrated into the scientific Python ecosystem. 
\n\n\n```python\nimport sympy\n# Allow sympy to produce nice looking equations as output\nsympy.init_printing()\n# Define some symbols for mathematical quantities\n# Assume all quantities are positive (which will help simplify some expressions)\nepsilon, T_e, T_s, T_0, T_1, sigma = \\\n sympy.symbols('epsilon, T_e, T_s, T_0, T_1, sigma', positive=True)\n# So far we have just defined some symbols, e.g.\nT_s\n```\n\n\n```python\n# We have hard-coded the assumption that the temperature is positive\nsympy.ask(T_s>0)\n```\n\n\n\n\n True\n\n\n\n### Longwave emissions\n\nLet's denote the emissions from each layer as\n\\begin{align}\nE_s &= \\sigma T_s^4 \\\\\nE_0 &= \\epsilon \\sigma T_0^4 \\\\\nE_1 &= \\epsilon \\sigma T_1^4 \n\\end{align}\n\nrecognizing that $E_0$ and $E_1$ contribute to **both** the upwelling and downwelling beams.\n\n\n```python\n# Define these operations as sympy symbols \n# And display as a column vector:\nE_s = sigma*T_s**4\nE_0 = epsilon*sigma*T_0**4\nE_1 = epsilon*sigma*T_1**4\nE = sympy.Matrix([E_s, E_0, E_1])\nE\n```\n\n### Shortwave radiation\nSince we have assumed the atmosphere is transparent to shortwave, the incident beam $Q$ passes unchanged from the top to the surface, where a fraction $\\alpha$ is reflected upward out to space.\n\n\n```python\n# Define some new symbols for shortwave radiation\nQ, alpha = sympy.symbols('Q, alpha', positive=True)\n# Create a dictionary to hold our numerical values\ntuned = {}\ntuned[Q] = 341.3 # global mean insolation in W/m2\ntuned[alpha] = 101.9/Q.subs(tuned) # observed planetary albedo\ntuned[sigma] = 5.67E-8 # Stefan-Boltzmann constant in W/m2/K4\ntuned\n# Numerical value for emission temperature\n#T_e.subs(tuned)\n```\n\n### Upwelling beam\n\nLet $U$ be the upwelling flux of longwave radiation. 
\n\nThe upward flux from the surface to layer 0 is\n$$ U_0 = E_s $$\n(just the emission from the surface).\n\n\n```python\nU_0 = E_s\nU_0\n```\n\nFollowing this beam upward, we can write the upward flux from layer 0 to layer 1 as the sum of the transmitted component that originated below layer 0 and the new emissions from layer 0:\n\n$$ U_1 = (1-\epsilon) U_0 + E_0 $$\n\n\n```python\nU_1 = (1-epsilon)*U_0 + E_0\nU_1\n```\n\nContinuing to follow the same beam, the upwelling flux above layer 1 is\n$$ U_2 = (1-\epsilon) U_1 + E_1 $$\n\n\n```python\nU_2 = (1-epsilon) * U_1 + E_1\n```\n\nSince there is no more atmosphere above layer 1, this upwelling flux is our Outgoing Longwave Radiation for this model:\n\n$$ OLR = U_2 $$\n\n\n```python\nU_2\n```\n\nThe three terms in the above expression represent the **contributions to the total OLR that originate from each of the three levels**. \n\nLet's code this up explicitly for future reference:\n\n\n```python\n# Define the contributions to OLR originating from each level\nOLR_s = (1-epsilon)**2 *sigma*T_s**4\nOLR_0 = epsilon*(1-epsilon)*sigma*T_0**4\nOLR_1 = epsilon*sigma*T_1**4\n\nOLR = OLR_s + OLR_0 + OLR_1\n\nprint( 'The expression for OLR is')\nOLR\n```\n\n### Downwelling beam\n\nLet $D$ be the downwelling longwave beam. Since there is no longwave radiation coming in from space, we begin with \n\n\n```python\nfromspace = 0\nD_2 = fromspace\n```\n\nBetween layer 1 and layer 0 the beam contains emissions from layer 1:\n\n$$ D_1 = (1-\epsilon)D_2 + E_1 = E_1 $$\n\n\n```python\nD_1 = (1-epsilon)*D_2 + E_1\nD_1\n```\n\nFinally between layer 0 and the surface the beam contains a transmitted component and the emissions from layer 0:\n\n$$ D_0 = (1-\epsilon) D_1 + E_0 = \epsilon(1-\epsilon) \sigma T_1^4 + \epsilon \sigma T_0^4$$\n\n\n```python\nD_0 = (1-epsilon)*D_1 + E_0\nD_0\n```\n\nThis $D_0$ is what we call the **back radiation**, i.e. 
the longwave radiation from the atmosphere to the surface.\n\n____________\n\n\n## 3. Tuning the grey gas model to observations\n____________\n\nIn building our new model we have introduced exactly one parameter, the absorptivity $\\epsilon$. We need to choose a value for $\\epsilon$.\n\nWe will tune our model so that it **reproduces the observed global mean OLR** given **observed global mean temperatures**.\n\nTo get appropriate temperatures for $T_s, T_0, T_1$, let's revisit the [global, annual mean lapse rate plot from NCEP Reanalysis data](Lecture06 -- Radiation.ipynb) from the previous lecture.\n\n### Temperatures\n\nFirst, we set \n$$T_s = 288 \\text{ K} $$\n\nFrom the lapse rate plot, an average temperature for the layer between 1000 and 500 hPa is \n\n$$ T_0 = 275 \\text{ K}$$\n\nDefining an average temperature for the layer between 500 and 0 hPa is more ambiguous because of the lapse rate reversal at the tropopause. We will choose\n\n$$ T_1 = 230 \\text{ K}$$\n\nFrom the graph, this is approximately the observed global mean temperature at 275 hPa or about 10 km.\n\n\n```python\n# add to our dictionary of values:\ntuned[T_s] = 288.\ntuned[T_0] = 275.\ntuned[T_1] = 230.\ntuned\n```\n\n### OLR\n\nFrom the [observed global energy budget](Lecture01 -- Planetary energy budget.ipynb) we set \n\n$$ OLR = 238.5 \\text{ W m}^{-2} $$\n\n### Solving for $\\epsilon$\n\nWe wrote down the expression for OLR as a function of temperatures and absorptivity in our model above. \n\nWe just need to equate this to the observed value and solve a **quadratic equation** for $\\epsilon$.\n\nThis is where the real power of the symbolic math toolkit comes in. 
\n\nSubstitute in the numerical values we are interested in:\n\n\n```python\n# the .subs() method for a sympy symbol means\n# substitute values in the expression using the supplied dictionary\n# Here we use observed values of Ts, T0, T1 \nOLR2 = OLR.subs(tuned)\nOLR2\n```\n\nWe have a quadratic equation for $\\epsilon$.\n\nNow use the `sympy.solve` function to solve the quadratic:\n\n\n```python\n# The sympy.solve method takes an expression equal to zero\n# So in this case we subtract the tuned value of OLR from our expression\neps_solution = sympy.solve(OLR2 - 238.5, epsilon)\neps_solution\n```\n\nThere are two roots, but the second one is unphysical since we must have $0 < \\epsilon < 1$.\n\nJust for fun, here is a simple example of *filtering a list* using powerful Python *list comprehension* syntax:\n\n\n```python\n# Give me only the roots that are between zero and 1!\nlist_result = [eps for eps in eps_solution if 0 < eps < 1]\nlist_result\n```\n\n____________\n\n\n## 4. Level of emission\n____________\n\nEven in this very simple greenhouse model, there is **no single level** at which the OLR is generated.\n\nThe three terms in our formula for OLR tell us the contributions from each level.\n\n\n```python\nOLRterms = sympy.Matrix([OLR_s, OLR_0, OLR_1])\nOLRterms\n```\n\nNow evaluate these expressions for our tuned temperature and absorptivity:\n\n\n```python\nOLRtuned = OLRterms.subs(tuned)\nOLRtuned\n```\n\nSo we are getting about 67 W m$^{-2}$ from the surface, 79 W m$^{-2}$ from layer 0, and 93 W m$^{-2}$ from the top layer.\n\nIn terms of fractional contributions to the total OLR, we have (limiting the output to two decimal places):\n\n\n```python\nsympy.N(OLRtuned / 239., 2)\n```\n\nNotice that the largest single contribution is coming from the top layer. This is in spite of the fact that the emissions from this layer are weak, because it is so cold.\n\nComparing to observations, the actual contribution to OLR from the surface is about 22 W m$^{-2}$ (or about 9% of the total), not 67 W m$^{-2}$. 
So we certainly don't have all the details worked out yet!\n\nAs we will see later, to really understand what sets that observed 22 W m$^{-2}$, we will need to start thinking about the spectral dependence of the longwave absorptivity.\n\n____________\n\n\n## 5. Radiative forcing in the 2-layer grey gas model\n____________\n\nAdding some extra greenhouse absorbers will mean that a greater fraction of incident longwave radiation is absorbed in each layer.\n\nThus **$\\epsilon$ must increase** as we add greenhouse gases.\n\nSuppose we have $\\epsilon$ initially, and the absorptivity increases to $\\epsilon_2 = \\epsilon + \\delta_\\epsilon$.\n\nSuppose further that this increase happens **abruptly** so that there is no time for the temperatures to respond to this change. **We hold the temperatures fixed** in the column and ask how the radiative fluxes change.\n\n**Do you expect the OLR to increase or decrease?**\n\nLet's use our two-layer leaky greenhouse model to investigate the answer.\n\nThe components of the OLR before the perturbation are\n\n\n```python\nOLRterms\n```\n\nAfter the perturbation we have\n\n\n```python\ndelta_epsilon = sympy.symbols('delta_epsilon')\nOLRterms_pert = OLRterms.subs(epsilon, epsilon+delta_epsilon)\nOLRterms_pert\n```\n\nLet's take the difference\n\n\n```python\ndeltaOLR = OLRterms_pert - OLRterms\ndeltaOLR\n```\n\nTo make things simpler, we will neglect the terms in $\\delta_\\epsilon^2$. This is perfectly reasonable because we are dealing with **small perturbations** where $\\delta_\\epsilon \\ll \\epsilon$.\n\nTelling `sympy` to set the quadratic terms to zero gives us\n\n\n```python\ndeltaOLR_linear = sympy.expand(deltaOLR).subs(delta_epsilon**2, 0)\ndeltaOLR_linear\n```\n\nRecall that the three terms are the contributions to the OLR from the three different levels. 
In this case, they represent the **changes** in those contributions after adding more absorbers.\n\nNow let's divide through by $\\delta_\\epsilon$ to get the normalized change in OLR per unit change in absorptivity:\n\n\n```python\ndeltaOLR_per_deltaepsilon = \\\n sympy.simplify(deltaOLR_linear / delta_epsilon)\ndeltaOLR_per_deltaepsilon\n```\n\nNow look at the **sign** of each term. Recall that $0 < \\epsilon < 1$. **Which terms in the OLR go up and which go down?**\n\n**THIS IS VERY IMPORTANT, SO STOP AND THINK ABOUT IT.**\n\nThe contribution from the **surface** must **decrease**, while the contribution from the **top layer** must **increase**.\n\n**When we add absorbers, the average level of emission goes up!**\n\n### \"Radiative forcing\" is the change in radiative flux at TOA after adding absorbers\n\nIn this model, only the longwave flux can change, so we define the radiative forcing as\n\n$$ R = - \\delta OLR $$\n\n(with the minus sign so that $R$ is positive when the climate system is gaining extra energy).\n\nWe just worked out that whenever we add some extra absorbers, the emissions to space (on average) will originate from higher levels in the atmosphere. \n\nWhat does this mean for OLR? Will it increase or decrease?\n\nTo get the answer, we just have to sum up the three contributions we wrote above:\n\n\n```python\nR = -sum(deltaOLR_per_deltaepsilon)\nR\n```\n\nIs this a positive or negative number? The key point is this:\n\n**It depends on the temperatures, i.e. on the lapse rate.**\n\n### Greenhouse effect for an isothermal atmosphere\n\nStop and think about this question:\n\nIf the **surface and atmosphere are all at the same temperature**, does the OLR go up or down when $\\epsilon$ increases (i.e. 
we add more absorbers)?\n\nUnderstanding this question is key to understanding how the greenhouse effect works.\n\n#### Let's solve the isothermal case\n\nWe will just set $T_s = T_0 = T_1$ in the above expression for the radiative forcing.\n\n\n```python\nR.subs([(T_0, T_s), (T_1, T_s)])\n```\n\nwhich then simplifies to\n\n\n```python\nsympy.simplify(R.subs([(T_0, T_s), (T_1, T_s)]))\n```\n\n#### The answer is zero\n\nFor an isothermal atmosphere, there is **no change** in OLR when we add extra greenhouse absorbers. Hence, no radiative forcing and no greenhouse effect.\n\nWhy?\n\nThe level of emission still must go up. But since the temperature at the upper level is the **same** as everywhere else, the emissions are exactly the same.\n\n### The radiative forcing (change in OLR) depends on the lapse rate!\n\nFor a more realistic example of radiative forcing due to an increase in greenhouse absorbers, we can substitute in our tuned values for temperature and $\\epsilon$. \n\nWe'll express the answer in W m$^{-2}$ for a 1% increase in $\\epsilon$.\n\nThe three components of the OLR change are\n\n\n```python\ndeltaOLR_per_deltaepsilon.subs(tuned) * 0.01\n```\n\nAnd the net radiative forcing is\n\n\n```python\nR.subs(tuned) * 0.01\n```\n\nSo in our example, **the OLR decreases by 2.2 W m$^{-2}$**, or equivalently, the radiative forcing is +2.2 W m$^{-2}$.\n\nWhat we have just calculated is this:\n\n*Given the observed lapse rates, a small increase in absorbers will cause a small decrease in OLR.*\n\nThe greenhouse effect thus gets stronger, and energy will begin to accumulate in the system -- which will eventually cause temperatures to increase as the system adjusts to a new equilibrium.\n\n____________\n\n\n## 6. Radiative equilibrium in the 2-layer grey gas model\n____________\n\nIn the previous section we:\n\n- made no assumptions about the processes that actually set the temperatures. 
\n- used the model to calculate radiative fluxes, **given observed temperatures**. \n- stressed the importance of knowing the lapse rates in order to know how an increase in emission level would affect the OLR, and thus determine the radiative forcing.\n\nA key question in climate dynamics is therefore this:\n\n**What sets the lapse rate?**\n\nIt turns out that lots of different physical processes contribute to setting the lapse rate. \n\nUnderstanding how these processes act together and how they change as the climate changes is one of the key reasons why we need more complex climate models.\n\nFor now, we will use our prototype greenhouse model to do the most basic lapse rate calculation: the **radiative equilibrium temperature**.\n\nWe assume that\n\n- the only exchange of energy between layers is longwave radiation\n- equilibrium is achieved when the **net radiative flux convergence** in each layer is zero.\n\n### Compute the radiative flux convergence\n\nFirst, the **net upwelling flux** is just the difference between flux up and flux down:\n\n\n```python\n# Upwelling and downwelling beams as matrices\nU = sympy.Matrix([U_0, U_1, U_2])\nD = sympy.Matrix([D_0, D_1, D_2])\n# Net flux, positive up\nF = U-D\nF\n```\n\n#### Net absorption is the flux convergence in each layer\n\n(difference between what's coming in the bottom and what's going out the top of each layer)\n\n\n```python\n# define a vector of absorbed radiation -- same size as emissions\nA = E.copy()\n\n# absorbed radiation at surface\nA[0] = F[0]\n# Compute the convergence\nfor n in range(2):\n A[n+1] = -(F[n+1]-F[n])\n\nA\n```\n\n#### Radiative equilibrium means net absorption is ZERO in the atmosphere\n\nThe only other heat source is the **shortwave heating** at the **surface**.\n\nIn matrix form, here is the system of equations to be solved:\n\n\n```python\nradeq = sympy.Equality(A, sympy.Matrix([(1-alpha)*Q, 0, 0]))\nradeq\n```\n\nJust as we did for the 1-layer model, it is helpful to 
rewrite this system using the definition of the **emission temperature** $T_e$\n\n$$ (1-\\alpha) Q = \\sigma T_e^4 $$\n\n\n```python\nradeq2 = radeq.subs([((1-alpha)*Q, sigma*T_e**4)])\nradeq2\n```\n\nIn this form we can see that we actually have a **linear system** of equations for a set of variables $T_s^4, T_0^4, T_1^4$.\n\nWe can solve this matrix problem to get these as functions of $T_e^4$.\n\n\n```python\n# Solve for radiative equilibrium \nfourthpower = sympy.solve(radeq2, [T_s**4, T_1**4, T_0**4])\nfourthpower\n```\n\nThis produces a dictionary of solutions for the fourth power of the temperatures!\n\nA little manipulation gets us the solutions for temperatures that we want:\n\n\n```python\n# need the symbolic fourth root operation\nfrom sympy.simplify.simplify import nthroot\n\nfourthpower_list = [fourthpower[key] for key in [T_s**4, T_0**4, T_1**4]]\nsolution = sympy.Matrix([nthroot(item,4) for item in fourthpower_list])\n# Display result as matrix equation!\nT = sympy.Matrix([T_s, T_0, T_1])\nsympy.Equality(T, solution)\n```\n\nIn more familiar notation, the radiative equilibrium solution is thus\n\n\\begin{align} \nT_s &= T_e \\left( \\frac{2+\\epsilon}{2-\\epsilon} \\right)^{1/4} \\\\\nT_0 &= T_e \\left( \\frac{1+\\epsilon}{2-\\epsilon} \\right)^{1/4} \\\\\nT_1 &= T_e \\left( \\frac{1}{2 - \\epsilon} \\right)^{1/4}\n\\end{align}\n\nPlugging in the tuned value $\\epsilon = 0.586$ gives\n\n\n```python\nTsolution = solution.subs(tuned)\n# Display result as matrix equation!\nsympy.Equality(T, Tsolution)\n```\n\nNow we just need to know the Earth's emission temperature $T_e$!\n\n(Which we already know is about 255 K)\n\n\n```python\n# Here's how to calculate T_e from the observed values\nsympy.solve(((1-alpha)*Q - sigma*T_e**4).subs(tuned), T_e)\n```\n\n\n```python\n# Need to unpack the list\nTe_value = sympy.solve(((1-alpha)*Q - sigma*T_e**4).subs(tuned), T_e)[0]\nTe_value\n```\n\n#### Now we finally get our solution for radiative 
equilibrium\n\n\n```python\n# Output 4 significant digits\nTrad = sympy.N(Tsolution.subs([(T_e, Te_value)]), 4)\nsympy.Equality(T, Trad)\n```\n\nCompare these to the values we derived from the **observed lapse rates**:\n\n\n```python\nsympy.Equality(T, T.subs(tuned))\n```\n\nThe **radiative equilibrium** solution is substantially **warmer at the surface** and **colder in the lower troposphere** than reality.\n\nThis is a very general feature of radiative equilibrium, and we will see it again very soon in this course.\n\n____________\n\n\n## 7. Summary\n____________\n\n## Key physical lessons\n\n- Putting a **layer of longwave absorbers** above the surface keeps the **surface substantially warmer**, because of the **back radiation** from the atmosphere (greenhouse effect).\n- The **grey gas** model assumes that each layer absorbs and emits a fraction $\\epsilon$ of its blackbody value, independent of wavelength.\n\n- With **incomplete absorption** ($\\epsilon < 1$), there are contributions to the OLR from every level and the surface (there is no single **level of emission**).\n- Adding more absorbers means that **contributions to the OLR** from **upper levels** go **up**, while contributions from the surface go **down**.\n- This upward shift in the weighting of different levels is what we mean when we say the **level of emission goes up**.\n\n- The **radiative forcing** caused by an increase in absorbers **depends on the lapse rate**.\n- For an **isothermal atmosphere** the radiative forcing is zero and there is **no greenhouse effect**.\n- The radiative forcing is positive for our atmosphere **because tropospheric temperatures tend to decrease with height**.\n- Pure **radiative equilibrium** produces a **warm surface** and **cold lower troposphere**.\n- This is unrealistic, and suggests that crucial heat transfer mechanisms are missing from our model.\n\n### And on the Python side...\n\nDid we need `sympy` to work all this out? No, of course not. 
We could have solved the 3x3 matrix problems by hand. But computer algebra can be very useful and save you a lot of time and error, so it's good to invest some effort into learning how to use it. \n\nHopefully these notes provide a useful starting point.\n\n### A follow-up assignment\n\nYou are now ready to tackle [Assignment 5](../Assignments/Assignment05 -- Radiative forcing in a grey radiation atmosphere.ipynb), where you are asked to extend this grey-gas analysis to many layers. \n\nFor more than a few layers, the analytical approach we used here is no longer very useful. You will code up a numerical solution to calculate OLR given temperatures and absorptivity, and look at how the lapse rate determines radiative forcing for a given increase in absorptivity.\n\n
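As a head start on that numerical approach, here is a minimal sketch of the upwelling-beam recursion generalized to $N$ layers. The function name and interface are choices of this sketch, not part of the assignment:

```python
import numpy as np

def grey_gas_olr(Ts, Tatm, eps, sigma=5.67e-8):
    """OLR for an N-layer grey gas column.

    Implements the same recursion used symbolically above: start from
    the surface emission sigma*Ts**4, then at each layer transmit a
    fraction (1 - eps) of the beam and add the layer's own emission
    eps*sigma*T**4. Tatm is ordered from the lowest layer upward.
    """
    U = sigma * Ts**4
    for T in np.asarray(Tatm, dtype=float):
        U = (1.0 - eps) * U + eps * sigma * T**4
    return U

# Reproduces the tuned 2-layer case from this lecture:
print(round(grey_gas_olr(288.0, [275.0, 230.0], 0.586), 1))  # 238.5 W m-2
```

Note that for an isothermal column the recursion leaves the beam unchanged, which is the "no greenhouse effect for an isothermal atmosphere" result in numerical form.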
\n[Back to ATM 623 notebook home](../index.ipynb)\n
\n\n____________\n## Version information\n____________\n\n\n\n```python\n%load_ext version_information\n%version_information sympy\n```\n\n Loading extensions from ~/.ipython/extensions is deprecated. We recommend managing extensions like any other Python packages, in site-packages.\n\n\n\n\n\n
| Software | Version |
|----------|---------|
| Python | 3.7.3 64bit [Clang 4.0.1 (tags/RELEASE_401/final)] |
| IPython | 7.6.0 |
| OS | Darwin 17.7.0 x86_64 i386 64bit |
| sympy | 1.4 |

Wed Jul 03 14:50:34 2019 EDT
\n\n\n\n____________\n\n## Credits\n\nThe author of this notebook is [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany.\n\nIt was developed in support of [ATM 623: Climate Modeling](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/), a graduate-level course in the [Department of Atmospheric and Environmental Sciences](http://www.albany.edu/atmos/index.php).\n\nDevelopment of these notes and the [climlab software](https://github.com/brian-rose/climlab) is partially supported by the National Science Foundation under award AGS-1455071 to Brian Rose. Any opinions, findings, conclusions or recommendations expressed here are mine and do not necessarily reflect the views of the National Science Foundation.\n____________\n\n\n```python\n\n```\n", "meta": {"hexsha": "a554f97ba17e780fd50c8ad751fbe20857fba508", "size": 260008, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lectures/Lecture07 -- Elementary greenhouse models.ipynb", "max_stars_repo_name": "adityarn/ClimateModeling_courseware", "max_stars_repo_head_hexsha": "f83e749c3428190204cf44ab414c8a5fdbba9e80", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lectures/Lecture07 -- Elementary greenhouse models.ipynb", "max_issues_repo_name": "adityarn/ClimateModeling_courseware", "max_issues_repo_head_hexsha": "f83e749c3428190204cf44ab414c8a5fdbba9e80", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lectures/Lecture07 -- Elementary greenhouse models.ipynb", "max_forks_repo_name": "adityarn/ClimateModeling_courseware", "max_forks_repo_head_hexsha": "f83e749c3428190204cf44ab414c8a5fdbba9e80", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, 
"max_forks_repo_forks_event_min_datetime": "2021-07-28T16:16:05.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-28T16:16:05.000Z", "avg_line_length": 96.6572490706, "max_line_length": 12384, "alphanum_fraction": 0.829616781, "converted": true, "num_tokens": 7522, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.37387583672470853, "lm_q2_score": 0.32423539898095244, "lm_q1q2_score": 0.1212237810897733}} {"text": "# Introduction\n\nWhat are we going to do?\n\n\n Move Data into R -> Transform it -> Visualise it -> Model it. \n\n\n## Before all that; what is **Jupyter** and how to use it?\n\n\nJupyter documents are called \"notebooks\" and can be seen as many things at once. For example, notebooks allow:\n\n* coding in a standard web browser\n* direct sharing with results\n* using text with styles (such as italics and titles) to be explicitly marked using a wikitext language\n* easy creation and display of beautiful equations\n* easy creation and display of interactive visualizations\n\n\n\\begin{equation}\n e^{i\\pi} +1=0 \n\\end{equation}\n\n\n```R\nprint(\"nithin\")\n```\n\n [1] \"nithin\"\n\n\n\n### The Green mode \nThis is the edit mode
\nTo enter the green mode, select a cell and press **Enter**; to exit to the blue mode, press **Esc**
\n
\n\n### The Blue mode\nThis is the command mode
\nWe can do all sorts of edits at the cell level in this mode.
\nInteresting functionalities can be found in help **HH**\n \n **Note**\n - Code mode can be selected by pressing 'y'; this is the default one \n - Markdown mode can be selected by pressing 'm'\n \n For more on Jupyter notebooks, please refer to [1]\n\n\n\n\n```R\nprint(NIthin)\n```\n\nExercise 1\n\n- Write your name and address in a markdown cell\n- In the next code cell, evaluate the sum of the digits of your age \n\n\"nithin\"\n\n\n```R\n2 + 4\n```\n\n\n6\n\n\n# R Programming \n\nR is a language and environment for statistical computing and graphics. It is an integrated suite of software facilities for data manipulation, calculation and graphical display. It includes\n\n - an effective data handling and storage facility,\n - a suite of operators for calculations on arrays, in particular matrices,\n - a large, coherent, integrated collection of intermediate tools for data analysis,\n - graphical facilities for data analysis and display either on-screen or on hardcopy, and\n - a well-developed, simple and effective programming language which includes conditionals, loops, user-defined recursive functions and input and output facilities.\n\n\n## The Essentials \n\n - Statements and Variables \n - Data-Types and a few in-built functions\n - Conditions and Loops \n - Functions\n - DataFrame\n\n\n```R\nprint(\"hello World!\")\n```\n\n [1] \"hello World!\"\n\n\n\n```R\ninstall.packages(\"tidyverse\")\n```\n\n Updating HTML index of packages in '.Library'\n Making 'packages.html' ... 
done\n\n\n\n```R\ninstall.packages(\"nycflights13\")\n```\n\n### What can we do with R \n\n\n\n```R\nlibrary(tidyverse)\n```\n\n \u2500\u2500 Attaching packages \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 tidyverse 1.3.0 \u2500\u2500\n \u2714 ggplot2 3.2.1 \u2714 purrr 0.3.3\n \u2714 tibble 2.1.3 \u2714 dplyr 0.8.4\n \u2714 tidyr 1.0.2 \u2714 stringr 1.4.0\n \u2714 readr 1.3.1 \u2714 forcats 0.4.0\n \u2500\u2500 Conflicts \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 tidyverse_conflicts() \u2500\u2500\n \u2716 dplyr::filter() masks stats::filter()\n \u2716 purrr::flatten() masks jsonlite::flatten()\n \u2716 dplyr::lag() masks stats::lag()\n\n\n\n```R\n\nrmax <- 30\nout.df <- matrix(NA, ncol = 2, nrow = 0)\na <- 0.01\nr <- seq(0, rmax, by = 0.01)\nn <- 100\n\nfor (z in 1:length(r)) {\n\n xl <- vector()\n xl[1] <- 10\n for (i in 2:n) {\n\n xl[i] <- xl[i - 1] * r[z] * exp(-a * xl[i - 1])\n\n }\n uval <- unique(xl[40:n])\n ### Here is where we can save the output for ggplot\n out.df <- rbind(out.df, cbind(rep(r[z], length(uval)), uval))\n}\nout.df <- as.data.frame(out.df)\ncolnames(out.df) <- c(\"r\", \"N\")\nggplot(out.df, aes(x = r, y = N)) + geom_point(size = 0.5)\n\n```\n\n\n```R\n\n\n# ButteRflies design with Lorenz attractor\n\nlibrary(deSolve)\nlibrary(scatterplot3d)\n\n\n# Parameters for the solver \nparam <- c(alpha = 10,\n beta = 8/3,\n c = 26.48)\n\n# Initial state \nyini <- c(x = 0.01, y = 0.0, z = 0.0)\n\n# Lorenz function\nlorenz <- function(Time, State, Param) {\n with(as.list(c(State, Param)), {\n xdot <- alpha * (y - x)\n ydot <- x * (c - z) - y\n zdot <- x*y - 
beta*z\n return(list(c(xdot, ydot, zdot)))\n })\n}\n\n# Run function\nrunIt <- function(times) {\n out <- as.data.frame(ode(func = lorenz, y = yini, parms = param, times = times))\n \n scatterplot3d(x=out[,2],\n y=out[,3],\n z=out[,4],\n color=\"red\",\n type=\"l\",\n box=FALSE,\n highlight.3d=F,\n grid=F,\n axis=F,\n xlab=NULL,\n ylab=NULL,\n zlab=NULL,\n main=NULL)\n \n}\n\n# Run All function combining functions\nrunAll <- function() {\n runIt(seq(0, 100, by=0.01))\n}\n\n# Command to produce graphical output\nrunAll()\n```\n\n# Reference\n\n[1] https://jupyter.brynmawr.edu/services/public/dblank/Jupyter%20Notebook%20Users%20Manual.ipynb\n\n\n```R\n\n```\n", "meta": {"hexsha": "5df12afee9a799e298112138314e258a9538af5b", "size": 210946, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "01-Introduction.ipynb", "max_stars_repo_name": "nithinivi/DataScienceInR", "max_stars_repo_head_hexsha": "06a007cb60da11e191e04fd5277970d3c8ef21c0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "01-Introduction.ipynb", "max_issues_repo_name": "nithinivi/DataScienceInR", "max_issues_repo_head_hexsha": "06a007cb60da11e191e04fd5277970d3c8ef21c0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "01-Introduction.ipynb", "max_forks_repo_name": "nithinivi/DataScienceInR", "max_forks_repo_head_hexsha": "06a007cb60da11e191e04fd5277970d3c8ef21c0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 498.6903073286, "max_line_length": 117228, "alphanum_fraction": 0.9324898315, "converted": true, "num_tokens": 1392, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.4921881357207956, "lm_q2_score": 0.24508501313237172, "lm_q1q2_score": 0.12062793570672875}} {"text": "```python\n# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)\n\n# Toggle cell visibility\n\nfrom IPython.display import HTML\ntag = HTML('''\nToggle cell visibility here.''')\ndisplay(tag)\n\n# Hide the code completely\n\n# from IPython.display import HTML\n# tag = HTML('''''')\n# display(tag)\n```\n\n\n\nToggle cell visibility here.\n\n\n\n```python\n# Examples: \n# Factored form: 1/(x**2*(x**2 + 1))\n# Expanded form: 1/(x**4+x**2)\n\nimport sympy as sym\nfrom IPython.display import Latex, display, Markdown, Javascript, clear_output\nfrom ipywidgets import widgets, Layout # Interactivity module\n```\n\n## Partial fraction decomposition - Input by function\n\nWhen Laplace transform is used for system analysis, the Laplace transform of the output signal is obtained as a product of the transfer function and the Laplace transform of the input signal. The result of this multiplication can usually be quite difficult to comprehend. In order to execute the inverse Laplace transform we first perform the partial fraction decomposition. This example demonstrates this procedure.\n\n---\n\n### How to use this notebook?\nToggle between the option *Input by function* or *Input by polynomial coefficients*.\n\n1. *Input by function*:\n * Example: To insert the function $\\frac{1}{x^2(x^2 + 1)}$ (factored form) type 1/(x\\*\\*2\\*(x\\*\\*2 + 1)); to insert the same function in the expanded form ($\\frac{1}{x^4+x^2}$) type 1/(x\\*\\*4+x\\*\\*2).\n\n2. 
*Input by polynomial coefficients*:\n * Use the sliders to select the order of the numerator and denominator of a rational function of interest.\n * Insert the coefficients for both numerator and denominator in the dedicated textboxes and click *Confirm*.\n\n\n```python\n## System selector buttons\nstyle = {'description_width': 'initial'}\ntypeSelect = widgets.ToggleButtons(\n options=[('Input by function', 0), ('Input by polynomial coefficients', 1),],\n description='Select: ',style={'button_width':'230px'})\n\nbtnReset=widgets.Button(description=\"Reset\")\n\n# function\ntextbox=widgets.Text(description=('Insert the function:'),style=style)\nbtnConfirmFunc=widgets.Button(description=\"Confirm\") # ex btnConfirm\n\n# poly\nbtnConfirmPoly=widgets.Button(description=\"Confirm\") # ex btn\n\ndisplay(typeSelect)\n\ndef on_button_clickedReset(ev):\n display(Javascript(\"Jupyter.notebook.execute_cells_below()\"))\n\ndef on_button_clickedFunc(ev):\n eq = sym.sympify(textbox.value)\n\n if eq==sym.factor(eq):\n display(Markdown('Input function $%s$ is written in a factored form. ' %sym.latex(eq) + 'Its expanded form is $%s$.' %sym.latex(sym.expand(eq))))\n \n else:\n display(Markdown('Input function $%s$ is written in an expanded form. ' %sym.latex(eq) + 'Its factored form is $%s$.' 
%sym.latex(sym.factor(eq))))\n \n display(Markdown('The result of the partial fraction decomposition is: $%s$' %sym.latex(sym.apart(eq)) + '.'))\n display(btnReset)\n \ndef transfer_function(num,denom):\n num = np.array(num, dtype=np.float64)\n denom = np.array(denom, dtype=np.float64)\n len_dif = len(denom) - len(num)\n if len_dif<0:\n temp = np.zeros(abs(len_dif))\n denom = np.concatenate((temp, denom))\n transferf = np.vstack((num, denom))\n elif len_dif>0:\n temp = np.zeros(len_dif)\n num = np.concatenate((temp, num))\n transferf = np.vstack((num, denom))\n return transferf\n\ndef f(orderNum, orderDenom):\n global text1, text2\n text1=[None]*(int(orderNum)+1)\n text2=[None]*(int(orderDenom)+1)\n display(Markdown('2. Insert the coefficients of the numerator.'))\n for i in range(orderNum+1):\n text1[i]=widgets.Text(description=(r'a%i'%(orderNum-i)))\n display(text1[i])\n display(Markdown('3. Insert the coefficients of the denominator.')) \n for j in range(orderDenom+1):\n text2[j]=widgets.Text(description=(r'b%i'%(orderDenom-j)))\n display(text2[j])\n global orderNum1, orderDenom1\n orderNum1=orderNum\n orderDenom1=orderDenom\n\ndef on_button_clickedPoly(btn):\n clear_output()\n global num,denom\n enacbaNum=\"\"\n enacbaDenom=\"\"\n num=[None]*(int(orderNum1)+1)\n denom=[None]*(int(orderDenom1)+1)\n for i in range(int(orderNum1)+1):\n if text1[i].value=='' or text1[i].value=='Please insert a coefficient':\n text1[i].value='Please insert a coefficient'\n else:\n try:\n num[i]=int(text1[i].value)\n except ValueError:\n if text1[i].value!='' or text1[i].value!='Please insert a coefficient':\n num[i]=sym.var(text1[i].value)\n \n for i in range (len(num)-1,-1,-1):\n if i==0:\n enacbaNum=enacbaNum+str(num[len(num)-i-1])\n elif i==1:\n enacbaNum=enacbaNum+\"+\"+str(num[len(num)-i-1])+\"*x+\"\n elif i==int(len(num)-1):\n enacbaNum=enacbaNum+str(num[0])+\"*x**\"+str(len(num)-1)\n else:\n enacbaNum=enacbaNum+\"+\"+str(num[len(num)-i-1])+\"*x**\"+str(i) \n \n for j in 
\n

Numerical Methods

\n

Second semester 2021

\n
\n

Problem Set 2: Time-stepping schemes

\n

Lecturer: Pablo Dmitruk

\n
\n

Submission deadline: September 24, 2021, 23:59

\n
\n\n### Submitted by: **COMPLETE WITH YOUR NAMES**\n\n- [Exercises](#ejercicios)\n\n- [Theory review](#explicacion)\n\n\n\n# **Exercises**\n\n## **Problem 1: Order reduction**\n\nRewrite the following initial value problems as first-order systems.\n\\begin{alignat}{7}\n    \\mathrm{a)}& \\quad \\ddot{y} + \\mu \\dot{y} + \\omega^2_0 y &&= \\cos(t), \\qquad \\qquad &&y(0) = y_0, \\qquad \\dot{y}(0) = \\dot{y}_0; \\\\\n    \\mathrm{b)}& \\quad \\ddot{y} - \\mu(1-t^2) \\dot{y} + y &&= 0, \\qquad \\qquad &&y(0) = y_0, \\qquad \\dot{y}(0) = \\dot{y_0}; \\\\\n    \\mathrm{c)}& \\quad \\dddot{y} + y \\ddot{y} + \\beta\\left(1 - \\dot{y}^2 \\right) &&= h(t), \\qquad \\qquad &&y(0) = y_0, \\qquad \\dot{y}(0) = \\dot{y_0}, \\qquad \\ddot{y}(0) = \\ddot{y_0}. \n\\end{alignat}\n\n**Your solution here**\n\n---\n\n## **Problem 2: Derivation of multistep methods**\nProve the consistency of the following time integration schemes and state the order of accuracy of each:\n\n$\\text{a)}$ two-step Adams-Bashforth:\n\\begin{equation*}\n    y^{n+1} = y^n + \\frac{k}{2} (3 f^n - f^{n-1});\n\\end{equation*}\n$\\text{b)}$ one-step Adams-Moulton (of maximum order):\n\\begin{equation*}\n    y^{n+1} = y^n + \\frac{k}{2} (f^{n+1} + f^n);\n\\end{equation*}\n$\\text{c)}$ two-step Adams-Moulton:\n\\begin{equation*}\n    y^{n+1} = y^n + \\frac{k}{12} (5f^{n+1} + 8f^n - f^{n-1}).\n\\end{equation*}\n\n**Your solution here**\n\n---\n\n## **Problem 3: Order verification for multistep methods**\n\nFor each of the methods analyzed in the previous problem:\n\n$\\text{a)}$ numerically integrate the initial value problem\n\\begin{equation*}\n    \\dot{y} = -y + \\cos(t), \\qquad \\qquad y(0) = 3/2,\n\\end{equation*}\nfor $0\\le t \\le 6$ using the time steps $k \\in \\{1\\times 10^{-1}, 3\\times 10^{-2}, 1 \\times 10^{-2}, 3 \\times 10^{-3}, 1 \\times 10^{-3} \\}$. You may use the fact that this problem admits $y=e^{-t} + (\\sin(t) + \\cos(t))/2$ as an analytical solution to initialize the first steps of the integration;\n\n
\n\n$\\text{b)}$ plot the infinity norm of the error as a function of $k$ and verify that you recover the expected order of convergence.\n\n\n```python\n# Your solution here\n```\n\n---\n\n## **Problem 4: Order verification for predictor-corrector methods**\n\nRepeat the study carried out in problem 3 for the following predictor-corrector time integrators:\n\n$\\text{a)}$ Matsuno;\n\n$\\text{b)}$ Heun.\n\nIn both cases consider only one iteration of the corrector step.\n\n\n```python\n# Your solution here\n```\n\n---\n\n## **Problem 5: Order verification for Runge-Kutta**\n\nRepeat what you did in problems 3 and 4 for the following multistage (Runge-Kutta) integrators:\n\n$\\text{a)}$ two-stage midpoint Runge-Kutta (RK2);\n\n$\\text{b)}$ four-stage Runge-Kutta (RK4).\n\n\n```python\n# Your solution here\n```\n\n---\n\n## **Problem 6: Integration of the damped harmonic oscillator**\n\nIntegrate the damped oscillator equation\n\\begin{equation*}\n    \\ddot{x} + \\mu \\dot{x} + \\omega^2_0 x = 0, \\qquad \\qquad x(0) = x_0, \\qquad \\dot{x}(0) = \\dot{x}_0,\n\\end{equation*}\nfor $0\\le t \\le 5$ using:\n\n$\\text{a)}$ the Matsuno method (with a single iteration of the estimation-correction (EC) process);\n\n$\\text{b)}$ the midpoint Runge-Kutta method (RK2).\n\nIn particular, take $\\omega_0 = 2$, $\\mu=0.1$, $x_0 = 1$ and $\\dot{x}_0 = 0$. Integrate the problem for $k \\in \\{10^{-1}, 10^{-2}, 10^{-3}\\}$. 
Using the analytical solution $x(t) = e^{-\\mu t/2} \\bigg(\\cos(\\tilde{\\omega}t) + \\frac{\\mu}{2 \\tilde \\omega} \\sin(\\tilde{\\omega}t)\\bigg)$, with $\\tilde \\omega = (4\\omega_0^2-\\mu^2)^{1/2}/2$, verify for both integrators that the errors in the final position $x(5)$ and final velocity $\\dot{x}(5)$ decrease at the expected rate as the time step is reduced.\n\n\n```python\n# Your solution here\n```\n\n---\n\n## **Problem 7: Stability in the integration of an oscillator**\n\nConsider again the initial value problem of the previous exercise (i.e. the damped harmonic oscillator), but this time take $\\omega_0 = 30$, $\\mu=0.5$, $t_f=1$ and the time steps $k \\in \\{ 8 \\times 10^{-2}, 4 \\times 10^{-2}, 1 \\times 10^{-2} \\}$. \n\nNote that the problem involves two well-separated time scales, one of magnitude $1/30$ ($1/\\omega_0$) and another of magnitude $2$ ($1/\\mu$). It is not unusual in physics to face a problem with multiple scales in which we only care about the dynamics associated with one of them. For instance, in this problem we might be interested solely in the energy dissipation rate, for which the $1/30$ time scale is not the determining one.\n\nFor these parameters, integrate the problem using the following second-order time integrators:\n\n$\\text{a)}$ Runge-Kutta (midpoint);\n\n$\\text{b)}$ second-order Adams-Moulton. 
_Hint: you may first show that the system of equations_\n\\begin{equation*}\n\\mathbf{y}^{n+1} = \\mathbf{y}^n + \\frac{k}{2} \\left( A \\mathbf{y}^n + A \\mathbf{y}^{n+1} \\right)\n\\end{equation*}\n_can be solved as_\n\\begin{equation*}\n\\mathbf{y}^{n+1} = \\tilde{A}^{-1} \\left(\\mathbf{y}^n + \\frac{k}{2} A \\mathbf{y}^n\\right),\n\\end{equation*}\n_with $\\tilde{A} = \\mathbb{I} - k A/2$, where $\\mathbb{I}$ is the identity matrix._\n\nIn both cases plot the analytical solution and compare it with the numerically obtained one. _Hint: restrict the plot to ordinates in the interval $(-3/2,\\ 3/2)$._\n\n\n\n\n```python\n# Your solution here\n```\n\n---\n\n## **Problem 8: Stability regions**\n\nFind analytically the stability regions of the following predictor-corrector methods:\n\n$\\text{a)}$ Matsuno;\n\n$\\text{b)}$ Heun.\n\nPlot the stability regions and compare them with those of the forward Euler and second-order Adams-Moulton methods, respectively. 
_Hint: the following code plots in blue the points satisfying the inequality $| 1 - \\bar{\\lambda} | < 1$_:\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Create a square grid of Nx x Ny points covering\n# (x0,y0) x (x1, y1) and define a complex variable on it\nNx, Ny = 1000, 1000\nx0, x1 = -2, 2\ny0, y1 = -2, 2\nx = np.linspace(x0, x1, Nx)\ny = np.linspace(y0, y1, Ny)\nlamda = x[:,None] + 1j*y[None,:]  # Define the complex variable\n\narg = 1 - lamda  # Argument whose modulus is tested\n\n# Plot\nfig, ax = plt.subplots(1, 1, figsize=(8,4), constrained_layout=True)\nax.imshow((np.abs(arg) < 1).T, extent=[x[0], x[-1], y[0], y[-1]],\n          cmap=\"GnBu\", vmin=0, vmax=1);\n```\n\n**Your solution here**\n\n\n```python\n# Your solution here\n```\n\n---\n\n## **Problem 9: Numerical verification of stability regions**\n\nNumerically verify the stability regions found in problem 8. To that end you may build upon the following function which, given a 2D array $\\lambda_{pq} = x_{p} + i y_{q}$ (where $x$ and $y$ are arrays of real numbers), returns an array with `True` where $\\lambda$ is stable for the forward Euler scheme and `False` where it is not:\n```python\ndef estabilidad_euler(dt, lamda):\n    pasos = 50\n\n    estabilidad = np.ones_like(lamda, dtype=bool)\n    for i in range(lamda.shape[0]):\n        for j in range(lamda.shape[1]):\n            y = 1\n            for n in range(0, pasos):\n                y = y + dt*(lamda[i,j]*y)\n\n                if np.abs(y) > 2:\n                    estabilidad[i,j] = False\n                    break\n\n    return estabilidad\n```\n\n\n```python\n# Your solution here\n```\n\n---\n\n## **Problem 10: (Nonlinear) pendulum**\n\nThe task in this exercise is to write two time integrators of different orders for the pendulum equation without the small-amplitude approximation. We will also consider air friction. 
This problem is governed by the equation\n\\begin{equation*}\n    \\ddot \\theta + \\mu \\dot \\theta + \\omega_0^2 \\sin(\\theta) = 0, \\qquad \\qquad \\theta(0) = \\theta_0, \\qquad \\dot \\theta(0) = \\dot \\theta_0, \\qquad 0 \\le t \\le t_f,\n\\end{equation*}\nwhere, as seen in the previous problem set, $\\omega_0 = \\sqrt{g/\\ell}$ is the natural frequency of the system ($g$ and $\\ell$ denote the gravitational acceleration and the length of the inextensible string from which the mass hangs, respectively), and $\\mu$ is the magnitude (in units of frequency) of the friction with the medium.\n\nWith the tools seen in this problem set we can solve this second-order ordinary differential equation by writing it as a system of two first-order equations:\n\n$\\text{a)}$ Write the equation above as a system of first-order equations.\n\n**Your solution here**\n\n$\\text{b)}$ Write two functions, `rk2` and `rk4`, which, given $\\theta_0$ and $\\dot \\theta_0$, $\\omega_0$, $\\mu$, the time step and $t_f$, solve the pendulum equation at the requested instants. As their names indicate, `rk2` must perform the integration using a second-order (midpoint) Runge-Kutta method, while `rk4` must do the same with a fourth-order method.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef rk2(th0, omega0, mu, dt, tf):\n    \"\"\"\n    Integrates the pendulum equation using a second-order\n    Runge-Kutta method.\n\n    Input:\n    - th0: array of 2 elements. 
th[0] is the position and th[1] the velocity\n    - omega0: natural frequency of the pendulum.\n    - mu: friction coefficient of the pendulum with the surrounding medium.\n    - dt: time step to use.\n    - tf: time up to which to integrate.\n    Output:\n    - th: array of shape (N, 2), with N the number of time steps\n      (including the initial condition). th[:,0] holds the position and\n      th[:,1] the velocity.\n    \"\"\"\n    # Complete with the RK2 integrator\n    # ...\n    # ...\n    # ...\n\n    # Finally, map theta to the interval [-pi,pi)\n    th[:, 0] = np.arctan2(np.sin(th[:,0]), np.cos(th[:,0]))\n    return th\n\ndef rk4(th0, omega0, mu, dt, tf):\n    \"\"\"\n    Integrates the pendulum equation using a fourth-order\n    Runge-Kutta method.\n\n    Input:\n    - th0: array of 2 elements. th[0] is the position and th[1] the velocity\n    - omega0: natural frequency of the pendulum.\n    - mu: friction coefficient of the pendulum with the surrounding medium.\n    - dt: time step to use.\n    - tf: time up to which to integrate.\n    Output:\n    - th: array of shape (N, 2), with N the number of time steps\n      (including the initial condition). th[:,0] holds the position and\n      th[:,1] the velocity.\n    \"\"\"\n    # Complete with the RK4 integrator\n    # ...\n    # ...\n    # ...\n\n    # Finally, map theta to the interval [-pi,pi)\n    th[:, 0] = np.arctan2(np.sin(th[:,0]), np.cos(th[:,0]))\n    return th\n\n# DO NOT EDIT BELOW THIS LINE\n#-------------------------------------------------------------------------------\ndef estetizar_graficos(ax, titulo, etiqueta_x, etiqueta_y):\n    \"\"\"\n    Given the pair of axes `ax`, sets the title and the labels of the\n    x and y axes. 
It also creates the legend, if present, and adds a grid.\n    \"\"\"\n    ax.legend()\n    ax.grid()\n    ax.set_title(titulo)\n    ax.set_xlabel(etiqueta_x)\n    ax.set_ylabel(etiqueta_y)\n```\n\n### Position and velocity without friction\nUnlike the oscillator solved in the previous problem set, this time we have no analytical solution (at least in terms of elementary functions) against which to check the behavior of our solution.\n\nIn such cases, one way to test whether our _solver_ behaves reasonably is to analyze the solutions it returns in limiting cases, situations in which we can exploit our physical knowledge of the problem.\n\n$\\text{c)}$ Think qualitatively about what the motion should look like in the following cases:\n
    \n
1. $\\theta_0 = 0.1$, $\\dot \\theta_0 = 0$;
2. $\\theta_0 = \\pi - 0.1$, $\\dot \\theta_0 = 0$;
3. $\\theta_0 = \\pi$, $\\dot \\theta_0 = 0$;
4. $\\theta_0 = 0$, $\\dot \\theta_0 = 2\\omega_0 + 0.01$ (where $(2\\omega_0)^2$ is the maximum difference in gravitational potential energy between any two points).
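Before coding `rk2` and `rk4`, you can build some intuition for these limiting cases with a generic off-the-shelf integrator. The sketch below is our own addition, not part of the assignment: it assumes `scipy` is available, the helper name `pendulo` is our choice, and it only explores the first case, checking that the small-amplitude oscillation keeps its amplitude near $\theta_0 = 0.1$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pendulo(t, th, omega0, mu):
    # State th = (theta, theta_dot); returns its time derivative
    # for theta'' + mu theta' + omega0^2 sin(theta) = 0.
    return [th[1], -mu * th[1] - omega0**2 * np.sin(th[0])]

# Case 1: small amplitude, no friction -> nearly harmonic oscillation
sol = solve_ivp(pendulo, (0.0, 10.0), [0.1, 0.0], args=(2.0, 0.0),
                max_step=1e-2, rtol=1e-8)
amp = np.max(np.abs(sol.y[0]))
print(amp)  # should stay close to theta_0 = 0.1
```

This is only a qualitative cross-check; the exercise still asks for your own RK2/RK4 implementations.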
\n\nThen, for both Runge-Kutta methods, integrate the equation for each initial condition, fixing $\\omega_0 = 2$, $\\mu=0$, $k=10^{-3}$ and $t_f = 10$. Plot the resulting position and velocity. In every case verify that you recover the expected behavior.\n\n\n```python\n# Position and velocity without friction\n#-------------------------------\n# Figure and axes\nfig, axs = plt.subplots(2, 4, figsize=(16,4), constrained_layout=True)\n\ndt = # COMPLETE: time step\n\n# Integrate for the first set of initial conditions\nth_2 = # COMPLETE: store in th_2 the result of integrating with RK2\nth_4 = # COMPLETE: store in th_4 the result of integrating with RK4\n\n#-------------------------------------------------------------------------------\n# DO NOT EDIT THIS SECTION\n#-------------------------------------------------------------------------------\nt = np.arange(th_2.shape[0])*dt\naxs[0,0].plot(t, th_2[:,0], label=\"RK2\")\naxs[0,0].plot(t, th_4[:,0], label=\"RK4\")\naxs[1,0].plot(t, th_2[:,1], label=\"RK2\")\naxs[1,0].plot(t, th_4[:,1], label=\"RK4\")\nestetizar_graficos(axs[0,0], r\"$\\theta_0 = 0.1 \\ \\wedge \\ \\dot \\theta_0 = 0$\",\n                   \"$t$\", r\"$\\theta$\")\nestetizar_graficos(axs[1,0], \"\", \"$t$\", r\"$\\dot \\theta$\")\n#-------------------------------------------------------------------------------\n\n\n# Integrate for the second set of initial conditions\nth_2 = # COMPLETE: store in th_2 the result of integrating with RK2\nth_4 = # COMPLETE: store in th_4 the result of integrating with RK4\n\n#-------------------------------------------------------------------------------\n# DO NOT EDIT THIS SECTION\n#-------------------------------------------------------------------------------\nt = np.arange(th_2.shape[0])*dt\naxs[0,1].plot(t, th_2[:,0], label=\"RK2\")\naxs[0,1].plot(t, th_4[:,0], label=\"RK4\")\naxs[1,1].plot(t, th_2[:,1], label=\"RK2\")\naxs[1,1].plot(t, th_4[:,1], 
label=\"RK4\")\nestetizar_graficos(axs[0,1], r\"$\\theta_0 = \\pi-0,1 \\ \\wedge \\ \\dot \\theta_0=0$\",\n \"$t$\", r\"$\\theta$\")\nestetizar_graficos(axs[1,1], \"\", \"$t$\", r\"$\\dot \\theta$\")\n#-------------------------------------------------------------------------------\n\n\n# Integrar para el tercer conjunto de condiciones iniciales\nth_2 = # COMPLETAR: guardar en th_2 el resultado de integrar con RK2\nth_4 = # COMPLETAR: guardar en th_2 el resultado de integrar con RK4\n\n#-------------------------------------------------------------------------------\n# NO EDITAR ESTA SECCI\u00d3N\n#-------------------------------------------------------------------------------\nt = np.arange(th_2.shape[0])*dt\naxs[0,2].plot(t, th_2[:,0], label=\"RK2\")\naxs[0,2].plot(t, th_4[:,0], label=\"RK4\")\naxs[1,2].plot(t, th_2[:,1], label=\"RK2\")\naxs[1,2].plot(t, th_4[:,1], label=\"RK4\")\nestetizar_graficos(axs[0,2], r\"$\\theta_0 = \\pi \\ \\wedge \\ \\dot \\theta_0 = 0$\",\n \"$t$\", r\"$\\theta$\")\nestetizar_graficos(axs[1,2], \"\", \"$t$\", r\"$\\dot \\theta$\")\n#-------------------------------------------------------------------------------\n\n# Integrar para el cuarto conjunto de condiciones iniciales\nth_2 = # COMPLETAR: guardar en th_2 el resultado de integrar con RK2\nth_4 = # COMPLETAR: guardar en th_2 el resultado de integrar con RK4\n\n#-------------------------------------------------------------------------------\n# NO EDITAR ESTA SECCI\u00d3N\n#-------------------------------------------------------------------------------\nt = np.arange(th_2.shape[0])*dt\naxs[0,3].plot(t, th_2[:,0], label=\"RK2\")\naxs[0,3].plot(t, th_4[:,0], label=\"RK4\")\naxs[1,3].plot(t, th_2[:,1], label=\"RK2\")\naxs[1,3].plot(t, th_4[:,1], label=\"RK4\")\nestetizar_graficos(axs[0,3], r\"$\\theta_0 = 0 \\ \\wedge \\ \" +\\\n r\"\\dot \\theta_0 = 2\\omega_0 + 0,01$\", \"$t$\", r\"$\\theta$\")\nestetizar_graficos(axs[1,3], \"\", \"$t$\", r\"$\\dot \\theta$\");\n```\n\n### 
Energy conservation\nBesides checking that the motion is qualitatively consistent with our physical knowledge, in the frictionless case we know that the mechanical energy must be conserved, which we can define as\n\\begin{equation*}\n    E(t) = \\dot \\theta(t)^2 + 4 \\omega_0^2 \\sin^2\\left(\\frac{\\theta(t)}{2} \\right).\n\\end{equation*}\n\n$\\text{d)}$ Write a function `energia` that takes $\\theta$, $\\dot \\theta$ and $\\omega_0$ as inputs and returns the mechanical energy at each instant. Using this function, verify that energy conservation holds approximately at all times, i.e., $|\\Delta E| = |E(t) - E(0)| \\ll 1$, for all the cases studied in the previous item, this time taking $t_f=75$.\n\n\n```python\n# Energy conservation study for the frictionless case\n\ndef energia(th, omega):\n    \"\"\"\n    Computes the mechanical energy at every time.\n\n    Input:\n    - th: array of shape (N,2) with the position ([:,0]) and\n      the angular velocity ([:,1]).\n    - omega: natural frequency of the system.\n\n    Output:\n    - energia: array of shape (N).\n    \"\"\"\n    # COMPLETE\n\n    return energia\n\n# Figure, axes and time step\nfig, axs = plt.subplots(1, 4, figsize=(16,4), constrained_layout=True)\ndt = 1e-3\noffset = int(2/1e-3)\n\n\n# COMPLETE: Compute the energy variation at every instant using RK2\n# for the first set of initial conditions and assign it to the variable\n# DeltaE_2.\n\n# COMPLETE: Compute the energy variation at every instant using RK4\n# for the first set of initial conditions and assign it to the variable\n# DeltaE_4.\n\n# ----------------------------- DO NOT EDIT ------------------------------------\nt = np.arange(0, DeltaE_2.shape[0])*dt\naxs[0].semilogy(t[offset:], DeltaE_2[offset:], label=\"RK2\")\naxs[0].semilogy(t[offset:], 
DeltaE_4[offset:], label=\"RK4\")\nestetizar_graficos(axs[0], r\"$\\theta_0 = 0.1 \\ \\wedge \\ \\dot{\\theta}_0 = 0$\",\n                   \"$t$\", r\"$|\\Delta E|$\")\n# ------------------------------------------------------------------------------\n\n\n# COMPLETE: Compute the energy variation at every instant using RK2\n# for the second set of initial conditions and assign it to the variable\n# DeltaE_2.\n\n# COMPLETE: Compute the energy variation at every instant using RK4\n# for the second set of initial conditions and assign it to the variable\n# DeltaE_4.\n\n# ----------------------------- DO NOT EDIT ------------------------------------\nt = np.arange(0, DeltaE_2.shape[0])*dt\naxs[1].semilogy(t[offset:], DeltaE_2[offset:], label=\"RK2\")\naxs[1].semilogy(t[offset:], DeltaE_4[offset:], label=\"RK4\")\nestetizar_graficos(axs[1], r\"$\\theta_0 = \\pi-0.1 \\ \\wedge \\ \\dot{\\theta}_0 = 0$\",\n                   \"$t$\", r\"$|\\Delta E|$\")\n# ------------------------------------------------------------------------------\n\n\n# COMPLETE: Compute the energy variation at every instant using RK2\n# for the third set of initial conditions and assign it to the variable\n# DeltaE_2.\n\n# COMPLETE: Compute the energy variation at every instant using RK4\n# for the third set of initial conditions and assign it to the variable\n# DeltaE_4.\n\n# ----------------------------- DO NOT EDIT ------------------------------------\nt = np.arange(0, DeltaE_2.shape[0])*dt\naxs[2].semilogy(t[offset:], DeltaE_2[offset:], label=\"RK2\")\naxs[2].semilogy(t[offset:], DeltaE_4[offset:], label=\"RK4\")\nestetizar_graficos(axs[2], r\"$\\theta_0 = \\pi \\ \\wedge \\ \\dot{\\theta}_0 = 0$\",\n                   \"$t$\", r\"$|\\Delta E|$\")\n# ------------------------------------------------------------------------------\n\n# COMPLETE: Compute the energy variation at every instant using RK2\n# for the fourth set of initial conditions and assign it to the variable\n# DeltaE_2.\n\n# COMPLETE: Compute the energy variation at every instant using RK4\n# for the fourth set of initial conditions and assign it to the variable\n# DeltaE_4.\n\n# ----------------------------- DO NOT EDIT ------------------------------------\nt = np.arange(0, DeltaE_2.shape[0])*dt\naxs[3].semilogy(t[offset:], DeltaE_2[offset:], label=\"RK2\")\naxs[3].semilogy(t[offset:], DeltaE_4[offset:], label=\"RK4\")\nestetizar_graficos(axs[3], r\"$\\theta_0 = 0 \\ \\wedge \\ \" +\\\n                   r\"\\dot \\theta_0 = 2\\omega_0 + 0.01$\", \"$t$\", r\"$|\\Delta E|$\");\n# ------------------------------------------------------------------------------\n```\n\n### Reproducing the true period\nAlso using energy conservation, and provided the motion starts from rest, we can obtain an analytical expression for the period of the motion:\n\\begin{equation*}\n    T = \\frac{4}{\\omega_0} K\\left( \\sin^2\\left( \\frac{\\theta_0}{2}\\right)\\right),\n\\end{equation*}\nwhere $\\theta_0$ is the initial position and $K$ the complete elliptic integral of the first kind.\n\n$\\text{e)}$ For each time integrator, compute the difference between the period given by the analytical expression and the one obtained from your simulations for the set of initial positions $[\\boldsymbol \\theta_0]_j = 0.01 + j\\Delta \\theta_0$, $0 \\le j < 20$ and $\\Delta \\theta_0 = 0.1234$ (that is, 20 equispaced points between $0.01$ and $3\\pi/4$). In every case take the initial angular velocity to be zero. Analyze in particular the following cases:\n
    \n
1. $ \\omega_0 = 1, \\ \\mu=0, \\ k=1 \\times 10^{-2},\\ t_f = 250 $;
2. $ \\omega_0 = 10, \\ \\mu=0, \\ k=1 \\times 10^{-2},\\ t_f = 250 $;
3. $ \\omega_0 = 10, \\ \\mu=0, \\ k=1 \\times 10^{-3},\\ t_f = 250 $.
\n\nAs a help, we provide functions that compute both periods.\n\n\n```python\ndef periodo_analitico(th0, omega0):\n    \"\"\"\n    Computes the period of the frictionless pendulum given the initial\n    position and the natural frequency of the system.\n\n    Input:\n    - th0: floating-point number with the initial position of the mass.\n    - omega0: natural frequency of the system.\n\n    Output:\n    - Period of the pendulum.\n    \"\"\"\n    import scipy.special as spspecial\n    return 4/omega0*spspecial.ellipk(np.sin(th0/2)**2)\n\ndef periodo_datos(th, dt):\n    \"\"\"\n    Computes the period of the frictionless pendulum given a series of\n    positions theta(t) and the time spacing between samples.\n\n    Input:\n    - th: vector of N elements with the position of the mass at N\n      consecutive instants.\n    - dt: spacing between samples.\n\n    Output:\n    - Period of the pendulum.\n    \"\"\"\n    import scipy.signal as spsignal\n\n    # Use a Blackman-Harris window to reduce Gibbs oscillations\n    con_ventana = th*spsignal.blackmanharris(th.size)\n\n    # Zero-pad to interpolate the spectrum\n    ceros = 10000\n    T_previo = 0\n    T_actual = 1\n\n    # Try increasing padding levels until the period converges to 3 decimal digits\n    while np.abs((T_previo-T_actual)/T_actual) > 1e-3:\n        ceros = 2*ceros\n        relleno = np.pad(con_ventana, ceros, mode=\"constant\")\n        RELLENO = np.fft.rfft(relleno)\n        f = np.fft.rfftfreq(relleno.size, d=dt)\n\n        # Find the FFT peak and extract the peak frequency. 
Convert to T.\n        ind = np.argmax(np.abs(RELLENO))\n        T_previo = T_actual\n        T_actual = 1/f[ind]\n\n    return T_actual\n\n# Figure and axes\nfig, axs = plt.subplots(1, 3, figsize=(12,4), constrained_layout=True)\n\n# Time vector\ndt, tf = 1e-2, 250\npasos = int(round(tf/dt))\nt = np.arange(0, pasos+1)*dt\n\n# Set of initial conditions and period differences\nN = 20\nconds_ini = np.linspace(1e-2, 3*np.pi/4, N)\nDeltaT_2 = np.zeros(N)\nDeltaT_4 = np.zeros(N)\n\n# Periods for omega_0 = 1\nfor n, ini in enumerate(conds_ini):\n    # COMPLETE: Obtain the period difference for the current initial\n    # condition and store it in DeltaT_2[n] (for RK2) and DeltaT_4[n] (for RK4).\n\n# -------------------- DO NOT EDIT THIS SECTION --------------------------------\naxs[0].plot(conds_ini, DeltaT_2, \"x\", c=\"C0\", label=\"RK2\", markersize=10)\naxs[0].plot(conds_ini, DeltaT_4, \"o\", c=\"C1\", label=\"RK4\")\nestetizar_graficos(axs[0], r\"$\\omega_0=1, \\ \\Delta t=1\\times 10^{-2}$\",\n                   r\"$\\theta_0$\", r\"$\\Delta T$\")\n# ------------------------------------------------------------------------------\n\n\n# Periods for omega_0 = 10\nfor n, ini in enumerate(conds_ini):\n    # COMPLETE: Obtain the period difference for the current initial\n    # condition and store it in DeltaT_2[n] (for RK2) and DeltaT_4[n] (for RK4).\n\n# -------------------- DO NOT EDIT THIS SECTION --------------------------------\naxs[1].plot(conds_ini, DeltaT_2, \"x\", c=\"C0\", label=\"RK2\", markersize=10)\naxs[1].plot(conds_ini, DeltaT_4, \"o\", c=\"C1\", label=\"RK4\")\nestetizar_graficos(axs[1], r\"$\\omega_0=10, \\ \\Delta t=1\\times 10^{-2}$\",\n                   r\"$\\theta_0$\", r\"$\\Delta T$\")\n# ------------------------------------------------------------------------------\n\n# Time vector\ndt, tf = 1e-3, 250\npasos = int(round(tf/dt))\nt = np.arange(0, pasos+1)*dt\n\n\n# Periods for omega0 = 10 with a smaller k\nfor n, ini in enumerate(conds_ini):\n    # COMPLETE: Obtain the period difference for the current initial\n    # condition and store it in DeltaT_2[n] (for RK2) and DeltaT_4[n] (for RK4).\n\n# -------------------- DO NOT EDIT THIS SECTION --------------------------------\naxs[2].plot(conds_ini, DeltaT_2, \"x\", c=\"C0\", label=\"RK2\", markersize=10)\naxs[2].plot(conds_ini, DeltaT_4, \"o\", c=\"C1\", label=\"RK4\")\nestetizar_graficos(axs[2], r\"$\\omega_0=10, \\ \\Delta t=1 \\times 10^{-3}$\",\n                   r\"$\\theta_0$\", r\"$\\Delta T$\");\n# ------------------------------------------------------------------------------\n```\n\n**Note**: the previous cell may take 5 to 10 minutes to run. Try reducing the number of initial conditions `N` to 2 or 3 until you are confident that you are obtaining reasonable results.\n\n### Position and velocity with friction\n\nHaving obtained satisfactory results in the previous items, you can now be fairly confident that the integrators you wrote evolve the nonlinear term acceptably. It remains to run a few tests to check that the results for the cases with friction also match expectations.\n\n$\\text{f)}$ Taking $\\theta_0=0.01$, $\\omega_0 = 1$, $k=10^{-2}$, $t_f=100$, and for both time integrators, integrate in time the cases:\n
    \n
1. $\\mu = 0.1$ (subcritical damping);
2. $\\mu = 10$ (supercritical damping).
\n\nVerify that, qualitatively, the position and the angular velocity display the expected behavior.\n\n\n\n```python\n# Figure, axes and time parameters\nfig, axs = plt.subplots(2, 2, figsize=(8,8), constrained_layout=True)\ndt, tf = 1e-2, 100\n\n# COMPLETE: with the solutions for RK2 (th_2) and RK4 (th_4) in the case\n# mu = 0.1.\nth_2 = #...\nth_4 = #...\n\n# --------------------------- DO NOT EDIT THIS SECTION -------------------------\nt = np.arange(0, th_2.shape[0])*dt\naxs[0,0].plot(t, th_2[:,0], label=\"RK2\")\naxs[0,0].plot(t, th_4[:,0], label=\"RK4\")\nestetizar_graficos(axs[0,0], \"Subcritical damping\", \"$t$\", r\"$\\theta$\")\naxs[1,0].plot(t, th_2[:,1], label=\"RK2\")\naxs[1,0].plot(t, th_4[:,1], label=\"RK4\")\nestetizar_graficos(axs[1,0], \"\", \"$t$\", r\"$\\dot\\theta$\")\n# ------------------------------------------------------------------------------\n\n\n# COMPLETE: with the solutions for RK2 (th_2) and RK4 (th_4) in the case\n# mu = 10\nth_2 = #...\nth_4 = #...\n\n# --------------------------- DO NOT EDIT THIS SECTION -------------------------\nt = np.arange(0, th_2.shape[0])*dt\naxs[0,1].plot(t, th_2[:,0], label=\"RK2\")\naxs[0,1].plot(t, th_4[:,0], label=\"RK4\")\nestetizar_graficos(axs[0,1], \"Supercritical damping\", \"$t$\", r\"$\\theta$\")\naxs[1,1].plot(t, th_2[:,1], label=\"RK2\")\naxs[1,1].plot(t, th_4[:,1], label=\"RK4\")\nestetizar_graficos(axs[1,1], \"\", \"$t$\", r\"$\\dot\\theta$\");\n# ------------------------------------------------------------------------------\n```\n\n### Balance between mechanical energy loss and dissipated power\n\nIn terms of mechanical energy, we will no longer have conservation in the case with dissipation. 
However, with $E$ as defined above, it is straightforward to derive the following balance equation\n\\begin{equation*}\n    \\dot E (t) = - 2 \\mu \\dot \\theta^2(t) = P_\\mu(t),\n\\end{equation*}\nwhere $P_\\mu$ is the power dissipated by the friction force.\n\n$\\text{g)}$ For the cases studied in the previous item, check whether the balance equation above holds (approximately). To do so, plot the quantity $|\\dot{E} - P_\\mu|$ as a function of time.\n\n\n```python\n# Energy variation vs. dissipation balance\ndef potencia_roce(th_punto, mu):\n    \"\"\"\n    Computes the power delivered to/dissipated from the system by the\n    friction with the air.\n\n    Input:\n    - th_punto: array of (N) elements with the angular velocity of the\n      mass at N instants of time.\n    - mu: coefficient of the friction force (in units of frequency)\n\n    Output:\n    - potencia: array of (N) elements with the power delivered/dissipated\n      by the friction force at each instant.\n    \"\"\"\n    return # COMPLETE\n\ndef variacion_energia(energia, dt):\n    \"\"\"\n    Computes the time variation of the mechanical energy using sixth-order\n    centered finite differences. 
At the ends of the interval, where there\n    are not enough data points, it returns 0.\n\n    Input:\n    - energia: array of (N) elements with the mechanical energy at N\n      consecutive instants.\n    - dt: spacing between samples of the mechanical energy.\n\n    Output:\n    - var: array of (N) elements with the time variation of the\n      mechanical energy.\n    \"\"\"\n    var = np.zeros_like(energia)\n    var = -1*np.roll(energia, 3) + 9*np.roll(energia, 2)\n    var += -45*np.roll(energia, 1) + 45*np.roll(energia, -1)\n    var += -9*np.roll(energia, -2) + np.roll(energia, -3)\n    var = var/(60*dt)\n    var[-3:] = 0\n    var[:3] = 0\n    return var\n\n# Figure, axes and time step\nfig, axs = plt.subplots(1, 2, figsize=(8,4), constrained_layout=True)\ndt = 1e-2\noffset = int(round(2/dt))\n\n# Integration for mu = 0.1\n\n# COMPLETE: Compute the absolute value of the difference between the energy\n# variation and the dissipated power. Store your results in the variables\n# bal_2 (for the second-order integration) and bal_4 (for the fourth-order one).\n\n# --------------------------- DO NOT EDIT THIS SECTION -------------------------\nt = np.arange(bal_2.shape[0])*dt\naxs[0].semilogy(t[offset:], bal_2[offset:], label=\"RK2\")\naxs[0].semilogy(t[offset:], bal_4[offset:], label=\"RK4\")\nestetizar_graficos(axs[0], \"Subcritical damping\",\n                   \"$t$\", r\"$|\\partial_t E - P_\\mu|$\")\n# ------------------------------------------------------------------------------\n\n# Integration for mu = 10\n\n# COMPLETE: Compute the absolute value of the difference between the energy\n# variation and the dissipated power. 
Save your results in the variables
# bal_2 (for the second-order integration) and bal_4 (for the fourth-order one).

# --------------------------- DO NOT EDIT THIS SECTION -------------------------
t = np.arange(bal_2.shape[0])*dt
axs[0].semilogy(t[offset:], bal_2[offset:], label="RK2")
axs[0].semilogy(t[offset:], bal_4[offset:], label="RK4")
estetizar_graficos(axs[0], "Subcritical damping",
                   "$t$", r"$|\partial_t E - P_\mu|$")
# ------------------------------------------------------------------------------

# Integration for mu = 10

# COMPLETE: Compute the absolute value of the difference between the energy
# variation and the dissipated power. Save your results in the variables
# bal_2 (for the second-order integration) and bal_4 (for the fourth-order one).

# --------------------------- DO NOT EDIT THIS SECTION -------------------------
t = np.arange(bal_2.shape[0])*dt
axs[1].semilogy(t[offset:], bal_2[offset:], label="RK2")
axs[1].semilogy(t[offset:], bal_4[offset:], label="RK4")
estetizar_graficos(axs[1], "Supercritical damping",
                   "$t$", r"$|\partial_t E - P_\mu|$")
# ------------------------------------------------------------------------------
```

### Analysis of results

$\text{h)}$ **Briefly** describe the results obtained in the previous items.

**Your solution here**

---

---


# **Temporal schemes**

## **Time integration**

In this practical we are interested in numerically solving the problem
\begin{align*}
    \dot y &= f(t, y),\\
    y(t_0) &= y_0,
\end{align*}
where, to lighten the notation, from now on we write $\dot y = \mathrm{d}y/\mathrm{d}t$.

Note that, unlike in the previous practical, we now know the derivative and want to recover $y^n$ (that is, the value of $y$ on a discrete set of times $t^n$). Then, assuming $y^j$ and $f(t^j, y^j)$ are known for $0 \le j \le n$, one way to determine $y^{n+1}$ is to use Taylor expansions. For instance, expanding $y^{n+1}$ around $y^n$, or $y^n$ around $y^{n+1}$, we have, respectively,
\begin{alignat*}{5}
    y^{n+1} &= y^n &+& \dot y^n k &+& \mathcal{O}(k^2),\\
    y^n &= y^{n+1} &-& \dot y^{n+1} k &+& \mathcal{O}(k^2),
\end{alignat*}
where $k = \Delta t$ is the time step and $f^n = f(t^n, y^n)$. From these we readily obtain two methods of second **local order**
\begin{align}
    y^{n+1} &= y^n + f^n k, \tag{Forward Euler}\\
    y^{n+1} &= y^n + f^{n+1}k. 
\tag{Backward Euler}
\end{align}
As the labels indicate, these methods are called **forward Euler** and **backward Euler**, respectively. They determine $y^{n+1}$ to second order, provided we know $y^n$ and $f^n$ (or $f^{n+1}$) exactly (or, at least, to order $k^2$ and $k$, respectively). However, this is rarely the case: the usual situation is to start from an initial condition and apply the time integrator iteratively until reaching some desired final time $t_f$. Applying the method $N=t_f/k$ times, the error becomes $N \, \mathcal{O}(k^2) = t_f \, \mathcal{O}(k)$. Consequently, the **global order**, that is, the error incurred when integrating up to a given time $t$, is $\mathcal{O}(k)$ for both Euler methods (forward and backward). We will find this one-power-of-$k$ difference between local and global order for all the temporal schemes we study, at least while they remain **stable**, a concept we will discuss later. Unless explicitly stated otherwise, **when we refer to the order of a time integrator we mean its global order**.

One point we have not yet addressed is that, if we know $f$ in functional form and we have $y^n$, the forward Euler equation can be evaluated immediately. The backward version, however, is an implicit expression for $y^{n+1}$ (since $f^{n+1}$ depends on $y^{n+1}$). This poses no great difficulty (in infinite precision) when $f^{n+1}$ is linear, but it is far from trivial in general, usually requiring iterative methods to find $y^{n+1}$. Methods that, given all the information at $t^j, y^j$ with $j\le n$, determine $y^{n+1}$ explicitly are called **explicit methods**. 
Conversely, those that require solving an implicit equation (such as backward Euler) are called **implicit methods**. When using implicit methods, it is usual to pair them with some iterative scheme, such as the Newton-Raphson algorithm, until reaching the desired tolerance. Put this way, implicit methods might seem an unnecessary complication; however, when we discuss stability we will see that they can be very advantageous for some problems, while explicit ones are the better option in other cases.

To build higher-order methods, the most popular strategies (and the ones we will cover in the course) include:
- **Linear multistep methods**: similarly to what we did with finite differences, $y^{n+1}$ is proposed as a function of multiple values $y^j$ and $f^j$, instead of using only $y^n$ and $f^n$/$f^{n+1}$ as we did for the Euler methods.
- **Predictor-corrector methods**: they generate an approximate solution using low-order methods (the _prediction_), which is then improved using some interpolation algorithm (the _correction_).
- **Runge-Kutta methods**: instead of values of $y$ and $f$ at previous/subsequent steps, they use intermediate evaluations of $f$, such as $f^{n+\frac{1}{2}} = f\left(t^n + \frac{k}{2}, y\left(t^n+\frac{k}{2}\right)\right)$. They can be viewed as a special case of predictor-corrector methods, although their widespread use has earned them a name of their own.

Before considering these strategies in detail, let us see why we will focus only on integrating $y$ from its first derivative $\dot y$. Spoiler: an ODE of order $N$ can be reduced to a system of $N$ first-order ODEs. 
It is worth noting, however, that time integrators for higher-order ODEs do exist, although we will not cover them in this course. When we deal with partial differential equations (PDEs) in later practicals, working with integrators for the first time derivative will not be a limitation either, as we will show in due course.

### **Dimensionality reduction of an ODE (ordinary differential equation)**

Although in this course we will only study time integrators for systems of first-order ODEs, this places no restriction on the problems we can solve. As you have probably seen in previous courses, any ODE of order $N$ can be rewritten as a system of $N$ first-order ODEs. The simplest way to see this is through an example. Given the second-order initial value problem
\begin{equation*}
    \dot{y} \ddot{y}\cos(t) + t^2\dot{y}^2 = y \ln(y), \qquad y(t_0) = y_0 \quad \wedge \quad \dot{y}(t_0) = \dot{y}_0,
\end{equation*}
we can substitute $u_1 = y$, $u_2 = \dot y$, and in this way obtain the system
\begin{align*}
    \dot{u}_1 &= u_2, \qquad \qquad &&u_1(t_0) = y_0, \\
    \dot{u}_2 &= \frac{u_1 \ln(u_1)- t^2 u_2^2}{u_2 \cos(t)}, \qquad \qquad &&u_2(t_0) = \dot{y}_0,
\end{align*}
which is a system of 2 coupled first-order differential equations, expressible as
\begin{align*}
    \dot{\mathbf{u}} &= \mathbf{F}(t, \mathbf{u}), \\ \mathbf{u}(t_0) &= \mathbf{u_0},
\end{align*}
with
\begin{equation*}
\mathbf{u} = \begin{pmatrix}
    u_1 \\
    u_2 \\
\end{pmatrix}, \qquad \mathbf{u}_0=\begin{pmatrix}
    y_0\\
    \dot{y}_0
\end{pmatrix}, \qquad \mathbf{F}=\begin{pmatrix}
    u_2 \\
    \dfrac{u_1 \ln(u_1) - t^2 u_2^2}{u_2 \cos(t)}
  \end{pmatrix}.
\end{equation*}

We see, then, that by rewriting our problem the methods presented below become easily applicable to higher-order equations. Moreover, to lighten the notation, we will present the time integration schemes for a single first-order ODE. Nonetheless, all the methods generalize straightforwardly to systems of ODEs, as we will see at the end with an example.

### **Linear multistep methods**

These methods are called linear because they propose a linear combination (hence the name linear) of $y^j$ and $f^j$ using information from several time steps (hence the name multistep). This means that, in general, we can write them at a given time $t^n$ as
\begin{equation*}
    \sum_{j=0}^s \alpha^j y^{n+j} = k \sum_{j=0}^s \beta^j f^{n+j}, \tag{1}
\end{equation*}
where specific choices of $\boldsymbol \alpha$ and $\boldsymbol \beta$ yield particular realizations of linear multistep methods. Although at first glance the choice of $\boldsymbol \alpha$ and $\boldsymbol \beta$ may seem trivial, i.e. tuning the $2s+2$ degrees of freedom so as to obtain schemes of order $\mathcal{O}(k^{2s})$, if we require stable integrators (we will see later why they are the only ones of interest) the highest order of approximation attainable with equation $(1)$ is $\mathcal{O}(k^{s+2})$ (Dahlquist theorem). Hence the many different proposals for how to tune the corresponding $s$ degrees of freedom.

**One difficulty of multistep methods is that**, by construction, **$y^j$ and $f^j$ must be known at multiple instants**, information we lack if all we have is an initial condition $y^0$. 
**In practice, a usual way around this is to start the integration with a different method** (e.g., Euler with high temporal resolution —so as to preserve the order of approximation—, Runge-Kutta, predictor-corrector, etc.). Once enough steps are available, we switch to the desired multistep method.

In the practicals we will see two families of linear multistep methods: the **Adams-Bashforth** methods (explicit) and the **Adams-Moulton** methods (implicit). Together they are known as _Adams_ methods$^\dagger$ and are given by the expression
\begin{equation*}
    y^{n+s} - y^{n+s-1} = k\sum_{j=0}^s \beta^j f^{n+j},
\end{equation*}
that is, they arise from choosing $\alpha^0, \dots, \alpha^{s-2} = 0$, $\alpha^{s-1} = -1$ and $\alpha^s = 1$.

$^\dagger$: the same Adams who first postulated the existence of Neptune.

#### **Adams-Bashforth**

In the Adams-Bashforth methods one seeks an explicit scheme, and therefore sets $\beta^s = 0$, i.e.
\begin{equation*}
    y^{n+s} - y^{n+s-1} = k\sum_{j=0}^{s-1} \beta^j f^{n+j}. \tag{Adams-Bashforth}
\end{equation*}
It only remains to determine the coefficients $\beta^0, \dots, \beta^{s-1}$, and with them we will have a formula to determine $y^{n+s}$ given $y^{n+s-1}$ and $f^{n}, \dots, f^{n+s-1}$. To do so, we naturally require a consistent method and seek to maximize the order of approximation to $y^{n+s}$. 
To this end, let $Y$ be an exact solution of the ODE in question. Substituting it into the last equation we get
\begin{gather*}
    \left[ Y + \dot{Y}k + \ddot{Y}\frac{k^2}{2} + \dots \right]_{t^{n+s-1}} - Y^{n+s-1} + \delta Y = k \left\{ \beta^{s-1} f^{n+s-1} + \beta^{s-2} \left[ f - \dot{f} k + \ddot{f} \frac{k^2}{2} + \dots \right]_{t^{n+s-1}} + \\
    + \ldots + \beta^0 \left[ f - \dot{f} (s-1)k + \ddot{f} \frac{(s-1)^2 k^2}{2} + \dots \right]_{t^{n+s-1}} \right\}
\end{gather*}
where we replaced $Y^{n+s}$ by its Taylor expansion around $Y^{n+s-1}$, and $f^{n+s-j}$ by the corresponding expansions around $f^{n+s-1}$; $\delta Y$ is the truncation error. Regrouping, we obtain
\begin{equation}
    \delta Y = k \dot Y^{n+s-1} \left[ \beta^{s-1} + \beta^{s-2} + \ldots + \beta^0 - 1\right] + \\
    + k^2 \ddot Y \left[ - \beta^{s-2} - \ldots - (s-1) \beta^0 - \frac{1}{2} \right] + \dots, \tag{Truncation error}
\end{equation}
where we used $f^j = \dot Y^j$, $\dot f^j = \ddot Y^j$, and so on.

We see that consistency requires $\sum_j \beta^j = 1$; this result is general and applies to all linear multistep methods. The remaining $s$ coefficients are obtained by canceling as many powers of $k$ as possible, so as to get the best possible order of approximation for the number of steps under consideration.

##### **Deriving the coefficients for a 3-step method**

As an example of this, let us derive the Adams-Bashforth method that uses 3 steps ($s=3$). That is, we must determine $\beta^0, \beta^1, \beta^2$ ($\beta^3 = 0$ because the method is explicit, i.e. an Adams-Bashforth method). 
To do so, we write the particular case of the previous equation
\begin{gather*}
    \delta Y = k \dot Y \left[ \beta^2 + \beta^1 + \beta^0 - 1\right] + k^2 \ddot Y \left[ -\beta^1 - 2\beta^0 - \frac{1}{2} \right] + \\
    + k^3 \dddot Y \left[ \frac{\beta^1}{2} + \frac{2^2}{2} \beta^0 - \frac{1}{6} \right] + \mathcal{O}(k^4),
\end{gather*}
and, requiring that as many leading orders as possible vanish (i.e., the factors multiplying $k$, $k^2$, $k^3$), we get
\begin{equation*}
\begin{pmatrix}
    1 & 1 & 1 \\
    -2 & -1 & 0 \\
    2 & \frac{1}{2} & 0
\end{pmatrix}
\begin{pmatrix}
    \beta^0\\
    \beta^1\\
    \beta^2
\end{pmatrix}
= \begin{pmatrix}
    1 \\
    \frac{1}{2} \\
    \frac{1}{6} 
\end{pmatrix},
\end{equation*}
whose solution is
\begin{equation*}
    \beta^0 = \frac{5}{12}, \qquad \beta^1 = - \frac{16}{12}, \qquad \beta^2 = \frac{23}{12},
\end{equation*}
yielding a method with $\delta Y = \mathcal{O}(k^4)$, i.e., a method of order $3$ (recall that the global order is one power of $k$ lower than the error after a single step). In general, an Adams-Bashforth method of $s$ steps has (global) order $s$.

In a more operational notation, we can write the result found as
\begin{equation}
    y^{n+1} = y^n + \frac{k}{12} \left( 23 f^n - 16 f^{n-1} + 5 f^{n-2} \right).
\end{equation}

Let us see the method we just derived in action. For this we consider the initial value problem
\begin{equation*}
    \dot y (t) = -y(t) + \sin(t), \qquad y(0)=1/2,
\end{equation*}
for $0 \le t \le 10$, whose analytic solution is $y(t) = e^{-t} + [\sin(t) - \cos(t)]/2$. We will use this analytic solution to deal with the initialization problem. 
This strategy will not be possible when the analytic solution is unknown, in which case the integration must be initialized with other methods, as mentioned above.


```python
import numpy as np
import matplotlib.pyplot as plt

dt = 2.5e-1               # Time step
y0 = 1/2                  # Initial condition
tf = 10                   # Final integration time
pasos = int(round(tf/dt)) # Number of steps

y = np.zeros( pasos+1 )   # Array to store the integration

# Set the first three values (initial condition + analytic solution)
y[0] = y0
y[1] = np.exp(-dt) + (np.sin(dt) - np.cos(dt) )/2
y[2] = np.exp(-2*dt) + (np.sin(2*dt)-np.cos(2*dt))/2

# Integrate using AB3
for n in range(2, pasos):
    tn  = n*dt      # t^n
    tn1 = (n-1)*dt  # t^{n-1}
    tn2 = (n-2)*dt  # t^{n-2}

    fn  = -y[n]   + np.sin(tn)   # f^n
    fn1 = -y[n-1] + np.sin(tn1)  # f^{n-1}
    fn2 = -y[n-2] + np.sin(tn2)  # f^{n-2}

    y[n+1] = y[n] + (23*fn - 16*fn1 + 5*fn2)*dt/12  # Explicit integration

# Plot
t = np.arange(0, y.size)*dt
fig, ax = plt.subplots(1, 1, figsize=(8,4), constrained_layout=True)
ax.plot(t, y, label=r"AB3 ($k=2.5 \times 10^{-1}$)", c="C1", lw=4)
ax.plot(t, np.exp(-t) + (np.sin(t)-np.cos(t))/2, "--k", label="Exact solution")
ax.legend()
ax.set_title(r"Integration of $\dot{y} = -y + \sin(t)$, $y(0)=1/2$")
ax.set_xlabel("$t$")
ax.set_ylabel("$y$");
```

##### **Summary of Adams-Bashforth methods**

Below is a table with the coefficients multiplying each evaluation of $f$ for the Adams-Bashforth methods up to order $4$.

| $\mathrm{Steps}$ | $\mathrm{Order}$ | $f^n$ | $f^{n-1}$ | $f^{n-2}$ | $f^{n-3}$ |
|------------------|------------------|----------------|----------------|----------------|---------------|
| $1$ | $1$ | $1$ | $0$ | $0$ | $0$ |
| $2$ | $2$ |$\frac{3}{2}$ | $-\frac{1}{2}$ | $0$ | $0$ |
| $3$ | $3$ |$\frac{23}{12}$ |$-\frac{16}{12}$| $\frac{5}{12}$ | $0$ |
| $4$ | $4$ |$\frac{55}{24}$ |$-\frac{59}{24}$| $\frac{37}{24}$|$-\frac{9}{24}$|

Note that for $s=1$ we recover the forward Euler method.

#### **Adams-Moulton**

The Adams-Moulton methods follow the same logic as Adams-Bashforth, but the restriction $\beta^s = 0$ is relaxed, therefore resulting in an implicit method. The appropriate coefficients are obtained in a completely analogous way, giving the following relation for the truncation error
\begin{equation}
    \delta Y = k \dot Y^{n+s-1} \left[ \beta^s + \beta^{s-1} + \beta^{s-2} + \ldots + \beta^0 - 1\right] + k^2 \ddot Y \left[ \beta^s - \beta^{s-2} - \ldots - (s-1) \beta^0 - \frac{1}{2} \right] + \dots,
\end{equation}
which naturally has one more degree of freedom, given by $\beta^s$, than the expression found for Adams-Bashforth.

##### **Deriving the coefficients for a 3-step method**

Analogously to what we did in the explicit case, let us compute the coefficients for the 3-step implicit (Adams-Moulton) method. This means finding $\beta^3$, $\beta^2$, $\beta^1$ and $\beta^0$. To do so, we write the truncation error up to the appropriate order
\begin{gather*}
    \delta Y = k \dot Y \left[ \beta^3 + \beta^2 + \beta^1 + \beta^0 - 1\right] + k^2 \ddot Y \left[ \beta^3 -\beta^1 - 2\beta^0 - \frac{1}{2} \right] + \\
    + k^3 \dddot Y \left[ \frac{\beta^3}{2} + \frac{\beta^1}{2} + \frac{2^2}{2} \beta^0 - \frac{1}{6} \right] + k^4 \ddddot Y \left[ \frac{\beta^3}{6} - \frac{\beta^1}{6} - \frac{2^3}{6} \beta^0 - \frac{1}{24} \right] + \mathcal{O}(k^5).
\end{gather*}
We again require that as many leading orders as possible vanish, that is, the factors multiplying $k$, $k^2$, $k^3$ and $k^4$. 
Note that, since we have one extra coefficient, we can cancel one more term than we did for Adams-Bashforth. We then have
\begin{equation*}
\begin{pmatrix}
    1 & 1 & 1 & 1 \\
    -2 & -1 & 0 & 1 \\
    2 & \frac{1}{2} & 0 & \frac{1}{2}\\
    -\frac{4}{3} & -\frac{1}{6} & 0 & \frac{1}{6}
\end{pmatrix}
\begin{pmatrix}
    \beta^0\\
    \beta^1\\
    \beta^2\\
    \beta^3
\end{pmatrix}
= \begin{pmatrix}
    1 \\
    \frac{1}{2} \\
    \frac{1}{6} \\
    \frac{1}{24} \\
\end{pmatrix},
\end{equation*}
whose solution is
\begin{equation*}
    \beta^0 = \frac{1}{24}, \qquad \beta^1 = - \frac{5}{24}, \qquad \beta^2 = \frac{19}{24}, \qquad \beta^3 = \frac{9}{24},
\end{equation*}
yielding a method with $\delta Y = \mathcal{O}(k^5)$, i.e., of global order $4$. In general, an Adams-Moulton method of $s$ steps has (global) order $s+1$.

In a more operational notation, we can write the result found as
\begin{equation}
    y^{n+1} = y^n + \frac{k}{24} \left( 9 f^{n+1} + 19 f^n - 5 f^{n-1} + f^{n-2} \right),
\end{equation}
which is an implicit expression for $y^{n+1}$.

As we did before, let us consider the initial value problem
\begin{equation*}
    \dot y (t) = -y(t) + \sin(t), \qquad \qquad y(0) = 1/2,
\end{equation*}
for $0 \le t \le 10$. As before, we will make use of the analytic solution $y(t) = e^{-t} + \left[\sin(t) - \cos(t) \right]/2$ to initialize the integration. 
Moreover, since this equation is linear, the implicitness of the method poses no difficulty, as $y^{n+1}$ can be isolated as follows
\begin{align*}
    y^{n+1} &= y^n + \frac{k}{24} \left[9\left(-y^{n+1} + \sin(t^{n+1})\right) + 19f^n - 5f^{n-1} + f^{n-2} \right]\\
    &= \left[y^n + \frac{k}{24} \left(9\ \sin(t^{n+1}) + 19f^n - 5f^{n-1} + f^{n-2} \right) \right] \frac{1}{1 + \dfrac{9}{24}k}.
\end{align*}

This does not generalize to nonlinear problems, where some iterative scheme must be used to find $y^{n+1}$.


```python
import numpy as np
import matplotlib.pyplot as plt

dt = 2.5e-1               # Time step
y0 = 1/2                  # Initial condition
tf = 10                   # Final integration time
pasos = int(round(tf/dt)) # Number of steps

y = np.zeros( pasos+1 )   # Array to store the integration

# Initial condition and first iterations (taken from the analytic solution)
y[0] = y0
y[1] = np.exp(-dt) + (np.sin(dt) - np.cos(dt) )/2
y[2] = np.exp(-2*dt) + (np.sin(2*dt)-np.cos(2*dt))/2

# Integrate using the 3-step Adams-Moulton method (AM3, order 4)
for n in range(2, pasos):
    ts  = (n+1)*dt  # t^{n+1} (next time)
    tn  = n*dt      # t^n
    tn1 = (n-1)*dt  # t^{n-1}
    tn2 = (n-2)*dt  # t^{n-2}

    fn  = -y[n]   + np.sin(tn)   # f^n
    fn1 = -y[n-1] + np.sin(tn1)  # f^{n-1}
    fn2 = -y[n-2] + np.sin(tn2)  # f^{n-2}

    y[n+1] = (y[n] + (9*np.sin(ts) + 19*fn - 5*fn1 + fn2)*dt/24)/(1+9/24*dt)

# Plot
t = np.arange(0, y.size)*dt
fig, ax = plt.subplots(1, 1, figsize=(8,4), constrained_layout=True)
ax.plot(t, y, label=r"AM3 ($k=2.5 \times 10^{-1}$)", c="C1", lw=4)
ax.plot(t, np.exp(-t) + (np.sin(t)-np.cos(t))/2, "--k", label="Exact solution")
ax.legend()
ax.set_title(r"Integration of $\dot{y} = -y + \sin(t)$, $y(0)=1/2$")
ax.set_xlabel("$t$")
ax.set_ylabel("$y$");
```

##### **Summary of Adams-Moulton methods**

| $\mathrm{Steps}$ | $\mathrm{Order}$ | $f^{n+1}$ | $f^n$ | $f^{n-1}$ | $f^{n-2}$ | $f^{n-3}$ |
|------------------|------------------|-----------------|-----------------|------------------|-----------------|-----------------|
| $1$ | $1$ | $1$ | $0$ | $0$ | $0$ | $0$ |
| $1$ | $2$ |$\frac{1}{2}$ |$\frac{1}{2}$ | $0$ | $0$ | $0$ |
| $2$ | $3$ |$\frac{5}{12}$ |$\frac{8}{12}$ |$-\frac{1}{12}$ | $0$ | $0$ |
| $3$ | $4$ |$\frac{9}{24}$ |$\frac{19}{24}$ |$-\frac{5}{24}$ |$\frac{1}{24}$ | $0$ |
| $4$ | $5$ |$\frac{251}{720}$|$\frac{646}{720}$|$-\frac{264}{720}$|$\frac{106}{720}$|$-\frac{19}{720}$|

We see that the 1-step case is degenerate, admitting two possible schemes. The first-order one is none other than backward Euler, while the second-order scheme is known as the trapezoidal rule.

#### **Other linear multistep methods**

Without going through the derivations, we can mention other popular families of linear multistep methods. One option, for example, is to look for centered temporal operators, of the form
\begin{equation*}
y^{n+s}-y^{n+s-2} = k\sum_{j=0}^s \beta^j f^{n+j}.
\end{equation*}
The explicit methods ($\beta^s=0$) associated with this choice are called _Nyström methods_, while the implicit ones are known as _Milne-Simpson_ methods; they have order $s$ and $s+1$, respectively. 
A Nyström method you saw in the lectures is the one known as _**leapfrog**_, given by
\begin{equation*}
    y^{n+1} = y^{n-1} + 2k f^n,
\end{equation*}
which, as seen in the lectures, exhibits a computational mode (that is, a _non-physical_ mode).

Additionally, and contrary to what the Adams methods propose, one can look for methods with $\beta^0 = \beta^1 = \ldots = \beta^{s-1} = 0$, tuning $\beta^s$ and $\boldsymbol \alpha$. These schemes are called **_backward differentiation formulas_** (BDF).

### **Predictor-corrector methods**

As mentioned earlier, predictor-corrector methods consist of using two time integrators and combining them. The most common technique combines an explicit method with an implicit one, as follows:

1. **Predictor step ($P$)**: We use an explicit method to obtain an estimate of $y^{n+1}$, which we call $y^{*n+1}$. For example, using forward Euler we have:
\begin{equation*}
    y^{*n+1} = y^n + kf^n.
\end{equation*}

2. **Evaluation step ($E$)**: From the estimate of $y^{n+1}$ obtained in the previous step, we compute $f(t^{n+1}, y^{*n+1})$, that is, $f^{*n+1}$.

3. **Corrector step ($C$)**: Using $f^{*n+1}$ in place of $f^{n+1}$, we evaluate an implicit scheme. Using backward Euler as an example, we obtain:
\begin{equation}
    y^{n+1} = y^n + kf^{*n+1} = y^n + kf(t^{n+1}, y^n + kf^n). \tag{Matsuno}
\end{equation}
The method we just found is known as the **_Matsuno method_** and is $\mathcal{O}(k)$. 
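The Matsuno step is straightforward to put into practice. As a minimal sketch (not part of the original exercises), we can apply it to the same test problem used above for Adams-Bashforth, $\dot y = -y + \sin(t)$, $y(0)=1/2$, and compare against the analytic solution:

```python
import numpy as np

dt = 2.5e-1                      # Time step
tf = 10                          # Final integration time
pasos = int(round(tf/dt))        # Number of steps

f = lambda t, y: -y + np.sin(t)  # Right-hand side of the ODE

y = np.zeros(pasos + 1)
y[0] = 1/2                       # Initial condition

# Matsuno: forward Euler predictor + backward Euler corrector
for n in range(pasos):
    tn = n*dt
    y_pred = y[n] + dt*f(tn, y[n])  # Predictor step (P)
    f_pred = f(tn + dt, y_pred)     # Evaluation step (E)
    y[n+1] = y[n] + dt*f_pred       # Corrector step (C)

# Maximum deviation from the analytic solution
t = np.arange(y.size)*dt
y_exact = np.exp(-t) + (np.sin(t) - np.cos(t))/2
print(np.max(np.abs(y - y_exact)))
```

Being first order, the error is noticeably larger than AB3's at the same step size.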
Analogously, we could have used the trapezoidal scheme (second-order Adams-Moulton) instead of backward Euler, obtaining
\begin{equation}
    y^{n+1} = y^n + \frac{k}{2} \left( f^{*n+1} + f^n \right) = y^n + \frac{k}{2}\left[f(t^{n+1}, y^n + kf^n) + f(t^n, y^n) \right], \tag{Heun}
\end{equation}
known as **_Heun's method_**, whose global order is $\mathcal{O}(k^2)$.

Note that in both cases we end up with explicit methods, derived from the combination of an explicit and an implicit one.

It is worth mentioning that this process can be carried out iteratively, using the $(\mathrm{Matsuno})$ and $(\mathrm{Heun})$ equations as new candidates $y^{*n+1}$, obtaining a new value of $f^{*n+1}$ and applying the respective method again to obtain $y^{n+1}$. If the correction is performed $c$ times, the resulting method is usually denoted $P(EC)^c$.

Although predictor-corrector methods can be studied more formally, we will not do so in this course. 
We will only mention that another common combination used to build predictor-corrector methods is the pairing of Adams-Bashforth (predictor) and Adams-Moulton (corrector) methods, known as _Adams-Bashforth-Moulton methods_.


### **Runge-Kutta methods**

Unlike multistep methods, which use values of $y$ and $f$ computed at previous steps (or at the next step if implicit), the idea of Runge-Kutta methods is to use multiple evaluations of $f$ between $t^n$ and $t^{n+1}$ (called _stages_) to generate a higher-order approximation.

A priori this might seem inefficient, since multistep methods reuse information we already have, whereas Runge-Kutta requires new evaluations that are used only for the current step. However, Runge-Kutta methods need less memory storage. They also have different stability characteristics from multistep methods, so they can be more appropriate for some ODEs.

Runge-Kutta methods are classified by their number of stages (just as multistep methods are by their number of steps). For instance, for a two-stage method we have
\begin{equation*}
    y^{n+1} = y^{n} + k \left( \alpha_1 f^n_1 + \alpha_2 f^n_2 \right),
\end{equation*}
where $f^n_1=f(t^n_1, y^n_1)$ and $f^n_2=f(t^n_2, y^n_2)$ are evaluations of $f$ between $t^n$ and $t^{n+1}$, that is, $t^n \le t^n_1 \le t^n_2 \le t^{n+1}$. The same idea applies to methods with more stages. 
_**Note that we will only cover explicit Runge-Kutta methods, as they are the most widely used**_.

In general, and unlike for multistep methods, for Runge-Kutta it is not enough to fix a number of stages $s$ and maximize the order of approximation to obtain a unique method. However, if we restrict ourselves to methods whose evaluations of $f$ are equispaced (although in some cases repeated), we obtain the following schemes:


* _**Second order**_:

\begin{align}
    y^{n+1} &= y^n + R_2, \tag{Midpoint} \\
    R_1 &= kf\left( t^n, y^n \right), \\
    R_2 &= kf\left( t^n + \frac{k}{2}, y^n + \frac{R_1}{2}\right)
\end{align}

\n\n\\begin{align}\n y^{n+1} &= y^n + \\frac{1}{2} \\left( R_1 + R_2 \\right), \\tag{Heun} \\\\\n R_1 &= kf\\left( t^n, y^n \\right), \\\\\n R_2 &= kf\\left( t^n + k, y^n + R_1\\right)\n\\end{align}\n\n* _**Tercer orden**_:\n\\begin{align}\n y^{n+1} &= y^n + \\frac{1}{4}\\left( R_1 + 3R_3 \\right), \\tag{RK3} \\\\\n R_1 &= kf\\left( t^n, y^n \\right), \\\\\n R_2 &= kf\\left( t^n + \\frac{k}{3}, y^n + \\frac{R_1}{3}\\right), \\\\\n R_3 &= kf\\left( t^n + 2\\frac{k}{3}, y^n + 2\\frac{R_2}{3} \\right).\n\\end{align}\n\n* _**Cuarto orden**_:\n\\begin{align}\n y^{n+1} &= y^n + \\frac{1}{6}\\left( R_1 + 2R_2 + 2R_3 + R_4\\right), \\tag{RK4} \\\\\n R_1 &= kf\\left( t^n, y^n \\right), \\\\\n R_2 &= kf\\left( t^n + \\frac{k}{2}, y^n + \\frac{R_1}{2}\\right), \\\\\n R_3 &= kf\\left( t^n + \\frac{k}{2}, y^n + \\frac{R_2}{2} \\right), \\\\\n R_4 &= kf\\left( t^n + k, y^n + R_3 \\right).\n\\end{align}\n\nVemos que tenemos dos m\u00e9todos de segundo orden. El primero de ellos llamado _**Euler mejorado**_ o _**Punto medio**_ (_midpoint_ en ingl\u00e9s), mientras que el otro es el ya familiar m\u00e9todo de Heun. Esto \u00faltimo muestra, como mencionamos anteriormente, que aunque no es del todo usual, es posible considerar a los m\u00e9todos de Runge-Kutta como esquemas predictores-correctores, solo que operando sobre instantes intermedios (i.e. 
stages) instead of between steps.

Additionally, we see that we once again recover the forward Euler method as the degenerate case of a 1-stage Runge-Kutta method.

As we did before, to illustrate the use of Runge-Kutta methods, let us integrate the initial value problem
\begin{equation*}
 \dot y(t) = -y(t) + \sin(t), \qquad \qquad y(0) = 1/2,
\end{equation*}
for $0\le t\le 10$ using a 3rd-order Runge-Kutta method.


```python
import numpy as np
import matplotlib.pyplot as plt

dt = 2.5e-1  # Time step
y0 = 1/2     # Initial condition
tf = 10      # Final integration time
pasos = int(round(tf/dt))  # Number of steps

# Array to store the integration
y = np.zeros( pasos+1 )
y[0] = y0

# Integrate using RK3
for n in range(0, pasos):
    t1 = n*dt
    R1 = dt*(-y[n] + np.sin(t1))             # First stage

    t2 = t1 + dt/3
    R2 = dt*(-(y[n] + R1/3) + np.sin(t2))    # Second stage

    t3 = t2 + dt/3
    R3 = dt*(-(y[n] + 2*R2/3) + np.sin(t3))  # Last stage

    y[n+1] = y[n] + (R1 + 3*R3)/4            # Combine the stages

# Plot
t = np.arange(0, y.size)*dt
fig, ax = plt.subplots(1, 1, figsize=(8,4), constrained_layout=True)
ax.plot(t, y, label=r"RK3 ($k=2.5 \times 10^{-1}$)", c="C1", lw=4)
ax.plot(t, np.exp(-t) + (np.sin(t)-np.cos(t))/2, "--k", label="Exact solution")
ax.legend()
ax.set_title(r"Integration of $\dot{y} = -y + \sin(t)$, $y(0)=1/2$")
ax.set_xlabel("$t$")
ax.set_ylabel("$y$");
```

### **Application to a higher-order ODE**


Consider now the following initial value problem
\begin{equation*}
 \ddot{y} + \frac{2}{t^2+1} (y - t\dot y) = \bigg( \cos(t) + t\sin(t) \bigg)\frac{2}{t^2+1} - \cos(t), \qquad \qquad y(0) = 2, \qquad \dot{y}(0) = 0.
\end{equation*}
This problem has the solution $y(t) = 1-t^2 + \cos(t)$.

Note that, defining $u_0 = y$, $u_1 = \dot{y}$, we can reduce this second-order ODE to a system of first-order ODEs as follows
\begin{align*}
 \dot{u}_0 &= u_1, \qquad \qquad & u_0(0) = 2,\\
 \dot{u}_1 &= -\frac{2}{t^2 + 1} (u_0 - tu_1) + \bigg( \cos(t) + t\sin(t) \bigg)\frac{2}{t^2+1} - \cos(t), & u_1(0) = 0.
\end{align*}
Let us solve this system using a 3rd-order Runge-Kutta method (RK3).


```python
import numpy as np
import matplotlib.pyplot as plt

dt = 1e-2  # Time step
tf = 2     # Final integration time
pasos = int(round(tf/dt))  # Number of steps

u = np.zeros( (pasos+1, 2) )  # Array for u_0 (u[:,0]) and u_1 (u[:,1])

# Initial conditions
u[0,0] = 2
u[0,1] = 0

# Variables to store the intermediate Runge-Kutta stages
R1 = np.zeros( 2 )
R2 = np.zeros( 2 )
R3 = np.zeros( 2 )

# Integrate with 3rd-order Runge-Kutta (RK3)
for n in range(0, pasos):
    # First stage
    t1 = n*dt
    R1[0] = dt*u[n,1]
    R1[1] = -2*dt/(t1**2+1)*(u[n,0] - t1*u[n,1]) \
            + dt*((np.cos(t1) + t1*np.sin(t1))*2/(t1**2+1) - np.cos(t1) )

    # Second stage
    t2 = t1 + dt/3
    R2[0] = dt*( u[n,1] + R1[1]/3 )
    R2[1] = -2*dt/(t2**2+1)*( u[n,0]+ R1[0]/3 - t2*(u[n,1]+R1[1]/3) ) \
            + dt*((np.cos(t2) + t2*np.sin(t2))*2/(t2**2+1) - np.cos(t2) )

    # Third stage
    t3 = t1 + 2*dt/3
    R3[0] = dt*( u[n,1]+2*R2[1]/3 )
    R3[1] = -2*dt/(t3**2+1)*( u[n,0]+ 2*R2[0]/3 - t3*(u[n,1]+2*R2[1]/3)) \
            + dt*((np.cos(t3) + t3*np.sin(t3))*2/(t3**2+1) - np.cos(t3) )


    # Combine the stages
    u[n+1] = u[n] + (R1 + 3*R3)*1/4


# Plot
t = np.arange(u.shape[0])*dt
fig, ax = plt.subplots(1, 1, figsize=(8,4), constrained_layout=True)
ax.plot(t, u[:,0], label="Numerical", lw=6, c="C1")
ax.plot(t, (1-t**2) + np.cos(t), "--k", label="Exact")
ax.legend()
ax.set_title(r"Integration of $\ddot y + \frac{2}{t^2+1}(y-t\dot{y})= " +
             r"(\cos(t)+t\sin(t))\frac{2}{t^2+1}-\cos(t)$" +
             "\n" + r"with $y(0) = 2$ and $\dot{y}(0) = 0$")
ax.set_xlabel("$t$")
ax.set_ylabel("$y$");
```

## **Stability**

### **Stability regions**

A very important question we have avoided discussing so far is the convergence of the numerical solution to the true solution.

We mentioned in the previous practical that one requirement for our scheme to be convergent is that the method be consistent. This means that, in the limit in which the grid becomes infinitely dense, the discrete operator applied to the true solution $Y$ must converge to the application of the continuous operator (we saw it this way in the previous practical; there are other equivalent definitions).

However, when it comes to the time integration, this does not guarantee that the numerical solution is convergent. If our numerical solution initially contains a small error $\epsilon$, that is, $y^0 = Y^0 + \epsilon$, what happens to $\epsilon$ as we integrate in time?

It is to answer this question that we analyze stability. We will only cover some general considerations about the stability of different schemes, and not every way of studying the stability of a time integrator, which is a broad topic.

One way to analyze stability is to ask whether the proposed temporal scheme remains bounded when applied to the equation
\begin{equation*}
\dot y = \lambda y,
\end{equation*}
where $\lambda \in \mathbb{C}$. Note the similarity with the Fourier analysis of errors you saw in the lectures (we are looking at the amplitude error instead of the phase error).
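To make the boundedness question concrete, here is a quick numerical sketch (not part of the original practical): forward Euler applied to $\dot y = \lambda y$ with $\lambda = -10$, for two step sizes. The exact solution decays, yet one choice of step blows up while the other stays bounded.

```python
# Forward Euler on dy/dt = lam*y: one step multiplies y by (1 + k*lam),
# so the discrete solution stays bounded only when |1 + k*lam| <= 1.
lam = -10.0  # Re(lambda) < 0: the exact solution exp(lam*t) decays

for k in (0.25, 0.05):
    y = 1.0
    for _ in range(int(round(2.0 / k))):  # integrate up to t = 2
        y = (1 + k*lam)*y
    print(f"k = {k}: |1 + k*lam| = {abs(1 + k*lam):.2f}, y(2) = {y:.3e}")
```

For $k=0.25$ the amplification factor has modulus $1.5$ and the numerical solution grows without bound; for $k=0.05$ it has modulus $0.5$ and the solution decays, as it should.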
Para $\\mathrm{Re}(\\lambda) < 0$, un esquema temporal apropiado deber\u00eda devolver una soluci\u00f3n acotada, sin embargo, veremos que esto no siempre es as\u00ed.\n\nEn general, para un dado esquema temporal, habr\u00e1 valores de $\\Delta t$ para los cuales la soluci\u00f3n discreta de esta ecuaci\u00f3n se mantenga acotada y otros para los cuales no. Estas ser\u00e1n las _**regiones de estabilidad**_ e inestabilidad del m\u00e9todo, respectivamente.\n\n### **Ejemplos de regiones de estabilidad**\n\n#### **Euler adelantado**\n\nComo ejemplo podemos estudiar el m\u00e9todo de Euler hacia adelante, y el an\u00e1lisis de estabilidad resulta\n\\begin{equation*}\n y^{n+1} = y^n + k \\lambda y^n = (1+k\\lambda)y^n.\n\\end{equation*}\nSi ahora lo escribimos en t\u00e9rminos de la condici\u00f3n inicial vemos que\n\\begin{equation*}\n y^n = (1+\\bar{\\lambda})^n y^0\n\\end{equation*}\ncon $\\bar \\lambda = k \\lambda$, que se mantiene acotado solo s\u00ed $|1 + \\bar{\\lambda}| < 1$. Utilizamos la variable $\\bar \\lambda$ ya que para cualquier esquema acabaremos con un factor $\\bar{\\lambda} = k \\lambda$ al discretizar la ecuaci\u00f3n.\n\nObtuvimos entonces que la _**regi\u00f3n de estabilidad**_ para el m\u00e9todo de Euler adelantado es la regi\u00f3n $|1+k\\lambda| < 1$ (i.e. 
un c\u00edrculo de radio $1$ en el plano complejo centrado en $z=-1$).\n\n#### **Trapezoidal**\n\nSi consideramos ahora el m\u00e9todo trapezoidal tenemos\n\\begin{equation*}\n y^{n+1} = y^n + \\frac{\\bar{\\lambda}}{2} (y^n + y^{n+1}) \\qquad \\Longrightarrow \\qquad y^{n+1}\\left(1 - \\frac{\\bar \\lambda}{2} \\right) = y^n \\left(1 + \\frac{\\bar \\lambda}{2} \\right)\n\\end{equation*}\ny por tanto\n\\begin{equation*}\n y^n = y_0 \\left[ \\dfrac{1 + \\dfrac{\\bar \\lambda}{2}}{1 - \\dfrac{\\bar \\lambda}{2}}\\right]^n.\n\\end{equation*}\nLa regi\u00f3n de estabilidad estar\u00e1 dada entonces por $|1+\\bar \\lambda/2| < |1 - \\bar \\lambda/2|$, que se satisface siempre que $\\mathrm{Re}(\\bar \\lambda) < 0$, es decir, el m\u00e9todo trapezoidal es _**incondicionalmente estable**_ (es estable para todo $k$).\n\n### **Rigidez de una EDO**\n\nLa secci\u00f3n previa refiri\u00f3 solo a la estabilidad de EDOs lineales. Sin embargo, para el caso no-lineal, dada una soluci\u00f3n a la ecuaci\u00f3n a tiempo $t^*$, $y^*$ (que obtuvimos, por ejemplo, num\u00e9ricamente), podemos linealizar la ecuaci\u00f3n alrededor de ($t^*, y^*)$ y hacer un estudio de estabilidad lineal en un entorno de $(t^*$, $y^*)$. Esta estrategia, que no utilizaremos en la materia, funciona aceptablemente en una gran cantidad de casos.\n\nLuego de esta discusi\u00f3n, queda de manifiesto que **la estabilidad al integrar una EDO depende del esquema temporal escogido y de la propia EDO**. De esta observaci\u00f3n surge el calificativo de **r\u00edgidas** (_stiff_ en ingl\u00e9s) para ecuaci\u00f3nes diferenciales que requieren pasos temporales muy peque\u00f1os para mantener estable su integraci\u00f3n. 
An alternative formulation of this concept is to consider stiff those differential equations that are extremely difficult to integrate with explicit methods, which, as we will see later, tend to have smaller stability regions.

Even though it usually refers to differential equations, and that is how we will use it in this course, the term stiff is more properly applied to the initial value problem as a whole. For example, an ODE may be more or less stable to integrate depending on its initial condition (i.e. on the particular solution), or on the interval where we are seeking a solution.

To see the concepts of stiffness and stability in action, consider the example
\begin{equation*}
 \dot y(t) = -100\left[y(t) - \cos(t) \right] - \sin(t), \qquad \qquad y(0) = 1,
\end{equation*}
for $0\le t \le 1$. This initial value problem has the solution $y(t) = \cos(t)$, for which the term in brackets vanishes.
Let us see what happens numerically.



```python
import numpy as np
import matplotlib.pyplot as plt

dt = 1e-1  # Time step
y0 = 1     # Initial condition

tf = 1     # Final integration time
pasos = int(round(tf/dt))  # Number of steps

# Initialize y for the two integrators to be used
y_ex1 = np.zeros( pasos+1 )
y_im = np.zeros( pasos+1 )

# Set the initial condition
y_ex1[0] = y0
y_im[0] = y0

# Integrate with forward (explicit) and backward (implicit) Euler
for n in range(0, pasos):
    t_ex = n*dt      # Current time
    t_im = (n+1)*dt  # Next time

    # Forward Euler
    y_ex1[n+1] = y_ex1[n] + dt*(-100*(y_ex1[n]-np.cos(t_ex)) - np.sin(t_ex))

    # Backward Euler; since the ODE is linear, y^{n+1} can be solved for
    y_im[n+1] = (y_im[n] + dt*(100*np.cos(t_im) - np.sin(t_im))) / (1+100*dt)

# Now try reducing the step by a factor of 10
dt2 = dt/10
pasos2 = int(round(tf/dt2))
y_ex2 = np.zeros( pasos2+1 )
y_ex2[0] = y0
for n in range(0, pasos2):
    t_ex = n*dt2
    # Forward Euler
    y_ex2[n+1] = y_ex2[n] + dt2*(-100*(y_ex2[n]-np.cos(t_ex)) - np.sin(t_ex))

# Plot
t1 = np.arange(0, y_ex1.size)*dt
t2 = np.arange(0, y_ex2.size)*dt2

fig, ax = plt.subplots(1, figsize=(8,4), constrained_layout=True)
ax.plot(t1, y_ex1, label=r"Forward Euler ($k=1\times10^{-1}$)")
ax.plot(t2, y_ex2, label=r"Forward Euler ($k=1\times10^{-2}$)", lw=16,
        alpha=.5)
ax.plot(t1, y_im, label=r"Backward Euler ($k=1\times10^{-1}$)", lw=8, alpha=0.8)
ax.plot(t1, np.cos(t1), '--k', label="Exact solution", lw=2)
ax.legend(frameon=False, loc="upper left")
ax.set_title(r"Integration of $\dot y = -100(y-\cos(t)) - \sin(t)$," +
             " y(0)=1")
ax.set_xlabel("$t$")
ax.set_ylabel("$y$")
ax.set_ylim(0, 2);
```

We see that the explicit method diverges for $k=1\times 10^{-1}$, while the implicit one remains bounded and moreover converges to the exact solution. This follows from what we saw in the previous section: linearizing the equation we have $\lambda = -100$, and $\bar \lambda = k \lambda = -10$ lies outside the stability region of the forward Euler method.

Alternatively, we can understand this phenomenon as follows. Note that, for a slightly different initial condition, the term in brackets dominates at the initial instants, converging on a time scale of $0.01$ ($1/100$) to a solution very similar to $\cos(t)$.
However, the explicit method fails to capture this time scale and ends up diverging.

That is why an equally valid formulation of stiffness, but a more interesting one for this course, is the following:

> **A stiff ODE is one that involves widely disparate scales.**

Note that, in our case, the problem posed by the proposed ODE is that we are interested in the behavior at times $\mathcal{O}(1)$ (the order of the period of $\cos(t)$); however, to obtain an acceptable solution of this low-frequency behavior, we must also resolve the $\mathcal{O}(0.01)$ scales. Thought of physically, that is why our explicit integrator runs into difficulties, requiring 10 times as many steps to converge to the correct solution.

I hope this also makes clearer the importance of implicit methods which, in the nonlinear case, may require considerably more work per iteration than explicit ones. In exchange, they generally remain stable with larger time steps. We will see throughout the rest of the course that the choice between an explicit and an implicit temporal scheme is problem-dependent.


### **Summary of stability regions**

Below are diagrams sketching the stability regions of some families of temporal schemes. In all cases the diagrams are in units of $\bar \lambda$ (note the differences in the limits of each panel).
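Such diagrams can also be reconstructed numerically. As a sketch (an illustration, not the original figure): for explicit Runge-Kutta methods of order $p \le 4$ the amplification factor for $\dot y = \lambda y$ is the truncated exponential $R_p(\bar\lambda) = \sum_{j=0}^{p} \bar\lambda^j/j!$, and the stability region is where $|R_p(\bar\lambda)| \le 1$.

```python
import numpy as np
import matplotlib.pyplot as plt
from math import factorial

# Grid on the complex plane of lambda_bar = k*lambda
re = np.linspace(-4, 2, 400)
im = np.linspace(-3.5, 3.5, 400)
Z = re[None, :] + 1j*im[:, None]

fig, ax = plt.subplots(figsize=(5, 5), constrained_layout=True)
for p in range(1, 5):
    # Amplification factor of the p-stage, order-p explicit RK method
    R = sum(Z**j / factorial(j) for j in range(p + 1))
    ax.contour(re, im, np.abs(R), levels=[1], colors=[f"C{p-1}"])
ax.axhline(0, c="k", lw=0.5)
ax.axvline(0, c="k", lw=0.5)
ax.set_title(r"Boundaries $|R_p(\bar\lambda)| = 1$ for RK1-RK4")
ax.set_xlabel(r"$\mathrm{Re}(\bar\lambda)$")
ax.set_ylabel(r"$\mathrm{Im}(\bar\lambda)$");
```

The $p=1$ contour is exactly the forward Euler circle $|1+\bar\lambda|=1$ derived above, and the nested curves show the regions growing with the order, as discussed next.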
Left out of this figure is the _leapfrog_ method, which is stable only in the region $\mathrm{Re}(\bar \lambda) = 0 \ \ \wedge \ \ |\mathrm{Im}(\bar \lambda)| \le 1$, making it appropriate only for problems of undamped oscillations.

For the explicit methods (Adams-Bashforth and Runge-Kutta) and for Adams-Moulton of order 3 and higher, the stability regions are the regions interior to each curve (shaded in the plots). For Adams-Moulton of order 1 the stability region is the exterior of the circle, while for order 2 it is the whole half-plane $\mathrm{Re}(\bar \lambda) \le 0$. In the case of backward differentiation, the stability regions are exterior to each of the curves.

Remember that the order-1 Runge-Kutta and Adams-Bashforth methods coincide, both being the forward Euler method. In turn, the Adams-Moulton schemes of orders 1 and 2 correspond to backward Euler and to the trapezoidal rule, respectively.

It is also worth noting that for the multistep methods the stability regions shrink as the order increases, while for Runge-Kutta (up to order 4) the opposite happens.


### **Relation between convergence, consistency and stability**

The importance of stability is summarized in the Dahlquist theorem$^\dagger$, which states that for a numerical scheme approximating an ODE
\begin{equation}
 \mathrm{Convergent} \qquad \Longleftrightarrow \qquad \mathrm{Consistent} \ \wedge \ \mathrm{Stable} \tag{Dahlquist's Th.}
\end{equation}

## **A capstone example: the Lane-Emden equation**

Consider now the following nonlinear initial value problem
\begin{equation*}
 t^2 \ddot{y} + 2t \dot{y} + t^2 y^\gamma = 0, \qquad \qquad y(0) = 1, \qquad \dot{y}(0) = 0.
\end{equation*}
This equation is known as the Lane-Emden equation and describes (in dimensionless form) the pressure and density as functions of radius for a self-gravitating fluid sphere in hydrostatic equilibrium, under the polytropic approximation.

Defining $u_0 = y$, $u_1 = \dot{y}$, we can reduce this second-order ODE to a system of first-order ODEs as follows
\begin{align*}
 \dot{u}_0 &= u_1, \qquad \qquad & u_0(0) = 1,\\
 \dot{u}_1 &= - u_0^\gamma - \frac{2u_1}{t}, & u_1(0) = 0.
\end{align*}
Note that when integrating we will run into a singularity when evaluating $\dot{u}_1$ at $t=0$. It can be shown that, for every value of $\gamma$, the solutions of the Lane-Emden equation satisfy $\ddot y (0) = - y(0)^\gamma/3$. We will take advantage of this result to preserve the order of the integrator. It is worth remarking that, had we not noticed this fact, we could still integrate the equation one way or another, but integrators of high temporal order would see their order of accuracy reduced.

We are going to solve this system using a 3rd-order (i.e. 2-step) Adams-Moulton method.
To do so, we will have to somehow obtain $\mathbf{u}^1$ (the solution at the first time step), and from there we can apply
\begin{equation*}
 \mathbf u^{n+1} = \mathbf u^n + \frac{k}{12} \left( 5 \mathbf f^{n+1} + 8 \mathbf f^n - \mathbf f^{n-1} \right).
\end{equation*}

This implicit equation for $\mathbf u^{n+1}$ (it appears inside $\mathbf f^{n+1}$ and explicitly on the left-hand side) will be solved with an iterative Newton-Krylov method (in particular, [LGMRES](https://epubs.siam.org/doi/10.1137/S0895479803422014)), which finds the roots of the function
\begin{equation*}
 \mathbf g(\mathbf x) = \mathbf x - \mathbf u^n - \frac{k}{12} \left( 5 \mathbf f(t^{n+1}, \mathbf x) + 8 \mathbf f^n - \mathbf f^{n-1} \right),
\end{equation*}
where $\mathbf f(t^{n+1}, \mathbf{x})$ is the Lane-Emden system evaluated at $(t^{n+1}, \mathbf x)$. It is worth remarking that you do not need to know how Newton-Krylov algorithms work; you only need to know that they find roots efficiently and that we can access one through a library preinstalled in Colab, SciPy, via [`scipy.optimize.newton_krylov`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.newton_krylov.html).

To obtain $\mathbf{u}^1$ we will use another method of the Adams-Moulton family, in particular a second-order one. For this we use the same root-finding idea mentioned above, but with the scheme
\begin{equation*}
 \mathbf u^1 = \mathbf u^0 + \frac{k}{2} (\mathbf f^1 + \mathbf f^0).
\end{equation*}

However, this method generates $\mathbf u^1$ with a lower order of approximation than that of the integrator we will use afterwards (2 and 3, respectively).
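Before using the root finder inside an integrator, here is a minimal, standalone use of `scipy.optimize.newton_krylov` on a toy problem (not part of the original practical): solving the fixed-point equation $x = \cos(x)$ componentwise.

```python
import numpy as np
import scipy.optimize as spoptimize

# Toy system: find the root of g(x) = x - cos(x), applied componentwise.
# The unique real fixed point of cos is x ~= 0.739085 (the Dottie number).
def g(x):
    return x - np.cos(x)

# newton_krylov takes the residual function and an initial guess
raiz = spoptimize.newton_krylov(g, np.array([1.0, 1.0]))
print(raiz)
```

Inside the integrator below, the only difference is that the residual function is the Adams-Moulton relation $\mathbf g(\mathbf x)$ defined above, and the initial guess is an explicit Euler prediction.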
To maintain the order of approximation we will have to use, during initialization, a time step $k'$ smaller than the step $k$ used afterwards to continue the integration. Concretely, each step of size $k$ is split into a number of substeps equal to the smallest integer greater than $k^{1-3/2}$ (3 and 2 come from the integration orders of the two methods), that is,
\begin{equation*}
 n' = \lceil k^{1-\frac{3}{2}} \rceil, \qquad \qquad k' = k/n',
\end{equation*}
with $\lceil \cdot \rceil$ the ceiling function.

The following code defines a function that lets us initialize the desired number of steps for the Lane-Emden equation. It can also be used to initialize methods of other orders.


```python
def inicializar(cond_inicial, gamma, dt, pasos, orden):
    """ Initializes Lane-Emden using 2nd-order Adams-Moulton (trapezoidal),
    with a substep chosen so as to keep the subsequent order of
    approximation consistent.
    Input:
        - cond_inicial: array of shape (2) with the initial conditions
              for u0 and u1.
        - gamma: polytropic index.
        - dt: time step of the method that will continue the integration;
        - pasos: number of steps to generate using AM2;
        - orden: order of the method that will continue the integration.
    Returns:
        - u: array of shape (pasos, 2) with the Lane-Emden equation
             integrated over the indicated number of steps."""
    import numpy as np
    import scipy.optimize as spoptimize

    # Number of initializer substeps per step of the higher-order integrator
    pasos_ini = dt**(1-orden/2)
    # pasos_ini will most likely not be an integer. Take the first integer
    # greater than pasos_ini and define the initializer's dt accordingly
    pasos_ini = int(np.ceil(pasos_ini))
    dt_ini = dt/pasos_ini

    # Array to store the integration; add the initial condition
    u_ini = np.zeros( (pasos_ini*pasos+1, 2) )
    u_ini[0] = cond_inicial

    fn = np.zeros( 2 )  # Stores f^n

    # Integrate using an implicit trapezoidal rule (AM2)
    for n in range(0, pasos_ini*pasos):
        ts = (n+1)*dt_ini  # t^{n+1}
        tn = n*dt_ini      # t^n

        fn[0] = u_ini[n,1]                            # f_1^n
        fn[1] = -u_ini[n,0]**gamma - 2*u_ini[n,1]/tn  # f_2^n

        # Handle the singularity of y'' at t=0.
        if n==0: fn[1] = -1/3

        # Initial guess for the root search for u^{n+1}, using Euler
        est = u_ini[n] + dt_ini*fn

        # Function whose roots we seek
        def f_raices(us):
            fs = np.array([ us[1], -us[0]**gamma - 2*us[1]/ts ])  # f^{n+1}
            return us - u_ini[n] - (fs + fn)*dt_ini/2             # AM2

        # Find the roots with Newton-Krylov (i.e. obtain u_ini^{n+1})
        u_ini[n+1] = spoptimize.newton_krylov(f_raices, est)

    # Return only the values of u at the original time step
    return u_ini[pasos_ini::pasos_ini]
```

**After running that cell**, you can run the following code to integrate the Lane-Emden equation with the proposed mechanism.


```python
# We will integrate Lane-Emden with 3rd-order (2-step) Adams-Moulton
import numpy as np
import scipy.optimize as spoptimize
import matplotlib.pyplot as plt

# The following two lines filter some warnings generated by the root-finding
# method. Do not use them in your own codes.
import warnings
warnings.filterwarnings('ignore')


# For some values of gamma the analytical solution is known; use it.
def sol_analitica(t, gamma):
    if gamma == 0:
        return np.array([ 1-t**2/6, -t/3 ])
    if gamma == 1:
        return np.array([ np.sin(t)/t, (t*np.cos(t) - np.sin(t))/t**2 ])
    if gamma == 5:
        return np.array([ 1/np.sqrt(1+t**2/3), -np.sqrt(3)*t/(t**2+3)**1.5 ])


tf = 10    # Final integration "time"
gamma = 5  # Polytropic index
analitica = gamma in [0, 1, 5]  # Check if an analytical solution exists
dts = np.array([ 1e-1, 1e-2, 1e-3 ])  # Time steps to explore

# If there is no analytical solution, pick a single dt to integrate.
if not analitica: dts = np.array([1e-3])

fn = np.zeros( 2 )   # Stores f^n
fn1 = np.zeros( 2 )  # Stores f^{n-1}

# If there is an analytical solution, store the error of each integration
if analitica: errs = np.zeros( (dts.size, 2) )

# Integrate for each dt
for i, dt in enumerate(dts):
    pasos = int(round(tf/dt))     # Number of steps
    u = np.zeros( (pasos+1, 2) )  # Array for u_0 (u[:,0]) and u_1 (u[:,1])

    # Initial conditions
    u[0,0] = 1
    u[0,1] = 0

    # Initialize the step that cannot be taken with AM3.
    u[1] = inicializar(u[0], gamma, dt, 1, 3)

    # Integrate with 3rd-order Adams-Moulton (AM3)
    for n in range(1, pasos):
        ts = (n+1)*dt   # t^{n+1}
        tn = n*dt       # t^n
        tn1 = (n-1)*dt  # t^{n-1}

        if n == 1: tn1 = 1e-50  # Avoid 0/0 when evaluating f at t=0.

        fn[0] = u[n,1]                        # f_1^n
        fn[1] = -u[n,0]**gamma - 2*u[n,1]/tn  # f_2^n

        fn1[0] = u[n-1,1]                           # f_1^{n-1}
        fn1[1] = -u[n-1,0]**gamma - 2*u[n-1,1]/tn1  # f_2^{n-1}

        # Handle the singularity of y'' at t=0
        if n==1: fn1[1] = -1/3

        # Initial guess for the root search for u^{n+1}, using Euler
        est = u[n] + dt*fn

        # Function whose roots we seek
        def f_raices(us):
            fs = np.array([ us[1], -us[0]**gamma - 2*us[1]/ts ])  # f^{n+1}
            return us - u[n] - ( 5*fs + 8*fn - fn1)*dt/12         # AM3

        # Find the roots with Newton-Krylov (i.e. obtain u^{n+1})
        u[n+1] = spoptimize.newton_krylov(f_raices, est)

    # If there is an analytical solution, compute the error
    if analitica: errs[i] = np.abs(u[n+1] - sol_analitica(ts, gamma))

# Plot
if analitica:
    fig, axs = plt.subplots(1, 2, figsize=(8,4), constrained_layout=True)
else:
    fig, ax = plt.subplots(1, 1, figsize=(4,4), constrained_layout=True)
    axs = [ax]

t = np.arange(u.shape[0])*dt
axs[0].plot(t, u[:,0], c="C1", label="Numerical", lw=6)
axs[0].set_xlabel("$t$")
axs[0].set_ylabel("$y$")
fig.suptitle(fr"Lane-Emden equation for $\gamma={gamma}$", fontsize=16)
axs[0].set_title("Solution $y(t)$")
if analitica:
    axs[0].plot(t, sol_analitica(t,gamma)[0], "--k", label="Exact")
    axs[1].loglog(1/dts, errs[:,0], 'x', label="$y$")
    axs[1].loglog(1/dts, errs[:,1], 'o', label=r"$\dot{y}$")
    axs[1].loglog(1/dts, 1e-1*dts**3, "--k", label=r"$\propto k^3$")
    axs[1].legend()
    axs[1].set_title(f"Errors in $y$ and $y'$ at $t={t[-1]:.1f}$")
    axs[1].set_xlabel("$1/k$")
    axs[1].set_ylabel("Error")
axs[0].legend();
```

As you can check for yourselves, the proposed code takes advantage of the fact that for $\gamma \in \{0, 1, 5 \}$ an analytical solution exists, and computes the integration error. You can see that for these values of $\gamma$ the order of accuracy of the method is the expected one (i.e. $\propto k^3$), except for $\gamma = 0$, where in every case it is close to the error associated with finite arithmetic precision.

It is finally worth remarking that the use of an implicit method in this example has a purely pedagogical purpose: to illustrate its use in a nonlinear case and to solve the initialization problem. For a mildly stiff equation such as this one, explicit algorithms like RK4 are more efficient.

## **Adaptive stepping**

One element we did not want to leave unmentioned, although we will not cover it in this course, is that the methods seen above can be reformulated, with more or less difficulty, to work with a variable $k$. In this way, when the ODE behaves more stiffly $k$ is reduced, while if the integration later becomes less stiff the step can be enlarged. Numerous automatic ODE solvers, such as `scipy.integrate.odeint`, implement this strategy.

However, this strategy is not always computationally optimal when knowledge of the underlying physical problem is available. For example, when integrating the Navier-Stokes equations, the range of time scales that must be resolved correctly is known a priori, and therefore fixed-$k$ strategies can be more appropriate.

## **References**:
- [_Finite Difference and Spectral Methods for Ordinary and Partial Differential Equations_; L. N. Trefethen (1996)](https://people.maths.ox.ac.uk/trefethen/pdetext.html).
- [_An Introduction to Numerical Modeling of the Atmosphere_; D. A. Randall](http://hogback.atmos.colostate.edu/group/dave/at604pdf/AT604_LaTeX_Book.pdf).


Probabilistic Programming and Bayesian Methods for Hackers
========

Welcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers).
The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!

#### Looking for a printed version of Bayesian Methods for Hackers?

_Bayesian Methods for Hackers_ is now a published book by Addison-Wesley, available on [Amazon](http://www.amazon.com/Bayesian-Methods-Hackers-Probabilistic-Addison-Wesley/dp/0133902838)!


Chapter 1
======
***

The Philosophy of Bayesian Inference
------

> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...

If you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives.


### The Bayesian state of mind


Bayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians.
The Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability.

To make this clearer, we consider an alternative interpretation of probability: the *frequentist* interpretation, known as the more *classical* version of statistics, assumes that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that, across all these realities, the frequency of occurrences defines the probability.

Bayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, in an event occurring. Simply put, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of the event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information.
Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you candidate *A* will win?

Notice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:

- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result.

- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug.

- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs.


This philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist.
\n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. 
This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. 
As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. 
Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\" )\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to } )\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. 
Bayesian inference merely uses it to connect prior probabilities $P(A)$ with updated posterior probabilities $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data? \n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).\n\n\n```python\n\"\"\"\nThe book uses a custom matplotlibrc file, which provides the unique styles for\nmatplotlib plots. If executing this book, and you wish to use the book's\nstyling, provided are two options:\n 1. Overwrite your own matplotlibrc file with the rc-file provided in the\n book's styles/ dir. See http://matplotlib.org/users/customizing.html\n 2. Also in the styles is bmh_matplotlibrc.json file. This can be used to\n update the styles in only this notebook. Try running the following code:\n\n import json, matplotlib\n s = json.load( open(\"../styles/bmh_matplotlibrc.json\") )\n matplotlib.rcParams.update(s)\n\n\"\"\"\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. 
LOOK AT PICTURE, MICHAEL!\n%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj. prior.\nfor k, N in enumerate(n_trials):\n sx = plt.subplot(len(n_trials) // 2, 2, k + 1)\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials) - 1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data, our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). 
As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
\n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2 * p / (1 + p), color=\"#348ABD\", lw=3)\n# plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2 * (0.2) / 1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Is my code bug-free?\")\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. \n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1. / 3, 2. 
/ 3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0 + 0.25, .7 + 0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.ylim(0,1)\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. Things like populations, movie ratings, and number of votes are all discrete random variables. 
Discrete random variables become clearer when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. it is a combination of the above two categories. \n\n#### Expected Value\nExpected value (EV) is one of the most important concepts in probability. The EV for a given probability distribution can be described as \"the mean value in the long run for many repeated samples from that distribution.\" To borrow a metaphor from physics, a distribution's EV is like its \"center of mass.\" Imagine repeating the same experiment many times over, and taking the average over each outcome. The more you repeat the experiment, the closer this average will become to the distribution's EV. (Side note: as the number of repeated experiments goes to infinity, the difference between the average outcome and the EV becomes arbitrarily small.)\n\n### Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots $$\n\n$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\\lambda$ can be any positive number. 
By increasing $\\lambda$, we add more probability to larger values, and conversely by decreasing $\\lambda$ we add more probability to smaller values. One can describe $\\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. \n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. 
They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %g$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %g$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\")\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of a continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. 
\n\nWhen a random variable $Z$ has an exponential distribution with parameter $\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \sim \text{Exp}(\lambda)$$\n\nGiven a specific $\lambda$, the expected value of an exponential random variable is equal to the inverse of $\lambda$, that is:\n\n$$E[\; Z \;|\; \lambda \;] = \frac{1}{\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1. / l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1. / l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0, 1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\lambda \;$?\n\n\n**This question is what motivates statistics**. In the real world, $\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\lambda$. Many different methods have been created to solve the problem of estimating $\lambda$, but since $\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\lambda$ might be. Rather than try to guess $\lambda$ exactly, we can only talk about what $\lambda$ is likely to be by assigning a probability distribution to $\lambda$.\n\nThis might seem odd at first. After all, $\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\lambda$. 
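The two expected-value identities above, $E[Z|\lambda]=\lambda$ for the Poisson and $E[Z|\lambda]=1/\lambda$ for the exponential, are easy to sanity-check numerically. A minimal sketch (an editorial addition, not part of the original analysis), assuming only `scipy.stats`:

```python
# Sanity-check the expected-value identities by sampling.
import scipy.stats as stats

lam = 4.25
poisson_draws = stats.poisson.rvs(lam, size=200000, random_state=42)
print(poisson_draws.mean())   # should be close to lam = 4.25

lam = 0.5
# scipy parameterizes the exponential by scale = 1/lambda.
expo_draws = stats.expon.rvs(scale=1.0 / lam, size=200000, random_state=42)
print(expo_draws.mean())      # should be close to 1/lam = 2.0
```

With 200,000 draws the sample means land within a few thousandths of the theoretical values.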
\n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. 
So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. 
Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC\n-----\n\nPyMC is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why PyMC is so cool.\n\nWe will model the problem above using PyMC. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC framework. \n\nB. 
Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC code is easy to read. The only novel thing should be the syntax, and I will interrupt the code to explain individual sections. Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables:\n\n\n```python\nimport pymc as pm\n\nalpha = 1.0 / count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\nlambda_1 = pm.Exponential(\"lambda_1\", alpha)\nlambda_2 = pm.Exponential(\"lambda_2\", alpha)\n\ntau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data)\n```\n\nIn the code above, we create the PyMC variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC's *stochastic variables*, so-called because they are treated by the back end as random number generators. 
We can demonstrate this fact by calling their built-in `random()` methods.\n\n\n```python\nprint \"Random output:\", tau.random(), tau.random(), tau.random()\n```\n\n Random output: 39 10 32\n\n\n\n```python\n@pm.deterministic\ndef lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):\n out = np.zeros(n_count_data)\n out[:tau] = lambda_1 # lambda before tau is lambda1\n out[tau:] = lambda_2 # lambda after (and including) tau is lambda2\n return out\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n`@pm.deterministic` is a decorator that tells PyMC this is a deterministic function. That is, if the arguments were deterministic (which they are not), the output would be deterministic as well. Deterministic functions will be covered in Chapter 2. \n\n\n```python\nobservation = pm.Poisson(\"obs\", lambda_, value=count_data, observed=True)\n\nmodel = pm.Model([observation, lambda_1, lambda_2, tau])\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `value` keyword. We also set `observed = True` to tell PyMC that this should stay fixed in our analysis. Finally, PyMC wants us to collect all the variables of interest and create a `Model` instance out of them. This makes our life easier when we retrieve the results.\n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. 
We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```python\n# Mysterious code to be explained in Chapter 3.\nmcmc = pm.MCMC(model)\nmcmc.sample(40000, 10000, 1)\n```\n\n [****************100%******************] 40000 of 40000 complete\n\n\n\n```python\nlambda_1_samples = mcmc.trace('lambda_1')[:]\nlambda_2_samples = mcmc.trace('lambda_2')[:]\ntau_samples = mcmc.trace('tau')[:]\n```\n\n\n```python\nfigsize(12.5, 10)\n# histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data) - 20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. 
We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. 
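As a small, concrete preview of why posterior samples are useful, here is a minimal sketch (NumPy only; the gamma draws below are synthetic stand-ins of my own for the real `lambda_1_samples` and `lambda_2_samples` traces, which exist only after the MCMC step has run):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the MCMC traces of lambda_1 and lambda_2.
lambda_1_samples = rng.gamma(shape=18.0, scale=1.0, size=30000)
lambda_2_samples = rng.gamma(shape=23.0, scale=1.0, size=30000)

# Point summaries are just sample averages ...
print("posterior mean of lambda_1:", lambda_1_samples.mean())
print("posterior mean of lambda_2:", lambda_2_samples.mean())

# ... and probabilities of events are just frequencies over the samples.
print("P(lambda_1 < lambda_2):", (lambda_1_samples < lambda_2_samples).mean())
```

With the real traces, the same one-liners answer questions like "did the rate really increase?" without any additional mathematics.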
For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. \n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of 
text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\n# type your code here.\n```\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```python\n# type your code here.\n```\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45. That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\n# type your code here.\n```\n\n### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg/).\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Patil, A., D. Huard and C.J. Fonnesbeck. 2010. 
\nPyMC: Bayesian Stochastic Modelling in Python. Journal of Statistical \nSoftware, 35(4), pp. 1-81. \n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. .\n\n\n```python\nfrom IPython.core.display import HTML\n\n\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\n\n\n\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "f1937edaf0e3896fda264043258d28da9eca6a6c", "size": 347818, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chapter1_Introduction/Chapter1.ipynb", "max_stars_repo_name": "tomkimpson/BayesianMethodsForHackers", "max_stars_repo_head_hexsha": "597f2f24810e0376835ccbe4a13f7e133204f2da", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter1_Introduction/Chapter1.ipynb", "max_issues_repo_name": "tomkimpson/BayesianMethodsForHackers", "max_issues_repo_head_hexsha": "597f2f24810e0376835ccbe4a13f7e133204f2da", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter1_Introduction/Chapter1.ipynb", "max_forks_repo_name": "tomkimpson/BayesianMethodsForHackers", "max_forks_repo_head_hexsha": "597f2f24810e0376835ccbe4a13f7e133204f2da", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2017-01-24T02:59:37.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-03T06:25:03.000Z", "avg_line_length": 309.9982174688, "max_line_length": 90684, 
"alphanum_fraction": 0.8970898573, "converted": true, "num_tokens": 11653, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4263215925474903, "lm_q2_score": 0.2814055953761018, "lm_q1q2_score": 0.11996928157251441}} {"text": "\n```python\n# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)\n\n# Toggle cell visibility\n\nfrom IPython.display import HTML\ntag = HTML('''\nPromijeni vidljivost ovdje.''')\ndisplay(tag)\n\n# Hide the code completely\n\n# from IPython.display import HTML\n# tag = HTML('''''')\n# display(tag)\n```\n\n\n\nPromijeni vidljivost ovdje.\n\n\n\n```python\n%matplotlib notebook\n\nimport numpy as np\nimport control as control\nimport matplotlib.pyplot as plt\nimport ipywidgets as widgets\nimport sympy as sym\n# from IPython.display import Markdown # For displaying Markdown and LaTeX code\n\nsym.init_printing()\ncontinuous_update=False\n```\n\n## Steady-State Error\n\nGiven the input transfer function $I(s)$ and the open-loop transfer function $G(s)$, the steady-state error $e(\\infty)$ of the closed-loop system can, in the case of unity feedback, be determined as:\n\n\\begin{equation}\n    e(\\infty)=\\lim_{s\\to0}\\frac{sI(s)}{1+G(s)}.\n\\end{equation}\n\nFor a unit step input $I(s)=\\frac{1}{s}$ this gives:\n\n\\begin{equation}\n    e_{step}(\\infty)=\\frac{1}{1+\\lim_{s\\to0}G(s)},\n\\end{equation}\n\nfor a ramp input $I(s)=\\frac{1}{s^2}$:\n\n\\begin{equation}\n    e_{ramp}(\\infty)=\\frac{1}{\\lim_{s\\to0}sG(s)},\n\\end{equation}\n\nand for a parabolic input $I(s)=\\frac{1}{s^3}$:\n\n\\begin{equation}\n    e_{parabolic}(\\infty)=\\frac{1}{\\lim_{s\\to0}s^2G(s)}\n\\end{equation}\n\n\n### Systems without integration\n\nAn example transfer function $G(s)$ of a system without integration can be defined as:\n\n\\begin{equation}\n    G(s) = \\frac{K}{as^2 + bs + c}\n\\end{equation}\n\nThe steady-state error of a system with no integration in the forward path is infinite for ramp and parabolic inputs.\n\n### Systems with one integration\n\nAn example transfer function $G(s)$ of a system with one integration can be defined as:\n\n\\begin{equation}\n    G(s) = \\frac{K(as^2 + bs + c)}{s(ds^2 + es + f)}\n\\end{equation}\n\nThe steady-state error of a system with one integration in the forward path is infinite for a parabolic input.\n\n---\n\n### How to use this interactive example?\n\n- Choose between the system without integration and the system with one integration.\n- Move the sliders to change the values of $a$, $b$, $c$ (the transfer function coefficients) and $K$ (the gain).\n\n\n```python\nstyle = {'description_width': 'initial'}\n\nlayout1 = widgets.Layout(width='auto', height='auto') #set width and height\n\nsystemSelect = widgets.ToggleButtons(\n    options=[('bez integracije', 0), ('jedna integracija', 1)],\n    description='Odaberi sustav: ',style=style)\nfunctionSelect = widgets.ToggleButtons(\n    options=[('jedini\u010dna step 
funkcija', 0), ('rampa funkcija', 1), ('paraboli\u010dna funkcija', 2)],\n description='Odaberi ulaznu funkciju: ',style=style)\n\nfig=plt.figure(num='Pogre\u0161ka stacionarnog stanja')\nfig.set_size_inches((9.8,3))\nfig.set_tight_layout(True)\nf1 = fig.add_subplot(1, 1, 1)\n\nf1.grid(which='both', axis='both', color='lightgray')\n\nf1.set_ylabel('ulaz, izlaz')\nf1.set_xlabel('$t$ [s]')\n\ninputf, = f1.plot([],[])\nresponsef, = f1.plot([],[])\nerrorf, = f1.plot([],[])\n\nann1=f1.annotate(\"\", xy=([0], [0]), xytext=([0], [0]))\nann2=f1.annotate(\"\", xy=([0], [0]), xytext=([0], [0]))\n\ndisplay(systemSelect)\ndisplay(functionSelect)\n\ndef create_draw_functions(K,a,b,c,index_system,index_input):\n \n num_of_samples = 1000\n total_time = 150\n t = np.linspace(0, total_time, num_of_samples) # time for which response is calculated (start, stop, step)\n \n if index_system == 0:\n \n Wsys = control.tf([K], [a, b, c])\n ess, G_s, s, n = sym.symbols('e_{step}(\\infty), G(s), s, n')\n sys1 = control.feedback(Wsys)\n \n elif index_system == 1:\n \n Wsys = control.tf([K,K,K*a], [1, b, c, 0])\n ess, G_s, s, n = sym.symbols('e_{step}(\\infty), G(s), s, n')\n sys1 = control.feedback(Wsys) \n \n global inputf, responsef, ann1, ann2\n \n if index_input==0:\n infunction = np.ones(len(t))\n infunction[0]=0\n tout, yout = control.step_response(sys1,t)\n s=sym.Symbol('s')\n if index_system == 0:\n limit_val = sym.limit((K/(a*s**2+b*s+c)),s,0)\n elif index_system == 1:\n limit_val = sym.limit((K*s*s+K*s+K*a)/(s*s*s+b*s*s+c*s),s,0)\n e_inf=1/(1+limit_val)\n \n elif index_input==1:\n infunction=t;\n tout, yout, xx = control.forced_response(sys1, t, infunction)\n if index_system == 0:\n limit_val = sym.limit(s*(K/(a*s**2+b*s+c)),s,0) \n elif index_system == 1:\n limit_val = sym.limit(s*((K*s*s+K*s+K*a)/(s*s*s+b*s*s+c*s)),s,0)\n e_inf=1/limit_val\n \n elif index_input==2:\n infunction=t*t\n tout, yout, xx = control.forced_response(sys1, t, infunction)\n if index_system == 0:\n limit_val 
= sym.limit(s*s*(K/(a*s**2+b*s+c)),s,0)\n elif index_system == 1:\n limit_val = sym.limit(s*s*((K*s*s+K*s+K*a)/(s*s*s+b*s*s+c*s)),s,0)\n e_inf=1/limit_val\n \n ann1.remove()\n ann2.remove() \n \n if type(e_inf) == sym.numbers.ComplexInfinity:\n print('Pogre\u0161ka stacionarnog stanja je beskona\u010dna.')\n elif e_inf==0:\n print('Pogre\u0161ka stacionarnog stanja je nula.')\n else:\n print('Pogre\u0161ka stacionarnog stanja je jednaka %f.'% (e_inf,)) \n \n# if type(e_inf) == sym.numbers.ComplexInfinity:\n# display(Markdown('Steady-state error is infinite.'))\n# elif e_inf==0:\n# display(Markdown('Steady-state error is zero.'))\n# else:\n# display(Markdown('Steady-state error is equal to %f.'%(e_inf,)))\n\n \n if type(e_inf) != sym.numbers.ComplexInfinity and e_inf>0: \n ann1=plt.annotate(\"\", xy=(tout[-60],infunction[-60]), xytext=(tout[-60],yout[-60]), arrowprops=dict(arrowstyle=\"|-|\", connectionstyle=\"arc3\"))\n ann2=plt.annotate(\"$e(\\infty)$\", xy=(145, 1.), xytext=(145, (yout[-60]+(infunction[-60]-yout[-60])/2)))\n elif type(e_inf) == sym.numbers.ComplexInfinity:\n ann1=plt.annotate(\"\", xy=(0,0), xytext=(0,0), arrowprops=dict(arrowstyle=\"|-|\", connectionstyle=\"arc3\"))\n ann2=plt.annotate(\"\", xy=(134, 1.), xytext=(134, (1 - infunction[-10])/2 + infunction[-10]))\n elif type(e_inf) != sym.numbers.ComplexInfinity and e_inf==0: \n ann1=plt.annotate(\"\", xy=(0,0), xytext=(0,0), arrowprops=dict(arrowstyle=\"|-|\", connectionstyle=\"arc3\"))\n ann2=plt.annotate(\"\", xy=(134, 1.), xytext=(134, (1 - yout[-10])/2 + yout[-10]))\n \n f1.lines.remove(inputf)\n f1.lines.remove(responsef)\n \n inputf, = f1.plot(t,infunction,label='ulaz',color='C0')\n responsef, = f1.plot(tout,yout,label='izlaz',color='C1')\n \n f1.relim()\n f1.autoscale_view()\n \n 
f1.legend()\n\nK_slider=widgets.IntSlider(min=1,max=8,step=1,value=1,description='$K$',continuous_update=False)\na_slider=widgets.IntSlider(min=0,max=8,step=1,value=1,description='$a$',continuous_update=False)\nb_slider=widgets.IntSlider(min=0,max=8,step=1,value=1,description='$b$',continuous_update=False)\nc_slider=widgets.IntSlider(min=1,max=8,step=1,value=1,description='$c$',continuous_update=False)\n\ninput_data=widgets.interactive_output(create_draw_functions,\n {'K':K_slider,'a':a_slider,'b':b_slider,'c':c_slider,\n 'index_system':systemSelect,'index_input':functionSelect})\n\ndef update_sliders(index):\n global K_slider, a_slider, b_slider, c_slider\n \n Kval=[1, 1, 1]\n aval=[1, 1, 1]\n bval=[2, 2, 2]\n cval=[6, 6, 6]\n \n K_slider.value=Kval[index]\n a_slider.value=aval[index]\n b_slider.value=bval[index]\n c_slider.value=cval[index]\n \ninput_data2=widgets.interactive_output(update_sliders,\n {'index':functionSelect})\n\n\ndisplay(K_slider,a_slider,b_slider,c_slider,input_data)\n```\n\n\n \n\n\n\n\n\n\n\n ToggleButtons(description='Odaberi sustav: ', options=(('bez integracije', 0), ('jedna integracija', 1)), styl\u2026\n\n\n\n ToggleButtons(description='Odaberi ulaznu funkciju: ', options=(('jedini\u010dna step funkcija', 0), ('rampa funkci\u2026\n\n\n\n IntSlider(value=1, continuous_update=False, description='$K$', max=8, min=1)\n\n\n\n IntSlider(value=1, continuous_update=False, description='$a$', max=8)\n\n\n\n IntSlider(value=2, continuous_update=False, description='$b$', max=8)\n\n\n\n IntSlider(value=6, continuous_update=False, description='$c$', max=8, min=1)\n\n\n\n Output()\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "6c22d7fa4ae3f8ecdc34452aa94948a56e712404", "size": 83421, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ICCT_hr/examples/02/TD-17-Pogreska_stacionarnog_stanja.ipynb", "max_stars_repo_name": "ICCTerasmus/ICCT", "max_stars_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", 
"max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-05-22T18:42:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-03T14:10:22.000Z", "max_issues_repo_path": "ICCT_hr/examples/02/.ipynb_checkpoints/TD-17-Pogreska_stacionarnog_stanja-checkpoint.ipynb", "max_issues_repo_name": "ICCTerasmus/ICCT", "max_issues_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICCT_hr/examples/02/.ipynb_checkpoints/TD-17-Pogreska_stacionarnog_stanja-checkpoint.ipynb", "max_forks_repo_name": "ICCTerasmus/ICCT", "max_forks_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-24T11:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-29T16:36:18.000Z", "avg_line_length": 68.3778688525, "max_line_length": 33843, "alphanum_fraction": 0.6764244015, "converted": true, "num_tokens": 2758, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.4493926344647597, "lm_q2_score": 0.2658804614657029, "lm_q1q2_score": 0.11948472103077824}} {"text": "```python\n\n```\n\n\n```python\n# Environment Check -- Deactivate on a working host\nimport sys\nprint(sys.executable)\nprint(sys.version)\nprint(sys.version_info)\n```\n\n /opt/jupyterhub/bin/python3\n 3.8.2 (default, Apr 27 2020, 15:53:34) \n [GCC 9.3.0]\n sys.version_info(major=3, minor=8, micro=2, releaselevel='final', serial=0)\n\n\n# Nature Inspired Portal Design\n\nThis notebook examines the design of a portal (like a doorway) using Biological Inspired Design principles from ENGR 1320 (we will examine geological inspiration too), and Computational Thinking from ENGR 1330.\n\n\n## Design Process\nA design flowchart for a civil engineering project is shown below\n\n\n\nThe flowchart identifies four major phases represented by rectangular blocks, with information and decision flows.\nThe diagram depicts decision flow as a feed-forward process, whereas the information is a feed-back process, the result of a decision in an attempt to move the design from planning to operation creates information requirements that feed backward, and may influence the decision. 
\n\n### Planning \nThe planning phase identifies a specific plan from a general idea.\n\n### Design\nThe design phase identifies within the limits of the specific plan a complete detailed design; dimensions, materials selection, sub-system sequencing, and such.\n\n### Construction\nThe construction phase is carried out within the limits defined by the detailed design to create a complete physical project; a building, vehicle, report, computer program, or such -- in Civil Engineering the result is usually something in the built environment, although process control software (SCADA for instance) also fits within this process flow, as does a drainage master plan; the point here is construction actually builds the project into some tangible object.\n\n### Operation\nThe operation phase is the actual use of the object along with meaningful recordkeeping to assess the success or failure of the project. The operation phase is limited by what the three prior phases have provided.\n\nThis process diagram is a useful roadmap for an engineering design, now onward with the example\n\n## Problem Layout\n\nThe portal frame below is to be designed in steel (a material selection that can be examined further on) on a rigid-plastic basis to have a factor of safety of 2.0 for the loading condition shown.\n\n\n\nThe two columns are of identical section and the beam may have a different section.\n\nIn classical design literature the design process at this point is stated something along the lines of\n\n\"Because member lengths are known, and the material is specified, the design process consists of selecting the appropriate member cross section. As a rigid-plastic design is required, an appropriate choice for a measure of the size of a member is its fully plastic moment. 
The designer has to make decisions upon the fully plastic moment of the beam member, to which a variable $M_1$ will be assigned, and the column members to which variable $M_2$ is assigned.\"\n\nWe will proceed with this analysis process initially to frame a design approach, then appeal to nature-inspired design for some additional guidance if we relax the material choice, and expand our concept of a material cross section to a composite (e.g. a truss is a macroscopic composite of air and steel) section.\n\nSome principles from `statics` we will apply in what follows include:\n\n - kinematics (failure mode geometries) \n - rigid joint (moment transmission) vs pin connection \n - virtual work \n\n\n\n\n#### Failure Modes\nLimits on the values the two moments can take are governed by the factor of safety requirement of 2.0 against collapse under the given loading. The designer wants to ensure that in a possible collapse mode of the frame, the work done on the frame by the factored applied loads does not exceed the energy capacity of the plastic deformations (rotations at plastic hinges -- the corner junctions, possibly welds, and the support connections -- also possibly welds) of the frame.\n\nThe figure below shows six possible collapse mechanisms of the frame and the energy-balance requirement associated with each kinematic mechanism. There are three general failure mechanisms: a beam failure (a and b), a sway failure (c and d), and combined failure (e and f). 
There are two mechanisms for each general failure type because until the sections are specified we do not know if the beam is weaker than the column ($M_1 < M_2$) or the opposite.\n\n\n\nThe hinge failures at junctions B and C of the frame will always occur in the weaker member (the junctions here are rigid, not pins, hence will transmit moments) at the joints because less energy is needed to fail the weaker member.\n\nWith each of the six failure modes, there is a relationship between the work done by the factored loads and the energy absorbed by the deformations. \n\n\n\n\n\n\n##### Virtual Work Mode A\n\nAll the work is done by the 40kN load (20kN actual x 2.0 factor of safety), because the beam is the weaker member. The columns do not deform, thus the horizontal loading does no work. The work is\n\n\\begin{equation}\nW_A = F \\cdot d = 2 \\times 20 kN \\times 3 m \\times \\theta\n\\end{equation}\n\n##### Virtual Work Mode B\n\nAll the work is done by the 40kN load (20kN actual x 2.0 factor of safety), because the beam is the weaker member. The columns do not deform, thus the horizontal loading does no work. The work is\n\n\\begin{equation}\nW_B = F \\cdot d = 2 \\times 20 kN \\times 3 m \\times \\theta\n\\end{equation}\n\nThus virtual work applied for mode B is $120 \\theta$ kN-m. \n\nThis work is absorbed in the 4 hinges formed - two at the center of the beam, each with a bend angle of $\\theta$ and two at the beam-column connection also with value $\\theta$, here the columns are bending at their tops.\n\nThe column bends are $ 2 \\times M_2 \\times \\theta $ kN-m, the beam bend(s) are $ 2 \\times M_1 \\times \\theta $ where $M_1$ and $M_2$ are the maximum bending moments in the beam and columns respectively. \n\nFor the frame to survive the load the work absorbed by the bends must exceed the applied work or\n\n\\begin{equation}\n2 M_2 \\theta + 2 M_1 \\theta >= 120 \\theta\n\\end{equation}\n\n\n\n\n\n```python\n
\n\nFor the frame to survive the load the work absorbed by the bends must exceede the applied work or\n\n\\begin{equation}\n2 M_2 \\theta + 2 M_1 \\theta >= 120 \\theta\n\\end{equation}\n\n\n\n\n\n```python\ncolumn_height = 3.0\nbeam_length = 6.0\nload_vertical = 20.0\nload_horizontal = 10.0\nfactor_safety = 2.0\n#work case b\nangle = 1\nwork_b = factor_safety * load_vertical * (beam_length/2.0) * angle\nwork_b\n```\n\n\n\n\n 120.0\n\n\n\n\n\n##### Virtual Work Mode C\n\nAll the work is done by the 40kN load (20kN actual x 2.0 factor of safety), because the beam is the weaker member. The columns do not deform, thus the horizontal loading does no work. The work is\n\n\\begin{equation}\nW_B = F \\cdot d = 2 \\times 20 kN \\times 3 m \\times \\theta\n\\end{equation}\n\n\n```python\n\n```\n\n##### Virtual Work Mode D\n\nAll the work is done by the 40kN load (20kN actual x 2.0 factor of safety), because the beam is the weaker member. The columns do not deform, thus the horizontal loading does no work. The work is\n\n\\begin{equation}\nW_B = F \\cdot d = 2 \\times 20 kN \\times 3 m \\times \\theta\n\\end{equation}\n\n##### Virtual Work Mode E\n\nAll the work is done by the 40kN load (20kN actual x 2.0 factor of safety), because the beam is the weaker member. The columns do not deform, thus the horizontal loading does no work. 
The work is\n\n\\begin{equation}\nW_B = F \\cdot d = 2 \\times 20 kN \\times 3 m \\times \\theta\n\\end{equation}\n", "meta": {"hexsha": "7d5c45865b8861ca041cd4a39b9378580f524939", "size": 10341, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "9-MyJupyterNotebooks/51-NatureInspiredPortalDesign/.ipynb_checkpoints/51-NaturalInspiredDesign-checkpoint.ipynb", "max_stars_repo_name": "dustykat/engr-1330-psuedo-course", "max_stars_repo_head_hexsha": "3e7e31a32a1896fcb1fd82b573daa5248e465a36", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "9-MyJupyterNotebooks/51-NatureInspiredPortalDesign/.ipynb_checkpoints/51-NaturalInspiredDesign-checkpoint.ipynb", "max_issues_repo_name": "dustykat/engr-1330-psuedo-course", "max_issues_repo_head_hexsha": "3e7e31a32a1896fcb1fd82b573daa5248e465a36", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "9-MyJupyterNotebooks/51-NatureInspiredPortalDesign/.ipynb_checkpoints/51-NaturalInspiredDesign-checkpoint.ipynb", "max_forks_repo_name": "dustykat/engr-1330-psuedo-course", "max_forks_repo_head_hexsha": "3e7e31a32a1896fcb1fd82b573daa5248e465a36", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.5529411765, "max_line_length": 486, "alphanum_fraction": 0.6363020984, "converted": true, "num_tokens": 1771, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.46490157137338844, "lm_q2_score": 0.2568319913875189, "lm_q1q2_score": 0.1194015963750141}} {"text": "```python\nimport holoviews as hv\nhv.extension('bokeh')\nhv.opts.defaults(hv.opts.Curve(width=500), \n hv.opts.Points(width=500), \n hv.opts.Image(width=500, colorbar=True, cmap='Viridis'))\n```\n\n\n```python\nimport numpy as np\nimport scipy.signal\nimport scipy.linalg\n```\n\n# Estimadores adaptivos parte II\n\nEn esta lecci\u00f3n veremos algunos estimadores adaptivos que extienden el filtro LMS que revisamos en la lecci\u00f3n anterior\n\n## Algoritmo de M\u00ednimos Cuadrados Recursivos \n\n\nEl algoritmo LMS minimiza el error instantaneo y es simple y eficiente. Pero en algunos casos su convergencia es demasiado lenta\n\n:::{tip}\n\nPodemos obtener un filtro adaptivo que converge m\u00e1s r\u00e1pido si utilizamos el error hist\u00f3rico en lugar del error instantaneo\n\n:::\n\nSigamos considerando un filtro tipo FIR con $L+1$ pesos que se actualizan de en cada \u00e9poca $n$\n\n$$\ny_n = \\sum_{k=0}^L w_{n, k} u_{n-k}\n$$\n\nEl algoritmo RLS (*Recursive Least Squares*) es un m\u00e9todo online que minimiza el error hist\u00f3rico, es decir la suma de errores desde la muestra inicial hasta la actual\n\n$$\n\\begin{align}\nJ^H_n(\\textbf{w}) &= \\sum_{i=L}^n \\beta^{n-i} |e_i|^2 \\nonumber \\\\\n&= \\sum_{i=L}^n \\beta^{n-i} (d_i - \\sum_{k=0}^{L} w_{i, k} u_{i-k} )^2, \\nonumber\n\\end{align}\n$$\n\ndonde $n$ es el \u00edndice del instante actual y $\\beta \\in [0, 1]$ es el \"factor de olvido\", que usualmente es un valor cercano pero ligeramente menor que $1$.\n\nAdicionalmente se agrega un regularizador a los pesos\n\n$$\nJ^w_n = \\lambda \\| \\textbf{w}_{n} \\|^2 = \\lambda \\sum_{k=1} w_{n, k}^2\n$$\n\nPara evitar divergencias en el proceso de entrenamiento\n\n:::{important}\n\nLa funci\u00f3n de costo total del filtro RLS es la suma de error hist\u00f3rico y el regularizador\n\n:::\n\n### Soluci\u00f3n exacta del filtro RLS\n\nSi 
derivamos la funci\u00f3n de costo e igualamos a cero obtenemos la siguiente regla\n\n$$\n\\begin{align}\n\\textbf{w}_n &= (U_n^T \\pmb{\\beta} U_n + \\lambda I)^{-1} U_n^T \\pmb{\\beta} \\textbf{d}_n \\nonumber \\\\\n&= \\Phi_n^{-1} \\theta_n \\nonumber\n\\end{align}\n$$\n\ndonde reconocemos los siguientes t\u00e9rminos\n\n- Matriz de correlaci\u00f3n ponderada y regularizada: $\\Phi_n = U_n^T \\pmb{\\beta} U_n + \\lambda I$\n- Vector de correalaci\u00f3n cruzada ponderada: $\\theta_n = U_n^T \\pmb{\\beta} \\textbf{d}_n$\n\nque se definen en funci\u00f3n\n\n$$\n\\textbf{d}_n = \\begin{pmatrix} d_n \\\\ d_{n-1} \\\\ \\vdots \\\\ d_{L+1} \\end{pmatrix} \\quad\n\\textbf{u}_n = \\begin{pmatrix} u_n \\\\ u_{n-1} \\\\ \\vdots \\\\ u_{n-(L+1)} \\end{pmatrix} \\quad\n\\pmb{\\beta} = I \\begin{pmatrix} \\beta \\\\ \\beta^{1} \\\\ \\beta^{2} \\vdots \\\\ \\beta^{n-L-1} \\end{pmatrix}\n\\quad \nU_n = \\begin{pmatrix}\n\\textbf{u}_n^T \\\\ \\textbf{u}_{n-1}^T \\\\ \\vdots \\\\ \\textbf{u}_{L+1}^T \\\\\n\\end{pmatrix} \\in \\mathbb{R}^{n - (L+1) \\times L+1}\n$$\n\ndonde $I$ es la matriz identidad\n\n:::{note}\n\nEsta soluci\u00f3n es similar a la del filtro de Wiener. Es dif\u00edcil actualizarla a medida que llegan nuevas observaciones y adem\u00e1s es muy costosa debido al c\u00e1lculo del inverso de la matriz de correlaci\u00f3n\n\n:::\n\n### Soluci\u00f3n recursiva del filtro RLS\n\nEn lugar de la soluci\u00f3n cerrada, es m\u00e1s conveniente actualizar los pesos de forma recursiva. 
The initial conditions are 

- $\Phi_0 = \lambda^{-1} I$
- $\theta_0 = 0$

and the update is then given by 

- $\Phi_{n} = \beta \Phi_{n-1} + \textbf{u}_n \textbf{u}_n^T$ 
- $\theta_{n} = \beta \theta_{n-1} + \textbf{u}_n d_n $ 
- $\textbf{w}_n = \Phi_n^{-1} \theta_n$

We can avoid inverting the correlation matrix by using the matrix inversion lemma 

$$
(A + UCV)^{-1} = A^{-1} - A^{-1} U (C^{-1} + VA^{-1} U)^{-1} V A^{-1}
$$

with $A = \beta \Phi_{n-1}$, $C=1$, $U= \textbf{u}_n$ and $V = \textbf{u}_n^T$. 

In this way we can update $\Phi_{n}^{-1}$ directly, without ever inverting it, as shown below

$$
\begin{align}
\Phi_{n}^{-1} &= \left(\beta \Phi_{n-1} + \textbf{u}_n \textbf{u}_n^T \right)^{-1} \nonumber \\
&= \beta^{-1} \Phi_{n-1}^{-1} - \beta^{-2} \frac{\Phi_{n-1}^{-1} \textbf{u}_n \textbf{u}_n^T \Phi_{n-1}^{-1} }{1 + \beta^{-1} \textbf{u}_n^T \Phi_{n-1}^{-1} \textbf{u}_n} \nonumber \\
&= \beta^{-1} \Phi_{n-1}^{-1} - \beta^{-1} \textbf{k}_n \textbf{u}_n^T \Phi_{n-1}^{-1}, \nonumber 
\end{align}
$$

where the factor

$$
\textbf{k}_n = \frac{\beta^{-1} \Phi_{n-1}^{-1} \textbf{u}_n }{1 + \beta^{-1} \textbf{u}_n^T \Phi_{n-1}^{-1} \textbf{u}_n}
$$

is called the **gain**.

The last step is to obtain the weight update rule

$$
\begin{align}
\textbf{w}_n &= \Phi_n^{-1} \theta_n \nonumber \\
&= \Phi_n^{-1} \left ( \beta \theta_{n-1} + \textbf{u}_n d_n \right) \nonumber \\
&= \left ( \beta^{-1} \Phi_{n-1}^{-1} - \beta^{-1} \textbf{k}_n \textbf{u}_n^T \Phi_{n-1}^{-1} \right ) \beta \theta_{n-1} + \Phi_n^{-1} \textbf{u}_n d_n \nonumber \\
&= \textbf{w}_{n-1} - \textbf{k}_n \textbf{u}_n^T \textbf{w}_{n-1} + \Phi_n^{-1} \textbf{u}_n d_n \nonumber \\
&= \textbf{w}_{n-1} + \textbf{k}_n ( d_n - \textbf{u}_n^T \textbf{w}_{n-1} ) \nonumber \\
&= \textbf{w}_{n-1} + \textbf{k}_n e_n, \nonumber 
\end{align}
$$

where we used that $\textbf{w}_{n-1} = \Phi_{n-1}^{-1} \theta_{n-1}$ and that $\textbf{k}_n = \Phi_n^{-1} \textbf{u}_n$.

:::{note} 

This yields an algorithm of quadratic order instead of cubic order. This is still higher than LMS, which was of linear order, but it has the advantage of converging more rapidly.

:::

### Summary of the RLS algorithm and Python implementation

```{prf:algorithm} RLS algorithm
:nonumber:

**Hyperparameters:** $L$, $\lambda$, $\beta$

1. Initialize $\Phi_0^{-1} = \lambda I$ and $\textbf{w}_0 = 0$
2. For $n \in [1, \infty]$
    - Compute the gain
    
    $$
    \textbf{k}_n = \frac{\Phi_{n-1}^{-1} \textbf{u}_n }{\beta + \textbf{u}_n^T \Phi_{n-1}^{-1} \textbf{u}_n}
    $$ 
    
    - Compute the error 
    
    $$
    e_n = d_n - \textbf{u}_n^T \textbf{w}_{n-1} 
    $$
    
    - Update the weight vector 
    
    $$
    \textbf{w}_n = \textbf{w}_{n-1} + \textbf{k}_n e_n 
    $$ 
    
    - Update the inverse of the correlation matrix 
    
    $$
    \Phi_{n}^{-1} = \beta^{-1} \Phi_{n-1}^{-1} - \beta^{-1} \textbf{k}_n \textbf{u}_n^T \Phi_{n-1}^{-1}
    $$

```

- Hyperparameter $\beta$: defines the effective memory of the system and affects the convergence and stability of the filter. A value of $\beta \approx 0.99$ is suggested as a starting point; in general $\beta \in [0.9, 1.0)$.
- Hyperparameter $\lambda$: the smaller its value, the stronger the regularization. A value $\lambda < 0.01/\sigma_u^2$ is recommended, where $\sigma_u$ is the standard deviation of the input signal.
In practice they can be calibrated with cross-validation, just like $L$.


```python
class Filtro_RLS:
    
    def __init__(self, L, beta=0.99, lamb=1e-2):
        self.L = L
        self.w = np.zeros(shape=(L+1, ))
        self.beta = beta
        self.lamb = lamb
        self.Phi_inv = lamb*np.eye(L+1)
        
    def update(self, un, dn):
        # Compute the gain
        pi = np.dot(un.T, self.Phi_inv)
        kn = pi.T/(self.beta + np.inner(pi, un))
        # Update the weight vector
        error = dn - np.dot(self.w, un)
        self.w += kn*error
        # Update the inverse of Phi
        self.Phi_inv = (self.Phi_inv - np.outer(kn, pi))*self.beta**-1
        return np.dot(self.w, un)
```

### Example: ALE with an RLS filter

Let us see how the RLS filter reacts to abrupt changes, using the example from the previous lesson, and compare it against the NLMS filter.


```python
np.random.seed(12345)
Fs, f0 = 100, 5
t = np.arange(0, 3, 1/Fs)
s = np.sin(2.0*np.pi*t*f0)
s[t>1] += 5
u = s + 0.5*np.random.randn(len(t))

class Filtro_NLMS:
    
    def __init__(self, L, mu, delta=1e-6):
        self.L = L
        self.w = np.zeros(shape=(L+1, ))
        self.mu = mu
        self.delta = delta
        
    def update(self, un, dn):
        unorm = np.dot(un, un) + self.delta
        error = dn - np.dot(self.w, un)
        self.w += 2*self.mu*error*(un/unorm)
        return np.dot(self.w, un)
```


```python
L = 20
lms = Filtro_NLMS(L, 0.02)
rls = Filtro_RLS(L, 0.99, 1e-2)

u_pred = np.zeros(shape=(len(u), 2))
for k in range(L+1, len(u)):
    u_pred[k, 0] = lms.update(u[k-L-1:k][::-1], u[k])
    u_pred[k, 1] = rls.update(u[k-L-1:k][::-1], u[k])
```


```python
s1 = hv.Curve((t, s), 'Tiempo', 'Señal', label='Limpia')
s2 = hv.Scatter((t, u), 'Tiempo', 'Señal', label='Contaminada')
s3 = hv.Curve((t, u_pred[:, 0]), 'Tiempo', 'Señal', label='Filtrada (LMS)')
s4 = hv.Curve((t, u_pred[:, 1]), 'Tiempo', 'Señal', label='Filtrada (RLS)')
hv.Overlay([s1, s2, s3, s4]).opts(hv.opts.Overlay(legend_position='top'), 
                                  hv.opts.Curve(ylim=(-5, 10), height=350))
```

:::{note} 

RLS is able to track the changes in the signal in less time than the LMS filter.

:::

## Perceptron 

The perceptron is an adaptive filter developed by [Frank Rosenblatt in 1962](https://en.wikipedia.org/wiki/Frank_Rosenblatt) with the goal of performing **supervised pattern classification**.

We will assume that

- The desired response has two categories, $d_n \in \{-1, +1\}$, i.e. the perceptron solves a **binary classification** problem
- The input is continuous and $L$-dimensional: $u_n \in \mathbb{R}^L$
- We have $N$ tuples $(u_n, d_n)$ to train the filter

The filter has an FIR architecture with $L+1$ coefficients, but a nonlinear function $\phi(\cdot)$ is added at the output of the filter

$$
\begin{align}
y_n &= \phi \left(b + \sum_{k=1}^{L} w_k u_{nk} \right) \nonumber \\
&= \phi \left(b + \langle \textbf{w}, \textbf{u}_n \rangle \right), \nonumber 
\end{align}
$$

The coefficients of the filter are the scalar $b$ and the vector $\textbf{w}$. 

:::{note}

This filter corresponds to the mathematical model of a neuron by [McCulloch and Pitts](https://link.springer.com/article/10.1007/BF02478259), the forerunner of today's deep neural networks.

:::

The original implementation used the following nonlinear (activation) function

$$
\phi(z) = \text{sign}(z) = \begin{cases} +1 & z > 0 \\0 & z=0\\-1 & z<0 \end{cases}
$$

The following figure sketches the model and its biological inspiration:

- The filter coefficients simulate the importance or weight of the dendrites
- The nonlinear function simulates the axon, which fires an electrical stimulus when the voltage exceeds a threshold

### Fitting the artificial neuron: the perceptron algorithm

The neuron adjusts its parameters with each example it receives.
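The forward pass defined above (the sign activation applied to the affine projection $b + \langle \textbf{w}, \textbf{u} \rangle$) can be sketched in a few lines; the function name and the numbers here are illustrative only:

```python
import numpy as np

def perceptron_forward(u, w, b):
    # sign(b + <w, u>): returns +1, 0 or -1, as in the definition of phi(z)
    return np.sign(b + np.inner(w, u))

# Illustrative 2D example: w is the normal vector of the separating hyperplane
w = np.array([1.0, -1.0])
b = 0.5
print(perceptron_forward(np.array([2.0, 0.0]), w, b))  # 1.0
print(perceptron_forward(np.array([0.0, 3.0]), w, b))  # -1.0
```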
Assume that we have a dataset with $N$ tuples $(d_i, u_i)$, where $d_i$ is the label and $u_i$ is the input vector.


```{prf:algorithm} Perceptron algorithm
:nonumber:

**Hyperparameters:** $\mu$

1. Initialize the parameters: $b=0$, $w_i=0, \forall i = 1, \ldots, L$
2. Initialize the counter: $c=0$
3. For $\text{nepoch}=1,2,\ldots, \infty$
    1. Shuffle (permute) the dataset
    2. $c = 0$
    3. For $n=1,2,\ldots,N$
        - If $\text{sign} \left(b + \langle \textbf{w}, \textbf{u}_n \rangle \right) \neq d_n$ then
        
        $$
        \textbf{w} = \textbf{w} + \mu d_n \textbf{u}_n
        $$

        and

        $$
        b = b + \mu d_n
        $$ 
        
        otherwise 

        $$
        c = c + 1 
        $$ 
        
        - If $c=N$ then stop the training
    
```

Notes:

- A training epoch is completed when all $N$ examples of the training set have been presented
- The hyperparameter $\mu$ is the learning rate of the neuron
- We stop training when all examples are correctly classified. Training can also be stopped after a fixed number of epochs, or after a certain number of epochs without changes in $b$ and $w$
- Shuffling the examples at each epoch can avoid biases and speed up convergence
- The perceptron algorithm is guaranteed to converge in finite time if the problem is **linearly separable**.
If the problem is not **linearly separable**, convergence can be forced by gradually decreasing $\mu$.


### Interpretation as an application of stochastic gradient descent (SGD)

The neuron's fitting algorithm can be seen as the minimization of the following cost function

$$
\mathcal{L}(b, \textbf{w} ) = \text{max} \Big(0 ~, - d_n ( b + \langle \textbf{w}, \textbf{u}_n \rangle) \Big)
$$

whose derivatives are 

$$
\frac{d \mathcal{L}}{d \textbf{w}} = \begin{cases} 0 & d_n ( b + \langle \textbf{w}, \textbf{u}_n \rangle) \geq 0 \\ - d_n \textbf{u}_n & d_n ( b + \langle \textbf{w}, \textbf{u}_n \rangle) < 0
\end{cases}
$$

$$
\frac{d \mathcal{L}}{d b} = \begin{cases} 0 & d_n ( b + \langle \textbf{w}, \textbf{u}_n \rangle) \geq 0 \\ - d_n & d_n ( b + \langle \textbf{w}, \textbf{u}_n \rangle) < 0
\end{cases}
$$

that is, the derivative is zero whenever the example is correctly classified.

Note that if we apply SGD to this cost function,

$$
\textbf{w} = \textbf{w} - \mu \frac{d \mathcal{L}}{d \textbf{w}}
$$

$$
b = b - \mu \frac{d \mathcal{L}}{db}
$$

we recover the update rules seen above.

### Example: Binary classification with a perceptron

The following shows how a perceptron is fitted as the examples are presented, on a linearly separable classification problem.


```python
N = 5 # Examples per class
L = 2 # Dimension of the data
np.random.seed(1234)
u = np.concatenate((np.random.randn(N, L), 
                    4 + np.random.randn(N, L)))
d = np.ones(shape=(2*N,)); 
d[:N] = -1.
```


```python
# Parameters and hyperparameters
b, w = 0, np.zeros(shape=(L, ))
mu = 1e-6
max_epoch = 2

w_history = np.zeros(shape=(max_epoch*len(d), L))
b_history = np.zeros(shape=(max_epoch*len(d),))
u_history = np.zeros(shape=(max_epoch*len(d), 2))
# Training
for nepoch in range(max_epoch):
    idx = np.random.permutation(len(d))
    for n, (un, dn) in enumerate(zip(u[idx], d[idx])):
        if dn*(b+np.inner(w, un)) <= 0.:
            w += mu*dn*un
            b += mu*dn  
            
        u_history[nepoch*len(d)+n, :] = un
        w_history[nepoch*len(d)+n, :] = w
        b_history[nepoch*len(d)+n] = b
```

The neuron defines a hyperplane that separates the space into two classes. The following animation shows the fitting of the neuron and, consequently, the modification of the hyperplane. 


```python
x_plot = np.linspace(-2, 7, num=10)
hiperplano = lambda x, w, b : -b/(w[1]) - x*w[0]/(w[1])

c1 = hv.Points((u[:N, 0], u[:N, 1]), label='Clase 1').opts(size=10, height=350, xlim=(-2, 7), ylim=(-3, 7))
c2 = hv.Points((u[N:, 0], u[N:, 1]), label='Clase 2').opts(size=10)
p = hv.HoloMap(kdims='Iteración')
for i in range(len(b_history)):
    plane = hv.Curve((x_plot, hiperplano(x_plot, w_history[i, :], b_history[i])), label='Hiperplano').opts(color='k')
    selected = hv.Points((u_history[i, 0], u_history[i, 1])).opts(color='k', size=10)
    p[i] = c1 * c2 * plane * selected

hv.output(p.opts(legend_position='top'), holomap='gif', fps=1)
```


:::{note} 

The hyperplane translates and rotates (a linear transformation) each time the selected example (marked in black) is misclassified.

:::

### Beyond the perceptron 

- The neuron model with a sigmoid output is known as **logistic regression**
- Both the perceptron and the logistic regressor can be extended to more than two classes: the **softmax regressor**
- Connecting several neurons in a chain forms what is known as a multilayer perceptron. This is an example of an **artificial neural network**
- Artificial neural networks are studied in more detail in the **artificial intelligence** course (INFO257)

Probabilistic Programming
=====
and Bayesian Methods for Hackers 
========

##### Version 0.1

___


Welcome to *Bayesian Methods for Hackers*.
The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. 

The Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. 

For this to be clearer, we consider an alternative interpretation of probability: *Frequentists*, adherents of the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that across all these realities, the frequency of occurrences defines the probability. 

Bayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information.
Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. 
\n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. 
This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. 
As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. 
Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.


#### A note on *Big Data*
Paradoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead in the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask "Do I really have big data?")

The much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using an argument similar to Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. 


### Our Bayesian framework

We are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.

Secondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:

\begin{align}
 P( A | X ) = & \frac{ P(X | A) P(A) } {P(X) } \\[5pt]
& \propto P(X | A) P(A)\;\; (\propto \text{is proportional to })
\end{align}

The above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. 
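As a quick numerical illustration, the theorem can be checked with a few lines of Python (the probabilities below are made-up values, chosen to match the debugging example that follows):

```python
P_A = 0.2             # prior P(A)
P_X_given_A = 1.0     # P(X | A)
P_X_given_notA = 0.5  # P(X | ~A)

# Law of total probability: P(X) = P(X|A)P(A) + P(X|~A)P(~A)
P_X = P_X_given_A * P_A + P_X_given_notA * (1 - P_A)

# Bayes' Theorem: P(A|X) = P(X|A)P(A) / P(X)
P_A_given_X = P_X_given_A * P_A / P_X

print(round(P_A_given_X, 3))  # 0.333
```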
Bayesian inference merely uses it to connect prior probabilities $P(A)$ with updated posterior probabilities $P(A | X )$.

##### Example: Mandatory coin-flip example

Every statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. 

We begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data? 

Below we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).


```python
"""
The book uses a custom matplotlibrc file, which provides the unique styles for
matplotlib plots. If executing this book, and you wish to use the book's
styling, provided are two options:
    1. Overwrite your own matplotlibrc file with the rc-file provided in the
       book's styles/ dir. See http://matplotlib.org/users/customizing.html
    2. Also in the styles is bmh_matplotlibrc.json file. This can be used to
       update the styles in only this notebook. Try running the following code:

        import json
        s = json.load(open("../styles/bmh_matplotlibrc.json"))
        matplotlib.rcParams.update(s)

"""

# The code below can be passed over, as it is currently not important, plus it
# uses advanced topics we have not covered yet. LOOK AT PICTURE, MICHAEL!
%matplotlib inline
from IPython.core.pylabtools import figsize
import numpy as np
from matplotlib import pyplot as plt
figsize(11, 9)

import scipy.stats as stats

dist = stats.beta
n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
data = stats.bernoulli.rvs(0.5, size=n_trials[-1])
x = np.linspace(0, 1, 100)

# For the already prepared, I'm using Binomial's conj. prior.
for k, N in enumerate(n_trials):
    sx = plt.subplot(len(n_trials)//2, 2, k+1)
    plt.xlabel("$p$, probability of heads") \
        if k in [0, len(n_trials)-1] else None
    plt.setp(sx.get_yticklabels(), visible=False)
    heads = data[:N].sum()
    y = dist.pdf(x, 1 + heads, 1 + N - heads)
    plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads))
    plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4)
    plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)

    leg = plt.legend()
    leg.get_frame().set_alpha(0.4)
    plt.autoscale(tight=True)


plt.suptitle("Bayesian updating of posterior probabilities",
             y=1.02,
             fontsize=14)

plt.tight_layout()
```

The posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). 

Notice that the plots are not always *peaked* at 0.5. There is no reason they should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). 
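That lopsided case is easy to compute directly: with a flat $\text{Beta}(1,1)$ prior, observing 1 head in 9 tosses yields a $\text{Beta}(2, 9)$ posterior (the same conjugate update used in the plotting code above):

```python
import scipy.stats as stats

# Posterior after observing 1 head and 8 tails, starting from Beta(1, 1)
posterior = stats.beta(1 + 1, 1 + 8)

print(round(posterior.mean(), 3))   # 0.182, far below 0.5
print(round(posterior.cdf(0.5), 3)) # nearly all posterior mass lies below p = 0.5
```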
As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
\n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
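The prior and posterior pairs charted in the next cell can be derived rather than hardcoded — a quick sketch under the same assumptions ($p = 0.2$, $P(X|\sim A) = 0.5$):

```python
# Complementary probabilities for the "bugs absent" / "bugs present" chart.
p = 0.2                      # prior probability of no bugs
post = 2.0 * p / (1.0 + p)   # posterior after seeing all tests pass

prior_pair = [p, 1.0 - p]            # [bugs absent, bugs present]
posterior_pair = [post, 1.0 - post]

print(prior_pair)   # [0.2, 0.8]
```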
\n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become more clear when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. it is a combination of the above two categories. \n\n### Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots $$\n\n$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\\lambda$ can be any positive number. By increasing $\\lambda$, we add more probability to larger values, and conversely by decreasing $\\lambda$ we add more probability to smaller values. One can describe $\\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. 
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\");\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. 
But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. 
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. 
Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. 
Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\n\n```python\n# Here's what lambda ~ Exp(alpha) looks like!\n# It gives lots of weight towards small lambda. Since lambda is a parameter of the Poisson distribution,\n# this seems like the wrong distribution of lambda.\n\na = np.linspace(0, 100, 100)\nexpo = stats.expon\nalpha = 1.0/count_data.mean()\nlambda_ = [alpha]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\n# plt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,.06)\nplt.title(\"Probability density function of an Exponential random variable\");\n```\n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. 
What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC3\n-----\n\nPyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.\n\nWe will model the problem above using PyMC3. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. 
In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC3 code is easy to read. The only novel thing should be the syntax. Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables.\n\n\n```python\nimport pymc3 as pm\nimport theano.tensor as tt\n\nwith pm.Model() as model:\n alpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\n lambda_1 = pm.Exponential(\"lambda_1\", alpha)\n lambda_2 = pm.Exponential(\"lambda_2\", alpha)\n \n tau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data - 1)\n```\n\nIn the code above, we create the PyMC3 variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC3's *stochastic variables*, so-called because they are treated by the back end as random number generators.\n\n\n```python\nwith model:\n idx = np.arange(n_count_data) # Index\n lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`.\n\nNote that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. 
We are **not** fixing any variables yet.\n\n\n```python\nwith model:\n observation = pm.Poisson(\"obs\", lambda_, observed=count_data)\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword. \n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```python\n### Mysterious code to be explained in Chapter 3.\nwith model:\n step = pm.Metropolis()\n trace = pm.sample(10000, tune=5000,step=step)\n```\n\n Multiprocess sampling (2 chains in 2 jobs)\n CompoundStep\n >Metropolis: [tau]\n >Metropolis: [lambda_2]\n >Metropolis: [lambda_1]\n\n\n\n\n
    100.00% [30000/30000 00:25<00:00 Sampling 2 chains, 0 divergences]\n
\n\n\n\n Sampling 2 chains for 5_000 tune and 10_000 draw iterations (10_000 + 20_000 draws total) took 27 seconds.\n The number of effective samples is smaller than 25% for some parameters.\n\n\n\n```python\nlambda_1_samples = trace['lambda_1']\nlambda_2_samples = trace['lambda_2']\ntau_samples = trace['tau']\n```\n\n\n```python\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", density=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", density=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. 
The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? 
Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. \n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n    # ix is a bool index of all tau samples for which the switchpoint\n    # occurs after 'day', i.e. 'day' is still in the lambda1 regime\n    # for those samples.\n    ix = day < tau_samples\n    # Each posterior sample corresponds to a value for tau.\n    # for each day, that value of tau indicates whether we're \"before\"\n    # (in the lambda1 \"regime\") or\n    # \"after\" (in the lambda2 \"regime\") the switchpoint.\n    # by taking the posterior sample of lambda1/2 accordingly, we can average\n    # over all samples to get an expected value for lambda on that day.\n    # As explained, the \"message count\" random variable is Poisson distributed,\n    # and therefore lambda (the poisson parameter) is the expected value of\n    # \"message count\".\n    expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n                                   + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n         label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n        label=\"observed texts per day\")\n\nplt.legend(loc=\"upper 
left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\nlambda_1_samples.mean(), lambda_2_samples.mean()\n```\n\n\n\n\n    (17.778793343328523, 22.700592150300373)\n\n\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_2_samples/lambda_1_samples`. Note that this quantity is different from `lambda_2_samples.mean()/lambda_1_samples.mean()`.\n\n\n```python\n(lambda_2_samples / lambda_1_samples).mean() - 1\n```\n\n\n\n\n    0.2785043328596446\n\n\n\n\n```python\n# It's not actually that different from dividing the means\n(lambda_2_samples.mean() / lambda_1_samples.mean()) - 1\n```\n\n\n\n\n    0.27683536851609514\n\n\n\n\n```python\n# Here's what the divided distribution looks like\nfigsize(12.5, 4)\nplt.hist(lambda_2_samples / lambda_1_samples, bins=30, density=True)\nplt.xlim([0, 2])\nplt.xlabel(\"$\\lambda_2 / \\lambda_1$ value\")\nplt.ylabel(\"density\")  # with density=True the y-axis shows a density, not a probability\nplt.title(r\"$\\lambda_2 / \\lambda_1$\");\n```\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45? That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? 
(You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\nlambda_1_samples[tau_samples < 45].mean()\n```\n\n\n\n\n    17.781022995669\n\n\n\n### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg).\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Salvatier, J, Wiecki TV, and Fonnesbeck C. (2016) Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55\n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" Online posting. Web. 24 Mar 2013.\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n    styles = open(\"../styles/custom.css\", \"r\").read()\n    return HTML(styles)\ncss_styling()\n```\n

Notebook A: Exploring the most useful and important features within a Jupyter notebook\n\n***\n_This will be an incomplete and biased run-through of the important features and functions in a Jupyter notebook_\n\nIncomplete, because no one notebook (or set) could cover all of the available features and abilities of the Jupyter project.\n*** \n# 1. Cell Types\n# 2. Editing modes\n# 3. Imports and output\n# 4. Help \n\n\n### Great sources of Jupyter notebooks to explore and tick off the list:\n - Jupyter.org (https://jupyter.org/try) \n - The Carpentries (https://software-carpentry.org/lessons/index.html) \n - The main Jupyter notebook documentation (https://jupyter-notebook.readthedocs.io/en/stable/) \n - IPython, the predecessor of Jupyter (https://nbviewer.jupyter.org/github/ipython/ipython/blob/master/examples/IPython%20Kernel/Index.ipynb) \n \n\n\n>We love stories. The ability to provide information and **context** to accompany _data_ and `code` means that, in addition to providing comments for the function and behavior of the `code`, you can return the results of each stage of your computation, as well as allowing a more general discussion of motivations and possible findings.\n\n# 1. Cell types:\n\n1. Markdown \n2. Code\n3. Heading\n4. Raw NBConvert\n\n\n\n## 1_1: Markdown and Formatting\n\n
\n

When unsure you can always import 🐼 🐼

\n
\n\n\nNot that I am encouraging even a light dusting of emoji, but a great number of options are available to communicate:\nhttps://www.w3schools.com/charsets/ref_emoji.asp \n\n\nFurther resources:\n- https://daringfireball.net/projects/markdown/\n- https://www.w3schools.com/\n\n\n
\n
\n Why this matters? \n
\n
\n\n***\n### The first thing we may need to look at is the types of information that we want to provide to the reader \n#### (99% of the time, that will be you). \n***\nOver time these might be:\n - Overall aims and research goals\n - Specific tasks to be achieved here\n - Descriptions of data \n - Libraries and code \n\n## 1_2: Code cells\n\n\n```python\n# code and comments\na = 1\nb = 2\na + b\n```\n\n\n\n\n 3\n\n\n\n```python\ndef TimesTable(val=1, n=10):\n    for i in range(1, n):\n        print(i * val)\n```\n\n```html\n

Hello world!

```\n\n### Equations: LaTeX and Mathjax options to illustrate \n\nFull LaTex is an option for those familiar with it\n\n\n```latex\n%%latex\n\\begin{align}\nF(k) = {\\sum}_0^{\\infty}(x) e^{2\\pi}y\n\\end{align}\n```\n\n\n\\begin{align}\nF(k) = {\\sum}_0^{\\infty}(x) e^{2\\pi}y\n\\end{align}\n\n\n\n### MathJax provides \nhttps://www.mathjax.org/\n\n\n```python\n# https://www.mathjax.org/\n\nfrom IPython.display import Math\nMath(r'F(k) = \\ {\\sum}_0^{\\infty}(x) e^{2\\pi}y')\n```\n\n\n\n\n$\\displaystyle F(k) = \\ {\\sum}_0^{\\infty}(x) e^{2\\pi}y$\n\n\n\n#### But markdown can also shorten this process\nby using '$' before and after your text\n\n$F(k) = \\ {\\sum}_0^{\\infty}(x) e^{2\\pi}y$\n\n# 2. Editing Modes\n\n# 3. Imports and output\n\n\n```python\nimport IPython.display as ipd\nimport random\n```\n\n\n```python\nimage = ipd.Image('https://upload.wikimedia.org/wikipedia/commons/f/fb/High_five%21%21.jpg', width=200)\nipd.display(image) , image.metadata\n```\n\n\n \n\n \n\n\n\n\n\n (None, {})\n\n\n\n\n```python\nh_fives= []\nh_fives.append(r'')\nh_fives.append(r'')\nh_fives.append(r'')\nh_fives.append(r'')\n```\n\n\n```python\n#a high-five for you (re-run for another)\n\nwebout = ipd.HTML(h_fives[random.randint(0,3)])\nwebout\n```\n\n\n\n\n\n\n\n\n###### You also have many options provided by IPython and Jupyter for other media to enrich you presentations and explainations.\n\nMore detailed notes and notebooks are provided here:\n\nhttps://nbviewer.jupyter.org/github/ipython/ipython/blob/master/examples/IPython%20Kernel/Index.ipynb\n\nhttps://jupyter.org/try\n\n\n\n```python\nvid =ipd.YouTubeVideo('3VDw7XIulIk', autoplay='0', width=720, height=400)\n\nipd.display(vid)\n```\n\n\n\n\n\n\n\n\n```python\n#a list of the available cell and line 'magics'\n\n%lsmagic?\n```\n\n\n```python\n%%html\n

Hello internet!

\n\n```\n\n\n

Hello internet!

\n\n\n\n
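The cells below time `TimesTable` with the `%time` line magic (one of the magics listed by `%lsmagic` above). Outside IPython, the standard-library `timeit` module gives a comparable measurement; a minimal sketch, using a hypothetical `times_table` variant that returns its values rather than printing them:

```python
import timeit

def times_table(val=1, n=10):
    # illustrative variant of the notebook's TimesTable that returns its values
    return [i * val for i in range(1, n)]

# rough stand-in for the %timeit magic: average wall time over many repetitions
n_calls = 10_000
per_call = timeit.timeit(lambda: times_table(7, 10), number=n_calls) / n_calls
print(f"{per_call * 1e6:.3f} microseconds per call")
```

Note that `%time` reports a single run (hence the coarse wall time shown below for such a tiny function), while averaging over many calls gives a steadier number.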
\n\n\n```python\ndef TimesTable(val=1, n=10):\n    for i in range(1, n):\n        print(i * val)\n```\n\n\n```python\n#%timeit\n%time TimesTable(7,10)\n```\n\n 7\n 14\n 21\n 28\n 35\n 42\n 49\n 56\n 63\n Wall time: 0 ns\n\n\n# 4. Help\n\n\n```python\n?random.randint\n# core python\n```\n\n\n```python\nimport pandas as pd\n```\n\n\n```python\n?pd\n\n```\n\n## Conclusion: Jupyter notebooks are an environment in which you can learn (and recall), explain, and explore: code, data, and context\n\n# Please try:\n - creating and renaming a new notebook for yourself\n - making a copy of an existing notebook\n - searching for an example of an interactive notebook from your area of research\n \n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "0dc1053363742d0043cdcdcb61c30de756474c59", "size": 1040300, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "NotebookA_kicking_tyres.ipynb", "max_stars_repo_name": "LozRiviera/LAB_Open_Summer_School19", "max_stars_repo_head_hexsha": "7e423ef99ffd09db2b65612da388afbf0a74f773", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-09-19T12:34:17.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-26T16:18:27.000Z", "max_issues_repo_path": "NotebookA_kicking_tyres.ipynb", "max_issues_repo_name": "LozRiviera/LAB_Open_Summer_School19", "max_issues_repo_head_hexsha": "7e423ef99ffd09db2b65612da388afbf0a74f773", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "NotebookA_kicking_tyres.ipynb", "max_forks_repo_name": "LozRiviera/LAB_Open_Summer_School19", "max_forks_repo_head_hexsha": "7e423ef99ffd09db2b65612da388afbf0a74f773", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 
1834.7442680776, "max_line_length": 1015985, "alphanum_fraction": 0.9564635201, "converted": true, "num_tokens": 1553, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3629692055196168, "lm_q2_score": 0.3276683073862188, "lm_q1q2_score": 0.11893350520593343}} {"text": "```\n# this mounts your Google Drive to the Colab VM.\nfrom google.colab import drive\ndrive.mount('/content/drive', force_remount=True)\n\n# enter the foldername in your Drive where you have saved the unzipped\n# assignment folder, e.g. 'cs231n/assignments/assignment3/'\nFOLDERNAME = 'cs231n/assignments/assignment2/'\nassert FOLDERNAME is not None, \"[!] Enter the foldername.\"\n\n# now that we've mounted your Drive, this ensures that\n# the Python interpreter of the Colab VM can load\n# python files from within it.\nimport sys\nsys.path.append('/content/drive/My Drive/{}'.format(FOLDERNAME))\n\n# this downloads the CIFAR-10 dataset to your Drive\n# if it doesn't already exist.\n%cd drive/My\\ Drive/$FOLDERNAME/cs231n/datasets/\n!bash get_datasets.sh\n%cd /content\n```\n\n Mounted at /content/drive\n /content/drive/My Drive/cs231n/assignments/assignment2/cs231n/datasets\n /content\n\n\n# Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. \nOne idea along these lines is batch normalization which was proposed by [1] in 2015.\n\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. 
However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\n\nThe authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [1] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\n\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. 
To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n\n[1] [Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.](https://arxiv.org/abs/1502.03167)\n\n\n```\n# As usual, a bit of setup\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\ndef print_mean_std(x,axis=0):\n print(' means: ', x.mean(axis=axis))\n print(' stds: ', x.std(axis=axis))\n print() \n```\n\n =========== You can safely ignore the message below if you are NOT working on ConvolutionalNetworks.ipynb ===========\n \tYou will need to compile a Cython extension for a portion of this assignment.\n \tThe instructions to do this will be given in a section of the notebook below.\n \tThere will be an option for Colab users and another for Jupyter (local) users.\n\n\n\n```\n# Load the (preprocessed) CIFAR10 data.\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)\n```\n\n X_train: (49000, 3, 32, 32)\n y_train: (49000,)\n X_val: (1000, 3, 32, 32)\n y_val: (1000,)\n X_test: (1000, 3, 32, 32)\n y_test: (1000,)\n\n\n## Batch normalization: forward\nIn the file `cs231n/layers.py`, implement the batch normalization forward pass in 
the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.\n\nReferencing the paper linked to above in [1] may be helpful!\n\n\n```\n# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization \n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before batch normalization:')\nprint_mean_std(a,axis=0)\n\ngamma = np.ones((D3,))\nbeta = np.zeros((D3,))\n# Means should be close to zero and stds close to one\nprint('After batch normalization (gamma=1, beta=0)')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\n# Now means should be close to beta and stds close to gamma\nprint('After batch normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n```\n\n Before batch normalization:\n means: [ -2.3814598 -13.18038246 1.91780462]\n stds: [27.18502186 34.21455511 37.68611762]\n \n After batch normalization (gamma=1, beta=0)\n means: [5.32907052e-17 7.04991621e-17 1.85962357e-17]\n stds: [0.99999999 1. 1. ]\n \n After batch normalization (gamma= [1. 2. 3.] , beta= [11. 12. 13.] )\n means: [11. 12. 
13.]\n stds: [0.99999999 1.99999999 2.99999999]\n \n\n\n\n```\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n\nfor t in range(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\n\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After batch normalization (test-time):')\nprint_mean_std(a_norm,axis=0)\n```\n\n After batch normalization (test-time):\n means: [-0.03927354 -0.04349152 -0.10452688]\n stds: [1.01531428 1.01238373 0.97819988]\n \n\n\n## Batch normalization: backward\nNow implement the backward pass for batch normalization in the function `batchnorm_backward`.\n\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. 
Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\n\nOnce you have finished, run the following to numerically check your backward pass.\n\n\n```\n# Gradient check batchnorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\n#You should expect to see relative errors between 1e-13 and 1e-8\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.7029281043741803e-09\n dgamma error: 7.420414216247087e-13\n dbeta error: 2.8795057655839487e-12\n\n\n## Batch normalization: alternative backward\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper.\n\nSurprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too! 
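As a concrete refresher, here is a minimal NumPy sketch (illustrative only, not part of the assignment code) of those two sigmoid strategies: backpropagating through each node of the computation graph versus applying the one-line formula derived on paper:

```python
import numpy as np

# toy stand-ins for an activation and its upstream gradient
np.random.seed(0)
x = np.random.randn(5)
dout = np.random.randn(5)

# Strategy 1: backprop through the computation graph of s = 1 / (1 + exp(-x))
e = np.exp(-x)      # forward node: e = exp(-x)
d = 1.0 + e         # forward node: d = 1 + e
s = 1.0 / d         # forward node: s = 1 / d
dd = dout * (-1.0 / d ** 2)   # backprop through s = 1 / d
de = dd                       # backprop through d = 1 + e
dx_graph = de * (-e)          # backprop through e = exp(-x)

# Strategy 2: the closed form simplified on paper, ds/dx = s * (1 - s)
dx_simple = dout * s * (1.0 - s)

print(np.max(np.abs(dx_graph - dx_simple)))  # agrees to floating-point precision
```

The batch normalization simplification follows the same pattern, just with more intermediate nodes to collapse.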
\n\nIn the forward pass, given a set of inputs $X=\\begin{bmatrix}x_1\\\\x_2\\\\...\\\\x_N\\end{bmatrix}$, \n\nwe first calculate the mean $\\mu$ and variance $v$.\nWith $\\mu$ and $v$ calculated, we can calculate the standard deviation $\\sigma$ and normalized data $Y$.\nThe equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$).\n\n\\begin{align}\n& \\mu=\\frac{1}{N}\\sum_{k=1}^N x_k & v=\\frac{1}{N}\\sum_{k=1}^N (x_k-\\mu)^2 \\\\\n& \\sigma=\\sqrt{v+\\epsilon} & y_i=\\frac{x_i-\\mu}{\\sigma}\n\\end{align}\n\n\n\nThe meat of our problem during backpropagation is to compute $\\frac{\\partial L}{\\partial X}$, given the upstream gradient we receive, $\\frac{\\partial L}{\\partial Y}.$ To do this, recall the chain rule in calculus gives us $\\frac{\\partial L}{\\partial X} = \\frac{\\partial L}{\\partial Y} \\cdot \\frac{\\partial Y}{\\partial X}$.\n\nThe unknown/hart part is $\\frac{\\partial Y}{\\partial X}$. We can find this by first deriving step-by-step our local gradients at \n$\\frac{\\partial v}{\\partial X}$, $\\frac{\\partial \\mu}{\\partial X}$,\n$\\frac{\\partial \\sigma}{\\partial v}$, \n$\\frac{\\partial Y}{\\partial \\sigma}$, and $\\frac{\\partial Y}{\\partial \\mu}$,\nand then use the chain rule to compose these gradients (which appear in the form of vectors!) appropriately to compute $\\frac{\\partial Y}{\\partial X}$.\n\nIf it's challenging to directly reason about the gradients over $X$ and $Y$ which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\\frac{\\partial L}{\\partial x_i}$, by relying on the Chain Rule to first calculate the intermediate $\\frac{\\partial \\mu}{\\partial x_i}, \\frac{\\partial v}{\\partial x_i}, \\frac{\\partial \\sigma}{\\partial x_i},$ then assemble these pieces to calculate $\\frac{\\partial y_i}{\\partial x_i}$. 
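For reference, one common simplified closed form that such a derivation arrives at is the following; it is stated here only as a cross-check to verify term by term against your own work ($\frac{\partial L}{\partial y_k}$ denotes the upstream gradient and $y_k$ the normalized values):

\begin{align}
\frac{\partial L}{\partial x_i} = \frac{1}{N\sigma}\left( N\,\frac{\partial L}{\partial y_i} - \sum_{k=1}^N \frac{\partial L}{\partial y_k} - y_i \sum_{k=1}^N \frac{\partial L}{\partial y_k}\, y_k \right)
\end{align}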
\n\nYou should make sure each of the intermediary gradient derivations is as simplified as possible, for ease of implementation. \n\nAfter doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\n\n\n```\nnp.random.seed(231)\nN, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint('dx difference: ', rel_error(dx1, dx2))\nprint('dgamma difference: ', rel_error(dgamma1, dgamma2))\nprint('dbeta difference: ', rel_error(dbeta1, dbeta2))\nprint('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))\n```\n\n dx difference: 9.608004855382517e-13\n dgamma difference: 0.0\n dbeta difference: 0.0\n speedup: 1.26x\n\n\n## Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization.\n\nConcretely, when the `normalization` flag is set to `\"batchnorm\"` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.\n\nHINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. 
If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.\n\n\n```\nnp.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\n# You should expect losses between 1e-4~1e-10 for W, \n# losses between 1e-08~1e-10 for b,\n# and losses between 1e-08~1e-09 for beta and gammas.\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64, \n normalization='batchnorm')\n\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n if reg == 0: print()\n```\n\n Running check with reg = 0\n Initial loss: 2.4295217770600983\n W1 relative error: 1.47e-05\n W2 relative error: 2.46e-06\n W3 relative error: 2.87e-09\n b1 relative error: 4.15e-06\n b2 relative error: 8.72e-07\n b3 relative error: 1.04e-10\n beta1 relative error: 1.38e-07\n beta2 relative error: 3.55e-09\n gamma1 relative error: 6.30e-08\n gamma2 relative error: 2.74e-09\n \n Running check with reg = 3.14\n Initial loss: 7.161193776133827\n W1 relative error: 1.64e-05\n W2 relative error: 8.17e-07\n W3 relative error: 1.00e+00\n b1 relative error: 1.62e-05\n b2 relative error: 7.75e-07\n b3 relative error: 1.38e-10\n beta1 relative error: 1.69e-08\n beta2 relative error: 5.19e-09\n gamma1 relative error: 8.22e-09\n gamma2 relative error: 9.13e-09\n\n\n# Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.\n\n\n```\nnp.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': 
data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\nprint('Solver with batch norm:')\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True,print_every=20)\nbn_solver.train()\n\nprint('\\nSolver without batch norm:')\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=20)\nsolver.train()\n```\n\n Solver with batch norm:\n (Iteration 1 / 200) loss: 2.368423\n (Epoch 0 / 10) train acc: 0.101000; val_acc: 0.107000\n (Epoch 1 / 10) train acc: 0.307000; val_acc: 0.252000\n (Iteration 21 / 200) loss: 1.925888\n (Epoch 2 / 10) train acc: 0.447000; val_acc: 0.311000\n (Iteration 41 / 200) loss: 1.989997\n (Epoch 3 / 10) train acc: 0.521000; val_acc: 0.274000\n (Iteration 61 / 200) loss: 1.784510\n (Epoch 4 / 10) train acc: 0.598000; val_acc: 0.295000\n (Iteration 81 / 200) loss: 1.231503\n (Epoch 5 / 10) train acc: 0.577000; val_acc: 0.283000\n (Iteration 101 / 200) loss: 1.194501\n (Epoch 6 / 10) train acc: 0.689000; val_acc: 0.343000\n (Iteration 121 / 200) loss: 0.907169\n (Epoch 7 / 10) train acc: 0.735000; val_acc: 0.302000\n (Iteration 141 / 200) loss: 1.044532\n (Epoch 8 / 10) train acc: 0.748000; val_acc: 0.314000\n (Iteration 161 / 200) loss: 0.754309\n (Epoch 9 / 10) train acc: 0.779000; val_acc: 0.301000\n (Iteration 181 / 200) loss: 0.800480\n (Epoch 10 / 10) train acc: 0.808000; val_acc: 0.311000\n \n Solver without batch norm:\n (Iteration 1 / 200) loss: 2.302332\n (Epoch 0 / 10) train acc: 0.129000; val_acc: 0.131000\n (Epoch 1 / 10) train acc: 0.283000; val_acc: 
0.250000\n (Iteration 21 / 200) loss: 2.041970\n (Epoch 2 / 10) train acc: 0.316000; val_acc: 0.277000\n (Iteration 41 / 200) loss: 1.900473\n (Epoch 3 / 10) train acc: 0.373000; val_acc: 0.282000\n (Iteration 61 / 200) loss: 1.713156\n (Epoch 4 / 10) train acc: 0.390000; val_acc: 0.310000\n (Iteration 81 / 200) loss: 1.662209\n (Epoch 5 / 10) train acc: 0.434000; val_acc: 0.300000\n (Iteration 101 / 200) loss: 1.696059\n (Epoch 6 / 10) train acc: 0.535000; val_acc: 0.345000\n (Iteration 121 / 200) loss: 1.557987\n (Epoch 7 / 10) train acc: 0.530000; val_acc: 0.304000\n (Iteration 141 / 200) loss: 1.432189\n (Epoch 8 / 10) train acc: 0.628000; val_acc: 0.339000\n (Iteration 161 / 200) loss: 1.034116\n (Epoch 9 / 10) train acc: 0.654000; val_acc: 0.342000\n (Iteration 181 / 200) loss: 0.905795\n (Epoch 10 / 10) train acc: 0.712000; val_acc: 0.328000\n\n\nRun the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.\n\n\n```\ndef plot_training_history(title, label, baseline, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None):\n \"\"\"utility function for plotting training history\"\"\"\n plt.title(title)\n plt.xlabel(label)\n bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers]\n bl_plot = plot_fn(baseline)\n num_bn = len(bn_plots)\n for i in range(num_bn):\n label='with_norm'\n if labels is not None:\n label += str(labels[i])\n plt.plot(bn_plots[i], bn_marker, label=label)\n label='baseline'\n if labels is not None:\n label += str(labels[0])\n plt.plot(bl_plot, bl_marker, label=label)\n plt.legend(loc='lower center', ncol=num_bn+1) \n\n \nplt.subplot(3, 1, 1)\nplot_training_history('Training loss','Iteration', solver, [bn_solver], \\\n lambda x: x.loss_history, bl_marker='o', bn_marker='o')\nplt.subplot(3, 1, 2)\nplot_training_history('Training accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.train_acc_history, bl_marker='-o', 
bn_marker='-o')\nplt.subplot(3, 1, 3)\nplot_training_history('Validation accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o')\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n# Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\n\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.\n\n\n```\nnp.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\nnum_train = 1000\nsmall_data = {\n    'X_train': data['X_train'][:num_train],\n    'y_train': data['y_train'][:num_train],\n    'X_val': data['X_val'],\n    'y_val': data['y_val'],\n}\n\nbn_solvers_ws = {}\nsolvers_ws = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n    print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))\n    bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\n    model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\n    bn_solver = Solver(bn_model, small_data,\n                       num_epochs=10, batch_size=50,\n                       update_rule='adam',\n                       optim_config={'learning_rate': 1e-3},\n                       verbose=False, print_every=200)\n    bn_solver.train()\n    bn_solvers_ws[weight_scale] = bn_solver\n\n    solver = Solver(model, small_data,\n                    num_epochs=10, batch_size=50,\n                    update_rule='adam',\n                    optim_config={'learning_rate': 1e-3},\n                    verbose=False, print_every=200)\n    solver.train()\n    solvers_ws[weight_scale] = solver\n```\n\n Running weight scale 1 / 20\n Running weight scale 2 / 20\n Running weight scale 3 / 20\n Running weight scale 4 / 20\n Running weight scale 5 / 20\n Running weight scale 6 
/ 20\n Running weight scale 7 / 20\n Running weight scale 8 / 20\n Running weight scale 9 / 20\n Running weight scale 10 / 20\n Running weight scale 11 / 20\n Running weight scale 12 / 20\n Running weight scale 13 / 20\n Running weight scale 14 / 20\n Running weight scale 15 / 20\n Running weight scale 16 / 20\n Running weight scale 17 / 20\n Running weight scale 18 / 20\n Running weight scale 19 / 20\n Running weight scale 20 / 20\n\n\n\n```\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers_ws[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers_ws[ws].train_acc_history))\n \n best_val_accs.append(max(solvers_ws[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers_ws[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers_ws[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers_ws[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', 
label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\nplt.gca().set_ylim(1.0, 3.5)\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n## Inline Question 1:\nDescribe the results of this experiment. How does the scale of weight initialization affect models with/without batch normalization differently, and why?\n\n## Answer:\nFrom the experiment we can see that, with batch normalization, the network is more stable: it achieves reasonable results over a wide range of weight initializations, and it is almost always better than the network without batch normalization at the same weight initialization. Without the batch normalization layers, the network only yields good results over a small range of weight initializations, and beyond certain weight scales both networks start to perform poorly.\n\nEvery layer's weights are initialized with the same weight scale. Without batch normalization, however, the mean and variance of the input values to each layer vary widely from layer to layer. So even if a weight scale works (helps training converge) for one layer, it is not guaranteed to work for the other layers, because their inputs lie in different ranges (different means and variances). This is not the case for networks with batch normalization layers, because the batch norm layers guarantee that the output of each layer is normalized to zero mean and unit standard deviation, and then scaled and shifted by the two learned parameters gamma and beta. In other words, the input values are consistent across the layers, making it possible for the model to converge over a large range of weight scales.\n\n\n# Batch normalization and batch size\nWe will now run a small experiment to study the interaction of batch normalization and batch size.\n\nThe first cell will train 6-layer networks both with and without batch normalization using different batch sizes. 
The second cell will plot training accuracy and validation set accuracy over time.\n\n\n```\ndef run_batchsize_experiments(normalization_mode):\n    np.random.seed(231)\n    # Try training a very deep net with batchnorm\n    hidden_dims = [100, 100, 100, 100, 100]\n    num_train = 1000\n    small_data = {\n        'X_train': data['X_train'][:num_train],\n        'y_train': data['y_train'][:num_train],\n        'X_val': data['X_val'],\n        'y_val': data['y_val'],\n    }\n    n_epochs = 10\n    weight_scale = 2e-2\n    batch_sizes = [5, 10, 50]\n    lr = 10**(-3.5)\n    solver_bsize = batch_sizes[0]\n\n    print('No normalization: batch size = ', solver_bsize)\n    model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n    solver = Solver(model, small_data,\n                    num_epochs=n_epochs, batch_size=solver_bsize,\n                    update_rule='adam',\n                    optim_config={'learning_rate': lr},\n                    verbose=False)\n    solver.train()\n\n    bn_solvers = []\n    for b_size in batch_sizes:\n        print('Normalization: batch size = ', b_size)\n        bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=normalization_mode)\n        bn_solver = Solver(bn_model, small_data,\n                           num_epochs=n_epochs, batch_size=b_size,\n                           update_rule='adam',\n                           optim_config={'learning_rate': lr},\n                           verbose=False)\n        bn_solver.train()\n        bn_solvers.append(bn_solver)\n\n    return bn_solvers, solver, batch_sizes\n\nbatch_sizes = [5,10,50]\nbn_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('batchnorm')\n```\n\n No normalization: batch size = 5\n Normalization: batch size = 5\n Normalization: batch size = 10\n Normalization: batch size = 50\n\n\n\n```\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: 
x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 2:\nDescribe the results of this experiment. What does this imply about the relationship between batch normalization and batch size? Why is this relationship observed?\n\n## Answer:\nThe results show that the performance of batch normalization depends on the batch size: the smaller the batch, the worse the accuracy. With the smallest batch size, the batch-normalized network even performs worse than the baseline. The reason is that batch normalization estimates the statistics of the entire dataset from the statistics of each mini-batch. A small batch yields noisier estimates, which degrades the model's performance.\n\n\n# Layer Normalization\nBatch normalization has proved to be effective in making networks easier to train, but the dependency on batch size makes it less useful in complex networks which have a cap on the input batch size due to hardware limitations. \n\nSeveral alternatives to batch normalization have been proposed to mitigate this problem; one such technique is Layer Normalization [2]. Instead of normalizing over the batch, we normalize over the features. In other words, when using Layer Normalization, each feature vector corresponding to a single datapoint is normalized based on the sum of all terms within that feature vector.\n\n[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. \"Layer Normalization.\" stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)\n\n## Inline Question 3:\nWhich of these data preprocessing steps is analogous to batch normalization, and which is analogous to layer normalization?\n\n1. Scaling each image in the dataset, so that the RGB channels for each row of pixels within an image sums up to 1.\n2. 
Scaling each image in the dataset, so that the RGB channels for all pixels within an image sums up to 1. \n3. Subtracting the mean image of the dataset from each image in the dataset.\n4. Setting all RGB values to either 0 or 1 depending on a given threshold.\n\n## Answer:\n2 is analogous to layer normalization: each image is rescaled using statistics computed over all of its own pixels, independently of the rest of the dataset. 3 is analogous to batch normalization: the mean image is computed across the whole dataset, just as batch normalization computes per-feature statistics across the batch (here without the variance scaling, i.e. without a learned gamma and beta).\n\n\n# Layer Normalization: Implementation\n\nNow you'll implement layer normalization. This step should be relatively straightforward, as conceptually the implementation is almost identical to that of batch normalization. One significant difference though is that for layer normalization, we do not keep track of the moving moments, and the testing phase is identical to the training phase, where the mean and variance are directly calculated per datapoint.\n\nHere's what you need to do:\n\n* In `cs231n/layers.py`, implement the forward pass for layer normalization in the function `layernorm_forward`. \n\nRun the cell below to check your results.\n* In `cs231n/layers.py`, implement the backward pass for layer normalization in the function `layernorm_backward`. \n\nRun the second cell below to check your results.\n* Modify `cs231n/classifiers/fc_net.py` to add layer normalization to the `FullyConnectedNet`. When the `normalization` flag is set to `\"layernorm\"` in the constructor, you should insert a layer normalization layer before each ReLU nonlinearity. 
\n\nRun the third cell below to run the batch size experiment on layer normalization.\n\n\n```\n# Check the training-time forward pass by checking means and variances\n# of features both before and after layer normalization \n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 =4, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before layer normalization:')\nprint_mean_std(a,axis=1)\n\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n# Means should be close to zero and stds close to one\nprint('After layer normalization (gamma=1, beta=0)')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n\ngamma = np.asarray([3.0,3.0,3.0])\nbeta = np.asarray([5.0,5.0,5.0])\n# Now means should be close to beta and stds close to gamma\nprint('After layer normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n```\n\n Before layer normalization:\n means: [-59.06673243 -47.60782686 -43.31137368 -26.40991744]\n stds: [10.07429373 28.39478981 35.28360729 4.01831507]\n \n After layer normalization (gamma=1, beta=0)\n means: [ 4.81096644e-16 -7.40148683e-17 2.22044605e-16 -5.92118946e-16]\n stds: [0.99999995 0.99999999 1. 0.99999969]\n \n After layer normalization (gamma= [3. 3. 3.] , beta= [5. 5. 5.] )\n means: [5. 5. 5. 
5.]\n stds: [2.99999985 2.99999998 2.99999999 2.99999907]\n \n\n\n\n```\n# Gradient check layernorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nln_param = {}\nfx = lambda x: layernorm_forward(x, gamma, beta, ln_param)[0]\nfg = lambda a: layernorm_forward(x, a, beta, ln_param)[0]\nfb = lambda b: layernorm_forward(x, gamma, b, ln_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = layernorm_forward(x, gamma, beta, ln_param)\ndx, dgamma, dbeta = layernorm_backward(dout, cache)\n\n# You should expect to see relative errors between 1e-12 and 1e-8\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.4336158494902849e-09\n dgamma error: 4.519489546032799e-12\n dbeta error: 2.276445013433725e-12\n\n\n# Layer Normalization and batch size\n\nWe will now run the previous batch size experiment with layer normalization instead of batch normalization. 
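The reason to expect this can be checked with a few lines of standalone NumPy (independent of the assignment code): layer normalization computes its mean and variance along the feature axis, so each datapoint is normalized using only its own values, and the result does not depend on which other examples happen to be in the batch.

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(8, 5)   # batch of 8 examples, 5 features each

def layernorm(a, eps=1e-5):
    # statistics per datapoint (axis=1), not per feature over the batch
    mu = a.mean(axis=1, keepdims=True)
    var = a.var(axis=1, keepdims=True)
    return (a - mu) / np.sqrt(var + eps)

# Normalizing the full batch and normalizing a 2-example sub-batch
# give identical results for the shared rows.
print(np.allclose(layernorm(x)[:2], layernorm(x[:2])))  # True
```

With batch normalization the analogous check (statistics along `axis=0`) would fail, which is exactly why its quality degrades as the batch shrinks.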
Compared to the previous experiment, you should see a markedly smaller influence of batch size on the training history!\n\n\n```\ndef run_dimsize_experiments(normalization_mode):\n np.random.seed(231)\n # Try training nets of varying hidden dimension with normalization\n batch_size = 32\n dims = [[5] * 4, [10] * 4, [20] * 4, [50] * 4]\n num_train = 1000\n small_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n }\n n_epochs = 10\n weight_scale = 2e-2\n lr = 10**(-3.5)\n\n print('No normalization: dim size = ', dims[0][0])\n model = FullyConnectedNet(dims[0], weight_scale=weight_scale, normalization=None)\n solver = Solver(model, small_data,\n num_epochs=n_epochs, batch_size=batch_size,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n solver.train()\n \n bn_solvers = []\n for dims_ in dims:\n print('Normalization: dim size = ', dims_[0])\n bn_model = FullyConnectedNet(dims_, weight_scale=weight_scale, normalization=normalization_mode)\n bn_solver = Solver(bn_model, small_data,\n num_epochs=n_epochs, batch_size=batch_size,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n bn_solver.train()\n bn_solvers.append(bn_solver)\n \n return bn_solvers, solver, [d[0] for d in dims]\n```\n\n\n```\nln_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('layernorm')\n\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Layer Normalization)', 'Epoch', solver_bsize, ln_solvers_bsize,\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Layer Normalization)', 'Epoch', solver_bsize, ln_solvers_bsize,\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 
10)\nplt.show()\n```\n\n\n```\nln_solvers_dsize, solver_dsize, batch_sizes = run_dimsize_experiments('layernorm')\n\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Layer Normalization)', 'Epoch', solver_dsize, ln_solvers_dsize,\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=[5, 10, 20, 50])\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Layer Normalization)', 'Epoch', solver_dsize, ln_solvers_dsize,\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=[5, 10, 20, 50])\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 4:\nWhen is layer normalization likely to not work well, and why?\n\n1. Using it in a very deep network\n2. Having a very small dimension of features\n3. Having a high regularization term\n\n\n## Answer:\n1. [INCORRECT] According to the results above, a network with several hidden layers (which can be considered deep) still trains well with layer normalization.\n2. [CORRECT] This mirrors the small-batch problem of batch normalization: layer normalization computes statistics over the features of each individual datapoint, so a very small feature dimension makes those statistics noisy and the normalization unreliable.\n3. 
[CORRECT] Having a high regularization term prevents the network from learning complex transformations and can make it underfit the training data.\n\n\n
\n\n**Course:** TVM4174 - Hydroinformatics for Smart Water Systems - Spring 2022\n\n# Using Plotly Package for Interactive Visualization\n\n*Developed by Bastian Habbel and Leonardo Sigmund*\n\n
\n\n# Introduction\n\nThe plotly Python library is an interactive, open-source plotting library that supports over 40 unique chart types covering a wide range of statistical, financial, geographic, scientific, and 3-dimensional use-cases. Built on top of the Plotly JavaScript library (plotly.js), plotly enables Python users to create interactive web-based visualizations that can be displayed in Jupyter notebooks, saved to standalone HTML files, or served as part of pure Python-built web applications using Dash. The plotly Python library is sometimes referred to as \"plotly.py\" to differentiate it from the JavaScript library.\n\nFull details for the plotly library are given in the [Plotly Documentation](https://plotly.com/python/). This notebook aims to give a basic understanding of the possibilities of this package in the field of water distribution, general data science and mapping.\n\nThe sublibraries used in this notebook are `plotly.express`, imported as `px`, and `plotly.graph_objects`, imported as `go`. Even though it is not necessary to re-import the packages for every chapter, we decided to leave the imports in the code so that the chapters can be used independently.\n\n# Previous knowledge\n\nThis notebook requires some previous knowledge of a few Python packages. In case you have difficulties understanding the code, we recommend taking a look at [Mark Bakker's Tutorials](https://mbakker7.github.io/exploratory_computing_with_python/).\n- `matplotlib`: We refer to it in some parts, since the working process is similar. 
\n - Notebook 1: [Basics and Plotting](https://nbviewer.org/github/mbakker7/exploratory_computing_with_python/blob/master/notebook1_basics_plotting/py_exploratory_comp_1_sol.ipynb)\n- `numpy`: Used for functions \n - Notebook 4: [Functions](https://nbviewer.org/github/mbakker7/exploratory_computing_with_python/blob/master/notebook4_functions/py_exploratory_comp_4_sol.ipynb) \n- `pandas`: In general useful for data handling with dataframes\n - Notebook 8: [Pandas and time series](https://nbviewer.org/github/mbakker7/exploratory_computing_with_python/blob/master/notebook8_pandas/py_exploratory_comp_8_sol.ipynb)\n- `wntr`: This package should be known from the class; it is used to analyze water distribution networks. A notebook explaining the necessary basics of this package should be available on Blackboard.\n \n \n\n# Content\n\n**0. [Installation of the package](#0.-Installation-of-the-package)**\n\n**1. [Plotting data series](#1.-Plotting-data-series)**\n - Basic bar plot\n - Customized plot\n - Range slider and selectors\n - Basic line plot\n \n**2. [Statistical Analysis: Histogram](#2-Statistical-Analysis:-Histogram)**\n\n**3. [3D plotting](#3.-3D-plots)**\n - 3D scatter plots\n - 3D surface plots\n \n**4. [Map plotting](#4.-Map-plotting)**\n\n**5. [Plotting a water distribution network](#5.-Plotting-a-water-distribution-network)**\n\n**6. [Animations](#6.-Animations)**\n\n**7. [Saving figures](#7.-Saving-figures)**\n\n\nThere are some tasks included, which you can use to test your knowledge. The [Solutions to the tasks](#Task-solutions) are at the end of this notebook.\n\nLet's start!\n\n# 0. Installation of the package\n\nPlotly can be installed using `pip`. Once installed on the kernel, this step can be skipped. Be aware that a newer version may be available.\n\n\n```python\n!pip install plotly==5.7.0 \n```\n\n# 1. Plotting data series\n\n### Import data\n\nFirst, we need some example data. 
In this case, we use annual rainfall data for Rotterdam which is stored under `rotterdam_rainfall_2012.txt`. Here, we prepare the data, since we start with a raw text file.\n\n\n```python\nimport pandas as pd\n\n# 1. read data\nraindailyRotterdam = pd.read_csv('data/rotterdam_rainfall_2012.txt', sep=',', skiprows=9, parse_dates=['YYYYMMDD'], skipinitialspace=True)\n\n# 2. convert to mm/d\nraindailyRotterdam.iloc[:,2] = raindailyRotterdam.iloc[:,2] * 0.1\n\n# 3. replace false values\nraindailyRotterdam.loc[raindailyRotterdam.RH<0.0, 'RH'] = 0.0\n```\n\n### Basic bar plot\n\nThe library `express`, usually imported as `px`, is built on top of the library `graph_objects`, but makes the code simpler.\nUse Plotly Express to show the data in an interactive graph:\n\n\n```python\nimport plotly.express as px\n\nfig = px.bar(raindailyRotterdam, x='YYYYMMDD', y='RH')\nfig.show()\n```\n\nYou can already zoom in on the graph and see more details compared to other methods. Give it a try with your mouse! There is a menu in the top-right corner which, e.g., allows you to return to the initial state.\n\n### Customized plot\n\nWe can store and customize the figure, similar to the procedure used in `matplotlib`:\n\n- `ticklabelmode` indicates where the labels are positioned.\n- `dtick` defines the interval of the labels.\n\n\n```python\n# store the plot in a variable\nfig = px.bar(raindailyRotterdam, x='YYYYMMDD', y='RH')\n\n# add a title. The content of the following line could also be part of the brackets above:\nfig.update_layout(title = 'Rainfall distribution Rotterdam')\n\n# customize the axes\nfig.update_xaxes(title='2012', ticklabelmode=\"period\")\nfig.update_yaxes(title='Precipitation [mm/day]', dtick='2')\n\n# show the graph\nfig.show()\n```\n\n### Rangeslider and selectors\n\nAs you may have noticed, it's sometimes hard to zoom into a part of the graph without unintentionally changing the scaling in the y-direction. 
Therefore, Plotly provides additional navigation tools, which are shown in this chapter.\n\n#### Rangeslider\nA rangeslider can be added to allow a closer look at the data series by setting the boolean `rangeslider_visible` to True:\n\n\n```python\nfig = fig.update_xaxes(rangeslider_visible=True)\n```\n\nMore information about the rangeslider can be found in the [documentation](https://plotly.com/python/reference/layout/xaxis/#layout-xaxis-rangeslider).\n\n#### Selectors\nAdditional selectors for pre-defined time intervals might be useful as well:\n\n\n```python\nfig = fig.update_xaxes(\n rangeslider_visible=True,\n rangeselector=dict(\n buttons=list([\n dict(count=7, label=\"7 days\", step=\"day\", stepmode=\"todate\"),\n dict(count=1, label=\"1 month\", step=\"month\", stepmode=\"backward\"),\n dict(count=6, label=\"1/2 year\", step=\"month\", stepmode=\"backward\"),\n dict(step=\"all\")\n ])\n )\n)\n```\n\nIt is important to mention that the dates have to be parsed beforehand so that plotly knows where to set the time steps. Again, in case you have further questions, they are probably answered in the [documentation](https://plotly.com/python/reference/layout/xaxis/#layout-xaxis-rangeselector).\n\n\nLet's have a look at the result:\n\n\n```python\nfig.show()\n```\n\nThe selectors appear on top of the graph and the slider at the bottom.\n\nGive it a try! You can always come back to the initial view using the navigation in the top-right corner.\n\n### Basic line plot\n\nSome data, for instance yearly trends, are better shown as a line than as a bar plot. Plotly Express provides interactive line charts for this purpose as well, as will be shown in the following. 
\n\nGapminder Foundation is a non-profit venture registered in Stockholm, Sweden, that promotes sustainable global development and achievement of the United Nations Millennium Development Goals by increased use and understanding of statistics and other information about social, economic and environmental development at local, national and global levels. This also touches our topic, water distribution networks. Let us take a global view of how access to water has developed in the least-developed countries.\n\nTherefore, we first import the data about overall water access, provided by the Gapminder Foundation and the World Bank. A look into the dataframe tells us that there is data for 168 countries worldwide, although not all of the series are complete. We will use Plotly to visualize the development of the 10 countries with the worst access to water in 2000.\n\n\n```python\n# Using Pandas to read the csv file and sort the data\nwater = pd.read_csv('data/gapminder_water_access.csv', index_col=0)\nranking2000 = water.sort_values('2000')\nworst2000 = ranking2000.head(10)\n```\n\n\n```python\n# Using plotly for visualization:\nfig = px.line(worst2000.T, color_discrete_sequence=px.colors.diverging.balance)\n\n# add a title. The content of the following line could also be part of the brackets above:\nfig.update_layout(title = 'Development of the 10 countries with the poorest water access in the year 2000')\n\n# customize the axes\nfig.update_xaxes(title='Year', ticklabelmode=\"period\", dtick = 1)\nfig.update_yaxes(title='overall water access [%]', dtick='10')\n```\n\nAs you can see, even though the situation is not perfect yet, a lot of improvement has taken place!\n\nEven though the lines have different colors, it might be difficult to distinguish between the different countries. 
The interactive graphic helps out in two ways: you can hover over the lines to see the country as well as the x and y values, or you can use the legend on the right side to deselect (one click) or isolate (double click) a country. Give it a try!\n\n### Task 1: show a time-series analysis\n\nIn this task, you have to combine the two examples from above: you should create a visualization of the precipitation measurements in Sagelva from 2018 to 2020.\n\n- create a pandas dataframe from the csv file named `'Sagelva_Precipitation.csv'` from the folder `'data'` and have a look at it\n- create a barplot using the plotly express library. Be aware of the following things:\n - Bars: The bars should not add up but stand beside each other --> use `barmode`\n - Title: Add a `title` for the plot\n - Axis: Add titles and make sure that the dates are shown correctly on the x-axis\n - Legend: make sure that you can select and deselect the different years\n- Add the additional navigation tool `rangeslider`\n\n\n```python\n# Your code here...\n```\n\n[Solution to task 1](#Solution-to-task-1)\n\n# 2. Statistical analysis: Histogram\n\nBack to the first data set about the precipitation in Rotterdam.\n\nA statistical analysis of the data can be helpful to find a pattern. The express library therefore also includes some statistical tools, one of which will be shown here. 
On top, we use the `graph_objects` library, usually imported as `go`, which is the basis of the plotly express library, in order to add a trace showing the daily precipitation as well.\n\n\n```python\nimport plotly.express as px\nimport plotly.graph_objects as go\n\nfig = px.histogram(raindailyRotterdam, x='YYYYMMDD', y='RH', histfunc='avg', title=\"Monthly Average Rainfall\", range_y=['-1','25'], range_x=['2012-01-01', '2012-12-31'])\nfig.update_traces(xbins_size=\"M1\")\nfig.update_xaxes(title='2012', showgrid=True, ticklabelmode=\"period\", dtick=\"M1\", tickformat=\"%b\\n%Y\")\nfig.update_layout(bargap=0.05)\nfig.add_trace(go.Scatter(mode='markers', x = raindailyRotterdam['YYYYMMDD'], y = raindailyRotterdam['RH'], name=\"daily\"))\n```\n\n### Task 2: Time series analysis\n\nPlease use the rainfall data of Rotterdam to create a barplot of the total rainfall per month. Represent the daily variation with a line.\n\n\n```python\n# Your code here...\n```\n\n[Solution to task 2](#Solution-to-task-2)\n\n# 3. 3D plots\n\nPlotly enables you to create 3-dimensional graphs that you can pan and zoom using the mouse.\n\n### 3D scatter plots\n\nThe simplest of these are scatter plots. These can be used to graph points that depend on two variables.\n\nFurther examples can be found in the official [documentation](https://plotly.com/python/3d-scatter-plots/).\n\nHere is an example of the measured pressures at different sensors (hydrants) for different scenarios in a water distribution system. 
Shown are the final results of the 2nd assignment of the Hydroinformatics course.\n\nThe margins are removed, because axis labels can't move into the margin areas of 3D plots.\n\n\n```python\n# import packages\nimport plotly.express as px\nimport numpy as np\nimport pandas as pd\n\n# read in data and convert it into a usable format\ndf = pd.read_csv('data/pressures.csv', decimal='.', sep=',')\ndf = df.melt('Sensors')\ndf.columns = ['Sensors', 'Scenarios', 'Pressure [m]']\ndf2 = pd.read_csv('data/SSE_values.csv', decimal='.', sep=',')\ndf2 = df2.melt('Sensors')\ndf.insert(3, 'SSE', df2[\"value\"])\n\n# plot the data\nfig = px.scatter_3d(df, x='Sensors', y='Scenarios', z='Pressure [m]', height=700)\n\n# remove margins\nfig.update_layout(margin=dict(l=0, r=0, t=0, b=0))\n\nfig.show()\n```\n\nYou can use your mouse to rotate the 3D plot and the mouse wheel to zoom in. Try it out!\n\nAs a fourth dimension, color can be used. In the following plot, the sums of squared errors (SSE) between the measured and simulated pressures over all scenarios and sensors in the water distribution system are indicated by their `color`.\nFurthermore, the `opacity` can be changed.\n\n\n```python\n# plot the data\nfig = px.scatter_3d(df, x=\"Sensors\", y=\"Scenarios\", z='Pressure [m]', color = \"SSE\", \n opacity = .6, title=\"Measured pressures\", width=800, height=800)\n\nfig.show()\n```\n\nTo change the appearance of 3-dimensional plots, the function `fig.update_scenes` is used. Here are a few examples of what can be changed. Also check the [documentation](https://plotly.com/python/reference/layout/scene/).\n\n- `xaxis_nticks` changes the number of ticks on the x-axis.\n- `camera_projection_type` can be used to switch the camera projection between \"perspective\" and \"orthographic\".\n- `xaxis_color` changes all colors associated with this axis.\n- `xaxis_gridcolor` can be used to change the color of the x-axis grid. 
RGB values can also be used to define the color.\n- `xaxis_backgroundcolor` is used to change the color of the background.\n- `xaxis_tickangle` is used to change the angle of the tick labels.\n- `xaxis_ticks` is used to disable the tick marks.\n- `xaxis_tickfont` is used to change the `color`, the `family` (the font) and the `size` of the tick labels.\n\nTo prevent the labels from getting cut off and to make sure the title is displayed on the sides, the `margin` can be set with the function `fig.update_layout`. In 3D plots, axis labels don't actually protrude into this margin area, though.\n\n\n```python\n# change appearance\nfig.update_scenes(xaxis_nticks=12, \n camera_projection_type=\"orthographic\", \n xaxis_color=\"black\",\n xaxis_gridcolor='rgb(204, 204, 204)',\n yaxis_gridcolor='rgb(204, 204, 204)',\n zaxis_gridcolor='rgb(204, 204, 204)',\n xaxis_backgroundcolor=\"white\",\n yaxis_backgroundcolor=\"white\",\n zaxis_backgroundcolor=\"white\",\n xaxis_tickangle= 30, \n xaxis_ticks=\"\", yaxis_ticks=\"\", \n xaxis_tickfont=dict(size=11, family=\"PT Sans Narrow\"),\n yaxis_tickfont=dict(size=11, family=\"PT Sans Narrow\"),\n zaxis_tickfont=dict(size=11, family=\"PT Sans Narrow\"))\n\n# increase margin\nfig.update_layout(margin=dict(l=50, r=50, t=100, b=50))\n```\n\n### 3D surface plots\n\nThe Rastrigin function is a well-known function with a lot of local optima, but only one global optimum. 
It is defined as:\n\n\\begin{align}\nf(\\mathbf{x}) \\ = \\ a \\cdot D + \\sum_{i=1}^{D} \\left(x_{i}^{2} - a \\cdot \\cos (2 \\pi x_i) \\right)\n\\end{align}\n\nFor more details check it out on [Wikipedia](https://de.wikipedia.org/wiki/Rastrigin-Funktion)!\n\nTo display it as a 3-dimensional surface plot, we use the `go.Figure` function, part of `plotly.graph_objects`.\n\n\n```python\n# define the rastrigin function for D dimensions\ndef rastrigin(xvector, a=15):\n \n D = len(xvector)\n value = D * a \n for x in xvector:\n value += x ** 2 - a * np.cos(2 * np.pi * x)\n return value\n```\n\n\n```python\n# calculate data for the desired range\nx = np.linspace(-7, 7, 250)\nX, Y = np.meshgrid(x, x)\nZ = rastrigin([X, Y])\n```\n\n\n```python\n# create surface figure\nimport plotly.graph_objects as go\n\nfig = go.Figure(data=[go.Surface(x=X, y=Y, z=Z)])\n\nfig.update_layout(title='Rastrigin function',\n width=800, height=800,\n margin=dict(l=65, r=50, b=90, t=90))\n\nfig.show()\n```\n\nYou can see the many local optima, but there is only one global minimum, which is located at (0,0).\n\n### Task 3: Create a surface plot of a monkey saddle\n\nUse your gained knowledge to create a 3D surface plot of a \"monkey saddle\". \nA monkey saddle is the name of the following function: \n\\begin{align}\nz \\ = \\ x^{3}-3xy^{2}\n\\end{align}\n\nFor more information check it out on [Wikipedia](https://en.wikipedia.org/wiki/Monkey_saddle)!\n\n- Show the graph in a range from -10 to 10 (in x and y direction)\n- Change the colors to make it look like \"dark mode\"\n\n\n```python\n# Your code here...\n```\n\n[Solution to task 3](#Solution-to-task-3)\n\n# 4. Map plotting\n\nThe map feature can be useful for any kind of geographical information. For instance, data regarding water distribution networks can be visualized, as well as socio-geographical patterns like the distribution of a population within a country.\n\nFor this task, we import a table with cities and small towns in Norway. 
The data is downloaded from [SimpleMaps.com](https://simplemaps.com/data/no-cities) and converted to Unicode so that it is readable with pandas. More information about pandas can be found in [Mark Bakker's Notebook 8](https://nbviewer.org/github/mbakker7/exploratory_computing_with_python/blob/master/notebook8_pandas/py_exploratory_comp_8_sol.ipynb).\n\n\n```python\n# Import the packages\nimport pandas as pd\nimport plotly.express as px\n\n# read data\nplacesNo = pd.read_csv('data/places_norway.csv')\n```\n\nWe can use the [Scatter Mapbox](https://plotly.com/python/mapbox-layers/) function, which requires latitudinal and longitudinal coordinates and can show us the cities in a color code according to their population.\n\n- `dataframe, lat='...', lon='...'`: This function needs to know where to find the coordinates in the dataframe. Coordinates are given in the decimal-degree format with one latitude (y-direction: North/South) and one longitude (x-direction: East/West) value.\n- `layout.mapbox.style` defines the lowest layer, also known as the base map. \n - It refers to an open library and is used by well-known services such as Strava, Lonely Planet and National Geographic. \n - Default is a white background, so if you want to have a map in the background you have to set a style. \n - Three commonly used styles are `'open-street-map'`, `'carto-positron'` and `'stamen-terrain'`. \n - Further styles can be found in the [documentation](https://plotly.com/python/mapbox-layers/).\n - It is possible to use satellite images provided by the United States Geological Survey (USGS).\n- `hover_name` defines the top row of the box which is shown when you hover over a point on the map.\n- `hover_data` will be shown in the box as well.\n- `zoom` defines the initial zoom level. 
1 shows the entire world, 10 is suitable to show points in a city.\n- The layout is updated to adjust the output size and add a title.\n\n\n```python\nfig_map = px.scatter_mapbox(placesNo, lat='lat', lon='lng', \n mapbox_style = 'carto-positron', \n hover_name = 'City',\n hover_data = [\"Population\", \"Status\"],\n color = 'Administration', \n color_discrete_sequence=px.colors.cyclical.IceFire,\n size = 'Population',\n zoom=3.1)\nfig_map.update_layout(title='Cities in Norway',\n width=950, height=800)\nfig_map.show()\n```\n\nAs in the time-series analysis, we can use the mouse to navigate through the map. Thereby we can extract more information from the interactive map than would be possible with a static map. Give it a try and find out the population of Trondheim!\n\n\n### Task 4: Create a map\n\nLet's use the gained knowledge and create a map of all the Swedish cities:\n- The csv file you need is called `'places_sweden.csv'`\n- Select a darker base map than in the example and adapt the color scheme.\n- Make sure that the city names, the authority type as well as the population are shown. You may have to take a look into the dataframe to see the correct labels.\n- Does this map need another initial zoom level?\n\n\n```python\n# Your code here...\n```\n\n[Solution to task 4](#Solution-to-task-4)\n\n# 5. Plotting a water distribution network\n\nThe `wntr` package can be used as a link between Python and an EPANET network model. More information about EPANET is given in the lectures as well as in the [documentation](https://epanet22.readthedocs.io/en/latest/index.html). \n\nIn the `wntr` package there is an implementation of `plotly`, which can be used to create basic interactive plots of the water network model. [Documentation](https://wntr.readthedocs.io/en/stable/apidoc/wntr.graphics.network.html?highlight=plot_interactive) \n\nHowever, this functionality is quite limited, as only `node_attribute`s can be displayed. 
The following example shows how to create this type of interactive html file using the `wntr` package.\n\n\n```python\nimport wntr\n```\n\n\n```python\n# get elevation data\nwn = wntr.network.WaterNetworkModel('data/Exercise_Original.inp')\nelevation = wn.query_node_attribute('elevation')\n\n# create plot (creates an html file)\nwntr.graphics.plot_interactive_network(wn, filename='data/Interactive_Model.html', auto_open=False, \n node_attribute=elevation, node_cmap = \"magma\", \n title= \"Elevation of Network Model\", \n figsize = (854,480))\n```\n\nTo display an html file inside a Jupyter notebook, the `IPython.display` module can be used. \n\n\n```python\n# display html file in jupyter notebooks\nfrom IPython.display import IFrame\nIFrame(src='data/Interactive_Model.html', width=980, height=550)\n```\n\n# 6. Animations\n\nTo create a simple animation in a scatter plot, `animation_frame` and `animation_group` can be used. \n- `animation_frame` defines over which data the animation takes place. We use the time in years.\n- `animation_group`: rows with the same group are considered as the same object and therefore animated from one frame to the next. \n\nCheck out the [documentation](https://plotly.com/python/animations/) for more details.\n\n\n```python\n# Animated scatter plot\nimport pandas as pd\nimport plotly.express as px\n\ndf_worlddata = px.data.gapminder()\nfig = px.scatter(df_worlddata, x=\"gdpPercap\", y=\"lifeExp\", animation_frame=\"year\", animation_group=\"country\",\n size=\"pop\", color=\"continent\", hover_name=\"country\",\n log_x=True, size_max=55, range_x=[100,100000], range_y=[25,90])\nfig.update_xaxes(title='GDP per capita [PPP]')\nfig.update_yaxes(title='Life Expectancy [years]')\nfig.show()\n```\n\nYou can make use of the interactive features again! By hovering over the circles you can see which country they represent and all their associated data. 
Furthermore, you can use the legend on the right side to deselect (one click) or to isolate (double click) one continent, to make the number of countries less overwhelming.\n\nTry it out yourself!\n\nThe same concept can also be applied to barplots using the `px.bar` function.\n\n\n```python\n# Animated bar plot\nfig = px.bar(df_worlddata, x=\"continent\", y=\"pop\", color=\"continent\",\n animation_frame=\"year\", animation_group=\"country\", range_y=[0,4000000000])\nfig.update_xaxes(title='Continent')\nfig.update_yaxes(title='Population')\nfig.show()\n```\n\n### Task 6: Animate population change\n\nUse the gained knowledge to create a barplot animation for the relative population change of African countries since 1952. \n- Use the gapminder data as a basis\n- Display the countries' absolute population as a color palette\n\n\n```python\n# Your code here...\n```\n\n[Solution to task 6](#Solution-to-task-6)\n\n# 7. Saving figures\n\nIt might be useful for you to save your figures in order to use them somewhere else. Of course, you can simply take screenshots and add them to your text. But the interesting part of the plotly package is that it's interactive. Therefore, it has to be saved in a different way.\n\n### html\n\nThe easiest way is to write an html file which you can for instance use on another homepage or add to an email:\n\n\n```python\nfig.write_html('data/fig.html')\n```\n\nThe figure most recently stored in the variable `fig` is now saved at the location `data/fig.html`, or at the path that you have chosen. It's important that the folder already exists. 
Take care to not overwrite your files!\n\nNow, we can have a look at the saved figure here in the notebook as well:\n\n\n```python\n# display html file in jupyter notebooks\nfrom IPython.display import IFrame\nIFrame(src='data/fig.html', width=980, height=550)\n```\n\nYou can use this code to open any other html file from your computer as well.\n\n### Dash\n\nSince writing a web application that uses the plotly library opens a whole new chapter, we recommend the documentation of [Dash](https://plotly.com/dash/) in case you want to use the data on another platform. For all the plots shown above, there are ways to run them on a server using Dash; this is described at the end of each documentation chapter. It might be worth it: there are many possibilities and, as mentioned before, this is used by big companies like National Geographic, Lonely Planet, and so on.\n\nThat's it, we hope you have learned something :)\n\n[Click here to go back to the start](#Introduction) \n\n
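Two practical pitfalls were mentioned above: `write_html` fails if the target folder does not exist, and it silently overwrites an existing file. Both can be guarded with a few lines of standard-library Python. This is only a sketch; the helper name `safe_path` is our own invention, not part of plotly:

```python
import os

def safe_path(folder, filename):
    """Return a writable path inside `folder`, creating the folder if
    needed and appending a counter if the file already exists."""
    os.makedirs(folder, exist_ok=True)   # guard: the folder must exist
    base, ext = os.path.splitext(filename)
    candidate = os.path.join(folder, filename)
    counter = 1
    while os.path.exists(candidate):     # guard: never overwrite an earlier export
        candidate = os.path.join(folder, f"{base}_{counter}{ext}")
        counter += 1
    return candidate

# With a plotly figure this would be used as:
# fig.write_html(safe_path('data', 'fig.html'))
```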
\n
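Task 6 above asks for each country's population relative to its 1952 value, and the solution further down computes this with an `iterrows` loop. The same column can be obtained in one vectorized step with a pandas group-wise transform. The small DataFrame below only stands in for the gapminder data (same column names, invented numbers):

```python
import pandas as pd

# Stand-in for px.data.gapminder(): same column names, invented numbers
df_africa = pd.DataFrame({
    'country': ['Egypt', 'Egypt', 'Kenya', 'Kenya'],
    'year':    [1952, 2007, 1952, 2007],
    'pop':     [20_000_000, 80_000_000, 6_000_000, 38_000_000],
})

# Divide every row's population by that country's first (here 1952) value
df_africa = df_africa.sort_values(['country', 'year'])
df_africa['rel_pop'] = df_africa['pop'] / df_africa.groupby('country')['pop'].transform('first')
print(df_africa)
```

The `transform('first')` call broadcasts each country's first population back onto every row of that country, so no explicit loop over rows is needed.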
\n\n# Task solutions\n\n
\n
\n\n# Solution to task 1\n\n\n```python\nimport plotly.express as px\nimport pandas as pd\n\n#1. read data\nraindaily = pd.read_csv('data/Sagelva_Precipitation.csv', parse_dates=['Date'], index_col=0)\n\n#create barplot\nfig = px.bar(raindaily, barmode='group')\n\n#add a title\nfig.update_layout(title = 'Daily Precipitation in Sagelva')\n\n#customize the axes\nfig.update_xaxes(title='Year', ticklabelmode=\"period\", rangeslider_visible=True)\nfig.update_yaxes(title='Daily precipitation [mm]')\n```\n\nContinue with chapter 2: [Statistical analysis](#2.-Statistical-analysis:-Histogram)\n\n# Solution to task 2\n\n\n```python\nimport plotly.express as px\nimport plotly.graph_objects as go\nimport pandas as pd\n\n#prepare data\nraindailyRotterdam = pd.read_csv('data/rotterdam_rainfall_2012.txt', sep=',', skiprows=9, parse_dates=['YYYYMMDD'],skipinitialspace=True)\nraindailyRotterdam.iloc[:,2] = raindailyRotterdam.iloc[:,2] * 0.1\nraindailyRotterdam.loc[raindailyRotterdam.RH<0.0, 'RH'] = 0.0\n\n#make plot\nfig = px.histogram(raindailyRotterdam, x='YYYYMMDD', y='RH', histfunc='sum', title=\"Monthly Total Rainfall\", range_y=[-1,160], range_x=['2012-01-01', '2012-12-31'])\nfig.update_traces(xbins_size=\"M1\")\nfig.update_xaxes(title = '2012', showgrid=True, ticklabelmode=\"period\", dtick=\"M1\", tickformat=\"%b\\n%Y\")\nfig.update_layout(bargap=0.05)\nfig.add_trace(go.Scatter(mode='lines', x = raindailyRotterdam['YYYYMMDD'], y = raindailyRotterdam['RH'], name=\"daily\"))\n```\n\nContinue with chapter 3: [3D plotting](#3.-3D-plots)\n\n# Solution to task 3\n\n\n```python\n#define the Monkey saddle function\ndef monkey(x,y):\n z=x**3-3*x*y**2\n return z\n```\n\n\n```python\n#create xyz data for this formula\nimport numpy as np\nr = np.linspace(-10, 10, 250)\nx,y = np.meshgrid(r, r)\nz = monkey(x,y)\n```\n\n\n```python\n#make graph\nimport plotly.graph_objects as go\nfig = go.Figure(data=[go.Surface(x=x, y=y, z=z)])\n\nfig.update_layout(title='Monkey saddle function',\n
width=800, height=800,\n margin=dict(l=65, r=50, b=90, t=90))\n\n#dark mode\nfig.update_scenes(xaxis_gridcolor='white',\n yaxis_gridcolor='white',\n zaxis_gridcolor='white',\n xaxis_backgroundcolor=\"black\",\n yaxis_backgroundcolor=\"black\",\n zaxis_backgroundcolor=\"black\")\nfig.show()\n```\n\nContinue with chapter 4: [Map plotting](#4.-Map-plotting)\n\n# Solution to task 4\n\n\n```python\nimport pandas as pd\nimport plotly.express as px\n\nplacesSe = pd.read_csv('data/places_sweden.csv')\nmap = px.scatter_mapbox(placesSe, lat='lat', lon='lng', \n mapbox_style = 'carto-darkmatter', \n hover_name = 'City',\n hover_data = [\"Population\", \"Status\"],\n color = 'Administration', \n color_discrete_sequence=px.colors.cyclical.IceFire,\n size = 'Population',\n zoom=3.25)\nmap.update_layout(title='Cities in Sweden',\n width=950, height=800)\nmap.show()\n```\n\nContinue with chapter 5: [Plotting a water distribution network](#5.-Plotting-a-water-distribution-network)\n\n# Solution to task 6\n\n\n```python\n#import packages\nimport pandas as pd\nimport plotly.express as px\nimport warnings\nwarnings.filterwarnings('ignore')\n\n#import data\ndf_worlddata = px.data.gapminder()\ndf_africa = df_worlddata.loc[df_worlddata['continent']=='Africa']\n\n#calculate relative population\nfor i,line in df_africa.iterrows():\n if line[\"year\"]==1952:\n basepop = line[\"pop\"]\n df_africa.at[i,\"rel_pop\"]= line[\"pop\"]/basepop\n```\n\n\n```python\n#create barplot\nfig = px.bar(df_africa, x=\"country\", y=\"rel_pop\", color=\"pop\",\n animation_frame=\"year\", animation_group=\"country\", range_y=[0,8], \n title= \"Relative population change in African countries since 1952\")\n\nfig.update_layout(width=1000, height=800)\nfig['layout']['updatemenus'][0]['pad']=dict(r= 10, t= 150)\nfig['layout']['sliders'][0]['pad']=dict( t= 150,)\n\n#update axes labels\nfig.update_xaxes(title='Country')\nfig.update_yaxes(title='Relative Population (compared to 
1952)')\n\nfig.update_scenes(xaxis_nticks=100)\nfig.show()\n```\n\nFinish with chapter 7: [Saving figures](#7.-Saving-figures)\n", "meta": {"hexsha": "f17087fbfc7150a008ea960dee2bdb36bc166371", "size": 47979, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Group_Projects/Plotly Package for Interactive Visualization/Plotly Package for Interactive Visualization.ipynb", "max_stars_repo_name": "steffelbauer/2022_Hydroinformatics", "max_stars_repo_head_hexsha": "b043984243fb5ae5559d40252d74ce024a15313e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Group_Projects/Plotly Package for Interactive Visualization/Plotly Package for Interactive Visualization.ipynb", "max_issues_repo_name": "steffelbauer/2022_Hydroinformatics", "max_issues_repo_head_hexsha": "b043984243fb5ae5559d40252d74ce024a15313e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Group_Projects/Plotly Package for Interactive Visualization/Plotly Package for Interactive Visualization.ipynb", "max_forks_repo_name": "steffelbauer/2022_Hydroinformatics", "max_forks_repo_head_hexsha": "b043984243fb5ae5559d40252d74ce024a15313e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.2006711409, "max_line_length": 620, "alphanum_fraction": 0.5887784239, "converted": true, "num_tokens": 7518, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.403566839388498, "lm_q2_score": 0.29421497216298875, "lm_q1q2_score": 0.1187354064165923}} {"text": "\n\n\n# PHY321: Conservative forces, examples and theory\n**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/)**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA and Department of Physics, University of Oslo, Norway\n\nDate: **Feb 18, 2022**\n\nCopyright 1999-2022, [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/). Released under CC Attribution-NonCommercial 4.0 license\n\n## Aims and Overarching Motivation\n\n### Monday February 14\n\nDiscussion of conditions for conservative forces and summing up our\ndiscussion on conservative forces. Discussion of potential surfaces\nand their interpretations. \n\n**Reading suggestion**: Taylor sections 4.6, 4.9, 4.10 and 5.1 and 5.2 on harmonic oscillations.\n\n### Wednesday February 16\n\nThe Earth-Sun problem and energy-conserving algorithms and how to encode in more efficient ways various algorithms for solving the equations of motion (Euler, Euler-Cromer and Velocity Verlet).\n\n* [Links to Julie's material on code reusability](https://github.com/mhjensen/Physics321/tree/master/doc/pub/week7/ipynb)\n\n**Reading suggestions**: Taylor section 4.8 and these notes\n\n### Friday February 18\n\nWorking on the Earth-Sun problem and hw 5. 
Hints on various exercises.\n\n**Reading suggestions:** Taylor chapters 3 and 4 and these notes.\n\n## The curl of a force and link between Line Integrals and conservative forces\n\nThe concept of line integrals plays an important role in our discussion of energy conservation,\nour definition of potentials and conservative forces.\n\nLet us remind ourselves of some of the basic elements (most of you may\nhave seen this in a calculus course under the general topic of vector\nfields).\n\nWe define an integration path $C$, that is, we integrate\nfrom a point $\\boldsymbol{r}_1$ to a point $\\boldsymbol{r}_2$. \nLet us assume that the path $C$ is parametrized by an arc length $s$. In three dimensions we have the following representation of $C$\n\n$$\n\\boldsymbol{r}(s)=x(s)\\boldsymbol{e}_1+y(s)\\boldsymbol{e}_2+z(s)\\boldsymbol{e}_3,\n$$\n\nthen our integral of a function $f(x,y,z)$ along the path $C$ is defined as\n\n$$\n\\int_Cf(x,y,z)ds=\\int_a^bf\\left(x(s),y(s),z(s)\\right)ds,\n$$\n\nwhere the initial and final points are $a$ and $b$, respectively.\n\n## Exactness and Independence of Path\n\nWith the definition of a line integral, we can in turn set up the\ntheorem of independence of integration path.\n\nLet us define\n$f(x,y,z)$, $g(x,y,z)$ and $h(x,y,z)$ to be functions which are\ndefined and continuous in a domain $D$ in space. Then a line integral\nlike the above is said to be independent of path in $D$, if for every\npair of endpoints $a$ and $b$ in $D$ the value of the integral is the\nsame for all paths $C$ in $D$ starting from a point $a$ and ending in\na point $b$. 
The integral depends thus only on the integration limits\nand not on the path.\n\n## Differential Forms\n\nAn expression of the form\n\n$$\nfdx+gdy+hdz,\n$$\n\nwhere $f$, $g$ and $h$ are functions defined in $D$, is called a first-order differential form\nin three variables.\nThe form is said to be exact if it is the differential\n\n$$\ndu= \\frac{\\partial u}{\\partial x}dx+\\frac{\\partial u}{\\partial y}dy+\\frac{\\partial u}{\\partial z}dz,\n$$\n\nof a differentiable function $u(x,y,z)$ everywhere in $D$, that is\n\n$$\ndu=fdx+gdy+hdz.\n$$\n\nEquivalently, the form is exact if and only if we can set\n\n$$\nf=\\frac{\\partial u}{\\partial x},\n$$\n\nand\n\n$$\ng=\\frac{\\partial u}{\\partial y},\n$$\n\nand\n\n$$\nh=\\frac{\\partial u}{\\partial z},\n$$\n\neverywhere in the domain $D$.\n\n## In Vector Language\n\nIn vector language the above means that the differential form\n\n$$\nfdx+gdy+hdz,\n$$\n\nis exact in $D$ if and only if the vector function (it could be a force, or velocity, acceleration or other vectors we encounter in this course)\n\n$$\n\\boldsymbol{F}=f\\boldsymbol{e}_1+g\\boldsymbol{e}_2+h\\boldsymbol{e}_3,\n$$\n\nis the gradient of a function $u(x,y,z)$\n\n$$\n\\boldsymbol{F}=\\boldsymbol{\\nabla}u=\\frac{\\partial u}{\\partial x}\\boldsymbol{e}_1+\\frac{\\partial u}{\\partial y}\\boldsymbol{e}_2+\\frac{\\partial u}{\\partial z}\\boldsymbol{e}_3.\n$$\n\n## Path Independence Theorem\n\nIf this is the case, we can state the path independence theorem which\nstates that with functions $f(x,y,z)$, $g(x,y,z)$ and $h(x,y,z)$ that fulfill the above\nexactness conditions, the line integral\n\n$$\n\\int_C\\left(fdx+gdy+hdz\\right),\n$$\n\nis independent of path in $D$ if and only if the differential form under the integral sign is exact in $D$.\n\nThis is the path independence theorem. \n\nWe will not give a proof of the theorem. 
You can find this in any vector analysis chapter in a mathematics textbook.\n\nWe note however that the path integral from a point $p$ to a final point $q$ is given by\n\n$$\n\\int_p^q\\left(fdx+gdy+hdz\\right)=\\int_p^q\\left(\\frac{\\partial u}{\\partial x}dx+\\frac{\\partial u}{\\partial y}dy+\\frac{\\partial u}{\\partial z}dz\\right)=\\int_p^qdu.\n$$\n\nAssume now that we have a dependence on a variable $s$ for $x$, $y$ and $z$. We have then\n\n$$\n\\int_p^qdu=\\int_{s_1}^{s_2}\\frac{du}{ds}ds = u(x(s),y(s),z(s))\\vert_{s=s_1}^{s=s_2}=u(q)-u(p).\n$$\n\nThis last equation\n\n$$\n\\int_p^q\\left(fdx+gdy+hdz\\right)=u(q)-u(p),\n$$\n\nis the analogue of the usual formula\n\n$$\n\\int_a^bf(x)dx=F(x)\\vert_a^b=F(b)-F(a),\n$$\n\nwith $F'(x)=f(x)$.\n\n## Work-Energy Theorem again\n\nWe recall that the work done by a force\n$\\boldsymbol{F}=f\\boldsymbol{e}_1+g\\boldsymbol{e}_2+h\\boldsymbol{e}_3$ on a displacement $d\\boldsymbol{r}$ is\n\n$$\nW=\\int_C\\boldsymbol{F}\\cdot d\\boldsymbol{r}=\\int_C(fdx+gdy+hdz).\n$$\n\nFrom the path independence theorem, we know that this has to result in\nthe difference between the two endpoints only. This holds if and\nonly if the force $\\boldsymbol{F}$ is the gradient of a scalar\nfunction $u$. We call this scalar function, which depends only on the\nposition $x,y,z$, the potential energy $V(x,y,z)=V(\\boldsymbol{r})$.\n\nWe have thus\n\n$$\n\\boldsymbol{F}(\\boldsymbol{r})\\propto \\boldsymbol{\\nabla}V(\\boldsymbol{r}),\n$$\n\nand we define this as\n\n$$\n\\boldsymbol{F}(\\boldsymbol{r})= -\\boldsymbol{\\nabla}V(\\boldsymbol{r}).\n$$\n\nSuch a force is called **a conservative force**. The above expression can be used to demonstrate\nenergy conservation.\n\n## Additional Theorem\n\nFinally we can define the criterion for exactness and independence of\npath. 
This theorem states that if $f(x,y,z)$, $g(x,y,z)$ and\n$h(x,y,z)$ are continuous functions with continuous first partial derivatives in the domain $D$,\nthen the line integral\n\n$$\n\\int_C\\left(fdx+gdy+hdz\\right),\n$$\n\nis independent of path in $D$ when\n\n$$\n\\frac{\\partial h}{\\partial y}=\\frac{\\partial g}{\\partial z},\n$$\n\nand\n\n$$\n\\frac{\\partial f}{\\partial z}=\\frac{\\partial h}{\\partial x},\n$$\n\nand\n\n$$\n\\frac{\\partial g}{\\partial x}=\\frac{\\partial f}{\\partial y}.\n$$\n\nThis leads to the **curl** of $\\boldsymbol{F}$ being zero\n\n$$\n\\boldsymbol{\\nabla}\\times\\boldsymbol{F}=\\boldsymbol{\\nabla}\\times\\left(-\\boldsymbol{\\nabla}V(\\boldsymbol{r})\\right)=0.\n$$\n\n## Summarizing\n\nA conservative force $\\boldsymbol{F}$ is defined as the negative gradient of a scalar potential which depends only on the position,\n\n$$\n\\boldsymbol{F}(\\boldsymbol{r})= -\\boldsymbol{\\nabla}V(\\boldsymbol{r}).\n$$\n\nThis leads to conservation of energy and a path independent line integral as long as the curl of the force is zero, that is\n\n$$\n\\boldsymbol{\\nabla}\\times\\boldsymbol{F}=\\boldsymbol{\\nabla}\\times\\left(-\\boldsymbol{\\nabla}V(\\boldsymbol{r})\\right)=0.\n$$\n\n## Graphing the potential energy and what we can learn from that\n\nThis is taken from homework 4, exercise 5.\n\nA particle is under the influence of a force $F=-kx+kx^3/\\alpha^2$, where $k$ and $\\alpha$ are constants and $k$ is positive.\n\nDetermine $V(x)$ and discuss the motion. It can be convenient here to\nmake a sketch/plot of the potential as a function of $x$.\n\nWe assume that the potential is zero at say $x=0$. Integrating the force from zero to $x$ gives\n\n$$\nV(x) = -\\int_0^x F(x')dx'=\\frac{kx^2}{2}-\\frac{kx^4}{4\\alpha^2}.\n$$\n\n## Making the plot\n\nThe following code plots the potential. We have chosen values of $\\alpha=k=1.0$. Feel free to experiment with other values. 
We plot $V(x)$ for a domain of $x\\in [-2,2]$.\n\n\n```\n%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math\n\nx0= -2.0\nxn = 2.1\nDeltax = 0.1\nalpha = 1.0\nk = 1.0\n#set up arrays\nx = np.arange(x0,xn,Deltax)\nn = np.size(x)\nV = np.zeros(n)\nV = 0.5*k*x*x-0.25*k*(x**4)/(alpha*alpha)\nplt.plot(x, V)\nplt.xlabel(\"x\")\nplt.ylabel(\"V\")\nplt.show()\n```\n\n## Interpreting the results\n\nFrom the plot here (with the chosen parameters) we see that:\n1. With a large enough initial velocity the particle can overcome the potential energy barrier and leave the potential well for good.\n\n2. If the initial velocity is smaller (see the next exercise) than a certain value, the particle will remain trapped in the potential well and oscillate back and forth around $x=0$. This is where the potential has its minimum value. \n\n3. If the kinetic energy at $x=0$ equals the maximum potential energy, the object will oscillate back and forth between the minimum of the potential energy at $x=0$ and the turning points where the kinetic energy turns zero. These turning points are not equilibrium points.\n\n## Final interpretations\n\nWhat happens when the energy of the particle is $E=(1/4)k\\alpha^2$? Hint: what is the maximum value of the potential energy?\n\nFrom the figure we see that\nthe potential has a minimum at $x=0$, then rises until $x=\\alpha$ before falling off again. The maximum of the\npotential is $V(x=\\pm \\alpha) = k\\alpha^2/4$. If the energy is higher, the particle cannot be contained in the\nwell. The turning points are thus defined by $x=\\pm \\alpha$. And from the previous plot you can easily see that this is the case ($\\alpha=1$ in the above-mentioned Python code).\n\n## The Earth-Sun system\n\nWe will now venture into a study of a system which is energy\nconserving. 
The aim is to see if (since it is not possible to solve\nthe general equations analytically) we can develop stable numerical\nalgorithms whose results we can trust!\n\nWe solve the equations of motion numerically. We will also compute\nquantities like the energy numerically.\n\nWe start with a simpler case first, the Earth-Sun system in two dimensions only. The gravitational force $F_G$ on Earth from the Sun is\n\n$$\n\\boldsymbol{F}_G=-\\frac{GM_{\\odot}M_E}{r^3}\\boldsymbol{r},\n$$\n\nwhere $G$ is the gravitational constant,\n\n$$\nM_E=6\\times 10^{24}\\mathrm{kg},\n$$\n\nthe mass of Earth,\n\n$$\nM_{\\odot}=2\\times 10^{30}\\mathrm{kg},\n$$\n\nthe mass of the Sun and\n\n$$\nr=1.5\\times 10^{11}\\mathrm{m},\n$$\n\nis the distance between Earth and the Sun. The latter defines what we call an astronomical unit **AU**.\n\n## The Earth-Sun system, Newton's Laws\n\nFrom Newton's second law we have then for the $x$ direction\n\n$$\n\\frac{d^2x}{dt^2}=\\frac{F_{x}}{M_E},\n$$\n\nand\n\n$$\n\\frac{d^2y}{dt^2}=\\frac{F_{y}}{M_E},\n$$\n\nfor the $y$ direction.\n\nHere we will use that $x=r\\cos{(\\theta)}$, $y=r\\sin{(\\theta)}$ and\n\n$$\nr = \\sqrt{x^2+y^2}.\n$$\n\nWe can rewrite\n\n$$\nF_{x}=-\\frac{GM_{\\odot}M_E}{r^2}\\cos{(\\theta)}=-\\frac{GM_{\\odot}M_E}{r^3}x,\n$$\n\nand\n\n$$\nF_{y}=-\\frac{GM_{\\odot}M_E}{r^2}\\sin{(\\theta)}=-\\frac{GM_{\\odot}M_E}{r^3}y,\n$$\n\nfor the $y$ direction.\n\n## The Earth-Sun system, rewriting the Equations\n\nWe can rewrite these two equations\n\n$$\nF_{x}=-\\frac{GM_{\\odot}M_E}{r^2}\\cos{(\\theta)}=-\\frac{GM_{\\odot}M_E}{r^3}x,\n$$\n\nand\n\n$$\nF_{y}=-\\frac{GM_{\\odot}M_E}{r^2}\\sin{(\\theta)}=-\\frac{GM_{\\odot}M_E}{r^3}y,\n$$\n\nas four first-order coupled differential equations\n\n$$\n\\frac{dv_x}{dt}=-\\frac{GM_{\\odot}}{r^3}x,\n$$\n\n$$\n\\frac{dx}{dt}=v_x,\n$$\n\n$$\n\\frac{dv_y}{dt}=-\\frac{GM_{\\odot}}{r^3}y,\n$$\n\n$$\n\\frac{dy}{dt}=v_y.\n$$\n\n## Building a code for the solar system, final coupled 
equations\n\nThe four coupled differential equations\n\n$$\n\\frac{dv_x}{dt}=-\\frac{GM_{\\odot}}{r^3}x,\n$$\n\n$$\n\\frac{dx}{dt}=v_x,\n$$\n\n$$\n\\frac{dv_y}{dt}=-\\frac{GM_{\\odot}}{r^3}y,\n$$\n\n$$\n\\frac{dy}{dt}=v_y,\n$$\n\ncan be turned into dimensionless equations or we can introduce astronomical units with $1$ AU $= 1.5\\times 10^{11}$ m. \n\nUsing the equations from circular motion (with $r =1\\mathrm{AU}$)\n\n$$\n\\frac{M_E v^2}{r} = F = \\frac{GM_{\\odot}M_E}{r^2},\n$$\n\nwe have\n\n$$\nGM_{\\odot}=v^2r,\n$$\n\nand using that the velocity of Earth (assuming circular motion) is\n$v = 2\\pi r/\\mathrm{yr}=2\\pi\\mathrm{AU}/\\mathrm{yr}$, we have\n\n$$\nGM_{\\odot}= v^2r = 4\\pi^2 \\frac{(\\mathrm{AU})^3}{\\mathrm{yr}^2}.\n$$\n\n## Building a code for the solar system, discretized equations\n\nThe four coupled differential equations can then be discretized using Euler's method as (with step length $h$)\n\n$$\nv_{x,i+1}=v_{x,i}-h\\frac{4\\pi^2}{r_i^3}x_i,\n$$\n\n$$\nx_{i+1}=x_i+hv_{x,i},\n$$\n\n$$\nv_{y,i+1}=v_{y,i}-h\\frac{4\\pi^2}{r_i^3}y_i,\n$$\n\n$$\ny_{i+1}=y_i+hv_{y,i}.\n$$\n\n## Code Example with Euler's Method\n\nThe code here implements Euler's method for the Earth-Sun system using a more compact way of representing the vectors. 
Alternatively, you could have spelled out all the variables $v_x$, $v_y$, $x$ and $y$ as one-dimensional arrays.\n\n\n```\n# Common imports\nimport numpy as np\nimport pandas as pd\nfrom math import *\nimport matplotlib.pyplot as plt\nimport os\n\n# Where to save the figures and data files\nPROJECT_ROOT_DIR = \"Results\"\nFIGURE_ID = \"Results/FigureFiles\"\nDATA_ID = \"DataFiles/\"\n\nif not os.path.exists(PROJECT_ROOT_DIR):\n os.mkdir(PROJECT_ROOT_DIR)\n\nif not os.path.exists(FIGURE_ID):\n os.makedirs(FIGURE_ID)\n\nif not os.path.exists(DATA_ID):\n os.makedirs(DATA_ID)\n\ndef image_path(fig_id):\n return os.path.join(FIGURE_ID, fig_id)\n\ndef data_path(dat_id):\n return os.path.join(DATA_ID, dat_id)\n\ndef save_fig(fig_id):\n plt.savefig(image_path(fig_id) + \".png\", format='png')\n\n\nDeltaT = 0.001\n#set up arrays \ntfinal = 10 # in years\nn = ceil(tfinal/DeltaT)\n# set up arrays for t, a, v, and x\nt = np.zeros(n)\nv = np.zeros((n,2))\nr = np.zeros((n,2))\n# Initial conditions as compact 2-dimensional arrays\nr0 = np.array([1.0,0.0])\nv0 = np.array([0.0,2*pi])\nr[0] = r0\nv[0] = v0\nFourpi2 = 4*pi*pi\n# Start integrating using Euler's method\nfor i in range(n-1):\n # Set up the acceleration\n # Here you could have defined your own function for this\n rabs = sqrt(sum(r[i]*r[i]))\n a = -Fourpi2*r[i]/(rabs**3)\n # update velocity, time and position using Euler's forward method\n v[i+1] = v[i] + DeltaT*a\n r[i+1] = r[i] + DeltaT*v[i]\n t[i+1] = t[i] + DeltaT\n# Plot position as function of time \nfig, ax = plt.subplots()\n#ax.set_xlim(0, tfinal)\nax.set_ylabel('y[AU]')\nax.set_xlabel('x[AU]')\nax.plot(r[:,0], r[:,1])\nfig.tight_layout()\nsave_fig(\"EarthSunEuler\")\nplt.show()\n```\n\n## Problems with Euler's Method\n\nWe notice here that Euler's method doesn't give a stable orbit. It\nmeans that we cannot trust Euler's method. In a deeper way, as we will\nsee in homework 5, Euler's method does not conserve energy. 
It is an\nexample of an integrator which is not\n[symplectic](https://en.wikipedia.org/wiki/Symplectic_integrator).\n\nWe therefore present two methods which, with simple changes, allow us to avoid these pitfalls. The simplest possible extension is the so-called Euler-Cromer method.\nThe changes we need to make to our code are indeed marginal here.\nWe need simply to replace\n\n\n```\n    r[i+1] = r[i] + DeltaT*v[i]\n```\n\nin the above code with the velocity at the new time $t_{i+1}$\n\n\n```\n    r[i+1] = r[i] + DeltaT*v[i+1]\n```\n\nBy this simple change we get stable orbits.\nBelow we derive the Euler-Cromer method as well as one of the most utilized algorithms for solving the above type of problems, the so-called Velocity-Verlet method.\n\n## Deriving the Euler-Cromer Method\n\nLet us repeat Euler's method.\nWe have a differential equation\n\n\n
\n\n$$\n\\begin{equation}\ny'(t_i)=f(t_i,y_i) \n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\nand if we truncate at the first derivative, we have from the Taylor expansion\n\n\n
\n\n$$\n\\begin{equation}\ny_{i+1}=y(t_i) + (\\Delta t) f(t_i,y_i) + O(\\Delta t^2), \\label{eq:euler} \\tag{2}\n\\end{equation}\n$$\n\nwhich when complemented with $t_{i+1}=t_i+\\Delta t$ forms\nthe algorithm for the well-known Euler method. \nNote that at every step we make an approximation error\nof the order of $O(\\Delta t^2)$, however the total error is the sum over all\nsteps $N=(b-a)/(\\Delta t)$ for $t\\in [a,b]$, yielding thus a global error which goes like\n$NO(\\Delta t^2)\\approx O(\\Delta t)$. \n\nTo make Euler's method more precise we can obviously\ndecrease $\\Delta t$ (increase $N$), but this can lead to loss of numerical precision.\nEuler's method is not recommended for precision calculations,\nalthough it is handy to use in order to get a first\nview of how a solution may look.\n\nEuler's method is asymmetric in time, since it uses information about the derivative at the beginning\nof the time interval. This means that we evaluate the position $y_1$ using the velocity\n$v_0$. A simple variation is to determine $y_{n+1}$ using the velocity\n$v_{n+1}$, that is (in a slightly more generalized form)\n\n\n
\n\n$$\n\\begin{equation} \ny_{n+1}=y_{n}+ (\\Delta t) v_{n+1}+O(\\Delta t^2)\n\\label{_auto2} \\tag{3}\n\\end{equation}\n$$\n\nand\n\n\n
\n\n$$\n\\begin{equation}\nv_{n+1}=v_{n}+(\\Delta t) a_{n}+O(\\Delta t^2).\n\\label{_auto3} \\tag{4}\n\\end{equation}\n$$\n\nThe acceleration $a_n$ is a function $a_n(y_n, v_n, t_n)$ and needs to be evaluated\nas well. This is the Euler-Cromer method.\n\n**Exercise**: go back to the above code with Euler's method and add the Euler-Cromer method.\n\n## Deriving the Velocity-Verlet Method\n\nLet us stay with $x$ (position) and $v$ (velocity) as the quantities we are interested in.\n\nWe have the Taylor expansion for the position given by\n\n$$\nx_{i+1} = x_i+(\\Delta t)v_i+\\frac{(\\Delta t)^2}{2}a_i+O((\\Delta t)^3).\n$$\n\nThe corresponding expansion for the velocity is\n\n$$\nv_{i+1} = v_i+(\\Delta t)a_i+\\frac{(\\Delta t)^2}{2}v^{(2)}_i+O((\\Delta t)^3).\n$$\n\nVia Newton's second law we have normally an analytical expression for the derivative of the velocity, namely\n\n$$\na_i= \\frac{d^2 x}{dt^2}\\vert_{i}=\\frac{d v}{dt}\\vert_{i}= \\frac{F(x_i,v_i,t_i)}{m}.\n$$\n\nIf we add to this the corresponding expansion for the derivative of the velocity\n\n$$\nv^{(1)}_{i+1} = a_{i+1}= a_i+(\\Delta t)v^{(2)}_i+O((\\Delta t)^2),\n$$\n\nand retain only terms up to the second derivative of the velocity since our error goes as $O((\\Delta t)^3)$, we have\n\n$$\n(\\Delta t)v^{(2)}_i\\approx a_{i+1}-a_i.\n$$\n\nWe can then rewrite the Taylor expansion for the velocity as\n\n$$\nv_{i+1} = v_i+\\frac{(\\Delta t)}{2}\\left( a_{i+1}+a_{i}\\right)+O((\\Delta t)^3).\n$$\n\n## The velocity Verlet method\n\nOur final equations for the position and the velocity become then\n\n$$\nx_{i+1} = x_i+(\\Delta t)v_i+\\frac{(\\Delta t)^2}{2}a_{i}+O((\\Delta t)^3),\n$$\n\nand\n\n$$\nv_{i+1} = v_i+\\frac{(\\Delta t)}{2}\\left(a_{i+1}+a_{i}\\right)+O((\\Delta t)^3).\n$$\n\nNote well that the term $a_{i+1}$ depends on the position at $x_{i+1}$. 
This means that you need to calculate \nthe position at the updated time $t_{i+1}$ before computing the next velocity. Note also that the derivative of the velocity at the time\n$t_i$ used in the updating of the position can be reused in the calculation of the velocity update as well.\n\n## Adding the Velocity-Verlet Method\n\nWe can now easily add the Verlet method to our original code as\n\n\n```\nDeltaT = 0.01\n#set up arrays \ntfinal = 10 # in years\nn = ceil(tfinal/DeltaT)\n# set up arrays for t, a, v, and x\nt = np.zeros(n)\nv = np.zeros((n,2))\nr = np.zeros((n,2))\n# Initial conditions as compact 2-dimensional arrays\nr0 = np.array([1.0,0.0])\nv0 = np.array([0.0,2*pi])\nr[0] = r0\nv[0] = v0\nFourpi2 = 4*pi*pi\n# Start integrating using the Velocity-Verlet method\nfor i in range(n-1):\n    # Set up the acceleration; note that we need the norm of the position vector\n    # Here you could have defined your own function for this\n    rabs = sqrt(sum(r[i]*r[i]))\n    a = -Fourpi2*r[i]/(rabs**3)\n    # update velocity, time and position using the Velocity-Verlet method\n    r[i+1] = r[i] + DeltaT*v[i]+0.5*(DeltaT**2)*a\n    rabs = sqrt(sum(r[i+1]*r[i+1]))\n    anew = -4*(pi**2)*r[i+1]/(rabs**3)\n    v[i+1] = v[i] + 0.5*DeltaT*(a+anew)\n    t[i+1] = t[i] + DeltaT\n# Plot position as function of time \nfig, ax = plt.subplots()\nax.set_ylabel('y[AU]')\nax.set_xlabel('x[AU]')\nax.plot(r[:,0], r[:,1])\nfig.tight_layout()\nsave_fig(\"EarthSunVV\")\nplt.show()\n```\n\nYou can easily generalize the calculation of the forces by defining a function\nwhich takes in as input the various variables. We leave this as a challenge to you.\n\n## Hints for exercises 2 and 3\n\n**Taylor exercise 3.11:**\n\nThis exercise is discussed in Taylor's chapter 3.2.\n\nConsider the rocket of mass $M$ moving with velocity $v$. After a\nbrief instant, the velocity of the rocket is $v+\\Delta v$ and the mass\nis $M-\\Delta M$. 
Momentum conservation gives\n\n$$\n\\begin{eqnarray*}\nMv&=&(M-\\Delta M)(v+\\Delta v)+\\Delta M(v-v_e)\\\\\n0&=&-\\Delta Mv+M\\Delta v+\\Delta M(v-v_e),\\\\\n0&=&M\\Delta v-\\Delta Mv_e.\n\\end{eqnarray*}\n$$\n\n## Exercise 2\n\nIn the second step we ignored the term $\\Delta M\\Delta v$ since we\ncan assume it is small. The last equation gives\n\n$$\n\\begin{eqnarray}\n\\Delta v&=&\\frac{v_e}{M}\\Delta M,\\\\\n\\nonumber\ndv&=&-\\frac{v_e}{M}dM.\n\\end{eqnarray}\n$$\n\nHere we let $\\Delta v\\rightarrow dv$, and since $\\Delta M$ is the mass expelled, the change in the rocket's mass is $dM=-\\Delta M$, which brings in the minus sign.\nWe have also assumed that $M(t) = M_0-kt$. \nIntegrating the expression with lower limits $v_0=0$ and $M_0$, one finds\n\n$$\n\\begin{eqnarray*}\nv&=&-v_e\\int_{M_0}^M \\frac{dM'}{M'}\\\\\nv&=&v_e\\ln(M_0/M)\\\\\n&=&v_e\\ln[M_0/(M_0-k t)].\n\\end{eqnarray*}\n$$\n\nWe have ignored gravity here. If we add gravity as the external force, we get when integrating an additional term $-gt$, that is\n\n$$\nv(t)=v_e\\ln[M_0/(M_0-k t)]-gt.\n$$\n\n## Exercise 3, more rockets\n\nThis is a continuation of the previous exercise and most of the relevant background material can be found in Taylor chapter 3.2. \n\nTaking the velocity from the previous exercise and integrating over time we find the height\n\n$$\ny(t) = y(t_0=0)+\\int_0^tv(t')dt'.\n$$\n\nYou need to insert $v(t)$ from the previous exercise.\n\nTo do the integral over time we recall that $M(t)=M_0-\\Delta M t$. 
We assumed that $\\Delta M=k$ is a constant.\nWe use that $M_0-M=kt$ and assume that the mass decreases by a constant $k$ times time $t$.\n\n## Some more manipulations which have to be done\n\nWe will need to compute an integral which goes like\n\n$$\n\\int_0^t \\ln{M(t')}dt' = \\int_0^t \\ln{(M_0-kt')}dt'.\n$$\n\nDefining the variable $u=M_0-kt'$, with $du=-kdt'$ and the new limits $u=M_0$ when $t'=0$ and $u=M_0-kt$ when $t'=t$, we have\n\n$$\n\\int_0^t \\ln{M(t')}dt' = \\int_0^t \\ln{(M_0-kt')}dt'=-\\frac{1}{k}\\int_{M_0}^{M_0-kt} \\ln{(u)}du=-\\frac{1}{k}\\left[u\\ln{(u)}-u\\right]_{M_0}^{M_0-kt}.\n$$\n
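The closed-form height can be checked numerically. Below is a short sketch (not part of the original exercise) with invented parameter values, using the Tsiolkovsky form $v(t)=v_e\ln[M_0/(M_0-kt)]-gt$ (speed grows as mass is expelled); the closed-form $y(t)$ uses the $u$-substitution result above.

```python
import numpy as np

# Invented parameter values (SI units): initial mass, burn rate, exhaust speed, gravity
M0, k, ve, g = 2.0e6, 1.0e4, 2.5e3, 9.81

def v(t):
    # rocket speed: v(t) = ve*ln(M0/(M0 - k*t)) - g*t
    return ve * np.log(M0 / (M0 - k * t)) - g * t

def y_closed(t):
    # height from the u-substitution result for the integral of ln(M0 - k*t')
    u = M0 - k * t
    int_ln = -(1.0 / k) * ((u * np.log(u) - u) - (M0 * np.log(M0) - M0))
    return ve * (t * np.log(M0) - int_ln) - 0.5 * g * t**2

# compare with a simple trapezoidal integration of v(t)
T = 100.0
ts = np.linspace(0.0, T, 100001)
vs = v(ts)
dt = ts[1] - ts[0]
y_num = dt * (0.5 * vs[0] + vs[1:-1].sum() + 0.5 * vs[-1])
print(y_closed(T), y_num)  # the two estimates should agree closely
```

The two numbers agreeing to several digits confirms that the antiderivative above was carried out correctly.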
\n```python\nfrom IPython.core.display import HTML, Image\ncss_file = 'style.css'\nHTML(open(css_file, 'r').read())\n```\n\n\n```python\nfrom sympy import init_printing, Matrix, symbols\ninit_printing()\n```\n\n# Solving $A\\underline{x}=\\underline{b}$\n\n## Finding the complete solution of a linear system\n\nIn this section, we focus on the row view of matrices in order to solve systems of linear equations. Our aim is to find possible solutions for $\\underline{x}$. Consider the example in (1), taken from the previous lecture.\n\n$$\\begin{align}&{x_1}+2{x}_{2}+2{x}_{3}+2{x}_{4}={b}_{1}\\\\&2{x}_{1}+4{x}_{2}+6{x}_{3}+8{x}_{4}={b}_{2}\\\\&3{x_1}+6{x}_{2}+8{x}_{3}+10{x}_{4}={b}_{3}\\end{align}\\tag{1}$$\n\nThe matrix of coefficients and the vectors are shown in (2).\n\n$$\\begin{bmatrix}1&2&2&2\\\\2&4&6&8\\\\3&6&8&10\\end{bmatrix}\\begin{bmatrix}{x}_{1}\\\\{x}_{2}\\\\{x}_{3}\\\\{x}_{4}\\end{bmatrix}=\\begin{bmatrix}{b}_{1}\\\\{b}_{2}\\\\{b}_{3}\\end{bmatrix}\\tag{2}$$\n\nMultiplying the matrix of coefficients and the column vector of unknowns, as shown in (4), indicates that we are adding scalar multiples of the column vectors.\n\n$${x}_{1}\\begin{bmatrix}1\\\\2\\\\3\\end{bmatrix}+{x}_{2}\\begin{bmatrix}2\\\\4\\\\6\\end{bmatrix}+{x}_{3}\\begin{bmatrix}2\\\\6\\\\8\\end{bmatrix}+{x}_{4}\\begin{bmatrix}2\\\\8\\\\10\\end{bmatrix}=\\begin{bmatrix}{b}_{1}\\\\{b}_{2}\\\\{b}_{3}\\end{bmatrix}\\tag{4}$$\n\nSince the third row is the addition of ($1\\times$) row one and ($1\\times$) row two, solutions on the right-hand side must be of the form shown in (5).\n\n$${b}_{3}={b}_{1}+{b}_{2}\\tag{5}$$\n\nThe question now is: _What values can $\\underline{x}$ possibly take?_\n\nFrom what we have seen up until now, $A\\underline{x}=\\underline{b}$ is solvable (exactly) when $\\underline{b}$ is in the column space of $A$, that is to
say, it must be a linear combination of the columns.\n\nWe need to find two solutions, one called the _particular_ and one the _nullspace_ solution.\n\n### The particular solution: $\\underline{x}_\\text{particular}$\n\nA particular solution is one that solves a specific case for $\\underline{b}$. Consider the reduced row-echelon form of the matrix of coefficients, $A$.\n\n\n```python\nA = Matrix([[1, 2, 2, 2], [2, 4, 6, 8], [3, 6, 8, 10]])\nA.rref()\n```\n\nNote that columns $2$ and $4$ have no pivots; these are the free columns.\n\nTo find $\\underline{x}_\\text{particular}$ or $\\underline{x}_\\text{p}$, we set all free variables to $0$ (in this example case that would be ${x_2}={x_4}=0$, since these are _columns without pivots_). Then we solve $A\\underline{x}=\\underline{b}$ for the pivot variables.\n\nLet's see that in action and calculate $\\underline{x}_\\text{p}$ for (6).\n\n$$\\underline{b} =\\begin{bmatrix}1\\\\5\\\\6\\end{bmatrix}\\tag{6}$$\n\nThe right-hand side in (6) is a proper choice, as $1+5=6$ in accordance with (5).\n\nBelow, we create an augmented matrix (that is one with $\\underline{b}$ included).\n\n\n```python\nA_augm = Matrix([[1, 2, 2, 2, 1], [2, 4, 6, 8, 5], [3, 6, 8, 10, 6]])\nA_augm\n```\n\nWe can reduce this to reduced row-echelon form using the `.rref()` method.\n\n\n```python\nA_augm.rref() # The video example is not solved to reduced row echelon form\n```\n\nSo, we are left with a new matrix (called `A1` below) that omits columns $2$ and $4$.\n\n\n```python\nA1 = Matrix([[1, 0, -2] ,[0, 1, (3 / 2)] ,[0, 0, 0]]) # The video example is not solved to reduced row echelon form\nA1\n```\n\nFrom the reduced row-echelon form, we note that $x_4$ is free, so we can set it to any value. We have already set it to $0$, though. From the second row, we can read that ${x_3}+2{x_4}=\\frac{3}{2}$, or then, ${x_3}=\\frac{3}{2}$. 
Knowing that ${x_2}={x_4}=0$, we read from the first row that ${x_1}+2{x_2}+0{x_3}-2{x_4}=-2$, so ${x_1}=-2$, as shown in (7).\n\n$$\\begin{align}&x_4=0\\\\&{x_3}=\\frac{3}{2}\\\\&{x_1}=-2\\end{align}\\tag{7}$$\n\nThus, for $\\underline{x}_\\text{p}$ we have (8).\n\n$$\\underline{x}_\\text{p}=\\begin{bmatrix}-2\\\\0\\\\{\\frac{3}{2}}\\\\0\\end{bmatrix}\\tag{8}$$\n\n### The nullspace solution: $\\underline{x}_\\text{nullspace}$\n\nHere we have $\\underline{b}$ as in (9). (We will simplify the notation to $\\underline{x}_\\text{n}$.)\n\n$$\\underline{b}=\\begin{bmatrix}0\\\\0\\\\0\\end{bmatrix}\\tag{9}$$\n\nReduction to reduced row-echelon form and following the same principles as above leads us to two nullspace solutions, which we can easily calculate using the `.nullspace()` method.\n\n\n```python\nA_augmented_null = Matrix([[1, 2, 2, 2, 0], [2, 4, 6, 8, 0], [3, 6, 8, 10, 0]])\nA_augmented_null.rref()\n```\n\n\n```python\nA.nullspace()\n```\n\n\n\n\n$$\\begin{bmatrix}\\left[\\begin{matrix}-2\\\\1\\\\0\\\\0\\end{matrix}\\right], & \\left[\\begin{matrix}2\\\\0\\\\-2\\\\1\\end{matrix}\\right]\\end{bmatrix}$$\n\n\n\nLet's have a look at how we got here. We begin with the last row of the reduced row-echelon form. Since we are free to set a value for $x_4$ (the whole row consisting of zeros), we set it to $t$. Solving back up through rows $2$ and $1$, we have (10) below.\n\n$$\\begin{align}&{x_4}=t\\\\&{x_3}+2t=0\\\\&{x_3}=-2t\\\\&{x_1}+2{x_2}-2t=0\\\\&{x_1}=2t-2{x_2}\\end{align}\\tag{10}$$\n\nThe simplest value to give $t$ is $0$ and with ${x_2}=1$, we have (11).\n\n$$\\begin{align}&{x_4}=0\\\\&{x_3}=0\\\\&{x_2}=1\\\\&{x_1}=-2\\end{align}\\tag{11}$$\n\nIf we set $t=1$ and ${x_2}=0$, we have (12), the other nullspace solution.\n\n$$\\begin{align}&{x_4}=1\\\\&{x_3}=-2\\\\&{x_2}=0\\\\&{x_1}=2\\end{align}\\tag{12}$$\n\nWhy did we alternate by giving the unknowns the values $0$ and $1$? 
Well, these are the simplest values to give (other than $0$ and $0$, which would be trivial).\n\n### The full set of solutions\n\nWe get the full set of solutions as $\\underline{x}=\\underline{x}_\\text{p}+\\underline{x}_\\text{n}$. This is shown to be so in (13).\n\n$$\\begin{align}&A{\\underline{x}}_\\text{p}=b\\\\&A{\\underline{x}}_\\text{n}=0\\\\{\\therefore}\\quad&{A}\\left({\\underline{x}}_\\text{p}+{\\underline{x}}_\\text{n}\\right)=\\underline{b}\\end{align}\\tag{13}$$\n\nSince we can have constant multiples of the nullspace vectors, we have the final solution in (14).\n\n$$ \\underline{x}={\\underline{x}}_\\text{p}+{\\underline{x}}_\\text{n}=\\begin{bmatrix} -2 \\\\ 0 \\\\ \\frac { 3 }{ 2 } \\\\ 0 \\end{bmatrix}+{ c }_{ 1 }\\begin{bmatrix} -2 \\\\ 1 \\\\ 0 \\\\ 0 \\end{bmatrix}+{ c }_{ 2 }\\begin{bmatrix} 2 \\\\ 0 \\\\ -2 \\\\ 1 \\end{bmatrix} \\tag{14}$$\n\n## Rank\n\nWe discussed rank in the previous lecture. For any ${m}\\times{n}$ matrix we have a rank (the number of pivots). We cannot have more pivots than rows or columns; therefore, for a matrix $A$, $\\text{rank}\\left(A\\right)\\le m$ and $\\text{rank}\\left(A\\right)\\le n$.\n\n### The case of full column rank for a matrix $A$, i.e. $\\text{rank}\\left(A\\right)=n$\n\nThis implies that there are no free variables and the nullspace only has the zero vector (it is a subspace and MUST contain the zero vector). Thus $\\underline{x}=\\underline{x}_\\text{p}$ ONLY (if it exists).\n\nConsider the example with $2$ columns below. One column is not a linear combination of the other and the rank will be $2$. With only 2 unknowns and a rank of $2$, the nullspace will contain only the zero vector, $\\underline{0}$.\n\n\n```python\nA = Matrix([[1, 3], [2, 1], [6, 1], [5, 1]])\nA\n```\n\n\n```python\nA.nullspace()\n```\n\n\n```python\nA.rref() # Just to show the reduced row echelon form\n```\n\n### The case of full row rank, i.e. $\\text{rank}\\left(A\\right)=m$\n\nHere, every row has a pivot. 
For which $\\underline{b}$ will the set be solvable? For ALL $\\underline{b}$. How many free variables? We are left with $n-\\text{rank}\\left(A\\right)$ or $n-m$ free variables.\n\n\n```python\nA2 = A.transpose() # Creating a new matrix, which is the transpose of A above (just as an example)\nA2\n```\n\n\n```python\nA2.nullspace() # Showing the two nullspace solutions\n```\n\n\n```python\nA2.rref()\n```\n\n### The case of full (row and column) rank, i.e. $\\text{rank}\\left(A\\right)=m=n$\n\nLet's do an example.\n\n\n```python\nA = Matrix([[1, 2], [3, 1]])\nA\n```\n\n\n```python\nA.nullspace() # Only the zero vector\n```\n\n\n```python\nA.rref()\n```\n\nThis is by definition an invertible matrix (which we will learn about later).\n\nLet's look at a non-invertible matrix. For this $2\\times2$ matrix we will have one free variable (only one pivot).\n\n\n```python\nA = Matrix([[2, 4], [3, 6]])\nA\n```\n\n\n```python\nA.nullspace()\n```\n\n\n```python\nA.rref()\n```\n\n\n```python\n\n```\n
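As a quick sanity check (not part of the original lecture), we can confirm with `sympy` that the particular solution plus any combination of the nullspace vectors solves the worked system from the beginning of this notebook. The names `A_ex` and `b_ex` are local to this check so the notebook's current `A` is not clobbered.

```python
from sympy import Matrix, Rational, symbols

# Re-verify the complete solution x = x_p + c1*n1 + c2*n2 for the worked example.
A_ex = Matrix([[1, 2, 2, 2], [2, 4, 6, 8], [3, 6, 8, 10]])
b_ex = Matrix([1, 5, 6])

x_p = Matrix([-2, 0, Rational(3, 2), 0])  # particular solution from (8)
n1, n2 = A_ex.nullspace()                 # special solutions spanning the nullspace

c1, c2 = symbols('c1 c2')
x = x_p + c1 * n1 + c2 * n2
print((A_ex * x - b_ex).expand())         # the zero vector, for any c1 and c2
```

Because the residual vanishes identically in `c1` and `c2`, every vector of this form solves the system, and these are all the solutions.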
# Homework #5 (Due 10/14/2021, 11:59pm)\n## Hierarchical Models and the Theory of Variational Inference\n\n**AM 207: Advanced Scientific Computing**
\n**Instructor: Weiwei Pan**
\n**Fall 2021**\n\n**Name: Jiahui Tang**\n\n**Student collaborators: Yujie Cai**\n\n\n\n### Instructions:\n\n**Submission Format:** Use this notebook as a template to complete your homework. Please intersperse text blocks (using Markdown cells) amongst `python` code and results -- format your submission for maximum readability. Your assignments will be graded for correctness as well as clarity of exposition and presentation -- a “right” answer presented without an explanation, or in a difficult-to-follow format, will receive no credit.\n\n**Code Check:** Before submitting, you must do a \"Restart and Run All\" under \"Kernel\" in the Jupyter or colab menu. Portions of your submission that contain syntactic or run-time errors will not be graded.\n\n**Libraries and packages:** Unless a problem specifically asks you to implement from scratch, you are welcome to use any `python` library package in the standard Anaconda distribution.\n\n\n```python\n### Import basic libraries\nimport numpy as np\nimport pandas as pd\nimport sklearn as sk\nfrom sklearn.linear_model import LinearRegression\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n```\n\n\n```python\n# data_df is the dataframe of your data\n# estimates is a numpy array of cancer rate estimates, one for each county\ndef scatter_plot_cancer_rates(data_df, estimates=None):\n ax = data_df.plot(kind='scatter', x=\"pop\",y=\"pct_mortality\", alpha=0.1, color=\"grey\")\n bot_kcancer_counties = data_df.sort_values(by='pct_mortality',ascending=True)[:300]\n top_kcancer_counties = data_df.sort_values(by='pct_mortality',ascending=False)[:300]\n top_kcancer_counties.plot(kind='scatter',x=\"pop\",y=\"pct_mortality\",\n alpha=0.1, color=\"red\", ax=ax, logx=True, label = \"highest cancer rate\")\n bot_kcancer_counties.plot(kind='scatter',x=\"pop\",y=\"pct_mortality\",\n alpha=0.1, color=\"blue\", ax=ax, logx=True, label = \"lowest cancer rate\")\n if estimates is not None:\n
ax.plot(data_df['pop'], 5 * estimates, '.', alpha=0.2, color=\"green\", label='mean estimates')\n ax.set_ylim([-0.0001, 0.0003])\n \n```\n\n\n```python\n# data_df is the dataframe of your data\n# estimates is a numpy array of cancer rate estimates, one for each county\ndef scatter_plot_cancer_rates_orange(data_df, estimates=None):\n ax = data_df.plot(kind='scatter', x=\"pop\",y=\"pct_mortality\", alpha=0.1, color=\"grey\")\n bot_kcancer_counties = data_df.sort_values(by='pct_mortality',ascending=True)[:300]\n top_kcancer_counties = data_df.sort_values(by='pct_mortality',ascending=False)[:300]\n top_kcancer_counties.plot(kind='scatter',x=\"pop\",y=\"pct_mortality\",\n alpha=0.1, color=\"red\", ax=ax, logx=True, label = \"highest cancer rate\")\n bot_kcancer_counties.plot(kind='scatter',x=\"pop\",y=\"pct_mortality\",\n alpha=0.1, color=\"blue\", ax=ax, logx=True, label = \"lowest cancer rate\")\n if estimates is not None:\n ax.plot(data_df['pop'], 5 * estimates, '.', alpha=0.2, color=\"orange\", label='mean estimates')\n ax.set_ylim([-0.0001, 0.0003])\n \n```\n\n# Problem Description: Understanding EM and Variational Inference\n\nIn this problem, we will draw concrete connections between EM and variational inference by applying both methods to a certain class of latent variable models. You'll need to refer to relevant lecture notes on the derivations of EM and the derivation of the variational inference objective. This is an essay question that requires you to engage with complex derivations at a productive but still high level. No implementation is required.\n\n#### Non-Bayesian Latent Variable Model\nRecall the class of latent variable models we studied in lecture:\n\n\n#### Bayesian Latent Variable Model\nA Bayesian version of the same class of models involves adding priors for the model parameters:\n\n\n1. 
**(Comparing ELBOs)** For the above type of Bayesian latent variable model, write down the ELBO for variational inference with a mean field variational family. Compare the variational inference ELBO for the Bayesian model to the expectation maximization ELBO for the non-Bayesian model. What are the differences and similarities between these two ELBOs?\n\n In both EM and variational inference we optimize the ELBO. Compare the update steps in EM to the update steps in Coordinate Ascent Variational Inference, and draw a concrete analogy between them.\n \n ***Hint:*** To make both ELBOs comparable, make sure that both are in terms of $z, y, \\theta, \\phi$.\n

\n**Answer:**\n\nThe ELBO for **EM** for the non-Bayesian model:\n\n$$ELBO^{\\text{EM}}(\\theta, \\phi, q(z)) = \\mathbb{E}_{z\\sim q(z)}\\left[\\log\\left(\\frac{p(y, z|\\theta, \\phi)}{q(z)}\\right)\\right] = \\mathbb{E}_{z\\sim q(z)} \\left[\\log \\left(\\frac{p(y|z,\\phi)p(z|\\theta)}{q(z)}\\right)\\right]\n$$\n\nThe ELBO for **Variational Inference** with a mean field variational family, where $\\psi=(\\theta,\\phi,Z)$:\n\n$$ELBO^{\\text{VI}}(\\lambda) = \\mathbb{E}_{\\psi \\sim q(\\psi|\\lambda)}\\left[\\log\\left(\\frac{p(\\psi, Y_1, \\ldots, Y_N |a,b)}{q(\\psi | \\lambda)} \\right)\\right] $$\n\nWritten out in terms of $z, y, \\theta, \\phi$:\n \n$$\nELBO = \\mathbb{E}_{z, \\phi, \\theta \\sim q(z, \\phi, \\theta |\\lambda_1, \\lambda_2, \\lambda_3)} \\left[\\log\\left(\\frac{p(y, z, \\theta, \\phi|a, b)}{q(z, \\phi, \\theta |\\lambda_1, \\lambda_2, \\lambda_3)}\\right)\\right]\n$$\n\n\n\n**The differences and similarities between these two ELBOs:**\n \n**Similarities:**\n 1. They both incorporate an auxiliary function $q(z)$ in order to take the gradients.\n 2. Both take the expectation of a log ratio involving the joint distribution, under a distribution other than that of the observed variable (the auxiliary distribution $q$ or the variational distribution $q$).\n\n**Differences:**\n1. For VI, we introduce an additional parameter $\\lambda$ for the variational family $Q$, and the ELBO is maximized using the expectation wrt the variational family.\n2. In EM, $\\phi$ and $\\theta$ are unknown constants for which we produce point estimates; in VI, they are random variables with distributions and we derive posteriors over them. Thus in EM we optimize the ELBO wrt $\\theta$ and $\\phi$ as point estimates, while in variational inference we optimize over the distributions of $\\theta$ and $\\phi$.\n3. The difference between the ELBO and the KL divergence is the log normalizer (i.e. the evidence), which is the quantity that the ELBO bounds. \n4. 
In VI, we take the expectation over the distribution of a hyperparameter. In EM, we take the expectation over the distribution of the latent variable. \n\n\n**Special Case**:\n \nEM is a special case of mean-field VI in which the variational distributions $q$ over the parameters are assumed to be point masses. That is, let $\\theta^{*}$ and $\\phi^{*}$ be the unknown locations of these point masses:\n $$q_{\\theta}(\\theta)=\\delta\\left(\\theta-\\theta^{*}\\right)$$\n $$q_{\\phi}(\\phi)=\\delta\\left(\\phi-\\phi^{*}\\right)$$\n \nVI will minimize a KL divergence, and so minimizing over the variational family is the E-step of EM, while further minimizing over $\\theta^{*}$ is the M-step of EM.\n\n2. **(Comparing ELBOs and KL-divergences)** Recall that the original objective of variational inference is to minimize a KL-divergence; we rewrote the objective to be that of maximizing the ELBO. Why is directly minimizing the KL-divergence in the original objective difficult (be specific about wherein the difficulty lies)? \n\n In the derivation of the E-step of EM, we reframed a maximization of the ELBO problem as a minimization of a KL-divergence problem. In this case, why was the KL-divergence easier to minimize and the ELBO harder to maximize (use the instantiation of the E-step for Gaussian Mixture Models in Lecture 7 to help support your answer)? \n\n In the notes for Lecture 8, we introduce a way to maximize the variational inference ELBO -- through coordinate ascent. In the derivation of the updates for coordinate ascent, there is a place where we reframed a maximization of the ELBO problem as an equivalent minimization of a KL-divergence problem. Write down the exact form of this equivalence (the two expressions are separated in the derivation by a bunch of lines, you'll need to identify both parts that you need). 
In this case, why was the KL-divergence easier to minimize and the ELBO harder to maximize (use the instantiation of the update for Gaussian Mixture Models in Lecture 8 to help support your answer)?\n\n Based on this analysis, can you draw some general conclusions about when we'd prefer to minimize the KL-divergence versus when we'd prefer to maximize the ELBO?

\n\n\n \n**Answer:**\n \nRecall that the original objective of variational inference is to minimize a KL-divergence, we rewrote the objective to be that of maximizing the ELBO. Why is directly minimizing the KL-divergence in the original objective difficult (be specific about wherein the difficulty lies)?\n\n \n\n\n\n \n \nIn VI, we are finding a tractable distribution $q$ that best approximate the complex distribution $p$ by minimizing KL divergence.\n \n$$\n\\begin{aligned}\n\\lambda^* &= \\underset{\\lambda}{\\text{argmin}}\\; D_{\\text{KL}}(q(\\psi|\\lambda) \\| p(\\psi|Y_1, \\ldots, Y_N, a, b))) \\\\\n&= \\underset{\\lambda}{\\text{argmin}}\\; \\mathbb{E}_{\\psi \\sim q(\\psi|\\lambda)}\\left[\\log\\left(\\frac{q(\\psi | \\lambda)}{p(\\psi|Y_1, \\ldots, Y_N))}\\right) \\right]\n\\end{aligned}$$\n\nWrite out in terms of $z, y, \\theta, \\phi$\n\n \n$$\n\\underset{\\lambda}{\\min} D_{\\text{KL}}[q(z, \\phi, \\theta |\\lambda_1, \\lambda_2, \\lambda_3) \\| p(\\theta, \\phi, z|y, a, b)] = \\underset{\\lambda}{\\min} \\mathbb{E}_{z, \\phi, \\theta \\sim q(z, \\phi, \\theta |\\lambda_1, \\lambda_2, \\lambda_3)} \\left[\\log \\left( \\frac{q(z, \\phi, \\theta |\\lambda_1, \\lambda_2, \\lambda_3)}{p(\\theta, \\phi, z|y, a, b)}\\right) \\right]\n$$\n \n\n\nBy Bayes' rule, the denominator in the log ratio\n\n$$\np(\\theta, \\phi, z|y, a, b) = \\frac{p(y, z, \\theta, \\phi| a, b)}{p(y | a, b)} = \\frac{p(y, z, \\theta, \\phi| a, b)}{\\int p(y, z, \\theta, \\phi| a, b) \\mathrm{d}(z, \\theta,\\phi)} \\propto p(y, z, \\theta, \\phi| a, b)\n$$\n\nTherefore\n\\begin{aligned}\nD_{\\text{KL}}[q(z, \\phi, \\theta |\\lambda_1, \\lambda_2, \\lambda_3) \\| p(\\theta, \\phi, z|y, a, b)] &= \\mathbb{E}_{z, \\phi, \\theta \\sim q(z, \\phi, \\theta |\\lambda_1, \\lambda_2, \\lambda_3)} \\left[\\log \\left( \\frac{q(z, \\phi, \\theta |\\lambda_1, \\lambda_2, \\lambda_3) p(y | a, b)}{ p(y, z, \\theta, \\phi| a, b)}\\right) \\right]\\\\\n & = \\mathbb{E}_{z, \\phi, \\theta \\sim q(z, 
\\phi, \\theta |\\lambda_1, \\lambda_2, \\lambda_3)} \\left[\\log q(z, \\phi, \\theta |\\lambda_1, \\lambda_2, \\lambda_3) + \\log p(y | a, b) - \\log p(y, z, \\theta, \\phi| a, b) \\right]\\\\\n \\end{aligned}\n \nBut $\\log p(y| a, b) = \\log \\int p(y, z, \\theta, \\phi| a, b)\\mathrm{d}(z, \\theta,\\phi)$ involves an integral over $(z, \\theta,\\phi)$ that is intractable.\n \nBut with the ELBO, \n \n\\begin{aligned}\nELBO &= \\mathbb{E}_{z, \\phi, \\theta \\sim q(z, \\phi, \\theta |\\lambda_1, \\lambda_2, \\lambda_3)} \\left[\\log\\left(\\frac{p(y, z, \\theta, \\phi|a, b)}{q(z, \\phi, \\theta |\\lambda_1, \\lambda_2, \\lambda_3)}\\right)\\right]\\\\\n&= \\mathbb{E}_{z, \\phi, \\theta \\sim q(z, \\phi, \\theta |\\lambda_1, \\lambda_2, \\lambda_3)} \\left[\\log p(y, z, \\theta, \\phi|a, b) - \\log q(z, \\phi, \\theta |\\lambda_1, \\lambda_2, \\lambda_3) \\right]\n\\end{aligned}\n \nThis expression no longer involves $\\log p(y| a, b)$, making it easier to compute.\nThus, since directly minimizing the KL-divergence in the original objective is difficult, we maximize the ELBO instead. The remaining issue is that the gradient is taken with respect to the parameter $\\lambda$ of the distribution under which we take the expectation - i.e. we cannot simply push the gradient into the expectation.\n\nIn the derivation of the E-step of EM, we reframed a maximization of the ELBO problem as a minimization of a KL-divergence problem. In this case, why was the KL-divergence easier to minimize and the ELBO harder to maximize (use the instantiation of the E-step for Gaussian Mixture Models in Lecture 7 to help support your answer)?\n\n\n\n**Why the KL-divergence is easier to minimize and the ELBO is harder to maximize in EM:**\n\n1. Theoretically, when the KL is minimized, we know the approximating distribution is the same as the true distribution. \n2. Since we already know what one of the distributions is, we can just equate the other distribution to it and be done. 
Hence, the KL divergence is easier to minimize in this situation.\n3. Practically, with the ELBO, we need to conduct two sets of iterations, one for the E-step and another for the M-step. Also, we only converge to the lower bound, which is not guaranteed to be the true optimized value.\n4. Besides, for the E-step of EM, it is hard to maximize the ELBO with respect to the auxiliary $q$ function due to complex integration and multivariate gradients.\n\nTo be specific, in the E-step, \n$$\n\\underset{q}{\\max} ELBO(\\theta^*, \\phi^*, q(z)) = \\underset{q}{\\max} \\mathbb{E}_{z\\sim q(z)}\\left[\\log\\left(\\frac{p(y, z|\\theta^*, \\phi^*)}{q(z)}\\right)\\right]\n$$\n(after the M-step, we get $\\theta^*, \\phi^*$ as the optimized values).\n \nTo solve this, we would take the gradient of the ELBO, that is, $\\nabla \\mathbb{E}_{z\\sim q(z)}\\left[\\log\\left(\\frac{p(y, z|\\theta^*, \\phi^*)}{q(z)}\\right)\\right]$. But taking the gradient with respect to the auxiliary $q$ function is very hard, due to complex integration and multivariate gradients.\n \nTherefore, we minimize the KL-divergence instead, which is easier.\n \n\nIn the notes for Lecture 8, we introduce a way to maximize the variational inference ELBO -- through coordinate ascent. In the derivation of the updates for coordinate ascent, there is a place where we reframed a maximization of the ELBO problem as an equivalent minimization of a KL-divergence problem. Write down the exact form of this equivalence (the two expressions are separated in the derivation by a bunch of lines, you'll need to identify both parts that you need). 
In this case, why was the KL-divergence easier to minimize and the ELBO harder to maximize (use the instantiation of the update for Gaussian Mixture Models in Lecture 8 to help support your answer)?\n\n\n\n**Why do we choose to minimize the KL-divergence in coordinate ascent?**\n\nMaximizing the ELBO is equivalent to minimizing the KL-divergence in coordinate ascent.\n\n$\\underset{\\lambda_i}{\\max}\\mathbb{E}_{\\psi_i \\sim q(\\psi_i|\\lambda_i)}\\left[\\mathbb{E}_{\\psi_{-i} \\sim q(\\psi_{-i}|\\lambda_{-i})} \\left[\\log \\left(\\frac{p(\\psi,data)}{q(\\psi|\\lambda)}\\right)\\right]\\right] $\n$\\equiv \\underset{\\lambda_i}{\\min} D_{\\text{KL}}\\left[q(\\psi_i|\\lambda_i)|| \\mathcal{Z} \\exp \\{\\mathbb{E}_{\\psi_{-i} \\sim q(\\psi_{-i}|\\lambda_{-i})} [\\log p(\\psi_i,data|\\psi_{-i})]\\}\\right]$\n \n \nWe will use the following notation:\n\n\\begin{aligned}\n\\lambda_{-i} &= [\\lambda_1\\; \\ldots\\; \\lambda_{i-1}\\; \\lambda_{i+1}\\; \\ldots\\; \\lambda_{I}]\\\\\n\\psi_{-i} &= [\\psi_1\\; \\ldots\\; \\psi_{i-1}\\; \\psi_{i+1}\\; \\ldots\\; \\psi_{I}]\\\\\nq(\\psi_{-i}|\\lambda_{-i}) &= \\prod_{j\\neq i}q(\\psi_{j}|\\lambda_{j})\\\\\n\\end{aligned}\n \nFrom Lecture 10, we have shown that \n $$\\mathbb{E}_{\\psi \\sim q(\\psi|\\lambda)}[\\ldots] = \\mathbb{E}_{\\psi_i \\sim q(\\psi_i|\\lambda_i)}\\left[\\mathbb{E}_{\\psi_{-i} \\sim q(\\psi_{-i}|\\lambda_{-i})}[\\ldots] \\right]$$\n \n \nand the exact form of the equivalence is:\n \n\n\\begin{aligned}\n\\underset{\\lambda_i}{\\max} ELBO(\\lambda) &= \\underset{\\lambda_i}{\\max} \\mathbb{E}_{\\psi_i \\sim q(\\psi_i|\\lambda_i)}\\left[\\mathbb{E}_{\\psi_{-i} \\sim q(\\psi_{-i}|\\lambda_{-i})} \\left[\\log \\left(\\frac{p(Y_1, \\ldots, Y_N, \\psi_i | \\psi_{-i})}{q(\\psi_{i}|\\lambda_{i})} \\right)\\right]\\right]\\\\\n&= \\underset{\\lambda_i}{\\min}D_{\\text{KL}} \\left[ q(\\psi_{i}|\\lambda_{i})\\| \\mathcal{Z}\\exp\\left\\{\\mathbb{E}_{\\psi_{-i} \\sim q(\\psi_{-i}|\\lambda_{-i})} \\left[\\log 
\\left(p(Y_1, \\ldots, Y_N, \\psi_i | \\psi_{-i}) \\right)\\right]\\right\\}\\right].\n\\end{aligned}\n\nAs in the last set of questions, if we set $q(\\psi_{i}|\\lambda_{i})$ equal to $\\mathcal{Z}\\exp\\left\\{\\mathbb{E}_{\\psi_{-i} \\sim q(\\psi_{-i}|\\lambda_{-i})} \\left[\\log \\left(p(Y_1, \\ldots, Y_N, \\psi_i | \\psi_{-i}) \\right)\\right]\\right\\}$, we can solve the optimization. This is easier than computing the gradient of the ELBO. In Gaussian mixture models (the Lecture 10 derivation), the terms with subscript $-i$ become constants at each iteration.\n\n\nBased on this analysis, can you draw some general conclusions about when we'd prefer to minimize the KL-divergence versus when we'd prefer to maximize the ELBO?\n\n\n \n**Conclusion**\n\n1. Consider the mathematical complexity of each method when choosing which to implement.\n\n2. We use KL-divergence minimization when we know one of the distributions exactly - then it is clear that to minimize the divergence we just have to equate the other distribution to it.\n\n3. When the form of the probability $P(A|B)$ is known and not intractable, we can use KL divergence minimization. When $P(A|B)$ is unknown and hard to compute, we can choose to work with the ELBO, but this still requires the joint of $A$ and $B$ to be tractable. \n\n3. **(The Mean Field Assumption and Coordinate Ascent)** Describe exactly when and how the mean field assumption is used in the derivation of the coordinate ascent updates.

\n**Answer:**\n\nWe assume that the joint $q(\\psi)$ factorizes completely over each dimension of $\\psi$, i.e. $q(\\psi)= \\prod_{i=1}^I q(\\psi_i | \\lambda_i)$. This is called the ***mean field assumption***. It assumes that each dimension of $q(\\psi)$ is independent from the others, which may or may not be valid, but it helps to simplify the expression of $q(\\psi)$, making it easier to find the ELBO. It is thus a design choice, and one that can go wrong.\n\nThe coordinate ascent algorithm maximizes the ELBO objective by iteratively maximizing over one dimension while holding the other dimensions constant. That is why it requires us to be able to use the mean field assumption and factorize $q(\\psi)$ into individual pieces.\n \nUsing the mean field assumption, we maximize the ELBO by the following steps in the derivation of the coordinate ascent updates.\n\n1. Use the mean-field assumption to break up the expectation $\\mathbb{E}_{\\psi \\sim q(\\psi|\\lambda)}$ into an iterated expectation $\\mathbb{E}_{\\psi_i \\sim q(\\psi_i|\\lambda_i)}\\mathbb{E}_{\\psi_{-i} \\sim q(\\psi_{-i}|\\lambda_{-i})}$\n2. We will rewrite the outer expectation $\\mathbb{E}_{\\psi_i \\sim q(\\psi_i|\\lambda_i)}$ as a negative KL-divergence\n3. We will then maximize the negative KL-divergence by setting the two arguments of the divergence equal to each other\n\n\n\n4. **(Generalizability of CAVI)** Summarize what kind of derivations/math is needed in order to instantiate Coordinate Ascent Variational Inference (CAVI) for a given new model (look at what we did for Gaussian Mixture Models in Lecture 8 and predict what you'd need to do for a new model). Based on this, discuss the potential drawbacks of using CAVI for Bayesian inference in general. What do these drawbacks imply about the practicality of variational inference as an inference method?

\n\n\n\n \n**Answer:**\n \nWe see that \n\n$$\nD_{\\text{KL}} \\left[ q(\\psi_{i}|\\lambda_{i})\\| \\mathcal{Z}\\exp\\left\\{\\mathbb{E}_{\\psi_{-i} \\sim q(\\psi_{-i}|\\lambda_{-i})} \\left[\\log \\left(p(Y_1, \\ldots, Y_N, \\psi_i | \\psi_{-i}) \\right)\\right]\\right\\}\\right]\n$$\n\nis minimized when \n\n$$\n q(\\psi_{i}|\\lambda_{i})\\propto \\exp\\left\\{\\mathbb{E}_{\\psi_{-i} \\sim q(\\psi_{-i}|\\lambda_{-i})} \\left[\\log \\left(p(Y_1, \\ldots, Y_N, \\psi_i | \\psi_{-i}) \\right)\\right]\\right\\}.\n$$\n\nThis is also exactly where the ELBO is maximized.\n\n\n**Derivations/Math needed:**\n\n1. We need the mean field assumption in order to derive the variational distribution of each variable or parameter independently and separately.\n2. For each $i$, we set $q(\\psi_{i}|\\lambda_{i})$ equal to $\\mathcal{Z}\\exp\\left\\{\\mathbb{E}_{\\psi_{-i} \\sim q(\\psi_{-i}|\\lambda_{-i})} \\left[\\log \\left(p(Y_1, \\ldots, Y_N, \\psi_i | \\psi_{-i}) \\right)\\right]\\right\\}$ to maximize the ELBO iteratively in CAVI.\n3. Lengthy algebraic manipulation is needed to expand the expectation term.\n4. It is also computationally expensive, since we must iterate through each $\\lambda_i$.\n\n\n**Drawbacks**\n1. It may never converge.\n2. The factorized posterior $q$ can never capture the full complexity of the target distribution.\n3. We are trading off fidelity for computational tractability.\n4. It is computationally heavy: a different $q(\\psi_{i}|\\lambda_{i})$ must be derived for each model and updated at each iteration.\n5. We need to conduct lengthy posterior derivations for each latent variable / parameter, and these may result in intractable posterior distributions.\n\n\n**Practicality**\n\nVariational inference is still practical, but it requires model-specific mathematical derivations for the iterative KL minimization. Within each step, a variety of optimization and sampling methods can be used.\n \n \n\n5. 
**(Generalizability of EM)** Summarize what kind of derivations/math is needed in order to instantiate Expectation Maximization (EM) for a given new model (look at what we did for Gaussian Mixture Models in Lecture 9 and predict what you'd need to do for a new model). Based on this, discuss the potential drawbacks of using EM for MLE inference in general. What do these drawbacks imply about the practicality of EM as an inference method?\n\n\n \n\n**Answer:**\n \n0. **Initialization:** Pick $\\theta_0$, $\\phi_0$.\n1. Repeat $i=1, \\ldots, I$ times:\n\n **E-Step:** \n$$q_{\\text{new}}(Z_n) = \\underset{q}{\\mathrm{argmax}}\\; ELBO(\\theta_{\\text{old}}, \\phi_{\\text{old}}, q) = p(Z_n|Y_n, \\theta_{\\text{old}}, \\phi_{\\text{old}})$$\n\n **M-Step:** \n \\begin{aligned}\n \\theta_{\\text{new}}, \\phi_{\\text{new}} &= \\underset{\\theta, \\phi}{\\mathrm{argmax}}\\; ELBO(\\theta, \\phi, q_{\\text{new}})\\\\\n &= \\underset{\\theta, \\phi}{\\mathrm{argmax}}\\; \\sum_{n=1}^N\\mathbb{E}_{Z_n\\sim p(Z_n|Y_n, \\theta_{\\text{old}}, \\phi_{\\text{old}})}\\left[\\log \\left( p(y_n, Z_n | \\phi, \\theta)\\right) \\right].\n\\end{aligned}\n\n \nFor the **E-Step**, we simply calculate the conditional distribution of each $Z_n|Y_n, \\theta_\\text{old}, \\phi_\\text{old}$, usually obtained via Bayes' rule. (Here $Z_n$ is discrete.)
\n\n\\begin{align}\np(Z_n = k|Y_n, \\theta_\\text{old}, \\phi_\\text{old}) = \\frac{p(y_n|Z_n = k, \\theta_\\text{old})p(Z_n=k | \\phi_{\\text{old}})}{\\sum_{k'=1}^K p(y_n|Z_n = k',\\theta_\\text{old})p(Z_n=k' | \\phi_{\\text{old}})}\n\\end{align}
\n\n\nFor the **M-Step**, we expand the expectation $\\mathbb{E}$, split the log-joint into two parts using the factorization of the joint, and optimize the two parts separately to get the solution.
\n\n\\begin{align}\n\\theta^*, \\phi^*\n&= \\underset{\\theta, \\phi}{\\mathrm{argmax}}\\;\\sum_{n=1}^N \\mathbb{E}_{Z_n\\sim p(Z_n|Y_n, \\theta_\\text{old}, \\phi_\\text{old})} \\left[ \\log\\left(p(y_n, z_n|\\theta, \\phi)\\right)\\right] \\\\\n&= \\underset{\\theta, \\phi}{\\mathrm{argmax}} \\;\\sum_{n=1}^N \\mathbb{E}_{Z_n\\sim p(Z_n|Y_n, \\theta_\\text{old}, \\phi_\\text{old})} \\left[ \\log p(y_n|z_n, \\theta) + \\log p(z_n|\\phi)\\right] \\\\\n&= \\underset{\\theta, \\phi}{\\mathrm{argmax}} \\;\\sum_{n=1}^N \\mathbb{E}_{Z_n\\sim p(Z_n|Y_n, \\theta_\\text{old}, \\phi_\\text{old})} \\log p(y_n|z_n, \\theta) + \\sum_{n=1}^N \\mathbb{E}_{Z_n\\sim p(Z_n|Y_n, \\theta_\\text{old}, \\phi_\\text{old})} \\log p(z_n|\\phi)\n\\end{align}
\n\n\nSince we computed $p(Z_n|Y_n, \\theta_\\text{old}, \\phi_\\text{old})$ in the previous E-Step, we can plug it in and take gradients to solve for $\\theta_\\text{new}$ and $\\phi_\\text{new}$.\n
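For intuition, the E-step and M-step above can be sketched in numpy for a two-component Gaussian mixture with known unit variances. The toy data, initial values, and iteration count below are illustrative assumptions, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two clusters with true means -2 and 3 and unit variance
y = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 300)])

mu = np.array([-1.0, 1.0])   # theta: component means
phi = np.array([0.5, 0.5])   # mixture weights

for _ in range(50):
    # E-step: responsibilities p(Z_n = k | y_n, theta_old, phi_old) via Bayes' rule
    log_lik = -0.5 * (y[:, None] - mu[None, :]) ** 2   # log N(y_n | mu_k, 1) up to a constant
    r = phi[None, :] * np.exp(log_lik)
    r /= r.sum(axis=1, keepdims=True)

    # M-step: closed-form maximizers of the expected complete-data log-likelihood
    Nk = r.sum(axis=0)
    mu = (r * y[:, None]).sum(axis=0) / Nk
    phi = Nk / len(y)

print(np.round(mu, 2), np.round(phi, 2))   # means near [-2, 3], weights near [0.4, 0.6]
```

For a new model, only the two commented steps change: the E-step formula is re-derived from Bayes' rule for that model's likelihood, and the M-step maximizers are re-derived (or replaced by gradient steps) for its parameters.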

\n\n \nFor a given new model, in order to instantiate Expectation Maximization (EM) we need to re-derive the equation for $q(z)$, which is the posterior, and the M-step update, which requires taking gradients. We also need to choose $q(z)$ carefully so that the math is easy to re-derive.\n\nThe potential drawbacks are: \n1. It has slow convergence.\n2. It converges only to a local optimum.\n3. It requires both the forward and backward probabilities to be tractable (numerical optimization requires only the forward probability) so we can actually implement it.\n\nHence, in order to make EM easy to implement for a given latent variable model, we need to ensure that the original distribution has a tractable form so that we can update the parameters in the M-step. We should also make sure that the posterior is easy to compute so we can use it in the E-step.\n \nAlthough EM has the drawbacks listed above, it is still heavily used in fields like biostatistics, and it is a fairly straightforward method for fitting a latent variable model.\n \n\n# Problem Description: Modeling Kidney Cancer Data\nIn this problem, we will continue to work with the US Kidney Cancer Data set, `kcancer.csv`. This is a dataset of kidney cancer frequencies across the US over 5 years on a per-county basis. \n\n**In this homework, we focus on comparing different types of models for this data set.**\n\n\n## Part I: Empirical Bayes\nLet $N$ be the number of counties; let $y_j$ be the number of kidney cancer cases for the $j$-th county, $n_j$ the population of the $j$-th county, and $\\theta_j$ the underlying kidney cancer rate for that county. The following is a Bayesian model for our data:\n\n\\begin{aligned}\ny_j | \\theta_j &\\sim Poisson(5 \\cdot n_j \\cdot \\theta_j), \\quad j = 1, \\ldots, N\\\\\n\\theta_j &\\sim Gamma(\\alpha, \\beta), \\quad j = 1, \\ldots, N\n\\end{aligned}\n\nwhere $\\alpha, \\beta$ are hyper-parameters of the model.\n\n1. 
**(Visualize the raw cancer rates)** Produce a scatter plot of the raw cancer rates (pct mortality) vs the county population size (in log scale). Highlight the top 300 raw cancer rates in red. Highlight the bottom 300 raw cancer rates in blue. What can you say about the counties with the highest and lowest raw cancer rates?
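The plotting helper `scatter_plot_cancer_rates` called in the answers is defined elsewhere in the notebook. The following is only a hypothetical sketch of what such a helper might look like; the raw-rate formula `dc / (5 * pop)`, the column names, and the styling are assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

def scatter_plot_cancer_rates(df, estimates=None, n_extreme=300):
    """Raw cancer rates vs. county population (log x-axis), with the
    top/bottom n_extreme raw rates highlighted in red/blue."""
    pop = df['pop'].values
    rate = df['dc'].values / (5.0 * pop)          # assumed raw-rate definition
    order = np.argsort(rate)
    lo, hi = order[:n_extreme], order[-n_extreme:]
    plt.scatter(pop, rate, s=8, color='gray', label='raw rate')
    plt.scatter(pop[hi], rate[hi], s=8, color='red', label='top {}'.format(n_extreme))
    plt.scatter(pop[lo], rate[lo], s=8, color='blue', label='bottom {}'.format(n_extreme))
    if estimates is not None:
        plt.scatter(pop, estimates, s=8, color='green', label='posterior mean')
    plt.xscale('log')
    plt.xlabel('county population')
    plt.ylabel('pct mortality')
```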

\n\n\n\n```python\ndf = pd.read_csv(\"kcancer.csv\")\ny = df['dc'].values\nn = df['pop'].values\n\n```\n\n\n```python\n#plt.figure(figsize = (10,10))\nscatter_plot_cancer_rates(df)\nplt.title(\"raw cancer rates vs the county population size\")\nplt.show()\n```\n\n\n \n\n**Answer:**\n\nThe counties with the highest and lowest cancer rates are those with low populations, located on the left side of the scatterplot. Neither the highest- nor the lowest-mortality counties are among those with large populations.\n \nThis reflects a drawback of MLE-style raw estimates: they tend to overfit when data are scarce. Counties with smaller populations contribute only a few data points, resulting in overfitting, exaggerated effects, and extreme pct_mortality rates.\n\n2. **(Empirical Bayes)** Using Empirical Bayes and moment matching, choose values for the hyperparameters $\\alpha, \\beta$ based on your data. Use these values of $\\alpha$ and $\\beta$ to obtain posterior distributions for each county.\n\n***Hint:*** You'll first need to derive the fact that the ***evidence*** for a Poisson-Gamma model has a Negative Binomial distribution.
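Before doing the derivation, the hinted fact can be sanity-checked by simulation: draws from the Poisson-Gamma generative model should match the Negative Binomial moments. The constants `alpha`, `beta`, and `n_j` below are illustrative, not fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, n_j = 2.0, 1000.0, 500.0

# Simulate theta_j ~ Gamma(alpha, beta), then y_j ~ Poisson(5 n_j theta_j)
theta = rng.gamma(alpha, 1.0 / beta, size=200_000)   # numpy parameterizes Gamma by shape/scale
y = rng.poisson(5 * n_j * theta)

# Negative Binomial moments: E[y_j] = 5 alpha n_j / beta,
# Var[y_j] = 5 alpha n_j (5 n_j + beta) / beta^2
nb_mean = 5 * alpha * n_j / beta
nb_var = 5 * alpha * n_j * (5 * n_j + beta) / beta ** 2
print(y.mean(), nb_mean)   # both close to 5.0
print(y.var(), nb_var)     # both close to 17.5
```

The empirical mean and variance of the simulated counts agree with the closed-form Negative Binomial moments, which is exactly what moment matching exploits.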

\n\n\n\n \nFrom the last homework, we first derived $p(\\theta_j | \\theta_{-j}, y, \\alpha, \\beta)= Ga(\\theta_j; \\alpha+y_j, 5n_j+\\beta)$ \n\n**Part I.** By Bayes' Theorem,\n$$\n\\begin{align}\np(\\theta_j | \\theta_{-j}, y, \\alpha, \\beta) &= \\frac{p(\\theta| y, \\alpha, \\beta)}{p( \\theta_{-j}| y, \\alpha, \\beta)} \\propto p(\\theta| y, \\alpha, \\beta) = \\prod_{i=1}^N p(\\theta_i|y, \\alpha, \\beta) \\\\\n\\end{align}\n$$\n\nBy holding $\\theta_{-j}, y, \\alpha, \\beta$ constant, we see that this depends only on $$p(\\theta_j| y_j, \\alpha, \\beta) =\\frac{ p(\\theta_j, y_j|\\alpha, \\beta)}{p(y_j|\\alpha, \\beta)}\\propto p(\\theta_j, y_j|\\alpha, \\beta) = p(y_j|\\theta_j, \\alpha, \\beta)p(\\theta_j|\\alpha,\\beta)$$.\n\nThen, we compute the joint distribution $p(\\theta_j, y_j|\\alpha, \\beta)$ as the product of the two distributions.\n \n$$\n\\begin{align}\np(\\theta_j, y_j|\\alpha, \\beta) &=p(y_j|\\theta_j, \\alpha, \\beta)p(\\theta_j|\\alpha,\\beta)\\\\ &= Poisson(y_j; 5\\cdot n_j \\cdot \\theta_j)\\cdot Ga(\\theta_j; \\alpha, \\beta)\\\\\n&= \\frac{(5n_j\\theta_j)^{y_j}e^{-5n_j\\theta_j}}{y_j!}\\cdot\\frac{\\beta^\\alpha\\theta_j^{\\alpha-1}e^{-\\beta\\theta_j}}{\\Gamma(\\alpha)}\\\\\n&\\propto \\theta_j^{\\alpha+y_j-1}e^{-(5n_j+\\beta)\\theta_j}\\\\\n&= Ga(\\theta_j; \\alpha+y_j, 5n_j+\\beta)\n\\end{align}\n$$\nQ.E.D.\n \nThus, we derived that $p(\\theta_j | \\theta_{-j}, y, \\alpha, \\beta)= Ga(\\theta_j; \\alpha+y_j, 5n_j+\\beta)$ \n\n\n\nThen we show $p(y_j) = NB\\left(r, p\\right)$ using the fact that $\\Gamma(\\alpha) = (\\alpha - 1)!$ for integer $\\alpha$:\n\\begin{aligned}\np(y_j) &= \\frac{p(y_j | \\theta_j) p(\\theta_j)}{p(\\theta_j | y_j)}\\\\\n &= \\frac{Poiss(y_j; 5 \\cdot n_j \\cdot \\theta_j) Ga(\\theta_j; \\alpha, \\beta)}{Ga(\\alpha + y_j, 5 n_j + \\beta)}\\\\\n&= \\frac{\\frac{(5 \\cdot n_j \\cdot \\theta_j)^{y_j}}{y_j!} Exp[-5 \\cdot n_j \\cdot \\theta_j] \\frac{\\beta^\\alpha}{\\Gamma(\\alpha)} \\theta_j^{\\alpha - 1} 
Exp[-\\beta\\theta_j]}{\\frac{\\left(5 n_j + \\beta\\right)^{\\left( \\alpha + y_j\\right)}}{\\Gamma\\left(\\alpha + y_j\\right)} \\theta_j^{\\alpha + y_j - 1} Exp[-\\left(5 n_j + \\beta\\right) \\theta_j]}\\\\\n&= \\frac{\\beta^\\alpha\\Gamma\\left(\\alpha + y_j\\right) \\left(5 \\cdot n_j\\right)^{y_j} \\theta_j^{\\alpha + y_j - 1}Exp[-\\left(5 \\cdot n_j + \\beta\\right)\\theta_j]}{y_j!\\Gamma(\\alpha) \\left(5 n_j + \\beta\\right)^{\\left( \\alpha + y_j\\right)}\\theta_j^{\\alpha + y_j - 1} Exp[-\\left(5 n_j + \\beta\\right) \\theta_j]}\\\\\n&= \\frac{\\beta^\\alpha\\Gamma\\left(\\alpha + y_j\\right) \\left(5 \\cdot n_j\\right)^{y_j}}{y_j!\\Gamma(\\alpha) \\left(5 n_j + \\beta\\right)^{\\left( \\alpha + y_j\\right)}}\\\\\n&= \\frac{\\Gamma\\left(\\alpha + y_j\\right)}{y_j!\\Gamma(\\alpha)}\\frac{\\beta^\\alpha}{\\left(5 n_j + \\beta\\right)^{\\alpha}} \\frac{\\left(5 \\cdot n_j\\right)^{y_j}}{\\left(5 n_j + \\beta\\right)^{y_j}}\\\\\n&= \\frac{\\Gamma\\left((\\alpha + y_j - 1) + 1\\right)}{y_j!\\Gamma(\\alpha)} \\left( \\frac{\\beta}{5 n_j + \\beta}\\right)^\\alpha \\left( \\frac{5 \\cdot n_j}{5 n_j + \\beta}\\right)^{y_j}\\\\\n&= \\frac{(\\alpha + y_j - 1)!}{y_j!\\Gamma(\\alpha)}\\left( \\frac{\\beta}{5 n_j + \\beta}\\right)^\\alpha \\left( \\frac{5 \\cdot n_j}{5 n_j + \\beta}\\right)^{y_j}\\\\\n&= \\frac{(\\alpha + y_j - 1)!}{y_j!(\\alpha - 1)!}\\left( \\frac{\\beta}{5 n_j + \\beta}\\right)^\\alpha \\left( \\frac{5 \\cdot n_j}{5 n_j + \\beta}\\right)^{y_j}\\\\\n&= \\binom{\\alpha + y_j - 1}{y_j}\\left( \\frac{\\beta}{5 n_j + \\beta}\\right)^\\alpha \\left( \\frac{5 \\cdot n_j}{5 n_j + \\beta}\\right)^{y_j}\\\\\n&= NB\\left(\\alpha, \\frac{5 n_j}{ 5 n_j + \\beta}\\right)\n\\end{aligned}\n\n \nThe theoretical mean and variance for the negative binomial distribution:\n\\begin{align}\n\\mathbb{E}\\left[y_j\\right] &= \\frac{5 \\alpha n_j}{\\beta} \\\\\n\\text{Var}\\left[y_j\\right] &= \\frac{\\alpha 5 n_j (5n_j+\\beta)}{\\beta^2}\n\\end{align}\n\nThen, the empirical 
counterparts adjusted by the population of each county:\n\\begin{aligned}\n\\widehat{\\mathbb{E}}\\left[\\frac{y_j}{n_j}\\right] &= \\frac{5\\alpha}{\\beta}\\\\\n\\widehat{\\text{Var}}\\left[\\frac{y_j}{n_j}\\right] &= \\frac{5 \\alpha (5 n_j+\\beta)}{n_j\\beta^2} = \\frac{5 \\alpha(5\\bar{n}+\\beta)}{\\bar{n}\\beta^2}\n\\end{aligned}\nFor the variance we substitute the population mean $\\bar{n}$ for $n_j$.\n \nSolving for $\\alpha$ and $\\beta$ gives:\n\\begin{aligned}\n\\alpha &= \\frac{\\bar{pop}\\widehat{\\mathbb{E}}^2 }{\\bar{pop} \\widehat{\\text{Var}} - \\widehat{\\mathbb{E}}} \\\\ \\\\\n\\beta &= \\frac{5 \\bar{pop} \\widehat{ \\mathbb{E}} }{\\bar{pop}\\widehat{\\text{Var}} - \\widehat{\\mathbb{E}}}\n\\end{aligned}\n \n \n\n\n```python\ne_hat = np.mean(y/n)\nvar_hat = np.var(y/n)\npop_mean = np.mean(n)\n\nalpha = pop_mean*e_hat**2/(pop_mean*var_hat - e_hat)\nbeta = 5*pop_mean*e_hat/(pop_mean*var_hat - e_hat)\n\nprint(\"empirical cancer rate population adjusted mean = {}\".format(e_hat))\nprint(\"empirical cancer rate population adjusted var = {}\".format(var_hat))\nprint(\"empirical population mean = {}\".format(pop_mean))\nprint('')\n\nprint(\"alpha = {}\".format(alpha))\nprint(\"beta = {}\".format(beta))\n\n```\n\n    empirical cancer rate population adjusted mean = 5.786552354108626e-05\n    empirical cancer rate population adjusted var = 2.527523010812238e-09\n    empirical population mean = 160512.41021522647\n    \n    alpha = 1.5451734918796929\n    beta = 133514.17193888978\n\n\n\n\n\nFrom the derivation we know that $$\\theta_j|y_j \\sim Ga(\\alpha + y_j, 5 n_j + \\beta)$$\nso we can sample $\\theta$ from $Ga(1.5451734918796929 + y_j, 5 n_j + 133514.17193888978)$ and compute the mean or mode from the samples:\n\n\n```python\n# update estimate\nthetas = []\nfor i in range(5000):\n    thetas.append(np.random.gamma(alpha + df['dc'].values, 1. / (beta + 5 * n)))\nthetas = np.array(thetas)\nthetas_mean = np.mean(thetas, axis=0)\n```\n\n3. 
**(Posterior Means)** Produce a scatter plot of the raw cancer rates (pct mortality) vs the county population size (in log scale). Highlight the top 300 raw cancer rates in red. Highlight the bottom 300 raw cancer rates in blue. Finally, on the same plot again, scatter plot the posterior mean cancer rate estimates vs the county population size, highlight these means in green. \n\n Using the scatter plot, explain why using the posterior means (from our model) to estimate cancer rates is preferable to studying the raw rates themselves.\n\n\n```python\nscatter_plot_cancer_rates(df, estimates=thetas_mean)\nplt.legend()\nplt.show()\n```\n\n\n \n\n**Answer:**\n\nCompared to the previous plot, the posterior-mean estimates of the cancer rate produce less extreme pct_mortality values. Counties with low populations no longer show extremely high or low rates; instead, they shrink toward the population average. The posterior means are preferable because they mitigate the overfitting caused by small population sizes.\n\n## Part II: Hierarchical Bayes\nRather than choosing fixed constants for the hyperparameters $\\alpha, \\beta$, following the Bayesian philosophy, we typically put additional priors on quantities of which we are uncertain. That is, we model the kidney cancer rates using a ***hierarchical model***:\n\n\\begin{aligned}\ny_j| \\theta_j &\\sim Poisson(5 \\cdot n_j \\cdot \\theta_j), \\quad j = 1, \\ldots, N\\\\\n\\theta_j | \\alpha, \\beta &\\sim Ga(\\alpha, \\beta), \\quad j = 1, \\ldots, N\\\\\n\\alpha &\\sim Ga(a, b)\\\\\n\\beta &\\sim Ga(c, d)\n\\end{aligned}\nwhere $a, b, c, d$ are hyperparameters. \n\n1. **(Posterior Marginal Means)** Produce a scatter plot of the raw cancer rates (pct mortality) vs the county population size (in log scale). Highlight the top 300 raw cancer rates in red. Highlight the bottom 300 raw cancer rates in blue. 
Finally, on the same plot again, scatter plot the mean of the posterior marginal distribution over $\\theta_j$, i.e. $p(\\theta_j|y_1, \\ldots, y_N)$, vs the county population size (in log scale), highlight these means in orange. \n\n You should use `pymc3` to sample from the posterior. Compare `pymc3`'s sampler with your sampler from the previous homework; what is the difference (if any) in the performance of these samplers?

\n\n\n\n```python\n# import relevant package\nimport pymc3 as pm\nfrom pymc3 import model_to_graphviz\n```\n\n\n```python\n# hyperparameters\na, b, c, d = 9, 6, 9, 0.00001\nburn_in = 0.1\nthinning = 10\n\ny = df['dc'].values\nn = df['pop'].values\naverage_pop = n.mean()\n\n```\n\n\n```python\n#define hierarchical model in pymc3\nwith pm.Model() as hierarchical_model:\n #priors on alpha\n alpha = pm.Gamma('alpha', alpha=a, beta=b)\n #priors on beta\n beta = pm.Gamma('beta', alpha=c, beta=d)\n #priors on theta\n theta = pm.Gamma('theta', alpha=alpha, beta=beta, shape=len(y))\n \n #convert rate into number of disease incidents\n mu = theta * 5 * n\n \n #likelihood\n y_obs = pm.Poisson('y', mu=mu, observed=y)\n \n```\n\n\n```python\n#draw graphical model for the thing we just defined \n# model_to_graphviz(hierarchical_model)\n\n```\n\n\n```python\nwith hierarchical_model:\n # using default sampler\n trace = pm.sample(1000, tune=1000, target_accept = 0.9) \n print(f'DONE')\n```\n\n :3: FutureWarning: In v4.0, pm.sample will return an `arviz.InferenceData` object instead of a `MultiTrace` by default. You can pass return_inferencedata=True or return_inferencedata=False to be safe and silence this warning.\n trace = pm.sample(1000, tune=1000, target_accept = 0.9)\n Auto-assigning NUTS sampler...\n Initializing NUTS using jitter+adapt_diag...\n Multiprocess sampling (4 chains in 4 jobs)\n NUTS: [theta, beta, alpha]\n\n\n\n\n
\n \n \n 100.00% [8000/8000 00:34<00:00 Sampling 4 chains, 0 divergences]\n
\n\n\n\n    Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 48 seconds.\n    The number of effective samples is smaller than 25% for some parameters.\n\n\n    DONE\n\n\n\n```python\nN = len(trace['theta'])\ntheta_trace = trace['theta'][int(burn_in * N)::thinning]\npymc_post_mean = np.mean(theta_trace, axis=0)\n```\n\n\n```python\n# make scatter plot\nscatter_plot_cancer_rates_orange(df,estimates = pymc_post_mean )\nplt.title(\"pct_mortality vs pop in kcancer dataset\")\nplt.legend()\n\n```\n\n\n \n**Interpretation**:\n\nFrom this graph, we observe that the posterior marginal mean estimates sampled from the hierarchical model with pymc3 now shrink toward the population mean, with more shrinkage for low-population counties. \n\n\n```python\nfig, ax = plt.subplots(1, 2, figsize=(30, 5))\n#N = len(trace['theta'].T[county_1])\nalpha_trace = trace['alpha'][int(burn_in * N)::thinning]\nbeta_trace = trace['beta'][int(burn_in * N)::thinning]\n\nax[0].plot(range(len(alpha_trace)), alpha_trace, color='gray')\nax[0].set_title('Trace Plot for Alphas')\nax[0].set_ylabel('alpha')\nax[1].plot(range(len(beta_trace)), beta_trace, color='gray')\nax[1].set_title('Trace Plot for Betas')\nax[1].set_ylabel('beta')\nplt.show()\n```\n\n\n```python\n#select the county to visualize the traceplot for theta\n#same counties to compare with last homework\ncounty_1 = 0\ncounty_2 = 1\n\n#plot the traceplots for one theta and alpha and beta\nfig, ax = plt.subplots(1, 2, figsize=(30, 5))\nN = len(trace['theta'].T[county_1])\ntheta_trace = trace['theta'][int(burn_in * N)::thinning].T[county_1]\nax[0].plot(range(len(theta_trace)), theta_trace, color='gray', alpha=0.5)\nax[0].set_title('trace plot for theta_{}'.format(county_1))\n\ntheta_trace = trace['theta'][int(burn_in * N)::thinning].T[county_2]\nax[1].plot(range(len(theta_trace)), theta_trace, color='gray', alpha=0.5)\nax[1].set_title('trace plot for theta_{}'.format(county_2))\n```\n\n\n 
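Convergence can also be checked numerically rather than visually, e.g. with the Gelman-Rubin R-hat statistic, which `pymc3`'s summary output reports for each parameter. For intuition, here is a standalone numpy sketch of a split-R-hat computation (illustrative; not the exact implementation used by pymc3/arviz):

```python
import numpy as np

def split_rhat(chains):
    """Split R-hat for an array of shape (n_chains, n_draws):
    values near 1 indicate the chains agree and have mixed well."""
    n_chains, n_draws = chains.shape
    half = n_draws // 2
    sub = chains[:, :2 * half].reshape(2 * n_chains, half)  # split chains to catch trends
    W = sub.var(axis=1, ddof=1).mean()                      # within-chain variance
    B = half * sub.mean(axis=1).var(ddof=1)                 # between-chain variance
    var_hat = (half - 1) / half * W + B / half
    return float(np.sqrt(var_hat / W))

rng = np.random.default_rng(0)
mixed = rng.normal(size=(4, 1000))             # well-mixed chains
stuck = mixed + 3 * np.arange(4)[:, None]      # chains stuck around different values
print(split_rhat(mixed))   # close to 1
print(split_rhat(stuck))   # much larger than 1
```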
\n**Interpretation**:\n\nCompared to the last homework, where the trace plots showed clear evidence of non-convergence of $\\alpha$ and $\\beta$ even after multiple rounds of Gibbs-MH sampling, these trace plots show a considerable level of convergence.\n \nI selected the same two counties as in Homework 4 and compared their posterior marginal distributions of $\\theta$. They appear to have converged and fluctuate within a small range as well. This sampler performs better than the Gibbs-MH sampler we implemented, likely due to a better choice of proposal distributions and hyperparameters for $\\alpha$ and $\\beta$, and a more sophisticated sampling method.\n\n2. **(Hierarchical Bayes vs Empirical Bayes)** Compare the shrinkage of the posterior marginal means of the hierarchical model to the shrinkage of the posterior means from the Bayesian model with empirical Bayes estimates for $\\alpha, \\beta$. What is the difference in shrinkage between the full hierarchical model and the Bayesian model with empirical Bayes?\n\n\n```python\nscatter_plot_cancer_rates(df, estimates=thetas_mean)\nplt.title(\"pct_mortality vs pop in kcancer dataset (empirical bayes)\")\nplt.legend()\nplt.show()\n\nscatter_plot_cancer_rates_orange(df, estimates = pymc_post_mean)\nplt.title(\"pct_mortality vs pop in kcancer dataset (hierarchical bayes)\")\nplt.legend()\nplt.show()\n```\n\n\n \n**Interpretation**:\n \nComparing the two plots, with green (empirical Bayes) and orange (hierarchical Bayes) mean estimates, we observe larger shrinkage for the least-populous counties in the hierarchical Bayes estimates than in the empirical Bayes estimates.\n \nThe hierarchical model pulls the pct_mortality together drastically for counties with small population sizes, while empirical Bayes still shows strips of pct_mortality values that are not shrunk to a population mean for small counties. 
The variation remaining in the hierarchical Bayes estimates is somewhat higher in the middle range of population sizes. However, this does not appear to be correlated with population size, nor to be a result of overfitting; it is likely random rather than systematic variation. With hierarchical Bayes we can conclude that the cancer rate is nearly the same for all counties regardless of population size, which also makes much more sense in real life.\n\n## Part III: Broader Impact Analysis\n\nStarting in 2020, major machine learning conferences are beginning to ask authors as well as reviewers to explicitly consider the broader impact of new machine learning methods. To properly evaluate the potential good or harm that a piece of technology (AI or not) can do to the general public, we need to be aware that no technology is deployed in ideal conditions or in perfectly neutral contexts. In order to assess the potential broader impact of technology, we need to analyze the social systems/institutions of which these technologies will become a part.\n\nTo help you analyze the broader impact of your technology, begin by considering the following questions:\n\nI. Identify the relevant socio-technical systems\n - In what social, political, economic system could the tech be deployed?\n - How would the tech be used in these systems (what role will it take in the decision making processes)?

\n \nII. Identify the stakeholders\n - Who are the users?\n - Who are the affected communities (are these the users)?\n \n ***Hint:*** users are typically decision makers who will use the technology as decision aids (e.g. doctors), whereas affected communities may be folks who are impacted by these decisions but who are not represented in the decision making process (e.g. patients).

\n \nIII. What types of harm can this tech do?\n - What kinds of failures can this tech have?\n - What kinds of direct harm can these failures cause?\n - What kinds of harm can the socio-technical system cause?\n \n ***Hint:*** many technical innovations have niche applications; they may sit in a long chain of decision making in a complex system. As such, it may seem, at first glance, that these technologies have no immediate real-life impact. In these cases, it's helpful to think about the impact of the entire system and then think about how the proposed innovations aid, hamper or change the goals or outcomes of this system.

\n \nIV. What types of good can this tech do?\n - What kinds of needs do these users/communities have?\n - What kinds of constraints do these users/communities have?

\n \n1. **(Computational Footprint)** In Homework #4, we considered the broader impact of this hierarchical model for kidney cancer: we examined under what circumstances a hierarchical model for kidney cancer is preferable to an MLE model or a Bayesian model with hand-picked priors. In this problem, we compare hierarchical Bayes and empirical Bayes. \n\n In practical terms, what are the real-life advantages and drawbacks of implementing an empirical Bayesian model compared with a hierarchical Bayesian model?\n \n **Hint:** for example, compare how tractable it is to implement the inference algorithm for each model, and compare how easy it is for the inference method for each model to converge (would you know that your training has converged?). Consider also: how easily could a practitioner diagnose the training process, how could they evaluate the model fit, and what kinds of computational resources would be required to train each model.\n \n\n\n\n \n**Answers**:\n \nWhen deploying this model for kidney cancer and health care resourcing, we are sampling and estimating $\\theta$, the underlying kidney cancer rate for each county. The other components of the decision system in which the model will be deployed include citizens living in each county, patients receiving kidney-cancer-related healthcare or treatments, government, resource departments, hospitals, doctors, etc.\n \nThe end users will be government, resource departments, hospitals, doctors, etc.;\nthe affected communities will be citizens living in each county and patients receiving kidney-cancer-related healthcare or treatments.\n \nWhether it is preferable to use a hierarchical model, an MLE model, a Bayesian model with hand-picked priors, or, here, an empirical Bayes model depends largely on the downstream task. 
\n \n \n* Real-life advantages:\n * It is easy to implement and compute; the inference algorithm is more tractable for an empirical Bayesian model than for a hierarchical Bayesian one.\n * Since we only need to look at the moments of the evidence, it is easier for end users and communities to understand and interpret.\n * We don't need to hand-pick a prior encoding beliefs we may not have if we lack experience in the field; this avoids the risk of a poor choice of prior.\n * Its estimates stabilize as the data size grows.\n \n \n \n**Drawbacks of implementing an empirical Bayesian model compared with a hierarchical Bayesian model:**\n * It depends heavily on data quality and size. In a hierarchical model, a poor choice of hyper-priors has relatively little effect on the estimates of $\\theta$; in the empirical model, data quality directly determines the estimates.\n * The mean estimates may not shrink enough. There is still overfitting for low-population counties, since those data points cannot borrow statistical power from larger counties the way they do in hierarchical models.\n * Without sufficient shrinkage toward a population mean, it is difficult for governments and agencies to determine which counties truly have higher kidney cancer rates and where to direct funding. \n * The need for evidence and samples raises questions about where to place clinical trials and how many data points to collect; healthcare data, unlike many other kinds, requires substantial resources and investment to obtain.\n * If we overestimate the cancer rate, resources are directed where the actual death rate is not that high and do not serve the communities in need; if we underestimate it, we may under-allocate health care resources to the people and counties that need them.\n * We might also miss environmental factors that affect the rate.\n * The algorithm may not always be tractable. 
If the prior is not conjugate to the likelihood, the evidence has no simple closed form, making it hard to apply moment matching for the empirical Bayesian model.\n * It is hard to diagnose issues within the data after we obtain the estimates.\n \n \n\n \n\n2. **(Mitigation of Potential Negative Impacts)** Based on your answers for the previous question and your broader impact analysis from HW#4, what information would the model designer/engineer need to disclose to the end-users to mitigate potentially negative impacts?\n\n **Hint:** consider how your end-user would be able to validate the model (how would they know if the model is working or has ceased to work once it has been deployed), how the end-user should choose between implementing a hierarchical Bayesian model or an empirical Bayesian model, and how the end-user could identify and mitigate negative impacts (if they occur).\n\n\n \n**Answers**:\n \nThe potential downsides and negative impacts, as mentioned briefly above, are: \n- If we overestimate the cancer rate, resources are directed where the actual death rate is not that high and do not serve the communities in need; if we underestimate it, we may under-allocate health care resources to the people and counties that need them.\n- If systematic inequality exists, the model can help exaggerate it. For example, rural counties might already have low incomes and poor infrastructure, and a bad sampling model will systematically exaggerate these circumstances.\n \nTo mitigate negative impacts, end users should be able to validate the models via convergence checks (trace plots, etc.) and cross-validation with human experts and experienced professionals. They should also treat the results with care and involve human decision making rather than fully relying on computed results.\n\nFor patients and other affected communities, end users should give full disclosure of how the model is being used and deployed, and give advice on a per-treatment basis. 
Besides, other environmental factors should also be reviewed before making decisions.\n \nThe next thing to consider is whether any of these mistakes or negative impacts happen at random.\n\n- Data imbalance in healthcare is a huge problem (for example, gender data inequality: in many cases there are few data points from women, a commonly underrepresented group). The model may exaggerate this problem, and may also overfit to one sub-population while harming predictions for other subpopulations. \n- Models for these data are qualitatively different: studies in men do not generalize to women.\n- Subgroups that were previously undersampled are likely to be removed because they look like outliers (the small-data-inside-big-data problem).\n- Thus, a preferable choice may be to use a mixture of several models to compare and cross-check the estimates, and to let small-data points borrow statistical power from large-data points, as a hierarchical model does.
"TangJiahui/AM207-Stochastic-Optimization", "max_forks_repo_head_hexsha": "4288efeb7c017d8d7fa1432ef0b4fcb2f1d42f1d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 458.3121990369, "max_line_length": 119360, "alphanum_fraction": 0.9266588099, "converted": true, "num_tokens": 13999, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.35220178204788966, "lm_q2_score": 0.3345894279828469, "lm_q1q2_score": 0.11784299278994272}} {"text": "```python\n%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.rcParams[\"figure.figsize\"] = (12, 9)\nplt.rcParams[\"font.size\"] = 18\n```\n\n# Radioactivity\n\nLearning Objectives:\n\n- Explain how radioactivity was discovered\n- Explain the nuclear physical reason for radioactive decay\n- List the sources of natural and man made radiation\n- Read and understand a decay diagram\n- Calculate Q values for various types of decay\n- Describe the physics of various types of decay\n- State the radioactive decay law\n- Derive the radioactive decay law\n- Understand how incorporating sources impacts decay calcuations\n- Calculate decay with production for simple cases\n\n## Discovery of Radioactivity\n- Radioactivity was first discovered in 1896 by Henri Becquerel, while working on phosphorescent materials.\n- These materials glow in the dark after exposure to light, and he thought that the glow produced in cathode ray tubes by X-rays might be connected with phosphorescence.\n- He wrapped a photographic plate in black paper and placed various phosphorescent minerals on it.\n- All results were negative until he used uranium salts.\n\n\n
*Figure: Becquerel in the lab.*\n\n\n*Figure: Photographic plate made by Henri Becquerel showing effects of exposure to radioactivity.*
\n\n\n**What is nuclear decay?**\nA spontaneous process in which the protons and neutrons in a given nucleus are rearranged into a lower energy state.\nThe transition may involve levels of the same nucleus (gamma emission, internal conversion) or levels of a different nucleus (alpha, beta decay).\n\n**Why do nuclei decay?** A nucleus decays when it is **unstable**. It undergoes decay in order to become more stable (lower energy state).\n\n**Where do the unstable nuclei come from?**\n\n\n## Sources of Radiation\n\n> A chart of the public's exposure to ionizing radiation (displayed below) \n> shows that people generally receive a total annual dose of about 620 millirem. \n> Of this total, natural sources of radiation account for about 50 percent, \n> while man-made sources account for the remaining 50 percent. -- US NRC\n\n\n\n## Natural Sources\n\n\n\n### Terrestrial\n\nUranium, thorium, radium, etc. are all present naturally inside rocks and soil and are part of the four major decay chains. Others are 'primordial.' Primordial radioactive isotopes include about 20 isotopes that are long-lived and not part of any decay chain (e.g., K-40, Rubidium-87). Potassium has a quite radioactive isotope, $^{40}K$ (bananas).\n\n\n
*Figure: Evolution of the earth's radiogenic heat (in the mantle).*
\n\n### Internal\n\nMostly $^{40}K$ and $^{14}C$ inside your body.\n\n### Cosmic\nCommonly, cosmic radiation includes $^{14}C$, tritium ($^3H$), and others.\n\n


*Figure (image credit: CC BY-SA 3.0).*

\n\n\n\n\nAir showers ensuing from very-high-energy cosmic rays can enter Earth\u2019s atmosphere from multiple directions. Credit: Simon Swordy/NASA.\n\n\n## Man-Made Radiation\n\nOf approximately 3200 known nuclides:\n- **266** are stable, \n- **65** long-lived radioactive isotopes are found in nature, \n- and the remaining **~2900** have been made by humans.\n\nOne of the heaviest named elements is Livermorium (Z=116); its most stable isotope, $^{293}Lv$, has a half-life of $t_{1/2} = 60 ms$.\n\n\n\n### Types of Decay \n\n\n```python\n# The below IFrame displays Page 121 of your textbook:\n# Shultis, J. K. (2016). Fundamentals of Nuclear Science and Engineering Third Edition, \n# 3rd Edition. [Vitalsource]. Retrieved from https://bookshelf.vitalsource.com/#/books/9781498769303/\n\nfrom IPython.display import IFrame\nIFrame(\"https://bookshelf.vitalsource.com/books/9781498769303/pageid/121\", width=1000, height=1000)\n\n```\n\n\n\n\n\n\n\n\n\n\nAll elements smaller than Z=83 (Bismuth) have at least one stable isotope.\nExceptions: Technetium (Z=43) and Promethium (Z=61).\n\nOnce a nucleus gets past a certain size, it is unstable.\nThe largest stable nuclide is Pb-208; all nuclides larger than this are unstable.\n\nNuclear conservation laws apply. The following are conserved:\n- charge\n- number of nucleons (protons + neutrons)\n- total energy (mass + energy)\n- linear momentum (in an inertial frame of reference)\n- angular momentum (spin)\n - alternatively: leptons (electrons + neutrinos)\n\n\n```python\n# The below IFrame displays Page 122 of your textbook:\n# Shultis, J. K. (2016). Fundamentals of Nuclear Science and Engineering Third Edition, \n# 3rd Edition. [Vitalsource]. 
Retrieved from https://bookshelf.vitalsource.com/#/books/9781498769303/\nfrom IPython.display import IFrame\nIFrame(\"https://bookshelf.vitalsource.com/books/9781498769303/pageid/122\", width=1000, height=1000)\n```\n\n\n\n\n\n\n\n\n\n\n## Energetics of Decay\n\n\n\n\n### Alpha ($\\alpha$) Decay\n\nAn $\\alpha$ particle is emitted.\nThe daughter is left with 2 fewer neutrons and 2 fewer protons than the parent.\n\n\\begin{align}\n^{A}_{Z}P \\longrightarrow ^{A-4}_{Z-2}D^{2-} + ^4_2\\alpha\n\\end{align}\n\n\\begin{align}\n\\frac{Q}{c^2} = M\\left(^{A}_{Z}P\\right)-\\left[ M\\left(^{A-4}_{Z-2}D\\right) + M\\left(^{4}_{2}He\\right)\\right]\n\\end{align}\n\n\n\n### Gamma ($\\gamma$) Decay\n\nAn excited nucleus decays to its ground state by the emission of a gamma photon.\n\n\\begin{align}\n^{A}_{Z}P^* \\longrightarrow ^{A}_{Z}P + \\gamma\n\\end{align}\n\n\n\\begin{align}\nQ = E^*\n\\end{align}\n\n\n\n### Negatron ($\\beta -$) Decay\nA neutron changes into a proton in the nucleus (nuclear weak force). An electron ($\\beta -$) and an antineutrino ($\\bar{\\nu}$) are emitted.\n\n\\begin{align}\n^{A}_{Z}P \\longrightarrow ^{A}_{Z+1}D^{+} + ^0_{-1}e + \\bar{\\nu}\n\\end{align}\n\n\n\\begin{align}\n\\frac{Q}{c^2} = M\\left(^{A}_{Z}P\\right)- M\\left(^{A}_{Z+1}D\\right)\n\\end{align}\n\n\n\n### Positron ($\\beta +$) Decay\n\nA proton changes into a neutron in the nucleus (nuclear weak force). A positron ($\\beta +$) and a neutrino ($\\nu$) are emitted.\n\n\\begin{align}\n^{A}_{Z}P \\longrightarrow ^{A}_{Z-1}D^{-} + ^0_{+1}e + \\nu\n\\end{align}\n\n\n\\begin{align}\n\\frac{Q}{c^2} = M\\left(^{A}_{Z}P\\right)- \\left[ M\\left(^{A}_{Z-1}D\\right) + 2m_e\\right]\n\\end{align}\n\n\n\n### Electron Capture\n1. An orbital electron is absorbed by the nucleus,\n2. converts a nuclear proton into a neutron and a neutrino ($\\nu$), \n3. 
and typically leaves the nucleus in an excited state.\n\n\begin{align}\n^{A}_{Z}P + \left(^0_{-1}e\right) \longrightarrow ^{A}_{Z-1}D + \nu\n\end{align}\n\n\n\begin{align}\n\frac{Q}{c^2} = M\left(^{A}_{Z}P\right)- M\left(^{A}_{Z-1}D\right) \n\end{align}\n\n\n\n### Proton Emission\nA proton is ejected from the nucleus.\n\n\begin{align}\n^{A}_{Z}P \longrightarrow ^{A-1}_{Z-1}D^{-} + ^1_1p\n\end{align}\n\n\begin{align}\n\frac{Q}{c^2} = M\left(^{A}_{Z}P\right)-\left[ M\left(^{A-1}_{Z-1}D\right) + M\left(^{1}_{1}H\right)\right]\n\end{align}\n\n\n\nThe decay of a proton-rich nucleus A populates excited states of a daughter nucleus B by $\beta^+$ emission or electron capture (EC). Those excited states that lie below the separation energy for protons ($S_p$) decay by $\gamma$ emission toward the ground state of daughter B. For the higher excited states, a competing decay channel of proton emission to the granddaughter C exists, called $\beta$-delayed proton emission.\n\n### Neutron Emission\nA neutron is ejected from the nucleus.\n\n\begin{align}\n^{A}_{Z}P \longrightarrow ^{A-1}_{Z}P + ^1_0n\n\end{align}\n\n\begin{align}\n\frac{Q}{c^2} = M\left(^{A}_{Z}P\right)-\left[ M\left(^{A-1}_{Z}P\right) + m_n\right]\n\end{align}\n\n### Internal Conversion\n\nThe excitation energy of a nucleus is used to eject an orbital electron (typically a K-shell electron). Note that A and Z are unchanged:\n\n\begin{align}\n^{A}_{Z}P^* \longrightarrow ^{A}_{Z}P^+ + ^0_{-1}e\n\end{align}\n\n\begin{align}\nQ = E^* - BE^K_e\n\end{align}\n\n\n\n### An Aside on The Nuclear Weak Force\n\nBeta ($\beta \pm$) decay is a consequence of the **weak force**, which is characterized by comparatively long decay times. Nucleons are composed of up quarks and down quarks, and **the weak force allows a quark to change type** by the exchange of a W boson and the creation of an electron/antineutrino or positron/neutrino pair. 
For example, **a neutron, composed of two down quarks and an up quark, decays to a proton composed of a down quark and two up quarks.** \n\n\n\n## Radioactive Decay Law\n\n\n**The Law:** The probability that an unstable parent nucleus will decay spontaneously into one or more particles of lower mass/energy is independent of the past history of the nucleus and is the same for all radionuclides of the same type.\n\n### Decay Constant \nRadioactive decay takes place stochastically in a single atom, and quite predictably in a large group of radioactive atoms of the same type.\n\nThe probability that any one of the radionuclides in the sample decays in $\Delta t$ is $\frac{\Delta N}{N}$. This **decay probability** $\frac{\Delta N}{N}$ per unit time for a time interval $\Delta t$ should vary smoothly for large $\Delta t$. \n\nThe statistically averaged **decay probability per unit time**, in the limit of infinitely small $\Delta t$, approaches a constant $\lambda$.\nThus:\n\n\begin{align}\n \lambda &= \mbox{decay constant}\\\n &\equiv \lim_{\Delta t \to 0} \frac{\left(\Delta N/N\right)}{\Delta t}\\\nN(t) &= \mbox{expected number of nuclides in a sample at time t}\\\n\implies \mbox{ } -dN &= \mbox{decrease in number of radionuclides}\\\n &= \lambda N(t) dt\\\n\implies \mbox{ } \frac{dN(t)}{dt} &=-\lambda N(t)\\\ \n\end{align}\n\nThe above is a first-order linear differential equation; its solution gives $N$ as a function of $t$. 
We can now describe the change in radioactive nuclides over time as:\n\n\\begin{align}\n \\frac{dN}{dt} &= -\\lambda N \\\\\n \\Rightarrow N_i(t) &= N_i(0)e^{-\\lambda t}\\\\\n\\end{align}\n\nwhere\n\n\\begin{align}\n N_i(t) &= \\mbox{number of isotopes i adjusted for decay}\\\\\n N_i(0)&= \\mbox{initial condition}\\\\\n \\end{align}\n\n\n```python\nimport math\ndef n_decay(t, n_initial=100, lam=1):\n \"\"\"This function describes the decay of an isotope\"\"\"\n return n_initial*math.exp(-lam*t)\n\n\n# This code plots the decay of an isotope\nimport numpy as np\ny = np.arange(6.0)\nx = np.arange(6.0)\nfor t in range(0,6):\n x[t] = t\n y[t] = n_decay(t)\n \n# creates a figure and axes with matplotlib\nfig, ax = plt.subplots()\nscatter = plt.scatter(x, y, color='blue', s=y*20, alpha=0.4) \nax.plot(x, y, color='red') \n\n# adds labels to the plot\nax.set_ylabel('N_i(t)')\nax.set_xlabel('Time')\nax.set_title('N_i')\n\n# adds tooltips\nimport mpld3\nlabels = ['{0}% remaining'.format(i) for i in y]\ntooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels)\nmpld3.plugins.connect(fig, tooltip)\n\nmpld3.display()\n```\n\n\n\n\n\n\n\n\n
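As a quick numerical sanity check on the decay law above (a minimal NumPy sketch; the decay constant is chosen equal to ln 2 here so that the half-life is exactly one time unit):

```python
import numpy as np

# With lam = ln(2), the half-life is one time unit, so the analytic
# solution N(t) = N0 * exp(-lam * t) should halve the sample at
# t = 1, 2, 3, ...
lam = np.log(2)
n0 = 100.0
t = np.arange(4.0)          # t = 0, 1, 2, 3
n = n0 * np.exp(-lam * t)   # expect 100, 50, 25, 12.5
print(n)
```

Each successive value is half the previous one, as the exponential form requires.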
\n\n\n\n\n## Half Life\n\nAll dynamic processes which decay (or grow) exponentially can be characterized by their half life (or doubling time). In the case of radioactive decay\n\n\\begin{align}\n \\tau_{1/2}&= \\mbox{half-life}\\\\\n &=\\frac{ln(2)}{\\lambda} \\\\\n t &= \\mbox{time elapsed [s]}\\\\\n \\tau_{1/2} &= \\mbox{half-life [s]} \\\\\n\\end{align}\n\n\n\n```python\n# This code converts decay constant to half life\n\ndef half_life(lam):\n return math.log(2)/lam \nlam = 1.0\n\n# This code plots the decay of an isotope for various half lives\nimport numpy as np\ny = np.arange(8.0)\nx = np.arange(8.0)\n\nlives = np.arange(half_life(lam), 9*half_life(lam), half_life(lam))\nfor i, t in enumerate(lives):\n x[i] = float(t)\n y[i] = n_decay(t)\n \n# creates a figure and axes with matplotlib\nfig, ax = plt.subplots()\nscatter = plt.scatter(x, y, color='blue', s=y*20, alpha=0.4) \nax.plot(x, y, color='red') \n\n# adds labels to the plot\nax.set_ylabel('N_i(t)')\nax.set_xlabel('Time')\nax.set_title('N_i')\n\n# adds tooltips\nimport mpld3\nlabels = ['{0}% remaining'.format(i) for i in y]\ntooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels)\nmpld3.plugins.connect(fig, tooltip)\n\nmpld3.display()\n```\n\n\n\n\n\n\n\n\n
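The relation between half-life and decay constant runs both ways. As a hedged example (the half-life value below is assumed for illustration, not taken from this text), cesium-137 has a half-life of roughly 30 years:

```python
import math

# Convert an assumed half-life (Cs-137, roughly 30.08 years) to a
# decay constant, then compute the fraction remaining after 100 years.
t_half = 30.08                       # years (assumed value)
lam = math.log(2) / t_half           # decay constant [1/years]
remaining = math.exp(-lam * 100.0)   # fraction remaining after 100 years
print(f"decay constant: {lam:.5f} per year")
print(f"fraction remaining after 100 years: {remaining:.3f}")
```

Roughly 10% of the sample survives 100 years, i.e. slightly more than three half-lives have elapsed.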
\n\n\n\nAfter n half lives, only $\frac{1}{2^n}$ of the original sample remains.\n\begin{align}\nN(n\tau_{1/2})= \frac{1}{2^n}N_0\n\end{align}\n\nSuppose a sample has been reduced to a fraction $\epsilon$ of $N_0$. How many half lives have passed?\n\n\begin{align}\nn = \frac{-\ln\epsilon}{\ln 2}\n\end{align}\n\nFinally, the radioactive decay law can be expressed using the half-life:\n\n\begin{align}\nN(t) = N_0\left(\frac{1}{2}\right)^{t/\tau_{1/2}}\n\end{align}\n\n## Decay by competing processes\n\n\begin{align}\n\frac{dN(t)}{dt} &= -\lambda_1N(t) - \lambda_2N(t) - \cdots - \lambda_nN(t)\\\n &= -\sum_{i=1}^n \lambda_iN(t)\\\n &\equiv-\lambda N(t)\n\end{align}\n\nA nuclide will decay by the $i^{th}$ mode with probability $f_i$.\n\n\begin{align}\nf_i &= \frac{\mbox{decay rate by ith mode}}{\mbox{decay rate by all modes}}\\\n &=\frac{\lambda_i}{\lambda}\n\end{align}\n\n## Decay with Production\n\nIn reactors, isotopes decay into one another and still others are born from fission. 
Thus, if there is production, we can rewrite the standard decay differential equation as:\n\n\n\begin{align}\n\frac{dN(t)}{dt} &= -\mbox{rate of decay} + \mbox{rate of production}\\\n\implies N(t) &= N_0 e^{-\lambda t} + \int_0^t dt'Q(t')e^{-\lambda (t-t')}\\\n\end{align}\n\nIf the production rate is constant, this simplifies:\n\n\n\begin{align}\nN(t) &= N_0 e^{-\lambda t} + \frac{Q_0}{\lambda}\left[1-e^{-\lambda t}\right]\\\n\end{align}\n\n### Fuel Depletion\n\nDecays, fissions, and absorptions compete throughout the life of the reactor.\n\n\n\n#### Aside: Reaction Rates\n\nIn a reactor, this Q can be characterized via reaction rates.\n\n- The microscopic cross section $\sigma_{i,j}$ is the likelihood of the event per unit area.\n- The macroscopic cross section $\Sigma_{i,j}$ is the likelihood of the event per unit path length, given the density of target isotopes.\n- The reaction rate is the macroscopic cross section times the flux of incident neutrons.\n\n\begin{align}\nR_{i,j}(\vec{r}) &= N_j(\vec{r})\int dE \phi(\vec{r},E)\sigma_{i,j}(E)\\\nR_{i,j}(\vec{r}) &= \mbox{rate of reactions of type i involving isotope j } [reactions/cm^3s]\\\nN_j(\vec{r}) &= \mbox{number density of nuclei participating in the reactions }\\\nE &= \mbox{energy}\\\n\phi(\vec{r},E)&= \mbox{flux of neutrons with energy E at position } \vec{r}\\\n\sigma_{i,j}(E)&= \mbox{cross section}\\\n\end{align}\n\n\nWe said this can be written more simply as $R_x = \sigma_x I N$, where I is the intensity of the neutron flux. 
In the notation of the above equation, we can describe the production of an isotope by neutron absorption by another isotope as :\n\n\\begin{align}\n\\mbox{isotope i production via neutron absorption in m} = f_{im}\\sigma_{am}N_m \\phi\n\\end{align}\n\n\n### Total composition evolution\n\n\\begin{align}\n\\frac{dN_i}{dt} &= \\sum_{m=1}^{M}l_{im}\\lambda_mN_m + \\phi\\sum_{m=1}^{M}f_{im}\\sigma_mN_m - (\\lambda_i + \\phi\\sigma_i + r_i - c_i)N_i + F_i\\Big|_{i\\in [1,M]}\\\\\n\\end{align}\n\\begin{align}\nN_i &= \\mbox{atom density of nuclide i}\\\\\nM &= \\mbox{number of nuclides}\\\\\nl_{im} &= \\mbox{fraction of decays of nuclide m that result in formation of nuclide i}\\\\\n\\lambda_i &= \\mbox{radioactive decay constant of nuclide i}\\\\\n\\phi &= \\mbox{neutron flux, averaged over position and energy}\\\\\nf_{im} &= \\mbox{fraction of neutron absorption by nuclide m leading to the formation of nuclide i}\\\\\n\\sigma_m &= \\mbox{average neutron absorption cross section of nuclide m}\\\\\nr_i &= \\mbox{continuous removal rate of nuclide i from the system}\\\\\nc_i &= \\mbox{continuous feed rate of nuclide i}\\\\\nF_i &= \\mbox{production rate of nuclide i directly from fission}\\\\\n\\end{align}\n\n\n\n\n\n\n\n### Example: $^{135}Xe$\n\n**Discussion: What is interesting about Xenon?**\n \n \n$^{135}Xe$ is produced directly by fission and from the decay of iodine.\n\n\\begin{align}\n\\frac{dN_{xe}}{dt} &= \\sum_{m=1}^{M}l_{Xem}\\lambda_mN_m + \\phi\\sum_{m=1}^{M}f_{Xem}\\sigma_mN_m - (\\lambda_{Xe} + \\phi\\sigma_{Xe} + r_{Xe} - c_{Xe})N_{Xe} + F_{Xe}\\\\\n &= -\\lambda_{Xe}N_{Xe} - \\sigma_{aXe}\\phi N_{Xe} + \\lambda_IN_I + F_{Xe}\\\\\n &= -\\lambda_{Xe}N_{Xe} - \\sigma_{aXe}\\phi N_{Xe} + \\lambda_IN_I + \\gamma_{Xe}\\Sigma_f\\phi\\\\\n \\gamma_{Xe} &= 0.003\\\\\n \\gamma_{I} &= 0.061\\\\\n\\end{align}\n\n### Example: $^{239}Pu$ \n\n\n\\begin{align}\n\\frac{dN_{Pu}}{dt} &= \\sum_{m=1}^{M}l_{Pum}\\lambda_mN_m + \\phi\\sum_{m=1}^{M}f_{Pum}\\sigma_mN_m - 
(\\lambda_{Pu} + \\phi\\sigma_{Pu} + r_{Pu} - c_{Pu})N_{Pu} + F_{Pu}\\\\\n\\end{align}\n\n\nLet's formulate this equation together.\n\n\n$$\\mathrm{^{238}_{\\ 92}U \\ + \\ ^{1}_{0}n \\ \\longrightarrow \\ ^{239}_{\\ 92}U \\ \\xrightarrow [23.5\\ min]{\\beta^-} \\ ^{239}_{\\ 93}Np \\ \\xrightarrow [2.3565\\ d]{\\beta^-} \\ ^{239}_{\\ 94}Pu}$$\n\n\n- Decay of what nuclides result in the formation of $^{239}Pu$?\n- Does $^{239}Pu$ decay?\n- Is there a nuclide that becomes $^{239}Pu$ after it absorbs a neutron?\n- Does $^{239}Pu$ ever absorb neutrons?\n- Is $^{239}Pu$ ever produced directly from fission?\n\n\n## Burnable Poisons\n\n- Gadolinia ($Gd_2O_3$) or erbia ($Er_2O_3$) common\n- Natural Erbium consists of Er166, Er167, Er168 and Er170 primarily. Er167 has large thermal cross section.\n- Gd is an early life burnable poison, typically gone by 10\u201020 GWd\n- Boron also used widely.\n- Can be mixed with the fuel or a coating on the pellet.\n\n\n\n\\begin{align}\n\\frac{dN^P(t)}{dt} &= -g(t)\\sigma_{aP}N^P(t)\\phi\\\\\ng(t) &= \\frac{\\mbox{average flux inside BP}}{\\mbox{average flux in core}}\\\\\n\\sigma_{aP} &=\\mbox{neutron absorption cross section of the BP}\\\\\nN^P(t) &= \\mbox{number of atoms of the BP at time t}\n\\end{align}\n\n\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "a1192cfd3651bd2f6219cf05bec5c7b5226fc888", "size": 61027, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "radioactivity/00-radioactivity.ipynb", "max_stars_repo_name": "katyhuff/npr247", "max_stars_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-12-17T06:07:21.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-21T17:14:51.000Z", "max_issues_repo_path": "radioactivity/00-radioactivity.ipynb", "max_issues_repo_name": "katyhuff/npr247", "max_issues_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", 
"max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-08-29T17:27:24.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-29T17:46:50.000Z", "max_forks_repo_path": "radioactivity/00-radioactivity.ipynb", "max_forks_repo_name": "katyhuff/npr247", "max_forks_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-08-25T20:00:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-14T03:05:26.000Z", "avg_line_length": 67.0626373626, "max_line_length": 4389, "alphanum_fraction": 0.572123814, "converted": true, "num_tokens": 5926, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.2598256379609837, "lm_q2_score": 0.45326184801538616, "lm_q1q2_score": 0.11776904882397213}} {"text": "```\n# This mounts your Google Drive to the Colab VM.\nfrom google.colab import drive\ndrive.mount('/content/drive')\n\n# TODO: Enter the foldername in your Drive where you have saved the unzipped\n# assignment folder, e.g. 'cs231n/assignments/assignment1/'\nFOLDERNAME = None\nassert FOLDERNAME is not None, \"[!] Enter the foldername.\"\n\n# Now that we've mounted your Drive, this ensures that\n# the Python interpreter of the Colab VM can load\n# python files from within it.\nimport sys\nsys.path.append('/content/drive/My Drive/{}'.format(FOLDERNAME))\n\n# This downloads the CIFAR-10 dataset to your Drive\n# if it doesn't already exist.\n%cd /content/drive/My\\ Drive/$FOLDERNAME/cs231n/datasets/\n!bash get_datasets.sh\n%cd /content/drive/My\\ Drive/$FOLDERNAME\n```\n\n# Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. 
One idea along these lines is batch normalization, proposed by [1] in 2015.\n\nTo understand the goal of batch normalization, it is important to first recognize that machine learning methods tend to perform better with input data consisting of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features. This will ensure that the first layer of the network sees data that follows a nice distribution. However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance, since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\n\nThe authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, they propose to insert layers into the network that normalize batches. At training time, such a layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\n\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. 
To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n\n[1] [Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.](https://arxiv.org/abs/1502.03167)\n\n\n```\n# Setup cell.\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams[\"figure.figsize\"] = (10.0, 8.0) # Set default size of plots.\nplt.rcParams[\"image.interpolation\"] = \"nearest\"\nplt.rcParams[\"image.cmap\"] = \"gray\"\n\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\"Returns relative error.\"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\ndef print_mean_std(x,axis=0):\n print(f\" means: {x.mean(axis=axis)}\")\n print(f\" stds: {x.std(axis=axis)}\\n\")\n```\n\n\n```\n# Load the (preprocessed) CIFAR-10 data.\ndata = get_CIFAR10_data()\nfor k, v in list(data.items()):\n print(f\"{k}: {v.shape}\")\n```\n\n# Batch Normalization: Forward Pass\nIn the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. 
Once you have done so, run the following to test your implementation.\n\nReferencing the paper linked to above in [1] may be helpful!\n\n\n```\n# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization \n\n# Simulate the forward pass for a two-layer network.\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before batch normalization:')\nprint_mean_std(a,axis=0)\n\ngamma = np.ones((D3,))\nbeta = np.zeros((D3,))\n\n# Means should be close to zero and stds close to one.\nprint('After batch normalization (gamma=1, beta=0)')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\n\n# Now means should be close to beta and stds close to gamma.\nprint('After batch normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n```\n\n\n```\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n\nfor t in range(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\n\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After batch 
normalization (test-time):')\nprint_mean_std(a_norm,axis=0)\n```\n\n# Batch Normalization: Backward Pass\nNow implement the backward pass for batch normalization in the function `batchnorm_backward`.\n\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\n\nOnce you have finished, run the following to numerically check your backward pass.\n\n\n```\n# Gradient check batchnorm backward pass.\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\n\n# You should expect to see relative errors between 1e-13 and 1e-8.\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n# Batch Normalization: Alternative Backward Pass\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. 
For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper.\n\nSurprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too! \n\nIn the forward pass, given a set of inputs $X=\\begin{bmatrix}x_1\\\\x_2\\\\...\\\\x_N\\end{bmatrix}$, \n\nwe first calculate the mean $\\mu$ and variance $v$.\nWith $\\mu$ and $v$ calculated, we can calculate the standard deviation $\\sigma$ and normalized data $Y$.\nThe equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$).\n\n\\begin{align}\n& \\mu=\\frac{1}{N}\\sum_{k=1}^N x_k & v=\\frac{1}{N}\\sum_{k=1}^N (x_k-\\mu)^2 \\\\\n& \\sigma=\\sqrt{v+\\epsilon} & y_i=\\frac{x_i-\\mu}{\\sigma}\n\\end{align}\n\n\n\nThe meat of our problem during backpropagation is to compute $\\frac{\\partial L}{\\partial X}$, given the upstream gradient we receive, $\\frac{\\partial L}{\\partial Y}.$ To do this, recall the chain rule in calculus gives us $\\frac{\\partial L}{\\partial X} = \\frac{\\partial L}{\\partial Y} \\cdot \\frac{\\partial Y}{\\partial X}$.\n\nThe unknown/hard part is $\\frac{\\partial Y}{\\partial X}$. We can find this by first deriving step-by-step our local gradients at \n$\\frac{\\partial v}{\\partial X}$, $\\frac{\\partial \\mu}{\\partial X}$,\n$\\frac{\\partial \\sigma}{\\partial v}$, \n$\\frac{\\partial Y}{\\partial \\sigma}$, and $\\frac{\\partial Y}{\\partial \\mu}$,\nand then use the chain rule to compose these gradients (which appear in the form of vectors!) 
appropriately to compute $\\frac{\\partial Y}{\\partial X}$.\n\nIf it's challenging to directly reason about the gradients over $X$ and $Y$ which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\\frac{\\partial L}{\\partial x_i}$, by relying on the Chain Rule to first calculate the intermediate $\\frac{\\partial \\mu}{\\partial x_i}, \\frac{\\partial v}{\\partial x_i}, \\frac{\\partial \\sigma}{\\partial x_i},$ then assemble these pieces to calculate $\\frac{\\partial y_i}{\\partial x_i}$. \n\nYou should make sure each of the intermediary gradient derivations are all as simplified as possible, for ease of implementation. \n\nAfter doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\n\n\n```\nnp.random.seed(231)\nN, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint('dx difference: ', rel_error(dx1, dx2))\nprint('dgamma difference: ', rel_error(dgamma1, dgamma2))\nprint('dbeta difference: ', rel_error(dbeta1, dbeta2))\nprint('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))\n```\n\n# Fully Connected Networks with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. 
Modify your implementation to add batch normalization.\n\nConcretely, when the `normalization` flag is set to `\"batchnorm\"` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.\n\n**Hint:** You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`.\n\n\n```\nnp.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\n# You should expect relative errors between 1e-4~1e-10 for W, \n# relative errors between 1e-08~1e-10 for b,\n# and relative errors between 1e-08~1e-09 for betas and gammas.\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n normalization='batchnorm')\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n if reg == 0: print()\n```\n\n# Batch Normalization for Deep Networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.\n\n\n```\nnp.random.seed(231)\n\n# Try training a very deep net with batchnorm.\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, 
normalization=None)\n\nprint('Solver with batch norm:')\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True,print_every=20)\nbn_solver.train()\n\nprint('\\nSolver without batch norm:')\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=20)\nsolver.train()\n```\n\nRun the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.\n\n\n```\ndef plot_training_history(title, label, baseline, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None):\n \"\"\"utility function for plotting training history\"\"\"\n plt.title(title)\n plt.xlabel(label)\n bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers]\n bl_plot = plot_fn(baseline)\n num_bn = len(bn_plots)\n for i in range(num_bn):\n label='with_norm'\n if labels is not None:\n label += str(labels[i])\n plt.plot(bn_plots[i], bn_marker, label=label)\n label='baseline'\n if labels is not None:\n label += str(labels[0])\n plt.plot(bl_plot, bl_marker, label=label)\n plt.legend(loc='lower center', ncol=num_bn+1) \n\n \nplt.subplot(3, 1, 1)\nplot_training_history('Training loss','Iteration', solver, [bn_solver], \\\n lambda x: x.loss_history, bl_marker='o', bn_marker='o')\nplt.subplot(3, 1, 2)\nplot_training_history('Training accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.train_acc_history, bl_marker='-o', bn_marker='-o')\nplt.subplot(3, 1, 3)\nplot_training_history('Validation accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o')\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n# Batch Normalization and Initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight 
initialization.\n\nThe first cell will train eight-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.\n\n\n```\nnp.random.seed(231)\n\n# Try training a very deep net with batchnorm.\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers_ws = {}\nsolvers_ws = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers_ws[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers_ws[weight_scale] = solver\n```\n\n\n```\n# Plot results of weight scale experiment.\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers_ws[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers_ws[ws].train_acc_history))\n \n best_val_accs.append(max(solvers_ws[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers_ws[ws].val_acc_history))\n \n 
final_train_loss.append(np.mean(solvers_ws[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers_ws[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs. weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs. weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs. weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\nplt.gca().set_ylim(1.0, 3.5)\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n## Inline Question 1:\nDescribe the results of this experiment. How does the weight initialization scale affect models with/without batch normalization differently, and why?\n\n## Answer:\n[FILL THIS IN]\n\n\n# Batch Normalization and Batch Size\nWe will now run a small experiment to study the interaction of batch normalization and batch size.\n\nThe first cell will train 6-layer networks both with and without batch normalization using different batch sizes. 
The second cell will plot training accuracy and validation set accuracy over time.\n\n\n```\ndef run_batchsize_experiments(normalization_mode):\n np.random.seed(231)\n \n # Try training a very deep net with batchnorm.\n hidden_dims = [100, 100, 100, 100, 100]\n num_train = 1000\n small_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n }\n n_epochs=10\n weight_scale = 2e-2\n batch_sizes = [5,10,50]\n lr = 10**(-3.5)\n solver_bsize = batch_sizes[0]\n\n print('No normalization: batch size = ',solver_bsize)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n solver = Solver(model, small_data,\n num_epochs=n_epochs, batch_size=solver_bsize,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n solver.train()\n \n bn_solvers = []\n for i in range(len(batch_sizes)):\n b_size=batch_sizes[i]\n print('Normalization: batch size = ',b_size)\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=normalization_mode)\n bn_solver = Solver(bn_model, small_data,\n num_epochs=n_epochs, batch_size=b_size,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n bn_solver.train()\n bn_solvers.append(bn_solver)\n \n return bn_solvers, solver, batch_sizes\n\nbatch_sizes = [5,10,50]\nbn_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('batchnorm')\n```\n\n\n```\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## 
Inline Question 2:\nDescribe the results of this experiment. What does this imply about the relationship between batch normalization and batch size? Why is this relationship observed?\n\n## Answer:\n[FILL THIS IN]\n\n\n# Layer Normalization\nBatch normalization has proved to be effective in making networks easier to train, but the dependency on batch size makes it less useful in complex networks which have a cap on the input batch size due to hardware limitations. \n\nSeveral alternatives to batch normalization have been proposed to mitigate this problem; one such technique is Layer Normalization [2]. Instead of normalizing over the batch, we normalize over the features. In other words, when using Layer Normalization, each feature vector corresponding to a single datapoint is normalized based on the sum of all terms within that feature vector.\n\n[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. \"Layer Normalization.\" stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)\n\n## Inline Question 3:\nWhich of these data preprocessing steps is analogous to batch normalization, and which is analogous to layer normalization?\n\n1. Scaling each image in the dataset, so that the RGB channels for each row of pixels within an image sums up to 1.\n2. Scaling each image in the dataset, so that the RGB channels for all pixels within an image sums up to 1. \n3. Subtracting the mean image of the dataset from each image in the dataset.\n4. Setting all RGB values to either 0 or 1 depending on a given threshold.\n\n## Answer:\n[FILL THIS IN]\n\n\n# Layer Normalization: Implementation\n\nNow you'll implement layer normalization. This step should be relatively straightforward, as conceptually the implementation is almost identical to that of batch normalization. 
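The core idea — each datapoint normalized over its own features rather than over the batch — can be sketched in a few NumPy lines. This is a simplified illustration (no cache, no learned-parameter bookkeeping), not the `layernorm_forward` signature the assignment asks for:

```python
import numpy as np

def layernorm_forward_sketch(x, gamma, beta, eps=1e-5):
    # Statistics are taken per row (axis=1), i.e. per datapoint, so they
    # do not depend on which other examples happen to be in the batch.
    mu = x.mean(axis=1, keepdims=True)
    var = x.var(axis=1, keepdims=True)
    xhat = (x - mu) / np.sqrt(var + eps)
    return gamma * xhat + beta
```

With `gamma=1` and `beta=0`, every row of the output ends up with mean close to 0 and std close to 1, regardless of the batch size.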
One significant difference though is that for layer normalization, we do not keep track of the moving moments, and the testing phase is identical to the training phase, where the mean and variance are directly calculated per datapoint.\n\nHere's what you need to do:\n\n* In `cs231n/layers.py`, implement the forward pass for layer normalization in the function `layernorm_forward`. \n\nRun the cell below to check your results.\n* In `cs231n/layers.py`, implement the backward pass for layer normalization in the function `layernorm_backward`. \n\nRun the second cell below to check your results.\n* Modify `cs231n/classifiers/fc_net.py` to add layer normalization to the `FullyConnectedNet`. When the `normalization` flag is set to `\"layernorm\"` in the constructor, you should insert a layer normalization layer before each ReLU nonlinearity. \n\nRun the third cell below to run the batch size experiment on layer normalization.\n\n\n```\n# Check the training-time forward pass by checking means and variances\n# of features both before and after layer normalization.\n\n# Simulate the forward pass for a two-layer network.\nnp.random.seed(231)\nN, D1, D2, D3 = 4, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before layer normalization:')\nprint_mean_std(a,axis=1)\n\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n\n# Means should be close to zero and stds close to one.\nprint('After layer normalization (gamma=1, beta=0)')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n\ngamma = np.asarray([3.0,3.0,3.0])\nbeta = np.asarray([5.0,5.0,5.0])\n\n# Now means should be close to beta and stds close to gamma.\nprint('After layer normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n```\n\n\n```\n# Gradient check layernorm backward 
pass.\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nln_param = {}\nfx = lambda x: layernorm_forward(x, gamma, beta, ln_param)[0]\nfg = lambda a: layernorm_forward(x, a, beta, ln_param)[0]\nfb = lambda b: layernorm_forward(x, gamma, b, ln_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = layernorm_forward(x, gamma, beta, ln_param)\ndx, dgamma, dbeta = layernorm_backward(dout, cache)\n\n# You should expect to see relative errors between 1e-12 and 1e-8.\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n# Layer Normalization and Batch Size\n\nWe will now run the previous batch size experiment with layer normalization instead of batch normalization. Compared to the previous experiment, you should see a markedly smaller influence of batch size on the training history!\n\n\n```\nln_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('layernorm')\n\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 4:\nWhen is layer normalization likely to not work well, and why?\n\n1. Using it in a very deep network\n2. Having a very small dimension of features\n3. 
Having a high regularization term\n\n\n## Answer:\n[FILL THIS IN]\n\n", "meta": {"hexsha": "14f97c27e15913e6abe2d14c5a4e64300de780f2", "size": 36231, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "assignment2_colab/assignment2/BatchNormalization.ipynb", "max_stars_repo_name": "Hira63S/CS231n", "max_stars_repo_head_hexsha": "f7e174272867979a8e63ba52de35fcc70d3f3449", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assignment2_colab/assignment2/BatchNormalization.ipynb", "max_issues_repo_name": "Hira63S/CS231n", "max_issues_repo_head_hexsha": "f7e174272867979a8e63ba52de35fcc70d3f3449", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assignment2_colab/assignment2/BatchNormalization.ipynb", "max_forks_repo_name": "Hira63S/CS231n", "max_forks_repo_head_hexsha": "f7e174272867979a8e63ba52de35fcc70d3f3449", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.0801393728, "max_line_length": 852, "alphanum_fraction": 0.6042891447, "converted": true, "num_tokens": 6866, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4378234991142019, "lm_q2_score": 0.26894142722948733, "lm_q1q2_score": 0.11774887672638164}} {"text": "```python\n%run ../../common/import_all.py\n\nfrom common.setup_notebook import set_css_style, setup_matplotlib, config_ipython\nconfig_ipython()\nsetup_matplotlib()\nset_css_style()\n```\n\n\n\n\n\n\n\n\n\n\n# (Some of) the most famous distributions\n\nLet's go look at some of the most famous/common distributions you can see around. 
Not exhaustive and maybe you could even say that what's most popular is a matter of who you are.\n\nNow that we're at it, let's also calculate expected value and variance (at least) for these ones. For what those things are, have a look at the notebook about moments of a distribution in this same section!\n\n## Easy-peasy: the Uniform\n\nGiven a continuous variable $X$ taking values in the interval $[a,b]$, a *uniform* distribution is one where every possible value has the same probability. Its pdf is simply\n\n$$\np(x) = \\frac{1}{b-a} \\ ,\n$$\n\nbecause you have 1 case over the total possible cases, which is the width of the interval.\n\nThe expected value is \n\n$$\n\\mathbb{E}[X] = \\int_a^b \\text{d} x \\ \\frac{x}{b-a} = \\frac{b+a}{2} \\ ,\n$$\n\nwhich, as expected (!), corresponds to the middle point of the interval because given that every point is equiprobable, this is where we fall by averaging values.\n\nThe variance is \n\n$$\n\\begin{align}\nVar[X] &= \\int_a^b \\text{d} x \\ \\frac{1}{b-a} \\Big(x - \\frac{b+a}{2}\\Big)^2 \\\\\n&= \\frac{1}{b-a} \\int_a^b \\text{d} x \\ x^2 - \\Big(\\frac{b+a}{2}\\Big)^2 \\\\\n&= \\frac{b^3 - a^3}{3(b-a)} - \\frac{(b+a)^2}{4} \\\\\n&= \\frac{b^2 + ab + a^2}{3} - \\frac{b^2 + 2ab + a^2}{4} \\\\\n&= \\frac{(b-a)^2}{12} \\ .\n\\end{align}\n$$\n\n## Success or failure: the Bernoulli\n\nLet's consider a binary variable $X \\in \\{0,1\\}$, so that it can take the two values 1 (which we'll call the *success*) or 0 (which we'll call the *failure*). The prototype of this is the flipping of a coin. 
Let's also call $\\mu$ the probability of the success so that, by definition\n\n$$\nP(X=1) = \\mu \\ ; P(X=0) = 1 - \\mu \\ ,\n$$\n\nso that the [pmf](probfunctions-histogram.ipynb#The-PMF) (it is a discrete variable) can be expressed as\n\n$$\np(x;\\mu) = \\mu^x(1-\\mu)^{1-x}\n$$\n\nbecause when we have $x=1$ we are left with $\\mu$ and when we have $x=0$ we are left with $1-\\mu.$\n\nSuch distribution has expected value\n\n$$\n\\mathbb{E}[X] = \\sum_{x \\in \\{0,1\\}} x \\mu^x(1-\\mu)^{1-x} = 0 + 1\\mu(1-\\mu)^0 = \\mu\n$$\n\nand variance\n\n$$\nVar[X] = \\sum_{x \\in \\{0,1\\}} x^2 \\mu^x(1-\\mu)^{1-x} - \\mu^2 = \\mu - \\mu^2 = \\mu(1-\\mu)\n$$\n\nThe Bernoulli distribution is a special case of a binomial distribution for a single observation, see below!\n\n## More successes and more failures: the Binomial\n\nThe binomial distribution describes the probability of observing $k$ occurrences of $x=1$ in a set of $n$ samples from a Bernoulli distribution. $\\mu$ is the probability of observing $x=1$. The pmf will be then\n\n$$\np(x;\\mu) = {{n}\\choose{k}} \\mu^k (1-\\mu)^{n-k} \\ ,\n$$\n\nbecause we have ${{n}\\choose{k}}$ ways of creating groups of $k$ from $n$ values and because each extraction is a Bernoulli.\n\nThe expected value is\n\n$$\n\\mathbb{E}[X] = n \\mu\n$$\n\nand the variance is\n\n$$\nVar[X] = n \\mu (1- \\mu)\n$$\n\nHead to [Wikipedia](https://en.wikipedia.org/wiki/Binomial_distribution) for the proofs.\n\n## Extending all that^: the Multinomial\n\nIt is a multivariate generalisation of the binomial and gives the distribution over counts $m_k$ for a $k$-state discrete variable to be in state $k$ given a total of observations $n$.\n\nAn example is the extraction of $n$ balls of $k$ different colours from a bag, replacing the extracted ball after each draw. 
The pmf reads\n\n$$\np(m_1, m_2, \\ldots, m_k; \\mu_1, \\mu_2, \\ldots, \\mu_k, n) = {{n}\\choose{m_1 m_2 \\ldots m_k}} \\mu_1^{m_1} \\mu_2^{m_2} \\ldots \\mu_k^{m_k}\n$$\n\nand we have\n\n$$\n\\mathbb{E}[m_k] = n \\mu_k \\ ,\n$$\n\n$$\nVar[m_k] = n \\mu_k(1-\\mu_k)\n$$\n\n## His majesty the Gaussian\n\nThe gaussian distribution (after C F Gauss) is also called a *normal* distribution or, in some cases, bell curve (from its shape). Let $\\mu$ be the expected value and $\\sigma$ the standard deviation,\n\n$$\np(x; \\mu, \\sigma) = \\frac{1}{\\sqrt{2 \\pi \\sigma^2}} e^{-\\frac{1}{2 \\sigma^2} (x-\\mu)^2}\n$$\n\nIt is usually indicated as $\\mathcal N(\\mu, \\sigma^2)$, where the $\\mathcal N$ stands for \"normal\".\n\n## The Beta\n\nGiven a continuous variable $x \\in [0,1]$, the distribution is parametrized by $\\alpha, \\beta > 0$ which define its shape.\n\n$$\np(x; \\alpha, \\beta) = \\mathcal{N} x^{\\alpha-1}(1-x)^{\\beta-1} \\ ,\n$$\n\nwhere $\\mathcal{N}$ is the normalisation constant:\n\n$$\n\\mathcal{N} = \\frac{\\Gamma(\\alpha + \\beta)}{\\Gamma(\\alpha) \\Gamma(\\beta)} = \\frac{1}{\\int d u \\ u^{\\alpha -1}(1-u)^{\\beta-1}} \\ ,\n$$\n\nwith $\\Gamma$ the gamma function (extension of the factorial to real and complex numbers), defined as\n\n$$\n\\Gamma(t) = \\int_0^\\infty x^{t-1} e^{-x} dx\n$$\n\nand \n\n$$\n\\frac{\\Gamma(\\alpha + \\beta)}{\\Gamma(\\alpha) \\Gamma(\\beta)} = \\frac{1}{B(\\alpha, \\beta)} \\ ,\n$$\n\n$B$ being the beta function.\n\nThe beta distribution is the conjugate prior of the Bernoulli distribution for which $\\alpha$ and $\\beta$ are the prior number of observations $x=1$ and $x=0$. When $\\alpha=\\beta=1$, it reduces to a uniform distribution.\n\n## The Student's t\n\n*Student* was the pseudonym of W Gosset, to whom we can all consider ourselves very grateful, given all his work in statistics. 
He was working at the Guinness brewery in Dublin and produced various intellectual findings while working with beer data, but had to publish under a false name due to company regulations, and he chose \"Student\". \n\nThis distribution arises when estimating the mean of a normally distributed population in situations where the sample size is small and the population standard deviation is unknown. Hence, it describes a sample extracted from said population: the larger the sample, the more the distribution resembles the normal.\n\n$$\np(x; \\nu) = \\frac{\\Gamma(\\frac{\\nu+1}{2})}{\\sqrt{\\nu \\pi} \\Gamma(\\frac{\\nu}{2})} \\Big(1 + \\frac{x^2}{\\nu}\\Big)^{-\\frac{\\nu+1}{2}}\n$$\n\n$\\nu$ is the number of degrees of freedom. For $\\nu=1$, the distribution reduces to the Cauchy distribution.\n\n## The elegant Chi-squared, $\\chi^2$\n\nIt is the distribution (with $k$ degrees of freedom) of the sum of the squares of $k$ independent standardised normal variables $z_i$ (that is, normal variables standardised to have mean 0 and standard deviation 1). It is a special case of the $\\Gamma$ distribution.\n\n$$\nQ = \\sum_1^k z_i^2 \\ ,\n$$\n\nSo \n\n$$\nQ \\sim \\chi^2(k)\n$$\n\nand depends on the degrees of freedom.\n\n## The Poisson: events in time or space\n\nIt is a discrete probability distribution and describes the probability that a given number of events occurs in a fixed interval of time and/or space if they are known to occur with a certain (known) average rate and independently of the time and/or distance of the last event.\n\nAn example is the mail you receive per day. Suppose on average you receive 4 mails per day. Assuming that the events \"mail arriving\" are independent, then it is reasonable to assume that the number of mails received each day follows a Poissonian. 
Other examples are the number of people in a queue at a given time of the day or the number of goals scored in a world cup match, see below.\n\n$$\nP(k) = \\frac{\\lambda^k e^{-\\lambda}}{k!} \\ ,\n$$\n\nwhere $k = 0, 1, 2, \\ldots$ is the number of events in an interval and $\\lambda$ the average number of such events in the same interval.\n\nThe expected value is \n\n$$\n\\mathbb{E}[k] = \\sum_{k \\geq 0} k \\frac{\\lambda^k e^{-\\lambda}}{k!} = \\sum_{k \\geq 1} \\lambda \\frac{\\lambda^{k-1}}{(k-1)!} e^{-\\lambda} = \\lambda e^{-\\lambda} e^\\lambda = \\lambda\n$$\n\nand the variance is\n\n$$\n\\begin{align*}\nVar[k] &= \\mathbb{E}[k^2] - \\mathbb{E}^2[k] \\\\\n &= \\sum_{k \\geq 0} k^2 \\frac{\\lambda^k e^{-\\lambda}}{k!} - \\lambda^2 \\\\\n &= \\lambda e^{-\\lambda} \\sum_{k \\geq 1} k \\frac{\\lambda^{k-1}}{(k-1)!} - \\lambda^2 \\\\\n &= \\lambda e^{-\\lambda} \\Big[ \\sum_{k \\geq 1} (k-1) \\frac{\\lambda^{k-1}}{(k-1)!} + \\sum_{k \\geq 1} \\frac{\\lambda^{k-1}}{(k-1)!} \\Big] - \\lambda^2 \\\\\n &= \\lambda e^{-\\lambda} \\Big[ \\lambda \\sum_{k \\geq 2} \\frac{\\lambda^{k-2}}{(k-2)!} + \\sum_{k \\geq 1} \\frac{\\lambda^{k-1}}{(k-1)!} \\Big] - \\lambda^2 \\\\\n &= \\lambda e^{-\\lambda} \\Big[ \\lambda \\sum_{i \\geq 0} \\frac{\\lambda^i}{i!} + \\sum_{j \\geq 0} \\frac{\\lambda^j}{j!} \\Big] - \\lambda^2 \\\\\n &= \\lambda e^{- \\lambda} [\\lambda e^\\lambda + e^\\lambda] - \\lambda^2 = \\lambda^2 + \\lambda - \\lambda^2 = \\lambda\n\\end{align*}\n$$\n\nSo expected value and variance are the same and equal to the average rate of occurrence.\n\nThe Poisson distribution is appropriate if \n\n* the events are independent, _i.e._, the occurrence of one of them does not affect the probability that a second one occurs;\n* the rate at which events occur is constant;\n* two events cannot occur at the same time;\n* 
the probability of an occurrence of an event in an interval is proportional to the length of the interval.\n\n**Example**\n\nKnowing from historical data that the average number of goals scored in a world football match is 2.5, and because the phenomenon can be described by a Poissonian, we have\n\n$$\nP(k \\text{ goals in a match}) = \\frac{2.5^k e^{-2.5}}{k!} \\ ,\n$$\n\nand we can calculate the expected value and the variance as above.\n\nAn example of a phenomenon which violates the Poissonian assumptions would be the number of students arriving at the student union: the rate is not constant (as it is low during class time and high between class times) and events are co-occurring (students tend to come in groups).\n\n## The Dirichlet\n\nIt is a continuous multivariate distribution and the generalisation of the beta distribution, typically denoted as $\\text{Dir}(\\alpha)$, $\\alpha$ being the parametrising vector such that $\\alpha = (\\alpha_i), \\alpha_i \\in \\mathbb{R}, \\alpha_i > 0 \\ \\forall i$. 
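Draws from a Dirichlet live on the simplex (components are positive and sum to 1), and the mean of component $i$ is $\alpha_i / \sum_j \alpha_j$ — both are easy to check numerically (the specific $\alpha$ below is just an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(42)
alpha = np.array([2.0, 3.0, 5.0])
samples = rng.dirichlet(alpha, size=10_000)

# Every draw is a point on the 2-simplex: positive components summing to 1.
print(samples.sum(axis=1).min(), samples.sum(axis=1).max())
# Component means are close to alpha / alpha.sum() = [0.2, 0.3, 0.5].
print(samples.mean(axis=0))
```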
It is usually used as a prior in bayesian statistics as it is the conjugate prior of the multinomial distribution.\n\nA Dirichlet distribution of order $k \\geq 2$ with parameters $\\alpha_i$ has the probability density function\n\n$$\nf(x_1, \\ldots, x_k; \\alpha_1, \\ldots, \\alpha_k) = \\frac{1}{B(\\alpha)} \\Pi_{i=1}^k x_i^{\\alpha_i - 1}\n$$\n\nwith $B$ being the beta function in $\\mathbb{R}^{k-1}$ and $x$ living on the open $(k-1)$-dimensional simplex $x_1, \\ldots, x_k > 0$, $x_1 + \\ldots + x_{k-1} < 1$, $x_k = 1 - x_1 - \\ldots - x_{k-1}$.\n\n## The power of the power-law\n\nThe power-law is a great one, a deserves a bit of commentary.\n\n### What's a power law, in general\n\nA power law is, in general, a mathematical function of type\n\n$$\nf(x) \\propto x^\\alpha \\ ,\n$$\n\nso that in a log-log plot it will appear as a straight line because\n\n$$\nf(x) = Ax^\\alpha \\Rightarrow \\log(f(x)) = \\log A + \\alpha \\log x \\ .\n$$\n\nLet's look at it.\n\n\n```python\n# Plotting a power function \n\nx = np.array([i for i in np.arange(0.1, 1.1, 0.01)])\ny = np.array([item**-0.3 for item in x])\n\nplt.plot(x, y, label='$y = x^{-0.3}$')\nplt.xlabel('$x$')\nplt.ylabel('$y$')\nplt.xticks([i for i in np.arange(0, 1.1, 0.1)])\nplt.title(\"A power law\")\nplt.legend()\nplt.show();\n```\n\n## Power law distributions\n\nNewman's paper in [[1]](#1) is a great source about the topic and lots of what will be presented here is re-elaborated from there. It is also a very clearly written and enjoyable paper. 
The Wikipedia page [[3]](#3) on the topic is also quite well written.\n\nWe talk of data distributed power law when the quantity we are measuring has a probability which is a power of the quantity itself, so that a power-law distribution has a probability density function typically written as\n\n$$\np(x) = A x^{-\\alpha} \\ ,\n$$\n\nwhere $A$ is the normalisation constant.\n\nWe put a minus sign in front of the exponent (with respect to what we wrote above for a generic power-law function) as we're thinking of a decreasing relation.\n\nThere must be a minimum value $x_{min}$, otherwise it wouldn't be normalisable (the area under the curve would diverge, see figure above).\n\nAs for the exponent, we need to have $\\alpha > 1$ for it to be a pdf, hence integrable, because by definition,\n\n$$\n\\begin{align}\n1 &= \\int_{x_{min}}^\\infty p(x) \\text{d} x \\\\\n&= A \\int_{x_{min}}^\\infty x^{-\\alpha} \\text{d} x \\\\\n&= \\frac{A}{-\\alpha + 1} \\Big[x^{-\\alpha + 1}\\Big]_{x_{min}}^\\infty \\\\\n\\end{align}\n$$\n\nwhich is only non-diverging when $\\alpha > 1$. From here, the normalisation constant is $A = (\\alpha - 1) \\ x_{min}^{\\alpha-1}$.\n\n### Where you find them\n\nMany observed phenomena in several disciplines follow power-law distributions, examples are the Zipf in _linguistics_, the Pareto in _economics_, the _Taylor_ in ecology, ... In a typical situation though, it is the tail of a distribution which is power-law. 
\n\nIn [[5]](#5) Mandelbrot and Taleb exquisitely talk about how using a gaussian paradigm for a power-law phenomenon can lead to disaster in finance.\n\n### What makes them interesting\n\n* the absence of a typical _scale_: while other distributions (the gaussian being the king example) will have a typical value which can be used as representative of the distribution (along with an error), power laws are scale-free: values can span several orders of magnitude\n\n* _scale invariance_: if the independent variable is rescaled by a factor, the law only gets affected by proportional scaling; as a consequence, all power laws with the same exponent are rescaled versions of each other: \n\n$$p(cx) = A (cx)^{-\\alpha} = A c^{-\\alpha} x^{-\\alpha} \\ ,$$\n\n* _long tail_: in a power law, the tail is fat, meaning that the frequency of the highly frequent items is higher than it would be in other distributions. This is brilliantly explained in this blog post by Panos Ipeirotis [[4]](#4), where he points out that despite the fact that a power law is often described as a distribution where \"there are a lot of very unfrequent items and a few very frequent ones\", this is a distortion of reality: what is striking about a power law is that the very frequent items occur much more frequently than they would in other distributions.\n\n### Moments\n\nThe _mean_ of a power law is only defined when $\\alpha > 2$:\n\n$$\n\\begin{align}\n\\mathrm{E}[x] = \\bar x &= A \\int_{x_{min}}^\\infty x \\ x^{-\\alpha} \\text{d} x \\\\\n&= A \\int_{x_{min}}^\\infty x^{-\\alpha + 1} \\text{d} x \\\\\n&= \\frac{A}{-\\alpha + 2} \\Big[x^{-\\alpha + 2}\\Big]_{x_{min}}^\\infty = \\frac{\\alpha - 1}{\\alpha - 2} \\ x_{min}\n\\end{align}\n$$\n\nThe _variance_ of a power law is only defined when $\\alpha > 3$:\n\n$$\n\\begin{align}\n\\mathrm{Var}[x] &= A \\int_{x_{min}}^\\infty (x-\\bar x)^2 x^{-\\alpha} \\text{d} x \\\\\n&= A \\int_{x_{min}}^\\infty x^{-\\alpha + 2} + (\\bar x)^2 x^{-\\alpha} - 2 \\bar x x^{-\\alpha + 1} \\text{d} 
x\n\\end{align}\n$$\n\n### Plotting it\n\nA simple plot in log-log scale of the histogram of data helps the eye spot potential power-law behaviours, but this alone is a very unreliable way of identifying the distribution. Because there is much less data in the tail than there is in the head of the distribution, the tail will appear very noisy. \n\nA better way is to plot the data histogrammed, but with logarithmic binning, and in log-log scale. This helps reduce the noisy tail problem. \n\n## References\n\nTo be honest these are all about power-laws.\n\n1. M E Newman, [Power laws, Pareto distributions and Zipf\u2019s law](https://arxiv.org/abs/cond-mat/0412004), *Contemporary physics* 46.5, 2005\n2. J Alstott, E Bullmore, D Plenz, [powerlaw: a Python package for analysis of heavy-tailed distributions](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0085777), *PloS ONE* 9.1, 2014\n3. [Wikipedia on power laws](https://en.wikipedia.org/wiki/Power_law)\n4. P Ipeirotis, [Misunderstandings of power laws](http://www.behind-the-enemy-lines.com/2008/01/misunderstandings-of-power-law.html)\n5. 
B Mandelbrot, N N Taleb, [How the Finance Gurus Get Risk All Wrong](http://archive.fortune.com/magazines/fortune/fortune_archive/2005/07/11/8265256/index.htm), Fortune magazine, Jul 2005\n\n\n```python\n\n```\n", "meta": {"hexsha": "514867ba93ebdae8311e68fcb49a8df11463c863", "size": 80100, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "prob-stats-data-analysis/foundational/famous-distributions.ipynb", "max_stars_repo_name": "walkenho/tales-science-data", "max_stars_repo_head_hexsha": "4f271d78869870acf2b35ce54d40766af7dfa348", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-11T09:39:10.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-11T09:39:10.000Z", "max_issues_repo_path": "prob-stats-data-analysis/foundational/famous-distributions.ipynb", "max_issues_repo_name": "walkenho/tales-science-data", "max_issues_repo_head_hexsha": "4f271d78869870acf2b35ce54d40766af7dfa348", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "prob-stats-data-analysis/foundational/famous-distributions.ipynb", "max_forks_repo_name": "walkenho/tales-science-data", "max_forks_repo_head_hexsha": "4f271d78869870acf2b35ce54d40766af7dfa348", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 120.0899550225, "max_line_length": 54516, "alphanum_fraction": 0.8238327091, "converted": true, "num_tokens": 5544, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.4035668680822513, "lm_q2_score": 0.2909808600663598, "lm_q1q2_score": 0.11743023436886066}} {"text": "---\nlayout: page\ntitle: Risco\nnav_order: 6\n---\n\n[](https://colab.research.google.com/github/icd-ufmg/icd-ufmg.github.io/blob/master/_lessons/06-risco.ipynb)\n\n# Risk\n{: .no_toc .mb-2 }\n\nUnderstanding the importance of the mean\n{: .fs-6 .fw-300 }\n\n{: .no_toc .text-delta }\nExpected Outcomes\n\n1. Review probability concepts related to the mean\n1. Understand the law of large numbers\n1. Understand squared error\n1. Get an initial understanding of the central limit theorem\n\n---\n**Contents**\n1. TOC\n{:toc}\n---\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport scipy.stats as ss\n```\n\n\n```python\nplt.style.use('seaborn-colorblind')\n\nplt.rcParams['figure.figsize'] = (16, 10)\nplt.rcParams['axes.labelsize'] = 20\nplt.rcParams['axes.titlesize'] = 20\nplt.rcParams['legend.fontsize'] = 20\nplt.rcParams['xtick.labelsize'] = 20\nplt.rcParams['ytick.labelsize'] = 20\nplt.rcParams['lines.linewidth'] = 4\n```\n\n\n```python\nplt.ion()\n```\n\n\n```python\ndef despine(ax=None):\n if ax is None:\n ax = plt.gca()\n # Hide the right and top spines\n ax.spines['right'].set_visible(False)\n ax.spines['top'].set_visible(False)\n\n # Only show ticks on the left and bottom spines\n ax.yaxis.set_ticks_position('left')\n ax.xaxis.set_ticks_position('bottom')\n```\n\n\n```python\nnp.random.seed(98)\n```\n\n## Introduction\n\nWe begin this lecture with a synthetic dataset, consisting of a sample drawn from a Beta distribution. To draw such a sample, we use the `numpy.random` library. 
Below, our data are generated from a Beta distribution.\n\n\n```python\ndata = np.random.beta(3, 2, size=50)\ndata\n```\n\n\n\n\n array([0.42286448, 0.43952776, 0.56410659, 0.52424368, 0.87894774,\n 0.26145645, 0.74002066, 0.42867438, 0.8295208 , 0.52435508,\n 0.44742948, 0.47979578, 0.60098265, 0.76409969, 0.23832042,\n 0.75521248, 0.85631425, 0.43665272, 0.1120301 , 0.58452727,\n 0.66005119, 0.18514747, 0.47336954, 0.4862915 , 0.69066877,\n 0.80754802, 0.64454602, 0.28349298, 0.90644721, 0.74051758,\n 0.77360182, 0.4609454 , 0.57215881, 0.5802893 , 0.51898876,\n 0.91762035, 0.81602591, 0.6226551 , 0.46126971, 0.94869495,\n 0.83256912, 0.67050373, 0.6758471 , 0.66876075, 0.78683386,\n 0.26062848, 0.35454389, 0.65312894, 0.78611324, 0.48682953])\n\n\n\nNow let's visualize the data. Note a few points:\n\n1. From the data alone, we do not know the distribution.\n1. It looks somewhat multimodal, but it is not.\n1. We know it is not, because we generated it from a Beta.\n\n\n```python\nplt.subplot(2, 1, 1)\nplt.hist(data, edgecolor='k');\nplt.ylabel('# Rows')\nplt.xlabel('X')\ndespine()\n\nplt.subplot(2, 1, 2)\nplt.scatter(data, np.ones(len(data)), s=80, edgecolor='k')\nax = plt.gca()\nax.set_yticklabels([])\nplt.xlabel('X')\ndespine()\n\nplt.tight_layout(pad=0)\n```\n\n### Law of Large Numbers\n\n**From Wikipedia:** The law of large numbers (LLN) is a fundamental theorem of probability theory that describes the result of performing the same experiment repeatedly. According to the LLN, the arithmetic mean of the results of repeated trials of the same experiment tends to approach the expected value as more trials are performed. 
In other words, the more trials are performed, the closer the arithmetic mean of the observed results gets to the true expected value.\n\nProving the law of large numbers requires the [Chebyshev inequality](https://en.wikipedia.org/wiki/Chebyshev%27s_inequality). We will not prove it in this course. Those who have taken probability should know the proof; if you have not, do not worry, we will not need to do the proof by hand. We only need the statement:\n\nLet $X_1, X_2, \\cdots X_n$ be an infinite sequence of i.i.d. random variables with expected value $E[X_i] = \\mu$ for every $i$. Moreover, the sample mean of the first $n$ variables is $\\overline{X}_n=\\frac1n(X_1+\\cdots+X_n)$.\n\nThe law of large numbers says that:\n$$\\lim_{n \\to \\infty} P\\left ( \\left| \\overline{X}_n - \\mu \\right | < \\varepsilon \\right ) = 1$$\n\n__How to interpret the law:__ With enough data, the probability that the mean of my data $\\overline{X}_n$ falls within a margin $\\varepsilon>0$ of the true population mean $\\mu$ tends to one. Note that $\\varepsilon>0$; that is, we never recover the exact population mean. We are always within some error: small, but positive.\n\nLet's see the law in action below. In the first plot, observe how the mean converges as the sample grows.\n\n\n```python\nmu = 3 / (3 + 2) # mean of a Beta(3, 2) is 3 / (3 + 2) = 0.6\nxax = np.arange(2, 10000)\nyax = []\ndiff = []\nfor size in xax:\n data = np.random.beta(3, 2, size=size)\n yax.append(data.mean())\n diff.append((mu - data.mean())**2) # squared difference\nplt.plot(xax, yax)\nplt.ylabel('Sample Mean')\nplt.xlabel('Sample Size')\ndespine()\n```\n\nBelow we have the squared error. Note that it is never exactly zero. The law does not claim that; it says the distance stays within some epsilon. 
The law of large numbers itself says nothing about squared error; we use it here because it connects to the definition of the risk of an estimator (more on this ahead).\n\n\n```python\nplt.plot(xax, diff)\nplt.ylabel('(Sample Mean - True Mean)$^2$')\nplt.xlabel('Sample Size')\ndespine()\n```\n\n### Expected Value of the Estimator\n\nNote that our starting point was a single sample. For different samples from the same population, our estimator may behave differently. How can we understand this effect? Simple: let's generate several samples! For each one, we get a different estimate.\n\n**Notation:** $X_1, X_2, X_3, X_4, \\cdots, X_n \\sim Beta(3, 2)$\n\nFor clarity, assume every sample has the same size (e.g., 50). We can then view each sample as generated from the same random variable (the Beta). The symbol $\\sim$ means sampling data from that distribution. Thus, the samples are independent and identically distributed (all generated from a Beta(3, 2)).\n\nBelow I show the histogram of 1000 sample-mean estimates. How do we arrive at this plot?\n1. Generate samples $X_1, X_2, X_3, X_4, \\cdots, X_{1000}$\n1. Compute the mean of each $X_i$\n1. Plot\n\n### A Good Estimator\n\n\n```python\nestimativas = []\nfor _ in range(1000):\n data = np.random.beta(3, 2, size=50)\n estimativas.append(data.mean())\n```\n\nWith these samples we can explore the concepts of bias and variance of an estimator. Note that we are not talking about the data, but about the estimated values. The intuition: each data sample leads to a different estimate. How close will these estimates be to the true value? 
How do these estimates vary?\n\nIn the plot below, we can see that the estimator stays around the true value $\\mu = 0.6$.\n\n\n```python\nplt.hist(estimativas, edgecolor='k');\nplt.xlabel(r'$\\hat{\\theta}$')\nplt.ylabel('Freq. over 1000 samples')\nplt.xlim(.5, .7)\ndespine()\n```\n\nNow let's build an estimator from samples of size 10000. Observe how much closer we get to the true value. This is expected: by the law of large numbers, the more data we have, the closer we get.\n\n\n```python\nestimativas = []\nfor _ in range(1000):\n data = np.random.beta(3, 2, size=10000)\n estimativas.append(data.mean())\n```\n\n\n```python\nplt.hist(estimativas, edgecolor='k');\nplt.xlabel(r'$\\hat{\\theta}$')\nplt.ylabel('Freq. over 1000 samples')\nplt.xlim(.5, .7)\ndespine()\n```\n\nThe first plot above has low bias but high variance; note how much more spread out it is than the second. The second, in contrast, has low variance. This is an effect of increasing the number of points. Let's explore this effect a bit more now.\n\n### (Detour) A Bad Estimator\n\nIn short, both plots are centered around the true value; that is, the estimators are not biased. This seems obvious, since you already know that the sample mean is a good estimator of the population mean; that is what the law of large numbers above states.\n\nStill, nothing stops us from trying other, sillier estimators. Below we have the estimator $\\hat{\\theta_b}$: the square root of the product of all the points in the sample. 
Observe how biased this estimator is!\n\n$\\hat{\\theta_b} = \\sqrt{\\prod_{i=1}^{n} X_i}$\n\n\n```python\nestimativas = []\nfor _ in range(1000):\n data = np.random.beta(3, 2, size=300)\n estimativas.append(np.sqrt(data.prod()))\n```\n\nNote that it has both high bias and high variance.\n\n\n```python\nplt.hist(estimativas, edgecolor='k', bins=200);\nplt.xlabel(r'$\\hat{\\theta}$')\nplt.ylabel('Freq. over 1000 samples')\nplt.xlim(0, 1e-34)\ndespine()\n```\n\n## Characterizing an Estimator\n\nRecall our assumptions so far: we assume there is a true population value, namely our mean $\\mu = 0.6$. We can call this mean our target $\\theta^{*} = \\mu$. Our model estimates this parameter, and we use the variable $\\hat{\\theta}$ to denote the estimate. We would like to use the collected data to determine the value that $\\hat{\\theta}$ should take.\n\n### A Loss Function\n\nTo decide precisely which value of $\\hat{\\theta}$ is best, we define a loss function. A loss function is a mathematical function that takes an estimate $\\hat{\\theta}$ and the points in our dataset, and returns a single real number $L(\\hat{\\theta}) \\in \\mathbb{R}$. The smaller this value, the better our estimate.\n\nSome loss functions are:\n\n$$L(\\hat{\\theta}) = \\frac{1}{n}\\sum_{i=1}^{n} (\\hat{\\theta} - x_i)^2$$\n\nand \n\n$$L(\\hat{\\theta}) = \\frac{1}{n}\\sum_{i=1}^{n} |\\hat{\\theta} - x_i|$$\n\nThe first defines a mean squared error; the second, a mean absolute error. 
Let's understand both:\n\n\n```python\ndef mse(theta, data):\n return ((data - theta) ** 2).mean()\n```\n\n\n```python\ndef mae(theta, data):\n return np.abs(data - theta).mean()\n```\n\n\n```python\ndata = np.random.beta(3, 2, size=50)\n```\n\nObserve how the MSE, the mean squared error, attains its minimum at the sample mean, close to 0.6.\n\n\n```python\nxax = np.arange(0.01, 1.2, 0.01)\nyax = []\nfor theta in xax:\n yax.append(mse(theta, data))\nplt.plot(xax, yax)\ndespine()\nplt.ylabel('MSE')\nplt.xlabel(r'$\\hat{\\theta}$')\n```\n\n**Optimal value of the MSE.** We can prove that minimizing the MSE returns the mean. To do so, just differentiate the function and set the derivative to zero.\n\n\\begin{align}\nL(\\hat{\\theta}) = \\frac{1}{n}\\sum_{i=1}^{n} (\\hat{\\theta} - x_i)^2 \\\\\n\\end{align}\n\nLet's differentiate. We can use the chain rule or expand the quadratic form. I will follow the second route to be more explicit.\n\n\\begin{align}\nL(\\hat{\\theta}) = \\frac{1}{n}\\sum_{i=1}^{n} \\hat{\\theta}^2 - 2\\hat{\\theta} x_i + x_i^2 \\\\\n{\\delta L \\over \\delta\\hat{\\theta}} = \\frac{1}{n}\\sum_{i=1}^{n} 2\\hat{\\theta} - 2x_i\n\\end{align}\n\nNow we can set it to zero (the constant $\\frac{1}{n}$ does not change the root).\n\n\\begin{align}\n\\sum_{i=1}^{n} 2\\hat{\\theta} - 2x_i = 0 \\\\\n\\sum_{i=1}^{n} 2\\hat{\\theta} = \\sum_{i=1}^{n} 2x_i \\\\\n2 n \\hat{\\theta} = 2 \\sum_{i=1}^{n} x_i \\\\\n\\hat{\\theta} = n^{-1} \\sum_{i=1}^{n} x_i = \\bar{x}\n\\end{align}\n\n*Thus, minimizing the MSE always returns the mean.*\n\n**MAE, mean absolute error** \n\nThe same is not true for the MAE. First, observe the shape of the function below. 
The optimal (smallest) value is no longer the mean.\n\n\n```python\nxax = np.arange(0.01, 1.2, 0.01)\nyax = []\nfor theta in xax:\n yax.append(mae(theta, data))\nplt.plot(xax, yax)\ndespine()\nplt.ylabel('MAE')\nplt.xlabel(r'$\\hat{\\theta}$')\n```\n\nWe can show that minimizing the MAE returns the median:\n\n\\begin{align}\nL(\\hat{\\theta}) = n^{-1}\\sum_{i=1}^{n} |\\hat{\\theta} - x_i| \\\\\n\\text{Let's differentiate. Make use of:} \\\\\n\\frac{\\delta |x|}{\\delta x} = sign(x)\n\\end{align}\n\nSign indicates the sign of x. For example: Sign(-20) = -1. Sign(99) = 1. Sign(0) = 0.\n\nFirst, let's break the expression into three parts: (1) values smaller than theta; (2) values larger; and (3) values equal. Moreover, we assume that the derivative of the absolute value function exists at zero, and that it equals zero there.\n\n\\begin{aligned} L(\\hat{\\theta}) &=\\sum_{i=1}^{n}\\left|\\hat{\\theta}-x_{i}\\right| \\\\ &=\\sum_{\\hat{\\theta} < x_i}|\\hat{\\theta}-x_i| + \\sum_{\\hat{\\theta} = x_i}|\\hat{\\theta}-x_i| + \\sum_{\\hat{\\theta} > x_i}|\\hat{\\theta}-x_i| \\end{aligned}\n\nDifferentiating leaves us with the two outer parts (the middle one vanishes). Remember that the derivative of the absolute value function is just the sign (above).\n\n\\begin{aligned} {\\delta L \\over \\delta \\hat{\\theta}} = \\sum_{\\hat{\\theta} < x_i}(-1) + \\sum_{\\hat{\\theta} > x_i}1\n\\end{aligned}\n\nSetting this to zero, we arrive at a curious expression.\n\n$$\\sum_{\\hat{\\theta} < x_i}1 = \\sum_{\\hat{\\theta} > x_i}1$$\n\nHow do we interpret it? Observe that on one side we count the points above $\\hat{\\theta}$; on the other, the points below. The optimal value has equal counts on both sides. What magic number splits half of the values to one side and half to the other? The median!\n\n*Minimizing the MAE leaves us with the median!*\n\n## Risk\n\n\nThe risk of an estimator is the expected value of the model's loss on randomly chosen points. In our scenario, the population consists of a Beta distribution, and we use the random variable $X$ to represent one draw: $X \\sim Beta(3, 2)$. 
The risk then answers: what is the average error over such draws? Let's define risk using the MSE.\n\nIn our notation, the risk $R(\\hat{\\theta})$ of our estimator is:\n\n$$R(\\hat{\\theta}) = \\mathbb{E}[(X - \\hat{\\theta})^2]$$\n\nIn the expression above, we use the MSE loss, which gives the error term inside the expectation. We could define other risks, but this one is well characterized (we can understand it well). The risk is a function of $\\hat{\\theta}$, since we can change $\\hat{\\theta}$ as we please.\n\nUnlike the loss, the risk lets us reason about the model's accuracy on the population in general. If our model achieves a low risk, it will make accurate predictions on points from the population in the long run. On the other hand, if our model has a high risk, it will generally perform poorly on data from the population.\n\nNaturally, we would like to choose the value of $\\hat{\\theta}$ that makes the risk as low as possible. We use the variable $\\theta^{*}$ to represent this optimal value. \n\n\nSection still being written. Read: https://www.textbook.ds100.org/ch/12/prob_risk.html. 
Use a translator if you need to!\n", "meta": {"hexsha": "5462ff386ba9f3946e2b8adc4ec1609b65007bf7", "size": 297257, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_lessons/.ipynb_checkpoints/06-risco-checkpoint.ipynb", "max_stars_repo_name": "icd-ufmg/icd-ufmg.github.io", "max_stars_repo_head_hexsha": "5bc96e818938f8dec09dc93d786e4b291d298a02", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-02-25T18:25:49.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-20T19:22:24.000Z", "max_issues_repo_path": "_lessons/.ipynb_checkpoints/06-risco-checkpoint.ipynb", "max_issues_repo_name": "thiagomrs/icd-ufmg.github.io", "max_issues_repo_head_hexsha": "f72c0eca5a0f97d83be214aff52715c986b078a7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_lessons/.ipynb_checkpoints/06-risco-checkpoint.ipynb", "max_forks_repo_name": "thiagomrs/icd-ufmg.github.io", "max_forks_repo_head_hexsha": "f72c0eca5a0f97d83be214aff52715c986b078a7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-06-05T20:49:02.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T20:21:44.000Z", "avg_line_length": 350.9527744982, "max_line_length": 48464, "alphanum_fraction": 0.9300504277, "converted": true, "num_tokens": 4623, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.47268347662043286, "lm_q2_score": 0.24798742068237778, "lm_q1q2_score": 0.11721955616628017}} {"text": "```python\n# This mounts your Google Drive to the Colab VM.\nfrom google.colab import drive\ndrive.mount('/content/drive')\n\n# TODO: Enter the foldername in your Drive where you have saved the unzipped\n# assignment folder, e.g. 'cs231n/assignments/assignment1/'\nFOLDERNAME = None\nassert FOLDERNAME is not None, \"[!] 
Enter the foldername.\"\n\n# Now that we've mounted your Drive, this ensures that\n# the Python interpreter of the Colab VM can load\n# python files from within it.\nimport sys\nsys.path.append('/content/drive/My Drive/{}'.format(FOLDERNAME))\n\n# This downloads the CIFAR-10 dataset to your Drive\n# if it doesn't already exist.\n%cd /content/drive/My\\ Drive/$FOLDERNAME/cs231n/datasets/\n!bash get_datasets.sh\n%cd /content/drive/My\\ Drive/$FOLDERNAME\n```\n\n# Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization, proposed by [1] in 2015.\n\nTo understand the goal of batch normalization, it is important to first recognize that machine learning methods tend to perform better with input data consisting of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features. This will ensure that the first layer of the network sees data that follows a nice distribution. However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance, since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\n\nThe authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, they propose to insert into the network layers that normalize batches. 
At training time, such a layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\n\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n\n[1] [Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.](https://arxiv.org/abs/1502.03167)\n\n\n```python\n# Setup cell.\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams[\"figure.figsize\"] = (10.0, 8.0) # Set default size of plots.\nplt.rcParams[\"image.interpolation\"] = \"nearest\"\nplt.rcParams[\"image.cmap\"] = \"gray\"\n\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\"Returns relative error.\"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\ndef print_mean_std(x,axis=0):\n print(f\" means: {x.mean(axis=axis)}\")\n print(f\" stds: {x.std(axis=axis)}\\n\")\n```\n\n =========== You can safely ignore the message below if you are NOT working on ConvolutionalNetworks.ipynb ===========\n \tYou will need to compile a Cython extension for a portion of this assignment.\n \tThe instructions to do this will be given in a section of the 
notebook below.\n\n\n\n```python\n# Load the (preprocessed) CIFAR-10 data.\ndata = get_CIFAR10_data()\nfor k, v in list(data.items()):\n print(f\"{k}: {v.shape}\")\n```\n\n X_train: (49000, 3, 32, 32)\n y_train: (49000,)\n X_val: (1000, 3, 32, 32)\n y_val: (1000,)\n X_test: (1000, 3, 32, 32)\n y_test: (1000,)\n\n\n# Batch Normalization: Forward Pass\nIn the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.\n\nReferencing the paper linked to above in [1] may be helpful!\n\n\n```python\n# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization \n\n# Simulate the forward pass for a two-layer network.\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before batch normalization:')\nprint_mean_std(a,axis=0)\n\ngamma = np.ones((D3,))\nbeta = np.zeros((D3,))\n\n# Means should be close to zero and stds close to one.\nprint('After batch normalization (gamma=1, beta=0)')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\n\n# Now means should be close to beta and stds close to gamma.\nprint('After batch normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n```\n\n Before batch normalization:\n means: [ -2.3814598 -13.18038246 1.91780462]\n stds: [27.18502186 34.21455511 37.68611762]\n \n After batch normalization (gamma=1, beta=0)\n means: [5.32907052e-17 5.49560397e-17 9.71445147e-18]\n stds: [0.99999999 1. 1. ]\n \n After batch normalization (gamma= [1. 2. 3.] , beta= [11. 12. 13.] )\n means: [11. 12. 
13.]\n stds: [0.99999999 1.99999999 2.99999999]\n \n\n\n\n```python\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n\nfor t in range(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\n\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After batch normalization (test-time):')\nprint_mean_std(a_norm,axis=0)\n```\n\n After batch normalization (test-time):\n means: [-0.03927354 -0.04349152 -0.10452688]\n stds: [1.01531428 1.01238373 0.97819988]\n \n\n\n# Batch Normalization: Backward Pass\nNow implement the backward pass for batch normalization in the function `batchnorm_backward`.\n\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. 
Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\n\nOnce you have finished, run the following to numerically check your backward pass.\n\n\n```python\n# Gradient check batchnorm backward pass.\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\n\n# You should expect to see relative errors between 1e-13 and 1e-8.\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.7029261167605239e-09\n dgamma error: 7.420414216247087e-13\n dbeta error: 2.8795057655839487e-12\n\n\n# Batch Normalization: Alternative Backward Pass\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper.\n\nSurprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too! 
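For reference, the sigmoid simplification mentioned above can be sketched as follows (a minimal, self-contained illustration with a numerical check; it is not part of the assignment code):

```python
import numpy as np

def sigmoid_forward(x):
    out = 1.0 / (1.0 + np.exp(-x))
    return out, out  # cache the output; it is all the backward pass needs

def sigmoid_backward(dout, cache):
    s = cache
    # Simplified on paper: d(sigma)/dx = sigma * (1 - sigma).
    return dout * s * (1.0 - s)

np.random.seed(0)
x = np.random.randn(4, 5)
dout = np.random.randn(4, 5)
out, cache = sigmoid_forward(x)
dx = sigmoid_backward(dout, cache)

# Centered-difference numerical gradient of the elementwise sigmoid.
h = 1e-6
dx_num = (sigmoid_forward(x + h)[0] - sigmoid_forward(x - h)[0]) / (2 * h) * dout
print(np.max(np.abs(dx - dx_num)))  # close to zero
```

The batch normalization case is harder because each output depends on the whole minibatch, but the goal is the same: collapse the computation graph into one closed-form expression.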
\n\nIn the forward pass, given a set of inputs $X=\\begin{bmatrix}x_1\\\\x_2\\\\...\\\\x_N\\end{bmatrix}$, \n\nwe first calculate the mean $\\mu$ and variance $v$.\nWith $\\mu$ and $v$ calculated, we can calculate the standard deviation $\\sigma$ and normalized data $Y$.\nThe equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$).\n\n\\begin{align}\n& \\mu=\\frac{1}{N}\\sum_{k=1}^N x_k & v=\\frac{1}{N}\\sum_{k=1}^N (x_k-\\mu)^2 \\\\\n& \\sigma=\\sqrt{v+\\epsilon} & y_i=\\frac{x_i-\\mu}{\\sigma}\n\\end{align}\n\n\n\nThe meat of our problem during backpropagation is to compute $\\frac{\\partial L}{\\partial X}$, given the upstream gradient we receive, $\\frac{\\partial L}{\\partial Y}.$ To do this, recall the chain rule in calculus gives us $\\frac{\\partial L}{\\partial X} = \\frac{\\partial L}{\\partial Y} \\cdot \\frac{\\partial Y}{\\partial X}$.\n\nThe unknown/hard part is $\\frac{\\partial Y}{\\partial X}$. We can find this by first deriving step-by-step our local gradients at \n$\\frac{\\partial v}{\\partial X}$, $\\frac{\\partial \\mu}{\\partial X}$,\n$\\frac{\\partial \\sigma}{\\partial v}$, \n$\\frac{\\partial Y}{\\partial \\sigma}$, and $\\frac{\\partial Y}{\\partial \\mu}$,\nand then use the chain rule to compose these gradients (which appear in the form of vectors!) appropriately to compute $\\frac{\\partial Y}{\\partial X}$.\n\nIf it's challenging to directly reason about the gradients over $X$ and $Y$ which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\\frac{\\partial L}{\\partial x_i}$, by relying on the Chain Rule to first calculate the intermediate $\\frac{\\partial \\mu}{\\partial x_i}, \\frac{\\partial v}{\\partial x_i}, \\frac{\\partial \\sigma}{\\partial x_i},$ then assemble these pieces to calculate $\\frac{\\partial y_i}{\\partial x_i}$. 
\n\nYou should make sure each of the intermediary gradient derivations is as simplified as possible, for ease of implementation. \n\nAfter doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\n\n\n```python\nnp.random.seed(231)\nN, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint('dx difference: ', rel_error(dx1, dx2))\nprint('dgamma difference: ', rel_error(dgamma1, dgamma2))\nprint('dbeta difference: ', rel_error(dbeta1, dbeta2))\nprint('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))\n```\n\n dx difference: 7.202578502291986e-13\n dgamma difference: 0.0\n dbeta difference: 0.0\n speedup: 2.00x\n\n\n# Fully Connected Networks with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization.\n\nConcretely, when the `normalization` flag is set to `\"batchnorm\"` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. 
Once you are done, run the following to gradient-check your implementation.\n\n**Hint:** You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`.\n\n\n```python\nnp.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\n# You should expect losses between 1e-4~1e-10 for W, \n# losses between 1e-08~1e-10 for b,\n# and losses between 1e-08~1e-09 for beta and gammas.\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n normalization='batchnorm')\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n if reg == 0: print()\n```\n\n Running check with reg = 0\n Initial loss: 2.2611955101340957\n W1 relative error: 1.10e-04\n W2 relative error: 3.35e-06\n W3 relative error: 3.92e-10\n b1 relative error: 1.39e-09\n b2 relative error: 5.55e-09\n b3 relative error: 8.26e-11\n beta1 relative error: 7.85e-09\n beta2 relative error: 1.89e-09\n gamma1 relative error: 7.47e-09\n gamma2 relative error: 3.35e-09\n \n Running check with reg = 3.14\n Initial loss: 5.884829928987633\n W1 relative error: 1.98e-06\n W2 relative error: 2.29e-06\n W3 relative error: 6.29e-10\n b1 relative error: 2.78e-09\n b2 relative error: 2.22e-08\n b3 relative error: 2.10e-10\n beta1 relative error: 6.32e-09\n beta2 relative error: 3.48e-09\n gamma1 relative error: 5.94e-09\n gamma2 relative error: 4.14e-09\n\n\n# Batch Normalization for Deep Networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.\n\n\n```python\nnp.random.seed(231)\n\n# 
Try training a very deep net with batchnorm.\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\nprint('Solver with batch norm:')\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True,print_every=20)\nbn_solver.train()\n\nprint('\\nSolver without batch norm:')\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=20)\nsolver.train()\n```\n\n Solver with batch norm:\n (Iteration 1 / 200) loss: 2.340974\n (Epoch 0 / 10) train acc: 0.107000; val_acc: 0.115000\n (Epoch 1 / 10) train acc: 0.314000; val_acc: 0.267000\n (Iteration 21 / 200) loss: 2.039365\n (Epoch 2 / 10) train acc: 0.384000; val_acc: 0.280000\n (Iteration 41 / 200) loss: 2.041103\n (Epoch 3 / 10) train acc: 0.493000; val_acc: 0.308000\n (Iteration 61 / 200) loss: 1.753903\n (Epoch 4 / 10) train acc: 0.530000; val_acc: 0.307000\n (Iteration 81 / 200) loss: 1.246584\n (Epoch 5 / 10) train acc: 0.573000; val_acc: 0.313000\n (Iteration 101 / 200) loss: 1.320590\n (Epoch 6 / 10) train acc: 0.634000; val_acc: 0.338000\n (Iteration 121 / 200) loss: 1.157329\n (Epoch 7 / 10) train acc: 0.684000; val_acc: 0.326000\n (Iteration 141 / 200) loss: 1.138006\n (Epoch 8 / 10) train acc: 0.771000; val_acc: 0.323000\n (Iteration 161 / 200) loss: 0.664357\n (Epoch 9 / 10) train acc: 0.805000; val_acc: 0.344000\n (Iteration 181 / 200) loss: 0.819612\n (Epoch 10 / 10) train acc: 0.790000; val_acc: 0.322000\n \n Solver without batch norm:\n 
(Iteration 1 / 200) loss: 2.302332\n (Epoch 0 / 10) train acc: 0.129000; val_acc: 0.131000\n (Epoch 1 / 10) train acc: 0.283000; val_acc: 0.250000\n (Iteration 21 / 200) loss: 2.041970\n (Epoch 2 / 10) train acc: 0.316000; val_acc: 0.277000\n (Iteration 41 / 200) loss: 1.900473\n (Epoch 3 / 10) train acc: 0.373000; val_acc: 0.282000\n (Iteration 61 / 200) loss: 1.713156\n (Epoch 4 / 10) train acc: 0.390000; val_acc: 0.310000\n (Iteration 81 / 200) loss: 1.662209\n (Epoch 5 / 10) train acc: 0.434000; val_acc: 0.300000\n (Iteration 101 / 200) loss: 1.696060\n (Epoch 6 / 10) train acc: 0.535000; val_acc: 0.345000\n (Iteration 121 / 200) loss: 1.557987\n (Epoch 7 / 10) train acc: 0.530000; val_acc: 0.304000\n (Iteration 141 / 200) loss: 1.432189\n (Epoch 8 / 10) train acc: 0.628000; val_acc: 0.340000\n (Iteration 161 / 200) loss: 1.033917\n (Epoch 9 / 10) train acc: 0.661000; val_acc: 0.342000\n (Iteration 181 / 200) loss: 0.885684\n (Epoch 10 / 10) train acc: 0.701000; val_acc: 0.331000\n\n\nRun the following to visualize the results from two networks trained above. 
You should find that using batch normalization helps the network to converge much faster.\n\n\n```python\ndef plot_training_history(title, label, baseline, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None):\n \"\"\"utility function for plotting training history\"\"\"\n plt.title(title)\n plt.xlabel(label)\n bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers]\n bl_plot = plot_fn(baseline)\n num_bn = len(bn_plots)\n for i in range(num_bn):\n label='with_norm'\n if labels is not None:\n label += str(labels[i])\n plt.plot(bn_plots[i], bn_marker, label=label)\n label='baseline'\n if labels is not None:\n label += str(labels[0])\n plt.plot(bl_plot, bl_marker, label=label)\n plt.legend(loc='lower center', ncol=num_bn+1) \n\n \nplt.subplot(3, 1, 1)\nplot_training_history('Training loss','Iteration', solver, [bn_solver], \\\n lambda x: x.loss_history, bl_marker='o', bn_marker='o')\nplt.subplot(3, 1, 2)\nplot_training_history('Training accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.train_acc_history, bl_marker='-o', bn_marker='-o')\nplt.subplot(3, 1, 3)\nplot_training_history('Validation accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o')\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n# Batch Normalization and Initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\n\nThe first cell will train eight-layer networks both with and without batch normalization using different scales for weight initialization. 
The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.\n\n\n```python\nnp.random.seed(231)\n\n# Try training a very deep net with batchnorm.\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers_ws = {}\nsolvers_ws = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers_ws[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers_ws[weight_scale] = solver\n```\n\n Running weight scale 1 / 20\n Running weight scale 2 / 20\n Running weight scale 3 / 20\n Running weight scale 4 / 20\n Running weight scale 5 / 20\n Running weight scale 6 / 20\n Running weight scale 7 / 20\n Running weight scale 8 / 20\n Running weight scale 9 / 20\n Running weight scale 10 / 20\n Running weight scale 11 / 20\n Running weight scale 12 / 20\n Running weight scale 13 / 20\n Running weight scale 14 / 20\n Running weight scale 15 / 20\n Running weight scale 16 / 20\n\n\n E:\Mo\Univercity\internship\cs231n\assignments\assignment2\cs231n\layers.py:156: RuntimeWarning: overflow encountered in exp\n ex = np.exp(x)\n 
E:\\Mo\\Univercity\\internship\\cs231n\\assignments\\assignment2\\cs231n\\layers.py:157: RuntimeWarning: invalid value encountered in true_divide\n p = (ex.T / np.sum(ex, axis=1)).T\n E:\\Mo\\Univercity\\internship\\cs231n\\assignments\\assignment2\\cs231n\\layers.py:158: RuntimeWarning: divide by zero encountered in log\n loss = -np.sum(np.log(p[np.arange(num_train), y])) / num_train\n\n\n Running weight scale 17 / 20\n Running weight scale 18 / 20\n Running weight scale 19 / 20\n Running weight scale 20 / 20\n\n\n\n```python\n# Plot results of weight scale experiment.\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers_ws[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers_ws[ws].train_acc_history))\n \n best_val_accs.append(max(solvers_ws[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers_ws[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers_ws[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers_ws[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs. weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs. weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs. 
weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\nplt.gca().set_ylim(1.0, 3.5)\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n## Inline Question 1:\nDescribe the results of this experiment. How does the weight initialization scale affect models with/without batch normalization differently, and why?\n\n## Answer:\n[FILL THIS IN]\n\n\n# Batch Normalization and Batch Size\nWe will now run a small experiment to study the interaction of batch normalization and batch size.\n\nThe first cell will train 6-layer networks both with and without batch normalization using different batch sizes. The second cell will plot training accuracy and validation set accuracy over time.\n\n\n```python\ndef run_batchsize_experiments(normalization_mode):\n np.random.seed(231)\n \n # Try training a very deep net with batchnorm.\n hidden_dims = [100, 100, 100, 100, 100]\n num_train = 1000\n small_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n }\n n_epochs=10\n weight_scale = 2e-2\n batch_sizes = [5,10,50]\n lr = 10**(-3.5)\n solver_bsize = batch_sizes[0]\n\n print('No normalization: batch size = ',solver_bsize)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n solver = Solver(model, small_data,\n num_epochs=n_epochs, batch_size=solver_bsize,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n solver.train()\n \n bn_solvers = []\n for i in range(len(batch_sizes)):\n b_size=batch_sizes[i]\n print('Normalization: batch size = ',b_size)\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=normalization_mode)\n bn_solver = Solver(bn_model, small_data,\n 
num_epochs=n_epochs, batch_size=b_size,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n bn_solver.train()\n bn_solvers.append(bn_solver)\n \n return bn_solvers, solver, batch_sizes\n\nbatch_sizes = [5,10,50]\nbn_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('batchnorm')\n```\n\n No normalization: batch size = 5\n Normalization: batch size = 5\n Normalization: batch size = 10\n Normalization: batch size = 50\n\n\n\n```python\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 2:\nDescribe the results of this experiment. What does this imply about the relationship between batch normalization and batch size? Why is this relationship observed?\n\n## Answer:\n[FILL THIS IN]\n\n\n# Layer Normalization\nBatch normalization has proved to be effective in making networks easier to train, but the dependency on batch size makes it less useful in complex networks which have a cap on the input batch size due to hardware limitations. \n\nSeveral alternatives to batch normalization have been proposed to mitigate this problem; one such technique is Layer Normalization [2]. Instead of normalizing over the batch, we normalize over the features. In other words, when using Layer Normalization, each feature vector corresponding to a single datapoint is normalized based on the sum of all terms within that feature vector.\n\n[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. 
\"Layer Normalization.\" stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)\n\n## Inline Question 3:\nWhich of these data preprocessing steps is analogous to batch normalization, and which is analogous to layer normalization?\n\n1. Scaling each image in the dataset, so that the RGB channels for each row of pixels within an image sums up to 1.\n2. Scaling each image in the dataset, so that the RGB channels for all pixels within an image sums up to 1. \n3. Subtracting the mean image of the dataset from each image in the dataset.\n4. Setting all RGB values to either 0 or 1 depending on a given threshold.\n\n## Answer:\n[FILL THIS IN]\n\n\n# Layer Normalization: Implementation\n\nNow you'll implement layer normalization. This step should be relatively straightforward, as conceptually the implementation is almost identical to that of batch normalization. One significant difference though is that for layer normalization, we do not keep track of the moving moments, and the testing phase is identical to the training phase, where the mean and variance are directly calculated per datapoint.\n\nHere's what you need to do:\n\n* In `cs231n/layers.py`, implement the forward pass for layer normalization in the function `layernorm_forward`. \n\nRun the cell below to check your results.\n* In `cs231n/layers.py`, implement the backward pass for layer normalization in the function `layernorm_backward`. \n\nRun the second cell below to check your results.\n* Modify `cs231n/classifiers/fc_net.py` to add layer normalization to the `FullyConnectedNet`. When the `normalization` flag is set to `\"layernorm\"` in the constructor, you should insert a layer normalization layer before each ReLU nonlinearity. 
\n\nRun the third cell below to run the batch size experiment on layer normalization.\n\n\n```python\n# Check the training-time forward pass by checking means and variances\n# of features both before and after layer normalization.\n\n# Simulate the forward pass for a two-layer network.\nnp.random.seed(231)\nN, D1, D2, D3 =4, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before layer normalization:')\nprint_mean_std(a,axis=1)\n\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n\n# Means should be close to zero and stds close to one.\nprint('After layer normalization (gamma=1, beta=0)')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n\ngamma = np.asarray([3.0,3.0,3.0])\nbeta = np.asarray([5.0,5.0,5.0])\n\n# Now means should be close to beta and stds close to gamma.\nprint('After layer normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n```\n\n Before layer normalization:\n means: [-59.06673243 -47.60782686 -43.31137368 -26.40991744]\n stds: [10.07429373 28.39478981 35.28360729 4.01831507]\n \n After layer normalization (gamma=1, beta=0)\n means: [ 4.81096644e-16 -7.40148683e-17 2.22044605e-16 -5.92118946e-16]\n stds: [0.99999995 0.99999999 1. 0.99999969]\n \n After layer normalization (gamma= [3. 3. 3.] , beta= [5. 5. 5.] )\n means: [5. 5. 5. 
5.]\n stds: [2.99999985 2.99999998 2.99999999 2.99999907]\n \n\n\n\n```python\n# Gradient check layernorm backward pass.\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nln_param = {}\nfx = lambda x: layernorm_forward(x, gamma, beta, ln_param)[0]\nfg = lambda a: layernorm_forward(x, a, beta, ln_param)[0]\nfb = lambda b: layernorm_forward(x, gamma, b, ln_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = layernorm_forward(x, gamma, beta, ln_param)\ndx, dgamma, dbeta = layernorm_backward(dout, cache)\n\n# You should expect to see relative errors between 1e-12 and 1e-8.\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.4336160411201157e-09\n dgamma error: 4.519489546032799e-12\n dbeta error: 2.276445013433725e-12\n\n\n# Layer Normalization and Batch Size\n\nWe will now run the previous batch size experiment with layer normalization instead of batch normalization. 
Compared to the previous experiment, you should see a markedly smaller influence of batch size on the training history!\n\n\n```python\nln_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('layernorm')\n\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 4:\nWhen is layer normalization likely to not work well, and why?\n\n1. Using it in a very deep network\n2. Having a very small dimension of features\n3. Having a high regularization term\n\n\n## Answer:\n[FILL THIS IN]\n\n
"d982c7f023a1cedd961b4104b3e652ce3c43e738", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2021-11-08T10:59:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-28T18:47:17.000Z", "avg_line_length": 365.4383445946, "max_line_length": 106908, "alphanum_fraction": 0.9244890554, "converted": true, "num_tokens": 9331, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.476579651063676, "lm_q2_score": 0.2450850131323717, "lm_q1q2_score": 0.11680253003956215}} {"text": "```python\nfrom IPython.display import Image \nImage('../../../python_for_probability_statistics_and_machine_learning.jpg')\n```\n\n\n\n\n \n\n \n\n\n\n[Python for Probability, Statistics, and Machine Learning](https://www.springer.com/fr/book/9783319307152)\n\n\n```python\nfrom __future__ import division\n%pylab inline\n```\n\n Populating the interactive namespace from numpy and matplotlib\n\n\n WARNING: pylab import has clobbered these variables: ['f', 'clf', 'test']\n `%matplotlib` prevents importing * from pylab and numpy\n\n\n\n```python\nfrom pprint import pprint\nimport textwrap\nimport sys, re\nold_displayhook = sys.displayhook\ndef displ(x):\n if x is None: return\n print \"\\n\".join(textwrap.wrap(repr(x).replace(' ',''),width=80))\n\nsys.displayhook=displ\n```\n\nSo far, we have considered parametric methods that reduce inference\nor prediction to parameter-fitting. However, for these to work, we had to\nassume a specific functional form for the unknown probability distribution of\nthe data. Nonparametric methods eliminate the need to assume a specific\nfunctional form by generalizing to classes of functions.\n\n## Kernel Density Estimation\n\nWe have already made heavy use of this method with the histogram, which is a\nspecial case of kernel density estimation. 
The histogram can be considered the\ncrudest and most useful nonparametric method that estimates the underlying\nprobability distribution of the data.\n\nTo be formal and place the histogram on the same footing as our earlier\nestimations, suppose that $\mathscr{X}=[0,1]^d$ is the $d$ dimensional unit\ncube and that $h$ is the *bandwidth* or size of a *bin* or sub-cube. Then,\nthere are $N\approx(1/h)^d$ such bins, each with volume $h^d$, $\lbrace\nB_1,B_2,\ldots,B_N \rbrace$. With all this in place, we can write the histogram\nas a probability density estimator of the form,\n\n$$\n\hat{p}_h(x) = \sum_{k=1}^N \frac{\hat{\theta}_k}{h} I(x\in B_k)\n$$\n\n where\n\n$$\n\hat{\theta}_k=\frac{1}{n} \sum_{j=1}^n I(X_j\in B_k)\n$$\n\n is the fraction of data points ($X_j$) in each bin, $B_k$. We want to\nbound the bias and variance of $\hat{p}_h(x)$. Keep in mind that we are trying\nto estimate a function of $x$, but the set of all possible probability\ndistribution functions is extremely large and hard to manage. Thus, we need\nto restrict our attention to the following class of so-called Lipschitz\nfunctions,\n\n$$\n\mathscr{P}(L) = \lbrace p\colon \vert p(x)-p(y)\vert \le L \Vert x-y\Vert, \forall \: x,y \rbrace\n$$\n\n Roughly speaking, these are the density\nfunctions whose slopes (i.e., growth rates) are bounded by $L$.\nIt turns out that the bias of the histogram estimator is bounded in the\nfollowing way,\n\n$$\n\int\vert p(x)-\mathbb{E}(\hat{p}_h(x))\vert dx \le L h\sqrt{d}\n$$\n\n Similarly, the variance is bounded by the following,\n\n$$\n\mathbb{V}(\hat{p}_h(x)) \le \frac{C}{n h^d}\n$$\n\n for some constant $C$. 
Putting these two facts together means that the\nrisk is bounded by,\n\n$$\nR(p,\hat{p}) = \int \mathbb{E}(p(x) -\hat{p}_h(x))^2 dx \le L^2 h^2 d + \frac{C}{n h^d}\n$$\n\n This upper bound is minimized by choosing\n\n$$\nh = \left(\frac{C}{L^2 n d}\right)^\frac{1}{d+2}\n$$\n\n In particular, this means that,\n\n$$\n\sup_{p\in\mathscr{P}(L)} R(p,\hat{p}) \le C_0 \left(\frac{1}{n}\right)^{\frac{2}{d+2}}\n$$\n\n where the constant $C_0$ is a function of $L$. There is a theorem\n[[wasserman2004all]](#wasserman2004all) that shows this bound is tight, which basically means\nthat the histogram is a really powerful probability density estimator for\nLipschitz functions with risk that goes as\n$\left(\frac{1}{n}\right)^{\frac{2}{d+2}}$. Note that this class of functions\nis not necessarily smooth because the Lipschitz condition admits step-wise and\nother non-smooth functions. While this is a reassuring result, we typically do\nnot know which function class (Lipschitz or not) a particular probability\ndensity belongs to ahead of time. Nonetheless, the rate at which the risk changes with\nboth dimension $d$ and $n$ samples would be hard to understand without this\nresult. [Figure](#fig:nonparametric_001) shows the probability distribution\nfunction of the $\beta(2,2)$ distribution compared to computed histograms for\ndifferent values of $n$. The box plots on each of the points show how the\nvariation in each bin of the histogram reduces with increasing $n$. The risk\nfunction $R(p,\hat{p})$ above is based upon integrating the squared difference\nbetween the histogram (as a piecewise function of $x$) and the probability\ndistribution function. 
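The histogram estimator $\hat{p}_h$ defined above takes only a few lines to compute. The following is a self-contained sketch (our own illustration, written for Python 3; it is not the book's figure-generation code) for $d=1$ on the unit interval:

```python
import numpy as np

# Illustrative sketch (not from the book): the histogram density
# estimator p_hat for d = 1 on the unit interval [0, 1].
rng = np.random.default_rng(0)
X = rng.beta(2, 2, size=500)      # n samples; beta(2,2) lives on [0,1]

h = 0.1                           # bandwidth (bin width)
N = int(1 / h)                    # number of bins B_1, ..., B_N
edges = np.linspace(0, 1, N + 1)

# theta_k is the fraction of samples landing in bin B_k;
# the density estimate on B_k is theta_k / h (i.e., h**d with d = 1)
counts, _ = np.histogram(X, bins=edges)
theta = counts / len(X)
p_hat = theta / h

# p_hat is piecewise constant, so its integral is the sum of bin areas
print(np.sum(p_hat * h))          # approximately 1
```

Because the bin fractions $\hat{\theta}_k$ sum to one, dividing by the bin volume $h^d$ makes $\hat{p}_h$ a bona fide density; shrinking $h$ as $n$ grows according to the $h\propto n^{-1/(d+2)}$ rule above trades per-bin bias against per-bin variance.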
\n\n**Programming Tip.**\n\nThe corresponding IPython notebook has the complete source code that generates\n[Figure](#fig:nonparametric_001); however, the following snippet\nis the main element of the code.\n\n\n```python\ndef generate_samples(n,ntrials=500):\n phat = np.zeros((nbins,ntrials))\n for k in range(ntrials):\n d = rv.rvs(n) \n phat[:,k],_=histogram(d,bins,density=True) \n return phat\n```\n\nThe code uses the `histogram` function from Numpy.\nTo be consistent with the risk function $R(p,\\hat{p})$, we have to make sure\nthe `bins` keyword argument is formatted correctly using a sequence of\nbin-edges instead of just a single integer. Also, the `density=True` keyword\nargument normalizes the histogram appropriately so that the comparison between\nit and the probability distribution function of the simulated beta distribution\nis correctly scaled.\n\n\n\n\n\n
\n\n

**Figure:** The box plots on each of the points show how the variation in each bin of the histogram reduces with increasing $n$.

\n\n\n\n\n\n## Kernel Smoothing\n\nWe can extend our methods to other function classes using kernel functions.\nA one-dimensional smoothing kernel is a smooth function $K$ with \nthe following properties,\n\n$$\n\begin{align*}\n\int K(x) dx &= 1 \\\n\int x K(x) dx &= 0 \\\n0< \int x^2 K(x) dx &< \infty \\\n\end{align*}\n$$\n\n For example, $K(x)=I(x)/2$ is the boxcar kernel, where $I(x)=1$\nwhen $\vert x\vert\le 1$ and zero otherwise. The kernel density estimator is\nvery similar to the histogram, except now we put a kernel function on every\npoint as in the following,\n\n$$\n\hat{p}(x)=\frac{1}{n}\sum_{i=1}^n \frac{1}{h^d} K\left(\frac{\Vert x-X_i\Vert}{h}\right)\n$$\n\n where $X\in \mathbb{R}^d$. [Figure](#fig:nonparametric_002) shows an\nexample of a kernel density estimate using a Gaussian kernel function,\n$K(x)=e^{-x^2/2}/\sqrt{2\pi}$. There are five data points shown by the\nvertical lines in the upper panel. The dotted lines show the individual $K(x)$\nfunction at each of the data points. The lower panel shows the overall kernel\ndensity estimate, which is the scaled sum of the upper panel.\n\nThere is an important technical result in [[wasserman2004all]](#wasserman2004all) that\nstates that kernel density estimators are minimax in the sense we\ndiscussed in the section on maximum likelihood. In\nbroad strokes, this means that the analogous risk for the kernel\ndensity estimator is approximately bounded by the following factor,\n\n$$\nR(p,\hat{p}) \lesssim n^{-\frac{2 m}{2 m+d}}\n$$\n\n where $m$ is a factor related to bounding\nthe derivatives of the probability density function. For example, if the second\nderivative of the density function is bounded, then $m=2$. This means that\nthe convergence rate for this estimator decreases with increasing dimension\n$d$.\n\n\n\n
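Evaluating $\hat{p}(x)$ directly is straightforward. Here is a small self-contained sketch (our own example, not the book's code) for $d=1$ with the Gaussian kernel and a five-point dataset like the one described above:

```python
import numpy as np

# Illustrative sketch (not from the book): a kernel density estimate
# with the Gaussian kernel K(u) = exp(-u**2/2)/sqrt(2*pi), d = 1.
def K(u):
    return np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)

def p_hat(x, data, h):
    # (1/n) * sum_i K(|x - X_i| / h) / h**d, with d = 1
    return np.mean(K(np.abs(x - data) / h)) / h

data = np.array([0.2, 0.35, 0.4, 0.6, 0.9])   # five data points
xs = np.linspace(-1.0, 2.0, 601)
dens = np.array([p_hat(x, data, 0.1) for x in xs])

# A Riemann sum confirms the estimate integrates to (roughly) one
print(np.sum(dens) * (xs[1] - xs[0]))
```

Each data point contributes one scaled kernel bump and the estimate is their average, exactly the construction shown in the two-panel figure described above.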
\n\n

**Figure:** The upper panel shows the individual kernel functions placed at each of the data points. The lower panel shows the composite kernel density estimate which is the sum of the individual functions in the upper panel.

\n\n\n\n\n\n### Cross-Validation\n\nAs a practical matter, the tricky part of the kernel density estimator (which\nincludes the histogram as a special case) is that we need to somehow compute\nthe bandwidth $h$ term using data. There are several rule-of-thumb methods\nfor some common kernels, including Silverman's rule and Scott's rule for\nGaussian kernels. For example, Scott's factor is to simply compute $h=n^{\n-1/(d+4) }$ and Silverman's is $h=(n (d+2)/4)^{ (-1/(d+4)) }$. Rules of\nthis kind are derived by assuming the underlying probability density\nfunction is of a certain family (e.g., Gaussian), and then deriving the\nbest $h$ for a certain type of kernel density estimator, usually equipped\nwith extra functional properties (say, continuous derivatives of a\ncertain order). In practice, these rules seem to work pretty well,\nespecially for uni-modal probability density functions. Avoiding these\nkinds of assumptions means computing the bandwidth from data directly and that is where\ncross validation comes in.\n\nCross-validation is a method to estimate the bandwidth from the data itself.\nThe idea is to write out the following Integrated Squared Error (ISE),\n\n$$\n\begin{align*}\n\texttt{ISE}(\hat{p}_h,p)&=\int (p(x)-\hat{p}_h(x))^2 dx\\\n &= \int \hat{p}_h(x)^2 dx - 2\int p(x) \hat{p}_h dx + \int p(x)^2 dx \n\end{align*}\n$$\n\n The problem with this expression is the middle term [^last_term],\n\n[^last_term]: The last term is of no interest because we are\nonly interested in relative changes in the ISE.\n\n$$\n\int p(x)\hat{p}_h dx\n$$\n\n where $p(x)$ is what we are trying to estimate with $\hat{p}_h$. The\nform of the last expression looks like an expectation of $\hat{p}_h$ over the\ndensity of $p(x)$, $\mathbb{E}(\hat{p}_h)$. 
The approach is to\napproximate this with the mean,\n\n$$\n\\mathbb{E}(\\hat{p}_h) \\approx \\frac{1}{n}\\sum_{i=1}^n \\hat{p}_h(X_i)\n$$\n\n The problem with this approach is that $\\hat{p}_h$ is computed using\nthe same data that the approximation utilizes. The way to get around this is\nto split the data into two equally sized chunks $D_1$, $D_2$; and then compute\n$\\hat{p}_h$ for a sequence of different $h$ values over the $D_1$ set. Then,\nwhen we apply the above approximation for the data ($Z_i$) in the $D_2$ set,\n\n$$\n\\mathbb{E}(\\hat{p}_h) \\approx \\frac{1}{\\vert D_2\\vert}\\sum_{Z_i\\in D_2} \\hat{p}_h(Z_i)\n$$\n\n Plugging this approximation back into the integrated squared error\nprovides the objective function,\n\n$$\n\\texttt{ISE}\\approx \\int \\hat{p}_h(x)^2 dx-\\frac{2}{\\vert D_2\\vert}\\sum_{Z_i\\in D_2} \\hat{p}_h(Z_i)\n$$\n\n Some code will make these steps concrete. We will need some tools from\nScikit-learn.\n\n\n```python\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.neighbors.kde import KernelDensity\n```\n\n The `train_test_split` function makes it easy to split and\nkeep track of the $D_1$ and $D_2$ sets we need for cross validation. Scikit-learn\nalready has a powerful and flexible implementation of kernel density estimators.\nTo compute the objective function, we need some\nbasic numerical integration tools from Scipy. For this example, we\nwill generate samples from a $\\beta(2,2)$ distribution, which is\nimplemented in the `stats` submodule in Scipy.\n\n\n```python\nimport numpy as np\nnp.random.seed(123456)\n```\n\n\n```python\nfrom scipy.integrate import quad\nfrom scipy import stats\nrv= stats.beta(2,2)\nn=100 # number of samples to generate\nd = rv.rvs(n)[:,None] # generate samples as column-vector\n```\n\n**Programming Tip.**\n\nThe use of the `[:,None]` in the last line formats the Numpy array returned by\nthe `rvs` function into a Numpy vector with a column dimension of one. 
This is\nrequired by the `KernelDensity` constructor because the column dimension is\nused for different features (in general) for Scikit-learn. Thus, even though we\nonly have one feature, we still need to comply with the structured input that\nScikit-learn relies upon. There are many ways to inject the additional\ndimension other than using `None`. For example, the more cryptic `np.c_`, or\nthe less cryptic `[:,np.newaxis]` can do the same, as can the `np.reshape`\nfunction.\n\n\n\n The next step is to split the data into two halves and loop over\neach of the $h_i$ bandwidths to create a separate kernel density estimator\nbased on the $D_1$ data,\n\n\n```python\ntrain,test,_,_=train_test_split(d,d,test_size=0.5)\nkdes=[KernelDensity(bandwidth=i).fit(train) \n for i in [.05,0.1,0.2,0.3]]\n```\n\n**Programming Tip.**\n\nNote that in an interactive session the single underscore symbol refers to the\nlast evaluated result; here it is used as a throwaway variable name. The above\ncode unpacks the tuple returned by `train_test_split` into\nfour elements. Because we are only interested in the first two, we assign the\nlast two to the underscore symbol. 
This is a stylistic usage to make it clear\nto the reader that the last two elements of the tuple are unused.\nAlternatively, we could assign the last two elements to a pair of dummy\nvariables that we do not use later, but then the reader skimming the code may\nthink that those dummy variables are relevant.\n\n\n\n The last step is to loop over the so-created kernel density estimators\nand compute the objective function.\n\n\n```python\nimport numpy as np\nfor i in kdes:\n f = lambda x: np.exp(i.score_samples(x))\n f2 = lambda x: f(x)**2\n print 'h=%3.2f\\t %3.4f'%(i.bandwidth,quad(f2,0,1)[0]\n -2*np.mean(f(test)))\n```\n\n h=0.05\t -1.1323\n h=0.10\t -1.1336\n h=0.20\t -1.1330\n h=0.30\t -1.0810\n\n\n**Programming Tip.**\n\nThe lambda functions defined in the last block are necessary because\nScikit-learn implements the return value of the kernel density estimator as a\nlogarithm via the `score_samples` function. The numerical quadrature function\n`quad` from Scipy computes the $\\int \\hat{p}_h(x)^2 dx$ part of the objective\nfunction.\n\n\n```python\n%matplotlib inline\n\nfrom __future__ import division\nfrom matplotlib.pylab import subplots\nfig,ax=subplots()\nxi = np.linspace(0,1,100)[:,None]\nfor i in kdes:\n f=lambda x: np.exp(i.score_samples(x))\n f2 = lambda x: f(x)**2\n _=ax.plot(xi,f(xi),label='$h$='+str(i.bandwidth))\n\n_=ax.set_xlabel('$x$',fontsize=28)\n_=ax.set_ylabel('$y$',fontsize=28)\n_=ax.plot(xi,rv.pdf(xi),'k:',lw=3,label='true')\n_=ax.legend(loc=0)\nax2 = ax.twinx()\n_=ax2.hist(d,20,alpha=.3,color='gray')\n_=ax2.axis(ymax=50)\n_=ax2.set_ylabel('count',fontsize=28)\nfig.tight_layout()\n#fig.savefig('fig-statistics/nonparametric_003.png')\n```\n\n\n\n
\n\n

Each line above is a different kernel density estimator for the given bandwidth as an approximation to the true density function. A plain histogram is imprinted on the bottom for reference.

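The split-and-score procedure above does not depend on Scikit-learn. The following minimal sketch repeats the cross-validated objective $\int \hat{p}_h(x)^2 dx - \frac{2}{\vert D_2\vert}\sum_{Z_i\in D_2}\hat{p}_h(Z_i)$ with a hand-rolled Gaussian kernel density estimate in plain Numpy. The `kde` helper, the grid limits, and the seed are illustrative choices, not library functions.

```python
import numpy as np

np.random.seed(1)

def kde(train, h):
    # Gaussian-kernel density estimate built from the training half D1
    def p_hat(x):
        z = (np.atleast_1d(x)[:, None] - train[None, :]) / h
        return np.exp(-z**2 / 2).sum(axis=1) / (len(train) * h * np.sqrt(2 * np.pi))
    return p_hat

d = np.random.beta(2, 2, 100)        # samples from the same beta(2,2) target
D1, D2 = d[:50], d[50:]              # the two equally sized chunks
xg = np.linspace(-0.5, 1.5, 2000)    # grid for the integral term
dx = xg[1] - xg[0]

scores = {}
for h in [0.05, 0.1, 0.2, 0.3]:
    p_hat = kde(D1, h)
    # Riemann sum for the \int p_h(x)^2 dx term minus the D2 average term
    scores[h] = np.sum(p_hat(xg)**2) * dx - 2 * np.mean(p_hat(D2))
    print('h=%3.2f objective=%3.4f' % (h, scores[h]))
```

As with the Scikit-learn version above, the bandwidth with the smallest objective value is preferred.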
Scikit-learn has many more advanced tools to automate this kind of
hyper-parameter (i.e., kernel density bandwidth) search. To utilize these
advanced tools, we need to format the current problem slightly differently by
defining the following wrapper class.

```python
class KernelDensityWrapper(KernelDensity):
    def predict(self,x):
        return np.exp(self.score_samples(x))
    def score(self,test):
        f = lambda x: self.predict(x)
        f2 = lambda x: f(x)**2
        # negated so that a larger score is better for GridSearchCV
        return -(quad(f2,0,1)[0]-2*np.mean(f(test)))
```

 This is tantamount to reorganizing the previous code
into the functions that Scikit-learn requires. Next, we create the
dictionary of parameters we want to search over (`params`) below
and then start the grid search with the `fit` function,

```python
from sklearn.grid_search import GridSearchCV
params = {'bandwidth':np.linspace(0.01,0.5,10)}
clf = GridSearchCV(KernelDensityWrapper(), param_grid=params,cv=2)
clf.fit(d)
print(clf.best_params_)
```

    {'bandwidth': 0.17333333333333334}

 The grid search iterates over all the elements in the `params`
dictionary and reports the best bandwidth over that list of parameter values.
The `cv` keyword argument above specifies that we want to split the data
into two equally-sized sets for training and testing.
We can
also examine the values of the objective function for each point
on the grid as follows,

```python
from pprint import pprint
pprint(clf.grid_scores_)
```

    [mean: 0.60758, std: 0.07695, params: {'bandwidth': 0.01},
     mean: 1.06325, std: 0.03866, params: {'bandwidth': 0.064444444444444443},
     mean: 1.11859, std: 0.02093, params: {'bandwidth': 0.11888888888888888},
     mean: 1.13187, std: 0.01397, params: {'bandwidth': 0.17333333333333334},
     mean: 1.12007, std: 0.01043, params: {'bandwidth': 0.22777777777777777},
     mean: 1.09186, std: 0.00794, params: {'bandwidth': 0.28222222222222221},
     mean: 1.05391, std: 0.00601, params: {'bandwidth': 0.33666666666666667},
     mean: 1.01126, std: 0.00453, params: {'bandwidth': 0.39111111111111108},
     mean: 0.96717, std: 0.00341, params: {'bandwidth': 0.44555555555555554},
     mean: 0.92355, std: 0.00257, params: {'bandwidth': 0.5}]

**Programming Tip.**

The `pprint` function makes the standard output prettier. The only reason for
using it here is to get it to look good on the printed page. Otherwise, the
IPython notebook handles the visual rendering of output embedded in the
notebook via its internal `display` framework.

 Keep in mind that the grid search examines multiple folds for cross
validation to compute the above means and standard deviations. Note that there
is also a `RandomizedSearchCV` in case you would rather specify a distribution
of parameters instead of a list. This is particularly useful for searching very
large parameter spaces where an exhaustive grid search would be too
computationally expensive.
Although kernel density estimators are easy to\nunderstand and have many attractive analytical properties, they become\npractically prohibitive for large, high-dimensional data sets.\n\n## Nonparametric Regression Estimators\n\nBeyond estimating the underlying probability density, we can use nonparametric\nmethods to compute estimators of the underlying function that is generating the\ndata. Nonparametric regression estimators of the following form are known as\nlinear smoothers,\n\n$$\n\\hat{y}(x) = \\sum_{i=1}^n \\ell_i(x) y_i\n$$\n\n To understand the performance of these smoothers,\nwe can define the risk as the following,\n\n$$\nR(\\hat{y},y) = \\mathbb{E}\\left( \\frac{1}{n} \\sum_{i=1}^n (\\hat{y}(x_i)-y(x_i))^2 \\right)\n$$\n\n and find the best $\\hat{y}$ that minimizes this. The problem with\nthis metric is that we do not know $y(x)$, which is why we are trying to\napproximate it with $\\hat{y}(x)$. We could construct an estimation by using the\ndata at hand as in the following,\n\n$$\n\\hat{R}(\\hat{y},y) =\\frac{1}{n} \\sum_{i=1}^n (\\hat{y}(x_i)-Y_i)^2\n$$\n\n where we have substituted the data $Y_i$ for the unknown function\nvalue, $y(x_i)$. The problem with this approach is that we are using the data\nto estimate the function and then using the same data to evaluate the risk of\ndoing so. This kind of double-dipping leads to overly optimistic estimators.\nOne way out of this conundrum is to use leave-one-out cross validation, wherein\nthe $\\hat{y}$ function is estimated using all but one of the data pairs,\n$(X_i,Y_i)$. Then, this missing data element is used to estimate the above\nrisk. Notationally, this is written as the following,\n\n$$\n\\hat{R}(\\hat{y},y) =\\frac{1}{n} \\sum_{i=1}^n (\\hat{y}_{(-i)}(x_i)-Y_i)^2\n$$\n\n where $\\hat{y}_{(-i)}$ denotes computing the estimator without using\nthe $i^{th}$ data pair. 
Unfortunately, for anything other than relatively small\ndata sets, it quickly becomes computationally prohibitive to use leave-one-out\ncross validation in practice. We'll get back to this issue shortly, but let's\nconsider a concrete example of such a nonparametric smoother.\n\n## Nearest Neighbors Regression\n
The simplest possible nonparametric regression method is the $k$-nearest
neighbors regression. This is easier to explain in words than to write out in
math. Given an input $x$, find the $k$ training points nearest to $x$ and
return the mean of the corresponding data values. As a
univariate example, let's consider the following *chirp* waveform,

$$
y(x)=\cos\left(2\pi\left(f_o x + \frac{BW x^2}{2\tau}\right)\right)
$$

 This waveform is important in high-resolution radar applications.
The $f_o$ is the start frequency and $BW/\tau$ is the frequency slope of the
signal. For our example, the fact that it is nonuniform over its domain is
important. We can easily create some data by sampling the
chirp as in the following (with $f_o=1$, $BW=5$, and $\tau=1$),

```python
import numpy as np
from numpy import cos, pi
xi = np.linspace(0,1,100)[:,None]
xin = np.linspace(0,1,12)[:,None]
f0 = 1 # initial frequency
BW = 5
y = cos(2*pi*(f0*xin+(BW/2.0)*xin**2))
```

 We can use this data to construct a simple nearest neighbor
estimator using Scikit-learn,

```python
from sklearn.neighbors import KNeighborsRegressor
knr=KNeighborsRegressor(2)
knr.fit(xin,y)
```

    KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski',
              metric_params=None, n_jobs=1, n_neighbors=2, p=2,
              weights='uniform')

**Programming Tip.**

Scikit-learn has a fantastically consistent interface. The `fit` function above
fits the model parameters to the data. The corresponding `predict` function
returns the output of the model given an arbitrary input. We will spend a lot
more time on Scikit-learn in the machine learning chapter.
The `[:,None]` part\nat the end is just injecting a column dimension into the array in order to\nsatisfy the dimensional requirements of Scikit-learn.\n\n\n```python\nfrom matplotlib.pylab import subplots\nfig,ax=subplots()\nyi = cos(2*pi*(f0*xi+(BW/2.0)*xi**2))\n_=ax.plot(xi,yi,'k--',lw=2,label=r'$y(x)$')\n_=ax.plot(xin,y,'ko',lw=2,ms=11,color='gray',alpha=.8,label='$y(x_i)$')\n_=ax.fill_between(xi.flat,yi.flat,knr.predict(xi).flat,color='gray',alpha=.3)\n_=ax.plot(xi,knr.predict(xi),'k-',lw=2,label='$\\hat{y}(x)$')\n_=ax.set_aspect(1/4.)\n_=ax.axis(ymax=1.05,ymin=-1.05)\n_=ax.set_xlabel(r'$x$',fontsize=24)\n_=ax.legend(loc=0)\nfig.set_tight_layout(True)\n#fig.savefig('fig-statistics/nonparametric_004.png')\n```\n\n\n\n
\n\n

The dotted line shows the chirp signal and the solid line shows the nearest neighbor estimate. The gray circles are the sample points that we used to fit the nearest neighbor estimator. The shaded area shows the gaps between the estimator and the unsampled chirp.

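For two nearest neighbors, each prediction is just the average of the two closest training responses, which is easy to check by hand with plain Numpy. The `knn_predict` helper below is ours, not Scikit-learn's, and is only a sketch for the one-dimensional case.

```python
import numpy as np

def knn_predict(x0, X, Y, k=2):
    # average the Y-values of the k training inputs closest to the query x0
    idx = np.argsort(np.abs(X - x0))[:k]
    return Y[idx].mean()

X = np.linspace(0, 1, 12)
Y = np.cos(2 * np.pi * (X + 2.5 * X**2))  # chirp samples with f0=1, BW=5

x0 = 0.5 * (X[3] + X[4])       # a query point bracketed by X[3] and X[4]
print(knn_predict(x0, X, Y))   # the average of Y[3] and Y[4]
```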
\n\n\n\n\n\n [Figure](#fig:nonparametric_004) shows the sampled signal (gray\ncircles) against the values generated by the nearest neighbor estimator (solid\nline). The dotted line is the full unsampled chirp signal, which increases in\nfrequency with $x$. This is important for our example because it adds a\nnon-stationary aspect to this problem in that the function gets progressively\nwigglier with increasing $x$. The area between the estimated curve and the\nsignal is shaded in gray. Because the nearest neighbor estimator uses only two\nnearest neighbors, for each new $x$, it finds the two adjacent $X_i$ that\nbracket the $x$ in the training data and then averages the corresponding $Y_i$\nvalues to compute the estimated value. That is, if you take every adjacent pair\nof sequential gray circles in the Figure, you find that the horizontal solid line \nsplits the pair on the vertical axis. We can adjust the number of\nnearest neighbors by changing the constructor,\n\n\n```python\nknr=KNeighborsRegressor(3) \nknr.fit(xin,y)\n```\n\n\n\n\n KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski',\n metric_params=None, n_jobs=1, n_neighbors=3, p=2,\n weights='uniform')\n\n\n\n\n```python\nfig,ax=subplots()\n_=ax.plot(xi,yi,'k--',lw=2,label=r'$y(x)$')\n_=ax.plot(xin,y,'ko',lw=2,ms=11,color='gray',alpha=.8,label='$y(x_i)$')\n_=ax.fill_between(xi.flat,yi.flat,knr.predict(xi).flat,color='gray',alpha=.3)\n_=ax.plot(xi,knr.predict(xi),'k-',lw=2,label='$\\hat{y}(x)$')\n_=ax.set_aspect(1/4.)\n_=ax.axis(ymax=1.05,ymin=-1.05)\n_=ax.set_xlabel(r'$x$',fontsize=24)\n_=ax.legend(loc=0)\nfig.set_tight_layout(True)\n#fig.savefig('fig-statistics/nonparametric_005.png')\n```\n\n which produces the following corresponding [Figure](#fig:nonparametric_005).\n\n\n\n
\n\n

This is the same as [Figure](#fig:nonparametric_004) except that here there are three nearest neighbors used to build the estimator.

For this example, [Figure](#fig:nonparametric_005) shows that with
more nearest neighbors the fit performs poorly, especially towards the end of
the signal, where the oscillation is fastest; averaging over a wider
neighborhood smooths away that rapid variation.

Scikit-learn provides many tools for cross validation. The following code
sets up the tools for leave-one-out cross validation,

```python
from sklearn.cross_validation import LeaveOneOut
loo=LeaveOneOut(len(xin))
```

 The `LeaveOneOut` object is an iterable that produces a set of
disjoint indices of the data --- one for fitting the model (training set)
and one for evaluating the model (testing set), as shown
in the next short sample,

```python
pprint(list(LeaveOneOut(3)))
```

    [(array([1, 2]), array([0])),
     (array([0, 2]), array([1])),
     (array([0, 1]), array([2]))]

 The next block loops over the disjoint sets of training and testing
indices provided by the `loo` variable to evaluate
the estimated risk, which is accumulated in the `out` list.

```python
out=[]
for train_index, test_index in loo:
    _=knr.fit(xin[train_index],y[train_index])
    out.append((knr.predict(xin[test_index])-y[test_index])**2)

print('Leave-one-out Estimated Risk: %s'%np.mean(out))
```

    Leave-one-out Estimated Risk: 1.03517136627

 The last line in the code above reports leave-one-out's estimated
risk.
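Because the estimator is so simple, the entire leave-one-out loop can also be written in a few lines of plain Numpy, which makes it easy to compare the estimated risk across several $k$ values. The `loo_risk` helper and the grid of $k$ values here are illustrative, not part of Scikit-learn.

```python
import numpy as np

X = np.linspace(0, 1, 12)
Y = np.cos(2 * np.pi * (X + 2.5 * X**2))  # the same chirp samples

def loo_risk(X, Y, k):
    # leave-one-out estimated risk for a k-nearest-neighbor regressor
    err = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i            # drop the i-th data pair
        idx = np.argsort(np.abs(X[mask] - X[i]))[:k]
        err.append((Y[mask][idx].mean() - Y[i])**2)
    return np.mean(err)

for k in [1, 2, 3]:
    print('k=%d LOO estimated risk: %.4f' % (k, loo_risk(X, Y, k)))
```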
Linear smoothers of this type can be rewritten using the following matrix,

$$
\mathscr{S} = \left[ \ell_i(x_j) \right]_{i,j}
$$

 so that

$$
\hat{\mathbf{y}} = \mathscr{S} \mathbf{y}
$$

 where $\mathbf{y}=\left[Y_1,Y_2,\ldots,Y_n\right]\in \mathbb{R}^n$
and $\hat{ \mathbf{y} }=\left[\hat{y}(x_1),\hat{y}(x_2),\ldots,\hat{y}(x_n)\right]\in \mathbb{R}^n$.
This leads to a quick way to approximate leave-one-out cross validation as the
following,

$$
\hat{R}=\frac{1}{n}\sum_{i=1}^n\left(\frac{Y_i-\hat{y}(x_i)}{1-\mathscr{S}_{i,i}}\right)^2
$$

 However, this does not reproduce the approach in the code above
because it assumes that each $\hat{y}_{(-i)}(x_i)$ is consuming one fewer
nearest neighbor than $\hat{y}(x)$.

We can get this $\mathscr{S}$ matrix from the `knr` object as in the following,

```python
_= knr.fit(xin,y) # fit on all data
S=(knr.kneighbors_graph(xin)).todense()/float(knr.n_neighbors)
```

 The `todense` part reformats the sparse matrix that is
returned into a regular Numpy `matrix`. The following shows a subsection
of this $\mathscr{S}$ matrix,

```python
print(S[:5,:5])
```

    [[ 0.33333333  0.33333333  0.33333333  0.          0.        ]
     [ 0.33333333  0.33333333  0.33333333  0.          0.        ]
     [ 0.          0.33333333  0.33333333  0.33333333  0.        ]
     [ 0.          0.          0.33333333  0.33333333  0.33333333]
     [ 0.          0.          0.          0.33333333  0.33333333]]

 The sub-blocks show the windows of the `y` data that are being
processed by the nearest neighbor estimator.
For example,\n\n\n```python\nprint np.hstack([knr.predict(xin[:5]),(S*y)[:5]])#columns match\n```\n\n [[ 0.55781314 0.55781314]\n [ 0.55781314 0.55781314]\n [-0.09768138 -0.09768138]\n [-0.46686876 -0.46686876]\n [-0.10877633 -0.10877633]]\n\n\n Or, more concisely checking all entries for approximate equality,\n\n\n```python\nprint np.allclose(knr.predict(xin),S*y)\n```\n\n True\n\n\n which shows that the results from the nearest neighbor\nobject and the matrix multiply match.\n\n**Programming Tip.**\n\nNote that because we formatted the returned $\\mathscr{S}$ as a Numpy matrix, we\nautomatically get the matrix multiplication instead of default element-wise\nmultiplication in the `S*y` term.\n\n \n\n\n## Kernel Regression\n\nFor estimating the probability density, we started with the histogram and moved\nto the more general kernel density estimate. Likewise, we can also extend\nregression from nearest neighbors to kernel-based regression using the\n*Nadaraya-Watson* kernel regression estimator. 
Given a bandwidth $h>0$, the
kernel regression estimator is defined as the following,

$$
\hat{y}(x)=\frac{\sum_{i=1}^n K\left(\frac{x-x_i}{h}\right) Y_i}{\sum_{i=1}^n K \left( \frac{x-x_i}{h} \right)}
$$

 Unfortunately, Scikit-learn does not implement this
regression estimator; however, Jan Hendrik Metzen makes a compatible
version available on `github.com`.

```python
xin = np.linspace(0,1,20)[:,None]
y = cos(2*pi*(f0*xin+(BW/2.0)*xin**2)).flatten()
```

```python
from kernel_regression import KernelRegression
```

 This code makes it possible to internally optimize over the kernel
parameter using leave-one-out cross validation by specifying a grid of
potential values (`gamma`, which acts as an inverse bandwidth for the
default `rbf` kernel), as in the following,

```python
kr = KernelRegression(gamma=np.linspace(6e3,7e3,500))
kr.fit(xin,y)
```

    KernelRegression(gamma=6002.0040080160325, kernel='rbf')

 [Figure](#fig:nonparametric_006) shows the kernel estimator (heavy
black line) using the Gaussian kernel compared to the nearest neighbor
estimator (solid light black line). As before, the data points are shown as
circles. [Figure](#fig:nonparametric_006) shows that the kernel estimator can
pick out the sharp peaks that are missed by the nearest neighbor estimator.
\n\n

The heavy black line is the Gaussian kernel estimator. The light black line is the nearest neighbor estimator. The data points are shown as gray circles. Note that unlike the nearest neighbor estimator, the Gaussian kernel estimator is able to pick out the sharp peaks in the training data.

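The Nadaraya-Watson formula itself is just a kernel-weighted average, so a minimal Numpy version is only a few lines. The sketch below uses a Gaussian kernel with an illustrative, hand-picked bandwidth rather than the cross-validated one found above; the third-party `KernelRegression` class is doing essentially this plus the bandwidth search.

```python
import numpy as np

def nw_predict(x, X, Y, h):
    # Nadaraya-Watson: kernel-weighted average of the training responses
    K = np.exp(-((x[:, None] - X[None, :]) / h)**2 / 2)
    return (K * Y[None, :]).sum(axis=1) / K.sum(axis=1)

X = np.linspace(0, 1, 20)
Y = np.cos(2 * np.pi * (X + 2.5 * X**2))  # chirp samples with f0=1, BW=5

xg = np.linspace(0, 1, 100)
yhat = nw_predict(xg, X, Y, h=0.02)       # illustrative bandwidth
```

Each prediction is a convex combination of the $Y_i$, so the estimate always stays within the range of the training responses; shrinking $h$ pushes the fit toward interpolating the samples.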
Thus, the difference between nearest neighbor and kernel estimation is that the
latter provides a smooth moving average of points whereas the former provides
a discontinuous averaging. Note that kernel estimates suffer near the
boundaries where there is mismatch between the edges and the kernel
function. This problem gets worse in higher dimensions because the data
naturally drift towards the boundaries (this is a consequence of the *curse of
dimensionality*). Indeed, it is not possible to simultaneously maintain local
accuracy (i.e., low bias) and a generous neighborhood (i.e., low variance). One
way to address this problem is to create a local polynomial regression using
the kernel function as a window to localize a region of interest. For example,
at each $x$ we minimize the kernel-weighted squared error,

$$
\min_{\alpha,\beta} \sum_{i=1}^n K\left(\frac{x-x_i}{h}\right) (Y_i-\alpha - \beta x_i)^2
$$

 so now we have to optimize over the two linear parameters $\alpha$
and $\beta$, and the fitted value is $\hat{y}(x)=\hat{\alpha}+\hat{\beta}x$.
This method is known as *local linear regression*
[[loader2006local]](#loader2006local), [[hastie2013elements]](#hastie2013elements). Naturally, this can be
extended to higher-order polynomials. Note that these methods are not yet
implemented in Scikit-learn.

```python
fig,ax=subplots()
_=ax.plot(xi,kr.predict(xi),'k-',label='kernel',lw=3)
_=ax.plot(xin,y,'o',lw=3,color='gray',ms=12)
_=ax.plot(xi,yi,'--',color='gray',label='chirp')
_=ax.plot(xi,knr.predict(xi),'k-',label='nearest')
_=ax.set_aspect(1/4.)
_=ax.axis(ymax=1.05,ymin=-1.05)
_=ax.set_xlabel(r'$x$',fontsize=24)
_=ax.set_ylabel(r'$y$',fontsize=24)
_=ax.legend(loc=0)
#fig.savefig('fig-statistics/nonparametric_006.png')
```

## Curse of Dimensionality

The so-called curse of dimensionality occurs as we move into higher and higher
dimensions.
The term was coined by Bellman in 1961 while he was studying
adaptive control processes. Nowadays, the term vaguely refers to anything
that becomes more complicated as the number of dimensions increases
substantially. Nevertheless, the concept is useful for recognizing
and characterizing the practical difficulties of high-dimensional analysis and
estimation.

Consider the volume of an $n$-dimensional sphere,
$$
\begin{equation}
V_s(n,r)= \begin{cases}
 \pi^{n/2} \frac{r^n}{(n/2)!} & \text{if $n$ is even} \\
 2^n\pi^{(n-1)/2} \frac{((n-1)/2)!}{n!} r^n & \text{if $n$ is odd}
\end{cases}
\label{_auto1} \tag{1}
\end{equation}
$$

 Further, consider the sphere $V_s(n,1/2)$ enclosed by an $n$
dimensional unit cube. The volume of the cube is always equal to one, but
$\lim_{n\rightarrow\infty} V_s(n,1/2) = 0$. What does this mean? It means that
the volume of the cube is pushed away from its center, where the embedded
hypersphere lives. Specifically, the distance from the center of the cube to
its vertices in $n$ dimensions is $\sqrt{n}/2$, whereas the radius of the
inscribed sphere remains $1/2$. This diagonal distance goes to
infinity as $n$ does. For a fixed $n$, the tiny spherical region at the center
of the cube has many long spines attached to it, like a hyper-dimensional sea
urchin or porcupine.

What are the consequences of this? For methods that rely on nearest
neighbors, exploiting locality to lower bias becomes intractable. For
example, suppose we have an $n$ dimensional space and a point near the
origin we want to localize around. To estimate behavior around this
point, we need to average the unknown function about this point, but
in a high-dimensional space, the chances of finding neighbors to
average are slim. Looked at from the opposing point of view, suppose
we have a binary variable, as in the coin-flipping problem. If we have
1000 trials, then, based on our earlier work, we can be confident
about estimating the probability of heads. Now, suppose we have 10
binary variables. Now we have $2^{ 10 }=1024$ vertices to estimate.
If we had the same 1000 points, then at least 24 vertices would not
get any data. To keep the same resolution, we would need 1000 samples
at each vertex for a grand total of $1000\times 1024 \approx 10^6$
data points.
So, for a tenfold increase in the number of variables,
we need roughly 1000 times more data points to maintain the
same statistical resolution. This is the curse of dimensionality.

Perhaps some code will clarify this. The following code generates samples in
two dimensions that are plotted as points in [Figure](#fig:curse_of_dimensionality_001) with the inscribed circle in two
dimensions. Note that for $d=2$ dimensions, most of the points are contained
in the circle.

```python
import numpy as np
v=np.random.rand(1000,2)-1/2.
```

```python
from matplotlib.patches import Circle
from matplotlib.pylab import subplots
fig,ax=subplots()
fig.set_size_inches((5,5))
_=ax.set_aspect(1)
_=ax.scatter(v[:,0],v[:,1],color='gray',alpha=.3)
_=ax.add_patch(Circle((0,0),0.5,alpha=.8,lw=3.,fill=False))
#fig.savefig('fig-statistics/curse_of_dimensionality_001.pdf')
```
\n\n

Two dimensional scatter of points randomly and independently uniformly distributed in the unit square. Note that most of the points are contained in the circle. Counter to intuition, this does not persist as the number of dimensions increases.

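Counting the points directly confirms the picture: in two dimensions the fraction of uniform draws landing inside the inscribed circle should be close to its area, $\pi/4 \approx 0.785$. The sample size and seed below are arbitrary choices.

```python
import numpy as np

np.random.seed(0)
v = np.random.rand(100000, 2) - 1/2.
inside = np.linalg.norm(v, axis=1) < 0.5   # flag points inside the circle
print('fraction inside circle: %.3f (pi/4 = %.3f)' % (inside.mean(), np.pi/4))
```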
The next code block describes the core computation in
[Figure](#fig:curse_of_dimensionality_002). For each of the dimensions, we
create a set of uniformly distributed random variates along each dimension
and then compute how close each $d$ dimensional vector is to the origin.
Those that measure less than one half are contained in the hypersphere. The
histogram of each measurement is shown in the corresponding panel in
[Figure](#fig:curse_of_dimensionality_002). The dark vertical line shows the
threshold value. Values to the left of this indicate the population contained
in the hypersphere. Thus,
[Figure](#fig:curse_of_dimensionality_002) shows that as $d$ increases,
fewer points are contained in the inscribed hypersphere. The following
code paraphrases the content of [Figure](#fig:curse_of_dimensionality_002),

```python
from matplotlib.pylab import hist
for d in [2,3,5,10,20,50]:
    v=np.random.rand(5000,d)-1/2.
    _=hist([np.linalg.norm(i) for i in v])
```

```python
siz = [2,3,5,10,20,50]
fig,axs=subplots(3,2,sharex=True)
fig.set_size_inches((10,6))
for ax,k in zip(axs.flatten(),siz):
    v=np.random.rand(5000,k)-1/2.
    _=ax.hist([np.linalg.norm(i) for i in v],color='gray',normed=True)
    _=ax.vlines(0.5,0,ax.axis()[-1]*1.1,lw=3)
    _=ax.set_title('$d=%d$'%k,fontsize=20)
    _=ax.tick_params(labelsize='small',top=False,right=False)
    _=ax.spines['top'].set_visible(False)
    _=ax.spines['right'].set_visible(False)
    _=ax.spines['left'].set_visible(False)
    _=ax.yaxis.set_visible(False)
    _=ax.axis(ymax=3.5)

fig.set_tight_layout(True)
#fig.savefig('fig-statistics/curse_of_dimensionality_002.pdf')
```
\n\n

Each panel shows the histogram of lengths of uniformly distributed $d$ dimensional random vectors. The population to the left of the dark vertical line are those that are contained in the inscribed hypersphere. This shows that fewer points are contained in the hypersphere with increasing dimension.

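The same collapse can be computed exactly. Using the Gamma-function form of the sphere volume, $V_s(n,r)=\pi^{n/2} r^n/\Gamma(n/2+1)$ (equivalent to the factorial form for even $n$), the volume of the ball inscribed in the unit cube vanishes rapidly even though the cube's volume stays at one.

```python
from math import gamma, pi

def ball_volume(n, r=0.5):
    # volume of the n-dimensional ball of radius r via the Gamma function
    return pi**(n / 2.0) * r**n / gamma(n / 2.0 + 1)

for n in [2, 3, 5, 10, 20, 50]:
    print('d=%2d V_s(d,1/2) = %g' % (n, ball_volume(n)))
```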
NO", "lm_q1_score": 0.3522017684487512, "lm_q2_score": 0.33111973962899144, "lm_q1q2_score": 0.11662095786562082}} {"text": "```python\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom auxiliary.auxiliary_figures import (get_figure1, get_figure2, get_figure3)\nfrom auxiliary.auxiliary_tables import (get_table1, get_table2, get_table3, get_table4)\nfrom auxiliary.auxiliary_data import process_data\nfrom auxiliary.auxiliary_visuals import (background_negative_green, p_value_star)\nfrom auxiliary.auxiliary_extensions import (get_flexible_table4, get_figure1_extension1, get_figure2_extension1,\n get_bias, get_figure1_extension2, get_figure2_extension2)\nimport warnings\nwarnings.filterwarnings('ignore')\nplt.rcParams['figure.figsize'] = [12, 6]\n```\n\nThe code below is needed to automatically enumerate the equations used in this notebook. \n\n\n```javascript\n%%javascript \nMathJax.Hub.Config({\n TeX: { equationNumbers: { autoNumber: \"AMS\" } }\n});\nMathJax.Hub.Queue(\n [\"resetEquationNumbers\", MathJax.InputJax.TeX],\n [\"PreProcess\", MathJax.Hub],\n [\"Reprocess\", MathJax.Hub]\n);\n```\n\n\n \n\n\n---\n# Replication of Angrist (1990)\n---\n\nThis notebook replicates the core results of the following paper: \n\n> Angrist, Joshua. (1990). [Lifetime Earnings and the Vietnam Era Draft Lottery: Evidence from Social Security Administrative Records](https://www.jstor.org/stable/2006669?seq=1#metadata_info_tab_contents). *American Economic Review*. 80. 313-36.\n\nIn the following just a few notes on how to read the remainder:\n\n- In this excerpt I replicate the Figures 1 to 3 and the Tables 1 to 4 (in some extended form) while I do not consider Table 5 to be a core result of the paper which is why it cannot be found in this notebook. 
\n- I follow the example of Angrist keeping his structure throughout the replication part of this notebook.\n- The naming and order of appearance of the figures does not follow the original paper but the published [correction](https://economics.mit.edu/files/7769). \n- The replication material including the partially processed data as well as some replication do-files can be found [here](https://economics.mit.edu/faculty/angrist/data1/data/angrist90).\n\n# 1. Introduction\n---\n\nFor a soft introduction to the topic, let us have a look at the goal of Angrist's article. Already in the first few lines Angrist states a clear-cut aim for his paper by making the remark that \"yet, academic research has not shown conclusively that Vietnam (or other) veterans are worse off economically than nonveterans\". He further elaborates on why research had yet been so inconclusive. He traces it back to the flaw that previous research had solely tried to estimate the effect of veteran status on subsequent earnings by comparing the latter across individuals differing in veteran status. He argues that this naive estimate might likely be biased as it is easily imaginable that specific types of men choose to enlist in the army whose unobserved characteristics imply low civilian earnings (self-selcetion on unobservables).\n\nAngrist avoids this pitfall by employing an instrumental variable strategy to obtain unbiased estimates of the effect of veteran status on earnings. For that he exploits the random nature of the Vietnam draft lottery. This lottery randomly groups people into those that are eligible to be forced to join the army and those that are not. The idea is that this randomly affects the veteran status without being linked to any unobserved characteristics that cause earnings. This allows Angrist to obtain an estimate of the treatment effect that does not suffer from the same shortcomings as the ones of previous studies. 
\n\nHe finds that Vietnam era veterans are worse off when it comes to long term annual real earnings as opposed to those that have not served in the army. In a secondary point he traces this back to the loss of working experience for veterans due to their service by estimating a simple structural model. \n\nIn the following sections I first walk you through the identification idea and empirical strategy. Secondly, I replicate and explain the core findings of the paper with a rather extensive elaboration on the different data sources used and some additional visualizations. Thirdly, I critically assess the paper followed by my own two extensions concluding with some overall remarks right after. \n\n# 2. Identification and Empirical Approach\n---\n\nAs already mentioned above the main goal of Angrist's paper is to determine the causal effect of veteran status on subsequent earnings. He believes for several reasons that conventional estimates that only compare earnings by veteran status are biased due to unobservables that affect both the probability of serving in the military as well as earnings over lifetime. This is conveniently shown in the causal graph below. Angrist names two potential reasons why this might be likely. First of all, he makes the point that probably people with few civilian opportunities (lower expected earnings) are more likely to register for the army. Without a measure for civilian opportunities at hand a naive estimate of the effect of military service on earnings would not be capable of capturing the causal effect. Hence, he believes that there is probably some self-selection into treatment on unobservables by individuals. In a second point, Angrist states that the selection criteria of the army might be correlated with unobserved characteristics of individuals that makes them more prone to receiving future earnings pointing into a certain direction. 
In econometric terms, Angrist works with the following linear regression equation, representing a version of the right triangle in the causal graph:

\begin{align}
 y_{cti} = \beta_c + \delta_t + s_i \alpha + u_{it}.
\end{align}

Here the real earnings $y_{cti}$ of an individual $i$ in cohort $c$ at time $t$ are determined by cohort and time fixed effects ($\beta_c$ and $\delta_t$) as well as an individual effect of veteran status. Estimating this model directly is biased for the reasons given above: the indicator for veteran status $s_i$ is likely to be correlated with the error term $u_{it}$.

Angrist's approach to avoid this bias is to employ an instrumental variable strategy, which rests on the accuracy of the causal graph below.
\n\n
The validity of this causal graph rests on the crucial assumption that there is no common cause of the instrument (Draft Lottery) and the unobserved variables (U). Angrist's main argument is that the draft lottery was essentially random in nature and hence uncorrelated with any personal characteristics, and therefore not linked to any unobservables that might determine both military service and earnings. As will be explained in more detail later, the Vietnam draft lottery randomly determined, on the basis of birth dates, whether a person was eligible to be drafted by the army in the year following the lottery. The directed edge from Draft Lottery to Military Service is therefore warranted, as holding a draft-eligible lottery number increases the probability of joining the military relative to holding an excluded number.

This argumentation leads Angrist to use draft-eligibility in the lottery as an instrument for veteran status in the earnings equation. In essence this is the Wald estimate, which equals the following formula:

\begin{align*}
\hat{\alpha}_{IV, WALD} = \frac{E[earnings \mid eligible = 1] - E[earnings \mid eligible = 0]}{E[veteran \mid eligible = 1] - E[veteran \mid eligible = 0]}
\end{align*}

The numerator is the reduced-form difference in mean earnings by eligibility status, while the denominator is obtained from a first-stage regression of veteran status on draft-eligibility. The latter reduces to estimating the difference in conditional probabilities of being a veteran, $prob(veteran \mid eligible = 1) - prob(veteran \mid eligible = 0)$. Angrist obtains these estimates through weighted least squares (WLS), because he does not have micro data but only grouped data (for more details see the data section in the replication).
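As a quick numerical sketch of the Wald estimator above (the numbers below are made up for illustration and are not from the paper), the ratio can be computed directly from the four group means:

```python
def wald_estimate(y_elig, y_inelig, p_elig, p_inelig):
    """Wald/grouped-IV estimate: the reduced-form earnings gap divided
    by the first-stage gap in the probability of being a veteran."""
    return (y_elig - y_inelig) / (p_elig - p_inelig)

# Illustrative inputs: eligible men earn $300 less on average, and
# eligibility raises the probability of serving from 0.125 to 0.25.
alpha_hat = wald_estimate(y_elig=11200.0, y_inelig=11500.0,
                          p_elig=0.25, p_inelig=0.125)
print(alpha_hat)  # -2400.0: the implied annual earnings loss from serving
```

Note how a modest reduced-form gap is scaled up considerably once it is divided by the (small) first-stage difference, which is exactly the pattern seen in the tables below.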
To recover estimates for the underlying micro-level data it is necessary to weight OLS by the size of the respective groups. The above formula is also equivalent to a Two Stage Least Squares (2SLS) procedure in which earnings are regressed on the fitted values from a first-stage regression of veteran status on eligibility.

In a last step, Angrist generalizes the Wald grouping method beyond a single binary instrument. The 365 lottery numbers, split into two groups (eligible and non-eligible) for the previous Wald estimate, can be split further into many more subgroups, resulting in many more dummy variables as instruments. Angrist partitions the lottery numbers into intervals of five, each defining a group $j$. Within each cohort $c$ he estimates for each group $j$ the conditional probability of being a veteran, $p_{cj}$. This first stage is again run by WLS. The resulting estimate $\hat p_{cj}$ is then used in the second-stage regression below.

\begin{align}
\bar y_{ctj} = \beta_c + \delta_t + \hat p_{cj} \alpha + \bar u_{ctj}
\end{align}

The details and estimation technique are further explained when the results are presented in the replication section below.

# 3. Replication
---

## 3.1 Background and Data

### The Vietnam Era Draft Lottery

Before discussing what the data looks like, it is worthwhile to understand how the Vietnam-era draft lottery worked, in order to judge to what extent it can serve as a valid instrument. There were several draft lotteries during the Vietnam war. The first one took place at the end of 1969, determining which men might be drafted in 1970; this procedure of drawing the lottery numbers for the following year continued until 1975.
The table below shows, for each year, the cohorts affected by the draft lottery and the draft-eligibility ceiling that applied. For more details have a look [here](https://www.sss.gov/history-and-records/vietnam-lotteries/).

| **Year** | **Cohorts** | **Draft-Eligibility Ceiling**|
|--------------|---------------|------------------------------|
| 1970 | 1944-50 | 195 |
| 1971 | 1951 | 125 |
| 1972 | 1952 | 95 |
| 1973 | 1953 | 95 |
| 1974 | 1954 | 95 |
| 1975 | 1955 | 95 |
| 1976 | 1956 | 95 |

The authority to draft men for the army through the lottery expired on June 30, 1973, and drafting had ceased even earlier: the last draft call took place on December 7, 1972.

Each of these seven lotteries randomly assigned every possible birthday (365 days) a number between 1 and 365 without replacement. In the 1969 lottery, for example, every man born on the day that received the number 1, in any of the years 1944 to 1950, would be drafted first if a draft call came in 1970. In practice, later in the same year as each lottery, the army announced a draft-eligibility ceiling determining up to which lottery number men would be called in the following year. For 1970 this means that every man with a lottery number at or below 195 was called to join the army. Since nobody was called from 1973 on, the ceilings for those years are imputed from the last observed one, which was 95 in the year 1972. Men with lottery numbers at or below the ceiling for their respective year are from here on called "draft-eligible".

Being drafted did not mean that one actually had to serve in the army, though. Those drafted had to pass mental and physical tests which ultimately decided who had to join.
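The lottery mechanics just described can be sketched in a few lines; the 1970 ceiling of 195 and the intervals of five used later come from the text, while the random seed is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Each of the 365 birthdates receives a unique lottery number 1..365.
lottery_number = rng.permutation(np.arange(1, 366))

# 1970 ceiling: men with numbers at or below 195 were draft-eligible.
CEILING_1970 = 195
draft_eligible = lottery_number <= CEILING_1970

# Angrist later groups the numbers into intervals of five (1-5 -> 1, ...).
interval = (lottery_number - 1) // 5 + 1

print(draft_eligible.sum(), interval.max())  # 195 73
```

Because the assignment is a permutation, exactly 195 birthdates end up draft-eligible in every draw, and the 365 numbers always fall into 73 intervals of five.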
Further, it should be mentioned that Angrist only uses data on those who turned 19 while at risk of induction, which includes men born between 1950 and 1953.

### The Data

#### Continuous Work History Sample (CWHS)

This administrative data set constitutes a random one-percent sample of all possible social security numbers in the US. For the years 1964 until 1984 it includes the **FICA** (social security) earnings history, censored at the Social Security maximum taxable amount, as well as FICA taxable earnings from self-employment. From 1978 on it also has a series on total earnings (**Total W-2**), which includes for instance cash payments but excludes earnings from self-employment. The data set is subject to confidentiality restrictions, meaning that only group averages and variances were available. Angrist therefore cannot rely on micro data but has to work with sample moments, which is a crucial factor for the exact implementation of the IV method. A group is defined by year of earnings, year of birth, ethnicity and an interval of five consecutive lottery numbers. The statistics collected for each group also include its size, the fraction of people with taxable earnings at or above the taxable maximum, and the fraction with zero earnings.

Regarding the actual data sets available for replication, we have `cwhsa`, which contains the above data for the years 1964 to 1977, and `cwhsb`, which contains the CWHS for the years thereafter.

In addition, Angrist provides the data set `cwhsc_new`, which includes the **adjusted FICA** earnings. For those, Angrist employed a strategy to approximate the underlying uncensored FICA earnings from the reported censored ones. All three earnings variables are used repeatedly throughout the replication.

```python
process_data("cwhsa")
```
| ethnicity | birth year | year | lottery interval | earnings | earnings variance | sample size | fraction zero earnings |
|---|---|---|---|---|---|---|---|
| 1 | 44 | 64 | 1 | 1691.03 | 1480.60 | 182 | 0.170 |
| 1 | 44 | 64 | 2 | 1535.43 | 1359.02 | 187 | 0.187 |
| 1 | 44 | 64 | 3 | 1818.01 | 1604.42 | 210 | 0.171 |
| 1 | 44 | 64 | 4 | 1636.38 | 1626.27 | 208 | 0.231 |
| 1 | 44 | 64 | 5 | 1889.80 | 1639.61 | 207 | 0.184 |
| … | … | … | … | … | … | … | … |
| 2 | 53 | 77 | 69 | 3643.74 | 4273.60 | 53 | 0.415 |
| 2 | 53 | 77 | 70 | 4127.49 | 5623.09 | 55 | 0.473 |
| 2 | 53 | 77 | 71 | 4712.46 | 4588.28 | 76 | 0.316 |
| 2 | 53 | 77 | 72 | 4676.94 | 5321.14 | 85 | 0.353 |
| 2 | 53 | 77 | 73 | 4651.87 | 4989.02 | 83 | 0.241 |

20440 rows × 4 columns
The above earnings data consists only of FICA earnings. The lottery intervals from 1 to 73 correspond to intervals of five consecutive lottery numbers; the variable lottery interval thus equals one for the lottery numbers 1 to 5, and so on. The ethnicity variable is encoded as 1 for a white person and 2 for a nonwhite person.

```python
process_data("cwhsb")
```
| data source | ethnicity | birth year | year | lottery interval | earnings | earnings variance | sample size | fraction zero earnings |
|---|---|---|---|---|---|---|---|---|
| TAXAB | 1 | 44 | 78 | 1 | 10625.58 | 7052.47 | 179 | 0.179 |
| TAXAB | 1 | 44 | 78 | 2 | 11546.46 | 8032.55 | 182 | 0.198 |
| TAXAB | 1 | 44 | 78 | 3 | 11401.16 | 7508.27 | 209 | 0.196 |
| TAXAB | 1 | 44 | 78 | 4 | 10899.99 | 7342.60 | 206 | 0.189 |
| TAXAB | 1 | 44 | 78 | 5 | 11667.14 | 7507.56 | 207 | 0.159 |
| … | … | … | … | … | … | … | … | … |
| TOTAL | 2 | 53 | 84 | 69 | 6846.43 | 9117.49 | 53 | 0.396 |
| TOTAL | 2 | 53 | 84 | 70 | 11357.89 | 14734.47 | 55 | 0.455 |
| TOTAL | 2 | 53 | 84 | 71 | 8695.86 | 9613.24 | 76 | 0.368 |
| TOTAL | 2 | 53 | 84 | 72 | 14013.24 | 14182.30 | 84 | 0.274 |
| TOTAL | 2 | 53 | 84 | 73 | 10742.71 | 18095.78 | 83 | 0.506 |

20440 rows × 4 columns
As stated above, this data consists of earnings from 1978 to 1984 for FICA (here encoded as "TAXAB") and Total W-2 (encoded as "TOTAL").

#### Survey of Income and Program Participation (SIPP) and the Defense Manpower Data Center (DMDC)

Throughout the paper it is necessary to have a measure of the fraction of people serving in the military. The above two data sources serve this purpose.

The **SIPP** is a longitudinal survey of around 20,000 households in the year 1984 which determined whether the persons in the household are Vietnam war veterans. The survey also collected data on ethnicity and birth dates, which made it possible to match the data to lottery numbers. The **DMDC**, on the other hand, is an administrative record of the total number of new entries into the army by ethnicity, cohort and lottery number per year, from mid 1970 until the end of 1973.
Those sources are needed for the results in Tables 3 and 4. A combination of the two is matched to the earnings data of the CWHS, which constitutes the data set `cwhsc_new` below.

```python
data_cwhsc_new = process_data("cwhsc_new")
data_cwhsc_new
```
| data source | ethnicity | birth year | year | lottery interval | earnings | probability of serving |
|---|---|---|---|---|---|---|
| ADJ | 1 | 50 | 74 | 1 | 8853.94 | 0.3527 |
| ADJ | 1 | 50 | 75 | 1 | 9062.64 | 0.3527 |
| ADJ | 1 | 50 | 76 | 1 | 10096.06 | 0.3527 |
| ADJ | 1 | 50 | 77 | 1 | 10916.07 | 0.3527 |
| ADJ | 1 | 50 | 78 | 1 | 11738.44 | 0.3527 |
| … | … | … | … | … | … | … |
| TOTAL | 2 | 53 | 84 | 37 | 10562.36 | 0.1118 |
| TOTAL | 2 | 53 | 84 | 57 | 8988.30 | 0.0824 |
| TOTAL | 2 | 53 | 84 | 40 | 9857.20 | 0.1114 |
| TOTAL | 2 | 53 | 84 | 11 | 8690.84 | 0.0880 |
| TOTAL | 2 | 53 | 84 | 23 | 9709.99 | 0.0738 |

12818 rows × 2 columns
This data set now also includes the adjusted FICA earnings, marked by "ADJ", as well as the probability of serving in the military conditional on being in a group defined by ethnicity, birth cohort and lottery interval.

Below we take a short look at the distributions of the different earnings measures. The plot shows real earnings in 1978 dollar terms, covering the years 1974 to 1984 for FICA and adjusted FICA and the years 1978 to 1984 for Total W-2.

```python
# plot one kernel density per earnings measure, one palette color each
colors = sns.color_palette(n_colors=3)
for data, color in zip(["ADJ", "TAXAB", "TOTAL"], colors):
    ax = sns.kdeplot(data_cwhsc_new.loc[data, "earnings"], color=color)
ax.set_xlim(xmax=20000)
ax.legend(["Adjusted FICA", "FICA", "Total W-2"], loc="upper left")
ax.set_title("Kernel Density of the different Earning Measures")
```

For a more detailed description of the somewhat confusing original variable names in the data sets, please refer to the appendix at the very bottom of the notebook.

## 3.2 Establishing the Validity of the Instrument

In order to convincingly pursue the identification strategy outlined above, it is necessary to establish an effect of draft-eligibility (the draft lottery) on veteran status, and to argue that draft-eligibility is exogenous to any unobserved factor affecting both veteran status and subsequent earnings. As argued before, one could easily construct plausible patterns of unobservables that cause both veteran status and earnings, rendering a naive regression of earnings on veteran status biased.

The first requirement for IV validity holds, as draft-eligibility clearly has an effect on veteran status. The instrument is hence **relevant**.
For the second part, Angrist argues that the draft lottery itself is random in nature and hence uncorrelated with any unobserved characteristics (**exogenous**) that might cause a man to enroll in the army while also pushing his earnings in a certain direction irrespective of veteran status.

On this basis, Angrist now shows that subsequent earnings are affected by draft-eligibility. This is the foundation for finding a nonzero effect of veteran status on earnings. Going back to the causal diagram from before, Angrist has argued so far that there is no directed edge from Draft Lottery to the unobservables U, but only to Military Service. Now he further establishes that there is an effect of draft-eligibility (Draft Lottery) that propagates through Military Service onto earnings (Wages).

In order to see this clearly, let us have a look at **Figure 1** of the paper below. For white and nonwhite men separately, the history of average FICA earnings in 1978 dollar terms is plotted by year within cohort, across those who were draft-eligible and those who were not. The highest two lines represent the 1950 cohort, going down to the cohort of men born in 1953. There is a clearly observable pattern among white men in the cohorts 1950 to 52: persistently lower earnings for the draft-eligible, starting in the year in which they could be drafted. This cannot be seen for those born in 1953, likely because nobody was actually drafted in 1973, which would otherwise have been "their" year. For nonwhite men the picture is less clear. For cohorts 50 to 52, earnings seem slightly higher for the ineligible, but this does not appear to be persistent over time. The cohort 1953 again does not present a conclusive image.
Observable in all lines, though, is that before the year of conscription risk there is no difference in earnings between the two groups, as one would expect given the random nature of the draft lottery.

```python
# read in the original data sets
data_cwhsa = pd.read_stata("data/cwhsa.dta")
data_cwhsb = pd.read_stata("data/cwhsb.dta")
data_cwhsc_new = pd.read_stata("data/cwhsc_new.dta")
data_dmdc = pd.read_stata("data/dmdcdat.dta")
data_sipp = pd.read_stata("data/sipp2.dta")
```

```python
get_figure1(data_cwhsa, data_cwhsb)
```

A more condensed view of the results in Figure 1 is given in **Figure 2**. It depicts the differences in earnings between the red and the black line in Figure 1, by cohort and ethnicity. It is included for completeness, as it does not provide further insight beyond Figure 1.

```python
get_figure2(data_cwhsa, data_cwhsb)
```

**Table 1** continues this line of argument and makes the observations from the figures more fine-grained and explicit. In it, Angrist estimates the expected difference in average FICA and Total W-2 earnings by draft-eligibility status, by ethnicity, within cohort and year of earnings. In the table below for white men, we observe no difference significant at the five percent level for the years before the year in which a cohort might be drafted. This changes for the cohorts 1950 to 52 in the years 1970 to 72, respectively: there we observe significantly lower income for the eligible compared to the ineligible. This effect appears persistent for the cohorts 1950 and 52, and less so for those born in 1951 and 1953.
It should further be noted that Angrist reports that the quality of the Total W-2 earnings data was low in its first years (the series was launched in 1972), which explains the inconclusive estimates in the early periods.

To focus attention on the crucial points, I mark all negative estimates in different shades of green, with more negative values darker. This clearly emphasizes the verbal arguments brought up before.

```python
table1 = get_table1(data_cwhsa, data_cwhsb)
table1["white"].style.applymap(background_negative_green)
```

| year | Statistic | FICA 50 | FICA 51 | FICA 52 | FICA 53 | TOTAL 50 | TOTAL 51 | TOTAL 52 | TOTAL 53 |
|---|---|---|---|---|---|---|---|---|---|
| 66 | Average | -21.81 | | | | | | | |
| | Standard Error | 14.99 | | | | | | | |
| 67 | Average | -8.02 | 13.17 | | | | | | |
| | Standard Error | 18.21 | 16.45 | | | | | | |
| 68 | Average | -14.90 | 12.34 | -8.96 | | | | | |
| | Standard Error | 24.20 | 19.50 | 19.25 | | | | | |
| 69 | Average | -2.10 | 18.79 | 11.42 | -4.09 | | | | |
| | Standard Error | 34.58 | 26.47 | 22.78 | 18.34 | | | | |
| 70 | Average | -233.87 | -44.83 | -5.07 | 32.94 | | | | |
| | Standard Error | 39.72 | 36.70 | 29.38 | 24.20 | | | | |
| 71 | Average | -325.95 | -298.21 | -29.42 | 27.68 | | | | |
| | Standard Error | 46.63 | 41.78 | 40.26 | 30.35 | | | | |
| 72 | Average | -203.58 | -197.45 | -261.60 | 2.13 | | | | |
| | Standard Error | 55.42 | 51.18 | 46.89 | 42.92 | | | | |
| 73 | Average | -226.65 | -228.86 | -357.78 | -56.58 | | | | |
| | Standard Error | 67.84 | 61.64 | 56.26 | 54.81 | | | | |
| 74 | Average | -243.04 | -155.46 | -402.74 | -15.06 | | | | |
| | Standard Error | 81.45 | 75.33 | 68.38 | 68.15 | | | | |
| 75 | Average | -295.24 | -99.21 | -304.59 | -28.30 | | | | |
| | Standard Error | 94.42 | 89.79 | 85.01 | 79.63 | | | | |
| 76 | Average | -314.22 | -86.87 | -370.78 | -145.51 | | | | |
| | Standard Error | 106.62 | 102.94 | 98.30 | 93.08 | | | | |
| 77 | Average | -262.64 | -274.23 | -396.97 | -85.51 | | | | |
| | Standard Error | 117.91 | 112.26 | 111.18 | 107.14 | | | | |
| 78 | Average | -205.40 | -203.88 | -467.10 | -65.32 | 1059.40 | 233.27 | 175.36 | -1974.55 |
| | Standard Error | 132.71 | 127.04 | 127.30 | 123.19 | 2159.34 | 1609.44 | 1567.94 | 912.11 |
| 79 | Average | -263.61 | -60.53 | -236.90 | 89.28 | -1588.72 | 523.69 | -580.86 | -557.94 |
| | Standard Error | 160.59 | 152.39 | 153.92 | 148.70 | 1575.61 | 1590.54 | 736.75 | 750.14 |
| 80 | Average | -339.16 | -267.98 | -312.11 | -93.88 | -1028.12 | 85.63 | -581.32 | -428.73 |
| | Standard Error | 183.25 | 175.31 | 178.23 | 170.74 | 756.86 | 599.87 | 309.17 | 341.54 |
| 81 | Average | -435.83 | -358.32 | -342.89 | 34.39 | -589.67 | -71.61 | -440.53 | -109.54 |
| | Standard Error | 210.59 | 203.67 | 206.88 | 199.07 | 299.43 | 423.40 | 265.08 | 245.25 |
| 82 | Average | -320.20 | -117.31 | -235.12 | 29.49 | -305.54 | -72.76 | -514.71 | 18.72 |
| | Standard Error | 235.86 | 229.14 | 232.38 | 222.66 | 345.49 | 372.16 | 296.57 | 281.91 |
| 83 | Average | -349.58 | -314.06 | -437.74 | -96.37 | -512.94 | -896.55 | -915.71 | 30.16 |
| | Standard Error | 261.67 | 253.27 | 257.55 | 248.77 | 441.22 | 426.38 | 395.26 | 318.12 |
| 84 | Average | -484.39 | -398.46 | -436.06 | -228.68 | -1143.32 | -809.20 | -767.24 | -164.21 |
| | Standard Error | 286.83 | 279.26 | 281.93 | 272.26 | 492.27 | 380.96 | 376.06 | 366.10 |
For nonwhite males there is no clear-cut pattern. Only a few cells show significant results, which is why Angrist focuses on white males in the following when constructing IV estimates. For completeness, I present Table 1 for nonwhite males below, although it is less important for the remainder of the paper.

```python
table1["nonwhite"].style.applymap(background_negative_green)
```

| year | Statistic | FICA 50 | FICA 51 | FICA 52 | FICA 53 | TOTAL 50 | TOTAL 51 | TOTAL 52 | TOTAL 53 |
|---|---|---|---|---|---|---|---|---|---|
| 66 | Average | -11.88 | | | | | | | |
| | Standard Error | 27.69 | | | | | | | |
| 67 | Average | 12.91 | -4.03 | | | | | | |
| | Standard Error | 34.23 | 30.66 | | | | | | |
| 68 | Average | -29.54 | -6.29 | -12.04 | | | | | |
| | Standard Error | 44.51 | 37.40 | 35.05 | | | | | |
| 69 | Average | -5.13 | 67.80 | 3.45 | -42.42 | | | | |
| | Standard Error | 66.85 | 53.41 | 43.42 | 36.49 | | | | |
| 70 | Average | -99.82 | 62.25 | 24.75 | -0.95 | | | | |
| | Standard Error | 78.60 | 75.74 | 62.27 | 45.00 | | | | |
| 71 | Average | -164.81 | -144.31 | -25.08 | 18.23 | | | | |
| | Standard Error | 92.75 | 86.50 | 85.19 | 60.79 | | | | |
| 72 | Average | -188.88 | -156.72 | -208.28 | 60.44 | | | | |
| | Standard Error | 113.61 | 105.73 | 104.28 | 92.83 | | | | |
| 73 | Average | -85.73 | -134.89 | -175.68 | 115.59 | | | | |
| | Standard Error | 137.79 | 127.08 | 129.09 | 119.48 | | | | |
| 74 | Average | -179.35 | -96.71 | -181.42 | 216.59 | | | | |
| | Standard Error | 165.09 | 160.13 | 155.65 | 145.20 | | | | |
| 75 | Average | -190.35 | -236.15 | -183.73 | 111.64 | | | | |
| | Standard Error | 189.32 | 186.81 | 185.88 | 166.95 | | | | |
| 76 | Average | -105.34 | -333.79 | -308.91 | -46.40 | | | | |
| | Standard Error | 214.71 | 215.41 | 216.54 | 199.36 | | | | |
| 77 | Average | 112.43 | -206.88 | -251.13 | 153.51 | | | | |
| | Standard Error | 238.50 | 240.49 | 248.54 | 233.51 | | | | |
| 78 | Average | 163.67 | -108.61 | -424.93 | 381.91 | -1145.07 | 2978.24 | -4676.25 | -482.80 |
| | Standard Error | 272.67 | 269.28 | 279.48 | 275.77 | 2395.62 | 2869.68 | 1393.13 | 2206.09 |
| 79 | Average | 187.04 | -210.31 | -391.71 | 312.04 | 4005.42 | 1545.07 | -494.79 | -1043.33 |
| | Standard Error | 317.21 | 323.08 | 324.83 | 326.33 | 2721.28 | 2191.15 | 2683.89 | 1660.24 |
| 80 | Average | 203.25 | 4.81 | -212.66 | 344.08 | 790.24 | 376.47 | -292.70 | 288.70 |
| | Standard Error | 363.10 | 368.41 | 372.53 | 370.32 | 648.17 | 533.69 | 441.00 | 416.50 |
| 81 | Average | 534.52 | 313.20 | -305.86 | 717.82 | 802.59 | 415.98 | -272.36 | 784.41 |
| | Standard Error | 413.58 | 419.19 | 429.11 | 433.73 | 524.63 | 745.17 | 492.87 | 503.15 |
| 82 | Average | 285.16 | 175.47 | -262.57 | 810.47 | 326.04 | -244.34 | -160.22 | 675.16 |
| | Standard Error | 461.29 | 471.65 | 476.75 | 486.30 | 608.97 | 647.84 | 590.01 | 564.10 |
| 83 | Average | 96.07 | 419.56 | -177.34 | 543.64 | 315.48 | 254.33 | -53.64 | 462.35 |
| | Standard Error | 512.62 | 538.17 | 531.51 | 523.26 | 720.00 | 767.60 | 643.49 | 638.97 |
| 84 | Average | -76.87 | -223.19 | -123.40 | 641.35 | -287.44 | -718.61 | -288.10 | 827.40 |
| | Standard Error | 548.22 | 562.88 | 568.60 | 568.20 | 804.10 | 771.59 | 721.01 | 716.81 |
## 3.3 Measuring the Effect of Military Service on Earnings

### 3.3.1 Wald estimates

As discussed in the identification section, a simple OLS regression of the model in equation (1) may suffer from bias because elements of $s_i$ are correlated with the error term $u_{it}$. This problem can be circumvented to a certain extent by the grouping method proposed by Abraham Wald (1940). Grouping the data by the instrument, draft-eligibility status, makes it possible to uncover the effect of veteran status on earnings.
An unbiased estimate of $\alpha$ can be found by scaling the difference in mean earnings across eligibility status by the difference in the probability of becoming a veteran conditional on being draft-eligible or not. This verbal explanation translates into the following formula:

\begin{equation}
 \hat\alpha = \frac{\bar y^e - \bar y^n}{\hat{p}(V|e) - \hat{p}(V|n)}
\end{equation}

The variable $\bar y$ captures mean earnings within a certain cohort and year, with the superscript $e$ or $n$ indicating draft-eligibility status. The formula poses the problem that the conditional probabilities of being a veteran cannot be obtained from the CWHS data set alone. In **Table 2** Angrist therefore estimates them from two other sources: first from the SIPP, which has the drawback of being a rather small sample, and second by matching the CWHS data to the DMDC. The latter is problematic in that the number of people entering the army in 1970 (the year in which those born in 1950 were drafted) was only collected for the second half of that year. For this reason Angrist uses the SIPP estimates for the 1950 cohort and the larger matched DMDC/CWHS sample for the birth years 1951 to 53. The estimates needed for the denominator of equation (3) are presented in the last column of Table 2 below.
It can already be seen that the differences in earnings by eligibility that we found in Table 1 will be scaled up quite a bit to obtain the estimates for $\\hat{\\alpha}$. We will come back to that in Table 3.\n\n
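The scaling just described can be sketched numerically using the 1950-cohort, 1981 values that appear later in Table 3; the CPI deflator below is an approximate value I supply for illustration, not a number taken from the paper:

```python
# Adjusted FICA eligibility gap in current (1981) dollars and the
# first-stage probability gap, as reported for cohort 1950 in 1981.
earnings_gap_1981 = -487.8
first_stage_gap = 0.159

# Assumed deflator (roughly CPI-U 1978 / CPI-U 1981) -- illustrative only.
deflator_1978 = 0.72

wald_current = earnings_gap_1981 / first_stage_gap
effect_1978 = wald_current * deflator_1978
print(round(wald_current, 1), round(effect_1978, 1))  # -3067.9 -2208.9
```

With the rough deflator this lands close to the roughly $2000 annual loss reported in Table 3; the small discrepancy comes entirely from the approximate deflator.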
Note: The cohort 1950 estimates for the DMDC/CWHS could not be replicated, as the data for cohort 1950 is missing from the DMDC file in the replication data. In addition, the standard errors for the estimates coming from the SIPP differ slightly from the published results, but are equal to the results produced by the replication code.
\n\n\n```python\ntable2 = get_table2(data_cwhsa, data_dmdc, data_sipp)\ntable2[\"white\"]\n```\n\n\n\n\n
| Data Set | Cohort | Statistic | Sample | P(Veteran) | P(Veteran\|eligible) | P(Veteran\|ineligible) | P(V\|eligible) - P(V\|ineligible) |
|---|---|---|---|---|---|---|---|
| SIPP (84) | 1950 | Value | 351 | 0.2673 | 0.3527 | 0.1934 | 0.1594 |
| | | Standard Error | | 0.0136 | 0.0215 | 0.0166 | 0.0272 |
| | 1951 | Value | 359 | 0.1973 | 0.2831 | 0.1469 | 0.1362 |
| | | Standard Error | | 0.0124 | 0.0230 | 0.0139 | 0.0269 |
| | 1952 | Value | 336 | 0.1554 | 0.2310 | 0.1257 | 0.1053 |
| | | Standard Error | | 0.0111 | 0.0245 | 0.0119 | 0.0273 |
| | 1953 | Value | 390 | 0.1298 | 0.2192 | 0.1126 | 0.1066 |
| | | Standard Error | | 0.0102 | 0.0313 | 0.0104 | 0.0330 |
| DMDC/CWHS | 1951 | Value | 16768 | 0.1176 | 0.2071 | 0.0708 | 0.1362 |
| | | Standard Error | | 0.0025 | 0.0053 | 0.0024 | 0.0059 |
| | 1952 | Value | 17703 | 0.1515 | 0.2683 | 0.1102 | 0.1581 |
| | | Standard Error | | 0.0027 | 0.0065 | 0.0027 | 0.0071 |
| | 1953 | Value | 17749 | 0.1343 | 0.1548 | 0.1268 | 0.0280 |
| | | Standard Error | | 0.0026 | 0.0053 | 0.0029 | 0.0060 |
\n\n\n\n\n```python\ntable2[\"nonwhite\"]\n```\n\n\n\n\n
| Data Set | Cohort | Statistic | Sample | P(Veteran) | P(Veteran\|eligible) | P(Veteran\|ineligible) | P(V\|eligible) - P(V\|ineligible) |
|---|---|---|---|---|---|---|---|
| SIPP (84) | 1950 | Value | 70 | 0.1625 | 0.1957 | 0.1355 | 0.0603 |
| | | Standard Error | | 0.0281 | 0.0449 | 0.0353 | 0.0571 |
| | 1951 | Value | 63 | 0.1703 | 0.2014 | 0.1514 | 0.0500 |
| | | Standard Error | | 0.0283 | 0.0497 | 0.0340 | 0.0603 |
| | 1952 | Value | 52 | 0.1332 | 0.1449 | 0.1288 | 0.0161 |
| | | Standard Error | | 0.0265 | 0.0525 | 0.0308 | 0.0609 |
| | 1953 | Value | 55 | 0.1749 | 0.2247 | 0.1642 | 0.0605 |
| | | Standard Error | | 0.0297 | 0.0762 | 0.0321 | 0.0827 |
| DMDC/CWHS | 1951 | Value | 5258 | 0.0794 | 0.1173 | 0.0599 | 0.0574 |
| | | Standard Error | | 0.0037 | 0.0076 | 0.0040 | 0.0086 |
| | 1952 | Value | 5493 | 0.0953 | 0.1439 | 0.0794 | 0.0644 |
| | | Standard Error | | 0.0040 | 0.0095 | 0.0042 | 0.0104 |
| | 1953 | Value | 5303 | 0.0925 | 0.0984 | 0.0904 | 0.0080 |
| | | Standard Error | | 0.0040 | 0.0079 | 0.0046 | 0.0092 |
In the next step, Angrist brings together the insights gained so far. **Table 3** again presents differences in mean earnings across eligibility status, for different earnings measures, within cohort and year. The values in columns 1 and 3 are taken directly from Table 1. In column 2 we encounter the adjusted FICA measure for the first time. As a reminder, it consists of scaled-up FICA earnings: since FICA earnings are reported only up to the taxable maximum, true average earnings are likely higher, and Angrist transformed the data to account for this. We can see that the difference in mean earnings for this measure usually lies between that of pure FICA earnings and that of Total W-2 compensation. Column four repeats the probability difference from the last column of Table 2; as mentioned before, it is taken from the SIPP sample for the 1950 cohort and from the DMDC/CWHS sample for the other cohorts. Angrist excludes cohort 1953 and nonwhite males, since for those draft-eligibility does not seem to be an effective instrument (see Table 1 and Figures 1 and 2). Although Angrist does not, in this replication I also present Table 3 for nonwhites to give the reader a broader picture. Further, Angrist restricts his derivations to the years 1981 to 1984, the latest post-Vietnam-war years for which data was available; effects in those years are most likely to represent long-term effects.

Let us now look at the most crucial column of Table 3, the last one. It contains the Wald estimate from equation (3) of the effect of veteran status on adjusted FICA earnings, in 1978 dollar terms, per year and cohort. This is our $\hat{\alpha}$ per year and cohort.
For white males the point estimates indicate that the annual loss in real earnings due to serving in the military was around 2000 dollars.
Looking at the high standard errors, though, only few of the estimates are actually statistically significant. In order to see this more clearly I added a star to those values in the last column that are statistically significant to the five percent level.\n\n
\nNote: In the last column I obtain slightly different standard errors than in the paper. The same is the case, though, in the replication code my replication is building up on.\n
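The Wald estimator behind the last column, and the significance stars, can be sketched as follows. The numbers below are purely illustrative (only roughly the order of magnitude of the table), and the two helper functions are my own, not the project's `get_table3`/`p_value_star`:

```python
def wald_estimate(y_elig, y_inelig, p_elig, p_inelig):
    """Wald/IV estimate: earnings difference across eligibility status,
    scaled by the difference in the probability of being a veteran."""
    return (y_elig - y_inelig) / (p_elig - p_inelig)

def star_if_significant(value, se, critical=1.96):
    """Append a '*' when the value is significant at the 5% level."""
    return f"{value:.1f}*" if abs(value) > critical * se else f"{value:.1f}"

# Illustrative inputs only, not estimates from the data
effect = wald_estimate(y_elig=15300.0, y_inelig=15650.0,
                       p_elig=0.35, p_inelig=0.19)
print(star_if_significant(effect, se=1000.0))
```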
```python
table3 = get_table3(data_cwhsa, data_cwhsb, data_dmdc, data_sipp, data_cwhsc_new)
p_value_star(table3["white"], slice(None), ("", "Service Effect in 1978 $"))
```
**Draft Eligibility Effects in Current $** (white males)

| Cohort | Year | Statistic | FICA Earnings | Adjusted FICA Earnings | Total W-2 Earnings | P(V\|eligible) - P(V\|ineligible) | Service Effect in 1978 $ |
|---|---|---|---|---|---|---|---|
| 1950 | 1981 | Value | -435.8 | -487.8 | -589.7 | 0.159 | -2195.3* |
|  |  | Standard Error | 210.6 | 237.6 | 299.4 | 0.027 | 1069.5 |
|  | 1982 | Value | -320.2 | -396.1 | -305.5 |  | -1679.0 |
|  |  | Standard Error | 235.9 | 281.7 | 345.5 |  | 1194.1 |
|  | 1983 | Value | -349.6 | -450.1 | -512.9 |  | -1849.3 |
|  |  | Standard Error | 261.7 | 302.0 | 441.2 |  | 1240.7 |
|  | 1984 | Value | -484.4 | -638.8 | -1143.3 |  | -2517.1 |
|  |  | Standard Error | 286.8 | 336.6 | 492.3 |  | 1326.3 |
| 1951 | 1981 | Value | -358.3 | -428.8 | -71.6 | 0.136 | -2258.3* |
|  |  | Standard Error | 203.7 | 216.7 | 423.4 | 0.027 | 1141.2 |
|  | 1982 | Value | -117.3 | -278.6 | -72.8 |  | -1382.1 |
|  |  | Standard Error | 229.1 | 251.5 | 372.2 |  | 1247.5 |
|  | 1983 | Value | -314.1 | -452.2 | -896.6 |  | -2174.4 |
|  |  | Standard Error | 253.3 | 277.7 | 426.4 |  | 1335.3 |
|  | 1984 | Value | -398.5 | -573.4 | -809.2 |  | -2644.3 |
|  |  | Standard Error | 279.3 | 308.0 | 381.0 |  | 1420.3 |
| 1952 | 1981 | Value | -342.9 | -392.7 | -440.5 | 0.105 | -2675.1 |
|  |  | Standard Error | 206.9 | 220.3 | 265.1 | 0.027 | 1500.6 |
|  | 1982 | Value | -235.1 | -255.3 | -514.7 |  | -1638.2 |
|  |  | Standard Error | 232.4 | 254.0 | 296.6 |  | 1630.1 |
|  | 1983 | Value | -437.7 | -500.1 | -915.7 |  | -3110.0 |
|  |  | Standard Error | 257.6 | 283.3 | 395.3 |  | 1761.9 |
|  | 1984 | Value | -436.1 | -560.1 | -767.2 |  | -3340.9 |
|  |  | Standard Error | 281.9 | 310.8 | 376.1 |  | 1853.8 |
Looking at nonwhite males now, we observe what we already expected: all of the Wald estimates are far from being statistically significant.

```python
p_value_star(table3["nonwhite"], slice(None), ("", "Service Effect in 1978 $"))
```
**Draft Eligibility Effects in Current $** (nonwhite males)

| Cohort | Year | Statistic | FICA Earnings | Adjusted FICA Earnings | Total W-2 Earnings | P(V\|eligible) - P(V\|ineligible) | Service Effect in 1978 $ |
|---|---|---|---|---|---|---|---|
| 1950 | 1981 | Value | 534.5 | 654.0 | 802.6 | 0.067 | 780.5 |
|  |  | Standard Error | 413.6 | 495.2 | 524.6 | 0.057 | 5891.3 |
|  | 1982 | Value | 285.2 | 335.4 | 326.0 |  | 3758.5 |
|  |  | Standard Error | 461.3 | 529.8 | 609.0 |  | 5937.0 |
|  | 1983 | Value | 96.1 | 169.1 | 315.5 |  | 1836.3 |
|  |  | Standard Error | 512.6 | 551.6 | 720.0 |  | 5990.4 |
|  | 1984 | Value | -76.9 | -65.1 | -287.4 |  | -677.8 |
|  |  | Standard Error | 548.2 | 601.9 | 804.1 |  | 6269.8 |
| 1951 | 1981 | Value | 313.2 | 401.5 | 416.0 | 0.055 | 760.5 |
|  |  | Standard Error | 419.2 | 446.6 | 745.2 | 0.06 | 6407.4 |
|  | 1982 | Value | 175.5 | 228.1 | -244.3 |  | 3081.9 |
|  |  | Standard Error | 471.6 | 524.4 | 647.8 |  | 7087.0 |
|  | 1983 | Value | 419.6 | 398.9 | 254.3 |  | 5224.8 |
|  |  | Standard Error | 538.2 | 558.8 | 767.6 |  | 7318.6 |
|  | 1984 | Value | -223.2 | -293.5 | -718.6 |  | -3687.0 |
|  |  | Standard Error | 562.9 | 598.1 | 771.6 |  | 7513.4 |
| 1952 | 1981 | Value | -305.9 | -316.5 | -272.4 | 0.016 | -14104.0 |
|  |  | Standard Error | 429.1 | 454.8 | 492.9 | 0.061 | 20262.8 |
|  | 1982 | Value | -262.6 | -502.6 | -160.2 |  | -21092.7 |
|  |  | Standard Error | 476.8 | 524.1 | 590.0 |  | 21993.9 |
|  | 1983 | Value | -177.3 | -275.9 | -53.6 |  | -11221.1 |
|  |  | Standard Error | 531.5 | 546.6 | 643.5 |  | 22235.2 |
|  | 1984 | Value | -123.4 | -99.8 | -288.1 |  | -3892.0 |
|  |  | Standard Error | 568.6 | 600.3 | 721.0 |  | 23420.2 |
### 3.3.2 More complex IV estimates

In the next step Angrist uses a more general version of the Wald estimate for the given data. While the previous analysis compared mean earnings across only two groups (eligibles and ineligibles, determined by the lottery numbers), this is now extended to finer subgroups. The grouping is based on intervals of five consecutive lottery numbers. As explained in the section on identification, this boils down to estimating the model described in equation (2):

\begin{equation*}
\bar y_{ctj} = \beta_c + \delta_t + \hat p_{cj} \alpha + \bar u_{ctj}
\end{equation*}

$\bar y_{ctj}$ captures the mean earnings in cohort $c$, year $t$ and group $j$. $\hat p_{cj}$ denotes the estimated probability of being a veteran conditional on being in cohort $c$ and group $j$. We are now interested in obtaining an estimate of $\alpha$. In this setup $\alpha$ corresponds to a linear combination of the many different possible Wald estimates obtained by comparing the subgroups in pairs. With this view in mind, Angrist restricts the treatment effect to be the same (i.e. equal to $\alpha$) for each comparison of subgroups. The above equation is equivalent to the second stage of a 2SLS estimation. Angrist estimates the model using mean real earnings averaged over the years 1981 to 1984 and the cohorts 1950 to 1953. In the first stage he estimates $\hat p_{cj}$, again using a combination of the SIPP sample and the matched DMDC/CWHS data set. With this at hand, Angrist shows what equation (2) looks like when estimated by OLS. The resulting **Figure 3** is also called Visual Instrumental Variables (VIV). To arrive there, he takes the residuals from OLS regressions of $\bar y_{ctj}$ and of $\hat p_{cj}$ on cohort and time dummies, respectively. Then he regresses the earnings residuals on the probability residuals. This is depicted in Figure 3 below. The slope of the regression line corresponds to an IV estimate of $\alpha$; it amounts to -2384 dollars and serves as a reference for the treatment effect measured by the more efficient method described below the figure.

```python
get_figure3(data_cwhsc_new)
```

Let us briefly return to an earlier remark. Angrist is forced to work with sample means only, due to confidentiality restrictions on the underlying micro data. For the Wald estimates it is fairly easy to see that this poses no problem; for the 2SLS estimation of $\alpha$ above it is less obvious. Angrist argues, though, that there is a Generalized Method of Moments (GMM) interpretation of the 2SLS approach which allows him to work with sample moments alone. Another important implication is that he is not restricted to a single sample for obtaining the sample moments. In our concrete case it is therefore unproblematic that the earnings data comes from a different sample than the conditional probabilities of being a veteran, as both samples are drawn from the same population. This is a characteristic of GMM.

In the following, Angrist estimates equation (2) by the more efficient Generalized Least Squares (GLS) instead of OLS. GLS is more efficient when the residuals of a regression model are correlated. Angrist argues that this is the case in the above model equation and that this correlation can be estimated. GLS works as follows: starting from the estimated covariance matrix $\hat\Omega$ of the residuals, the regressors and the dependent variable are transformed using the upper triangle of the Cholesky decomposition of $\hat\Omega^{-1}$. The transformed variables are then used in a regular OLS regression with nonrobust standard errors. The resulting estimate $\hat\alpha$ is then the most efficient one (if the residuals are indeed correlated).

Angrist states that the optimal weighting matrix $\Omega$, resulting in the most efficient estimate $\hat\alpha$, looks as follows:

\begin{equation}
 \Omega = V(\bar y_{ctj}) + \alpha^2 V(\hat p_{cj}).
\end{equation}

All three elements on the right hand side can be estimated from the data at hand.

Now we have all the ingredients to look at the results in **Table 4**. In practice, Angrist estimates two models of the above form. Model 1 allows the treatment effect to vary by cohort while Model 2 collapses it into a scalar estimate of $\alpha$. The results for white men in Model 1 show that, for each of the three earnings measures as dependent variable, only few estimates are statistically significant at the five percent level (again indicated by a star added by me). A look at Model 2 reveals, though, that the combined treatment effect is significant and amounts to an annual loss of about 2000 dollars (again in 1978 dollar terms) for those having served in the army. For the cohort of 1953 we obtain insignificant estimates, which was to be expected given that nobody was actually drafted in that year.

Note: The results are again slightly different from those in the paper. The same is true, though, for the replication code this replication builds on.
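The GLS step described above — whitening with the Cholesky factor of $\hat\Omega^{-1}$, then running plain OLS — can be sketched like this. The data is synthetic; the construction of the actual $\hat\Omega$ used in the replication is assumed, not reproduced:

```python
import numpy as np

def gls_via_cholesky(X, y, omega):
    """GLS as OLS on transformed data: with L L' = omega^{-1}
    (L lower triangular), premultiply X and y by L'."""
    L = np.linalg.cholesky(np.linalg.inv(omega))
    Xt, yt = L.T @ X, L.T @ y
    beta, *_ = np.linalg.lstsq(Xt, yt, rcond=None)
    return beta

# Synthetic check: with omega = identity, GLS collapses to OLS
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, -2.0]) + rng.normal(size=50)
beta_gls = gls_via_cholesky(X, y, np.eye(50))
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_gls, beta_ols))
```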
```python
table4 = get_table4(data_cwhsc_new)
p_value_star(table4["white"], (slice(None), slice(None), ["Value", "Standard Error"]), (slice(None)))
```
**Table 4** (white males)

| Model | Cohort | Statistic | FICA Taxable Earnings | Adjusted FICA Earnings | Total W-2 Compensation |
|---|---|---|---|---|---|
| Model 1 | 1950 | Value | -1709.2 | -2093.7 | -1895.0 |
|  |  | Standard Error | 946.8 | 1109.2 | 1336.9 |
|  | 1951 | Value | -1457.1 | -1983.7 | -2431.4* |
|  |  | Standard Error | 954.7 | 1036.5 | 1155.4 |
|  | 1952 | Value | -1724.0* | -1943.0* | -2058.7* |
|  |  | Standard Error | 863.3 | 927.5 | 1004.8 |
|  | 1953 | Value | 1223.8 | 900.7 | -488.6 |
|  |  | Standard Error | 3232.5 | 3506.6 | 3947.4 |
|  |  | Chi Squared | 578.3 | 630.3 | 569.5 |
| Model 2 | 1950-53 | Value | -1562.9* | -1920.4* | -2094.5* |
|  |  | Standard Error | 521.7 | 576.8 | 649.1 |
|  |  | Chi Squared | 579.1 | 631.0 | 569.7 |
Angrist also reports these estimates for nonwhite men; none of them is significant. This was already expected, as the instrument was not clearly correlated with the endogenous variable of veteran status.

```python
p_value_star(table4["nonwhite"], (slice(None), slice(None), ["Value", "Standard Error"]), (slice(None)))
```
**Table 4** (nonwhite males)

| Model | Cohort | Statistic | FICA Taxable Earnings | Adjusted FICA Earnings | Total W-2 Compensation |
|---|---|---|---|---|---|
| Model 1 | 1950 | Value | 3893.7 | 3871.9 | 5711.8 |
|  |  | Standard Error | 5355.1 | 6246.9 | 7206.5 |
|  | 1951 | Value | -891.3 | -333.4 | 2609.0 |
|  |  | Standard Error | 4399.6 | 4667.1 | 4887.1 |
|  | 1952 | Value | -3182.9 | -3457.7 | -3068.0 |
|  |  | Standard Error | 3994.9 | 4194.9 | 4222.7 |
|  | 1953 | Value | -5928.3 | -8571.5 | -6325.8 |
|  |  | Standard Error | 10302.3 | 10652.6 | 11393.0 |
|  |  | Chi Squared | 616.7 | 681.7 | 693.6 |
| Model 2 | 1950-53 | Value | -643.3 | -999.7 | 367.8 |
|  |  | Standard Error | 2406.8 | 2602.6 | 2733.8 |
|  |  | Chi Squared | 618.4 | 683.4 | 695.6 |
This table concludes the replication of the core results of the paper. Summing up, Angrist constructed a causal graph for which he employs a plausible estimation strategy. Using this approach he arrives at his main result: a negative effect of serving in the military during the Vietnam era on subsequent earnings for white males in the United States.

Angrist provides some interpretation of the effect he found and addresses some concerns a reader of his paper might raise. I will discuss some of his points in the following critical assessment.

# 4. Critical Assessment
---

Considering the time back then and the consequently different state of research, the paper was a major contribution to the instrumental variable estimation of treatment effects. More broadly, the paper is very conclusive and well written. Angrist discusses caveats quite thoroughly, which at first glance makes the whole argumentation very concise. Methodologically, the paper is quite complex due to the kind of data available. Angrist is quite innovative in that regard, as in this paper he comes up with the two sample IV method, which allows him to practically follow his identification strategy. The attempt to explain the mechanisms behind the negative treatment effect makes the paper comprehensive and shows the great sense of detail Angrist put into it.

While keeping the merits of the paper in mind, in hindsight Angrist is a bit too vocal about the relevance and accuracy of his findings. Given our knowledge about the local average treatment effect (**LATE**) from the lecture, under individual-level treatment effect heterogeneity (and assuming the causal graph from before is accurate) Angrist only identifies the average treatment effect of the compliers: those who enroll in the army if they are draft-eligible but do not if they are not. Hence, the interpretation of the results yields only limited policy implications. For the discussion of veteran compensation, the group of those who were induced by the lottery to join the military is not the crucial one. As there is no draft lottery anymore, what we are interested in is how to compensate veterans who "voluntarily" decided to serve in the military. This question cannot be answered by Angrist's approach under the realistic assumption of treatment effect heterogeneity (which Angrist himself argues might be warranted).

A related difficulty of interpretation arises because in the second part Angrist uses an overidentified model. As discussed before, this amounts to a linear combination of the average treatment effects of subgroups. It mixes the LATEs of several subgroups, making the policy implications even more blurred, as the individual contributions of the different subgroups are unclear. In this example it might not make a big difference, but it should be kept in mind when using entirely different instrumental variables to identify the LATE.

In a last step, there are several possible scenarios in which **the given causal graph might be violated**. Angrist himself delivers one of them. After the lottery numbers were drawn, some time passed between the drawing and the announcement of the draft-eligibility ceiling. This provoked behavioral responses: some individuals with low numbers volunteered for the army to get better terms of service, while others enrolled in university, which rendered them ineligible for the draft. In our data we cannot observe the fraction of individuals in each group who joined university. If there was such avoidance behavior among those with low lottery numbers, the instrument would be questionable, as there would be a path from the Draft Lottery through unobservables (University) to earnings. At the same time there is clearly a relation between University and Military Service.
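The LATE point can be made concrete with a small simulation of my own construction (not part of the replication): under heterogeneous effects, the Wald ratio recovers the compliers' average effect, not the population average effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Principal strata: 0 = never-taker, 1 = complier, 2 = always-taker
stratum = rng.choice([0, 1, 2], size=n, p=[0.6, 0.25, 0.15])
z = rng.integers(0, 2, size=n)                  # draft eligibility (instrument)
d = np.where(stratum == 2, 1,                   # always-takers serve anyway
             np.where(stratum == 1, z, 0))      # compliers follow the lottery

# Heterogeneous treatment effects: never-takers, compliers, always-takers
effect = np.array([-500.0, -2000.0, -3500.0])[stratum]
y = 15000.0 + effect * d + rng.normal(0.0, 1000.0, size=n)

wald = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
print(round(wald))  # close to the complier effect of -2000, not the overall mean effect
```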
Rosenzweig and Wolpin (2000) provide a causal graph that draws the general interpretability of the results in Angrist (1990) further into question. Consider the causal graph below, first imagining that there was no directed edge from Draft Lottery to Civilian Experience. Their argument is that Military Service affects Wages directly, but also indirectly: it reduces Schooling and Civilian Experience, both of which feed into Wages. These subtle mechanisms are all collapsed into one measure by Angrist, which gives an overly shallow answer to potentially more complex policy questions. Building on this causal graph, Heckman (1997) challenges the validity of the instrument in general by arguing that there might well be a directed edge from Draft Lottery to Civilian Experience: employers, after learning about their employees' lottery numbers, decrease on-the-job training for those with a high risk of being drafted. If this is warranted, the instrument Draft Lottery can no longer produce unbiased estimates.

Morgan and Winship (2014) add that the bias this introduces depends on how strongly Draft Lottery affects Military Service. Since the lottery alone does not determine military service (there are also tests), the instrument might be rather weak and the resulting bias therefore rather strong.

# 5. Extensions
---

## 5.1 Treatment effect with different years of earnings

In the calculation of the average treatment effect in Table 4, Angrist chooses earnings in the years 1981 to 1984. While he plausibly argues that this most likely constitutes a long-term effect (those being the last years for which he has data), it does not give a complete picture.
Looking at Table 1 again, we can see that quite large estimates also arise for the earnings differences in years before 1981. Assuming that the difference in the probability of serving given eligibility versus ineligibility stays fairly stable across years, we would expect some heterogeneity in average treatment effects depending on which years of earnings data we use. Angrist does not investigate this, although he has the data for it at hand. From a policy perspective one could easily argue that the average treatment effect for earlier years (close to the years in which treatment happened) might be more relevant than the one for later years. Given the long time between the actual service and the earnings data of 1981 to 1984, it is likely that second round effects drive some of the results: these might initially be caused by veteran status, but in later years the effect of veteran status might act mainly through other variables. For instance, veterans after the war might be forced to take simple jobs due to their lack of work experience, and from then on their path is determined by the low quality of the job they had to take right after the war. For policy makers it might be of interest to see what happens to veterans right after service, in order to see what needs to be done to stop second round effects from arising in the first place.

To give a more complete picture, I estimate the results of Table 4 for different years of earnings for white men. As mentioned before, the quality of the Total W-2 data is rather low and the adjusted FICA data is more plausible than the raw FICA data, which is why I only use the adjusted FICA data in the following. For the adjusted FICA measure I have data for Table 4 for the years 1974 to 1984. For each possible four-year range within those years I estimate Model 1 and 2 from Table 4 again.

Below I plot the average treatment effects obtained. On the x-axis I indicate the starting year of the range of adjusted FICA data used. A starting year of 74 means that the average treatment effect is calculated for earnings data of the years 1974 to 1977. The results at starting year 81 are equivalent to those found by Angrist in Table 4 for white men.

```python
# Average treatment effects of Model 1 and 2 with adjusted FICA earnings
# for several different four-year ranges
results_model1 = np.empty((8, 4))
results_model2 = np.array([])
for number, start_year in enumerate(np.arange(74, 82)):
    years = np.arange(start_year, start_year + 4)
    flex_table4 = get_flexible_table4(data_cwhsc_new, years, ["ADJ"], [50, 51, 52, 53])
    results_model1[number, :] = flex_table4["white"].loc[("Model 1", slice(None), "Value"), :].values.flatten()
    results_model2 = np.append(results_model2, flex_table4["white"].loc[("Model 2", slice(None), "Value"), :].values)
```

```python
# Plot the effects for white men in Model 1 and 2
# (colors apart from cohort 1950 are random; execute again to change them)
get_figure1_extension1(results_model1, results_model2)
```

The pattern is more complex than the glimpse in Table 4 of the paper suggests. There is quite some heterogeneity in average treatment effects across cohorts when looking at the data for early years; this changes when using data from later years. Further, being a veteran does seem to play a role for the cohort of 1953 right after the war, but the treatment effect becomes insignificant for later years. This is interesting, as 1953 was the cohort from which no one was drafted (remember that no one was drafted in 1973, the last call being in December 1972).

Another observation is linked to the fact that draft eligibility does not matter for those born in 1953. These men appear to have joined the army voluntarily, as none of them could possibly have been drafted. This cannot be said for the earlier cohorts. Employers can only observe whether a person is a veteran and when he was born (not whether he is a complier). A theory could be that employers act on the loss of experience when setting initial wages for every army veteran right after the war. Since the cohort of 1953 consists only of volunteers and not draftees, its members might enjoy a boost in social status that lets them catch up again in the long run. This mechanism might explain to some extent why we observe the upward sloping line for the cohort of 1953 (but not for the other cohorts).

As discussed in the critical assessment, we actually only capture the local average treatment effect of the compliers: those who join the army when draft-eligible but do not when they are not. Identification of the LATE requires monotonicity, i.e. that there are no defiers. This is probably not warranted for the cohort of 1953: in that year it is easily imaginable that there are both defiers and compliers, which means that we do not capture the LATE for the cohort of 1953 in Model 1, nor for the cohorts 1950-53 in Model 2, but something else that we do not really know how to interpret. This might be another reason for the peculiar pattern for the cohort of 1953. Following up on this remark, I estimate Model 2 again excluding the cohort of 1953, to focus on the cohorts for which the assumptions behind the LATE are likely to hold.

```python
results_model2_53 = np.array([])
for number, start_year in enumerate(np.arange(74, 82)):
    years = np.arange(start_year, start_year + 4)
    flex_table4 = get_flexible_table4(data_cwhsc_new, years, ["ADJ"], [50, 51, 52])
    results_model2_53 = np.append(results_model2_53, flex_table4["white"].loc[("Model 2", slice(None), "Value"), :].values)
```

```python
get_figure2_extension1(results_model2, results_model2_53)
```

We can see that for later years the treatment effect is a bit lower when excluding the cohort of 1953. It confirms Angrist's findings, with the advantage of allowing a clearer interpretation.

Following the above path, it would also be interesting to vary the number of instruments used beyond the two ways Angrist has shown, for example by breaking down the interval size of the lottery numbers further. Unfortunately, I could not find a way to do that with the already pre-processed data at hand.

## 5.2 Bias Quantification

In the critical assessment I argued that the simple Wald estimate might be biased because employers know their employees' birth dates and hence their draft eligibility. The argument was that employers invest less in the human capital of those who might be drafted. This would invalidate the instrument of draft eligibility and hence introduce bias. For a binary instrument this bias can be calculated in the following way:

\begin{align}
 \frac{E[Y|Z=1] - E[Y|Z=0]}{E[D|Z=1] - E[D|Z=0]} = \delta + \frac{E[\epsilon|Z=1] - E[\epsilon|Z=0]}{E[D|Z=1] - E[D|Z=0]}
\end{align}

In the last column of Table 3 (the Wald estimate), Angrist calculated the left hand side of this equation.
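As a purely illustrative back-of-the-envelope version of this decomposition (every number below is invented for the example, none is an estimate from the data):

```python
# Illustrative Wald bias decomposition for a binary instrument.
eps_diff = -46.0   # hypothetical effect of eligibility on earnings through
                   # unobservables (e.g. lost on-the-job training), in dollars
p_diff = 0.15      # hypothetical P(D=1|Z=1) - P(D=1|Z=0)
delta = -2000.0    # hypothetical true effect of service on earnings

bias = eps_diff / p_diff     # second term on the right hand side
wald = delta + bias          # what the naive Wald ratio would deliver
print(bias, wald)
```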
This calculation yields an unbiased estimate of the treatment effect $\delta$ of $D$ (veteran status) on $Y$ (earnings) only if the instrument $Z$ (draft eligibility) does not affect $Y$ through unobservables $\epsilon$. In our argumentation this assumption does not hold: $E[\epsilon|Z=1] - E[\epsilon|Z=0]$ is not zero, as draft eligibility affects $Y$ through the behavioral change of employers who make human capital investment dependent on draft eligibility. The left hand side is therefore not equal to the true treatment effect $\delta$ but has to be adjusted by the bias $\frac{E[\epsilon|Z=1] - E[\epsilon|Z=0]}{E[D|Z=1] - E[D|Z=0]}$.

In this section I run a thought experiment quantifying this bias. The argumentation is rather heuristic, as I lack the resources to find a truly robust estimate of the bias, but it gives a rough idea of whether the bias might matter economically. My idea is the following. To get a measure of $E[\epsilon|Z=1] - E[\epsilon|Z=0]$, I look at estimates of the effect of work experience on earnings. Remember that the expected difference in earnings due to a difference in draft eligibility is caused by a loss in human capital for the draft eligible, because they might miss out on on-the-job training. This loss in on-the-job training can be approximated by a general loss in work experience. For an estimate of that effect I rely on Keane and Wolpin (1997), who work with a sample of young men between 14 and 21 years old from the year 1979. The effect of work experience on real earnings there should at least not be far off the corresponding effect in our sample of adjusted FICA real earnings for the years 1981 to 1984. Remember that lottery participants find out whether they are draft eligible at the end of the year before they might be drafted. I assume that draft dates are spread evenly over the draft year. One could then argue that, on average, a draft eligible man stays in his job for another half year after learning about his eligibility and before being drafted. Hence, for half a year on average, an employer might invest less in the human capital of this draft eligible man. I now assume a quite moderate behavioral response by employers: during those six months the employees receive only a five-month equivalent of human capital (or work experience) gain, as opposed to the six months they stay in the company. They thus lose one month of work experience on average compared to those who are not draft eligible.

To quantify this one-month loss of work experience I take estimates from Keane and Wolpin (1997). For blue collar workers they estimate the gain in real earnings from an additional year of blue collar work experience at roughly 4.6 percent (their estimated effect actually depends on the years of work experience already accumulated, but I simplify this for my rough calculations). For white collar workers the equivalent estimate is roughly 2.7 percent. I take these as upper and lower bounds, calculate their one-month counterparts, and quantify the bias in the Wald estimates of the last column of Table 3. The bias $\frac{E[\epsilon|Z=1] - E[\epsilon|Z=0]}{E[D|Z=1] - E[D|Z=0]}$ is then roughly equal to the loss in annual real earnings due to one month less of work experience, divided by the difference in the probability of being a veteran conditional on draft eligibility.

The first figure below depicts how the bias changes by cohort across the different years of real earnings as the assumed effect of lost experience on real earnings increases. Clearly, with larger estimates of how strongly work experience contributes to real earnings, the bias gets stronger; this is logical, as it is equivalent to an absolute increase in the numerator. Beyond that, the bias is stronger for later years of earnings, as real earnings increase by year. Further, the slope is steeper for later cohorts, as the denominator is smaller for them. Given the still moderate assumption of one month of lost work experience, we can see that the bias does not seem to be economically negligible, especially when taking the blue collar percentage estimate.

```python
# Calculate the bias, the true delta and the original Wald estimate
# for a certain interval of the working experience effect
interval = np.linspace(0.025, 0.05, 50) / 12
bias, true_delta, wald = get_bias(data_cwhsa, data_cwhsb, data_dmdc, data_sipp, data_cwhsc_new, interval)
```

```python
# Plot the bias by cohort
get_figure1_extension2(bias, interval)
```

To get a sense of how the size of the bias relates to the size of the previously estimated Wald coefficients, let us have a look at the figure below. For each cell of a cohort and year combination it shows the Wald estimate from Table 3 as the horizontal line and the true $\delta$, depending on the weight of the loss in work experience, as the upward sloping line. Given that our initial estimates of the Wald coefficients are in a range of only a few thousand dollars, an estimated bias of roughly 200 to 500 dollars cannot be called inconsiderable. Further, given Angrist's policy question concerning veteran compensation, even an estimate that is higher by 200 dollars makes a big difference when it comes to compensating thousands of veterans.

```python
# Plot the true delta (accounting for the bias) compared to the original Wald estimate
get_figure2_extension2(true_delta, wald, interval)
```

# 6. Conclusion
---

Regarding its overall quality and structure, Angrist (1990) is a real treat to read. The controversy after its publication and the fact that it is highly cited clearly show how important its contribution was and still is.
It is a great piece of discussion when it comes to the interpretability and policy relevance of instrumental variable approaches. As reiterated in the critical assessment, one has to acknowledge the care Angrist put into this work. Although his results did not prove entirely reliable, the paper opened a whole discussion on how to use instrumental variables and how to get the most out of them. Another contribution that should not go unnoticed is Angrist's demonstration that instruments can be used even though they might not come from the same sample as the dependent and the endogenous variable. Practically, this is very useful, as it widens the possible areas of application for instrumental variables.

Overall, the paper has some shortcomings, but the care put into it and its good readability allowed other researchers (and Angrist himself) to swoop in with helpful remarks that improved the understanding of instrumental variable approaches for treatment effect evaluation.

# References

**Angrist, J.** (1990). [Lifetime Earnings and the Vietnam Era Draft Lottery: Evidence from Social Security Administrative Records](https://www.jstor.org/stable/2006669?seq=1#metadata_info_tab_contents). *American Economic Review*, 80(3), 313-336.

**Angrist, J. D., & Pischke, J.-S.** (2009). Mostly Harmless Econometrics: An Empiricist's Companion. Princeton: Princeton University Press.

**Heckman, J.** (1997). Instrumental Variables: A Study of Implicit Behavioral Assumptions Used in Making Program Evaluations. *The Journal of Human Resources*, 32(3), 441-462. doi:10.2307/146178

**Keane, M., & Wolpin, K.** (1997). The Career Decisions of Young Men. *Journal of Political Economy*, 105(3), 473-522. doi:10.1086/262080

**Morgan, S., & Winship, C.** (2014). Counterfactuals and Causal Inference: Methods and Principles for Social Research (Analytical Methods for Social Research). Cambridge: Cambridge University Press. doi:10.1017/CBO9781107587991

**Rosenzweig, M. R., & Wolpin, K. I.** (2000). "Natural 'Natural Experiments' in Economics." *Journal of Economic Literature*, 38, 827-874.

**Wald, A.** (1940). The Fitting of Straight Lines if Both Variables are Subject to Error. *Annals of Mathematical Statistics*, 11(3), 284-300.

# Appendix

### Key Variables in the Data Sets

#### data_cwhsa

| **Name** | **Description** |
|-----------------|--------------------------------------------|
| **index** | |
| byr | birth year |
| race | ethnicity, 1 for white and 2 for nonwhite |
| interval | interval of draft lottery numbers, 73 intervals of five consecutive numbers |
| year | year for which earnings are collected |
| **variables** | |
| vmn1 | nominal earnings |
| vfin1 | fraction of people with zero earnings |
| vnu1 | sample size |
| vsd1 | standard deviation of earnings |

#### data_cwhsb

| **Name** | **Description** |
|-----------------|--------------------------------------------|
| **index** | |
| byr | birth year |
| race | ethnicity, 1 for white and 2 for nonwhite |
| interval | interval of draft lottery numbers, 73 intervals of five consecutive numbers |
| year | year for which earnings are collected |
| type | source of the earnings data, "TAXAB" for FICA and "TOTAL" for Total W-2 |
| **variables** | |
| vmn1 | nominal earnings |
| vfin1 | fraction of people with zero earnings |
| vnu1 | sample size |
| vsd1 | standard deviation of earnings |

#### data_cwhsc_new

| **Name** | **Description** |
|-----------------|--------------------------------------------|
| **index** | |
| byr | birth year |
| race | ethnicity, 1 for white and 2 for nonwhite |
| interval | interval of draft lottery numbers, 73 intervals of five consecutive numbers |
| year | year for which earnings are collected |
| type | source of the earnings data, "ADJ" for adjusted FICA, "TAXAB" for FICA and "TOTAL" for Total W-2 |
| **variables** | |
| earnings | real earnings in 1978 dollars |
| nj | sample size |
| nj0 | number of persons in the sample with zero earnings |
| iweight_old | weight for weighted least squares |
| ps_r | fraction of people having served in the army |
| ern74 to ern84 | unweighted covariance matrix of the real earnings |

#### data_dmdc

| **Name** | **Description** |
|-----------------|--------------------------------------------|
| **index** | |
| byr | birth year |
| race | ethnicity, 1 for white and 2 for nonwhite |
| interval | interval of draft lottery numbers, 73 intervals of five consecutive numbers |
| **variables** | |
| nsrvd | number of people having served |
| ps_r | fraction of people having served |

#### data_sipp (this is the only micro data set)

| **Name** | **Description** |
|-----------------|--------------------------------------------|
| **index** | |
| u_brthyr | birth year |
| nrace | ethnicity, 0 for white and 1 for nonwhite |
| **variables** | |
| nvstat | 0 if the man is not a veteran, 1 if he is |
| fnlwgt_5 | fraction of people with this index among the overall sample |
| rsncode | 1 if the person was draft eligible, other values if not |
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Angrist_1990.ipynb", "max_forks_repo_name": "Pascalheid/microeconometrics-course-project-Pascalheid", "max_forks_repo_head_hexsha": "acc374e582686a50c2026785b727d3ecbd8eed88", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 144.7288679245, "max_line_length": 101464, "alphanum_fraction": 0.7590941031, "converted": true, "num_tokens": 84897, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4532618480153861, "lm_q2_score": 0.25683200276421697, "lm_q1q2_score": 0.11641214820240174}} {"text": "```python\n# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)\n\n# Toggle cell visibility\n\nfrom IPython.display import HTML\ntag = HTML('''\nToggle cell visibility here.''')\ndisplay(tag)\n\n# Hide the code completely\n\n# from IPython.display import HTML\n# tag = HTML('''''')\n# display(tag)\n```\n\n\n\nToggle cell visibility here.\n\n\n\n```python\n%matplotlib notebook\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport sympy as sym\nimport scipy.signal as signal\nfrom ipywidgets import widgets, interact\n```\n\n## PID krmilnik - zaprtozan\u010dni sistem\n\nProporcionalno-integrirni-diferencirni (PID) krmilni algoritem je najpogosteje uporabljen krmilni algoritem. Njegovo prenosno funkcijo zapi\u0161emo kot:\n\n\\begin{equation}\n P(s)=K_p \\cdot \\left( 1 + \\frac{1}{T_i s} + T_d s \\right).\n\\end{equation}\n\nPrenosna funkcija je sestavljena iz vsote proporcionalne, integrirne in diferencirne komponente. Ni nujno, da so v izbranem krmilniku prisotne vse tri komponente; \u010de ni diferencirne ali integrirne komponente, govorimo tako o PI oz. PD krmilniku. 
V tem interaktivnem primeru je prikazan odziv P, PI, PD in PID krmilnika na enotsko sko\u010dno, enotsko impulzno in sinusno funkcijo ter enotsko rampo. Krmilnik je v tem primeru del krmilnega sistema s povratno zvezo. Objekt je lahko proporcionalni objekt ni\u010dtega ali prvega reda, ali pa integrirni objekt ni\u010dtega ali prvega reda. \n\nSpodnji grafi prikazujejo:\n1. Odziv zaprtozan\u010dnega sistema na izbran vstopni signal, izbran tip objekta in izbrani krmilnik (levi graf).\n2. Lego ni\u010del in polov prenosne funkcije rezultirajo\u010dega zaprtozan\u010dnega sistema.\n\n---\n\n### Kako upravljati s tem interaktivnim primerom?\n1. Izberi vstopni signal s preklapljanjem med *enotsko sko\u010dno funkcijo*, *enotsko impulzno funkcijo*, *enotsko rampo* in *sinusno funkcijo*.\n2. Izberi tip objekta: *P0* (proporcionalni objekt ni\u010dtega reda), *P1* (proporcionalni objekt prvega reda), *I0* (integrirni objekt ni\u010dtega reda) ali *I1* (integrirni objekt prvega reda). Prenosna funkcija objekta P0 je $k_p$ (v tem interaktivnem primeru $k_p=2$), P1 objekta $\\frac{k_p}{\\tau s+1}$ (v tem interaktivnem primeru $k_p=1$ in $\\tau=2$), I0 objekta $\\frac{k_i}{s}$ (v tem interaktivnem primeru $k_i=\\frac{1}{10}$) in I1 objekta $\\frac{k_i}{s(\\tau s +1)}$ (v tem interaktivnem primeru $k_i=1$ in $\\tau=10$).\n3. Izberi tip krmilnega algoritma s klikom na *P*, *PI*, *PD* ali *PID* gumb.\n4. Z uporabo drsnikov spreminjaj vrednosti koeficientov proporcionalnega ($K_p$), integrirnega ($T_i$) in diferencirnega ($T_d$) oja\u010danja. \n5. 
Z uporabo drsnika $t_{max}$ lahko spreminja\u0161 interval vrednosti prikazanih na x osi.\n\n\n\n\n```python\nA = 10\na=0.1\ns, P, I, D = sym.symbols('s, P, I, D')\n\nobj = 1/(A*s)\nPID = P + P/(I*s) + P*D*s#/(a*D*s+1)\nsystem = obj*PID/(1+obj*PID)\nnum = [sym.fraction(system.factor())[0].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system.factor())[0], gen=s)))]\nden = [sym.fraction(system.factor())[1].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system.factor())[1], gen=s)))]\n\n# make figure\nfig = plt.figure(figsize=(9.8, 4),num='PID krmilnik - zaprtozan\u010dni sistem')\nplt.subplots_adjust(wspace=0.3)\n\n# add axes\nax = fig.add_subplot(121)\nax.grid(which='both', axis='both', color='lightgray')\nax.set_title('\u010casovni odziv')\nax.set_xlabel('$t$ [s]')\nax.set_ylabel('vhod, izhod')\nax.axhline(linewidth=.5, color='k')\nax.axvline(linewidth=.5, color='k')\n\nrlocus = fig.add_subplot(122)\n\n\ninput_type = 'enotska sko\u010dna funkcija'\n\n# plot step function and responses (initalisation)\ninput_plot, = ax.plot([],[],'C0', lw=1, label='vstopni signal')\nresponse_plot, = ax.plot([],[], 'C1', lw=2, label='izstopni signal')\nax.legend()\n\n\n\n\nrlocus_plot, = rlocus.plot([], [], 'r')\n\nplt.show()\n\ndef update_plot(KP, TI, TD, Time_span):\n global num, den, input_type\n \n num_temp = [float(i.subs(P,KP).subs(I,TI).subs(D,TD)) for i in num]\n den_temp = [float(i.subs(P,KP).subs(I,TI).subs(D,TD)) for i in den]\n system = signal.TransferFunction(num_temp, den_temp)\n zeros = np.roots(num_temp)\n poles = np.roots(den_temp)\n \n rlocus.clear()\n rlocus.scatter([np.real(i) for i in poles], [np.imag(i) for i in poles], marker='x', color='g', label='pol')\n rlocus.scatter([np.real(i) for i in zeros], [np.imag(i) for i in zeros], marker='o', color='g', label='ni\u010dla')\n rlocus.set_title('Diagram lege ni\u010del in polov')\n rlocus.set_xlabel('Re')\n rlocus.set_ylabel('Im')\n rlocus.grid(which='both', 
axis='both', color='lightgray')\n \n time = np.linspace(0, Time_span, 300)\n \n if input_type == 'enotska sko\u010dna funkcija':\n u = np.ones_like(time)\n u[0] = 0\n time, response = signal.step(system, T=time)\n elif input_type == 'enotska impulzna funkcija':\n u = np.zeros_like(time)\n u[0] = 10\n time, response = signal.impulse(system, T=time)\n elif input_type == 'sinusna funkcija':\n u = np.sin(time*2*np.pi)\n time, response, _ = signal.lsim(system, U=u, T=time)\n elif input_type == 'enotska rampa':\n u = time\n time, response, _ = signal.lsim(system, U=u, T=time)\n else:\n raise Exception(\"Error in the program. Please restart simulation.\")\n \n response_plot.set_data(time, response)\n input_plot.set_data(time, u)\n \n rlocus.axhline(linewidth=.3, color='k')\n rlocus.axvline(linewidth=.3, color='k')\n rlocus.legend()\n \n ax.set_ylim([min([np.min(u), min(response),-.1]),min(100,max([max(response)*1.05, 1, 1.05*np.max(u)]))])\n ax.set_xlim([-0.1,max(time)])\n\n plt.show()\n\ncontroller_ = PID\nobject_ = obj\n\ndef calc_tf():\n global num, den, controller_, object_\n system_func = object_*controller_/(1+object_*controller_)\n \n num = [sym.fraction(system_func.factor())[0].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func.factor())[0], gen=s)))]\n den = [sym.fraction(system_func.factor())[1].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func.factor())[1], gen=s)))]\n update_plot(Kp_widget.value, Ti_widget.value, Td_widget.value, time_span_widget.value)\n\ndef transfer_func(controller_type):\n global controller_\n proportional = P\n integral = P/(I*s)\n differential = P*D*s/(a*D*s+1)\n if controller_type =='P':\n controller_func = proportional\n Kp_widget.disabled=False\n Ti_widget.disabled=True\n Td_widget.disabled=True\n elif controller_type =='PI':\n controller_func = proportional+integral\n Kp_widget.disabled=False\n Ti_widget.disabled=False\n Td_widget.disabled=True\n elif controller_type == 
'PD':\n controller_func = proportional+differential\n Kp_widget.disabled=False\n Ti_widget.disabled=True\n Td_widget.disabled=False\n else:\n controller_func = proportional+integral+differential\n Kp_widget.disabled=False\n Ti_widget.disabled=False\n Td_widget.disabled=False\n \n controller_ = controller_func\n calc_tf()\n \ndef transfer_func_obj(object_type):\n global object_\n if object_type == 'P0':\n object_ = 2\n elif object_type == 'P1':\n object_ = 1/(2*s+1) \n elif object_type == 'I0':\n object_ = 1/(10*s)\n elif object_type == 'I1':\n object_ = 1/(s*(10*s+1))\n calc_tf()\n\nstyle = {'description_width': 'initial'}\n\ndef buttons_controller_clicked(event):\n controller = buttons_controller.options[buttons_controller.index]\n transfer_func(controller)\nbuttons_controller = widgets.ToggleButtons(\n options=['P', 'PI', 'PD', 'PID'],\n description='Izberi tip krmilnega algoritma:',\n disabled=False,\n style=style)\nbuttons_controller.observe(buttons_controller_clicked)\n\ndef buttons_object_clicked(event):\n object_ = buttons_object.options[buttons_object.index]\n transfer_func_obj(object_)\nbuttons_object = widgets.ToggleButtons(\n options=['P0', 'P1', 'I0', 'I1'],\n description='Izberi tip objekta:',\n disabled=False,\n style=style)\nbuttons_object.observe(buttons_object_clicked)\n\ndef buttons_input_clicked(event):\n \n global input_type\n input_type = buttons_input.options[buttons_input.index]\n update_plot(Kp_widget.value, Ti_widget.value, Td_widget.value, time_span_widget.value)\nbuttons_input = widgets.ToggleButtons(\n options=['enotska sko\u010dna funkcija','enotska impulzna funkcija', 'enotska rampa', 'sinusna funkcija'],\n description='Izberi vstopni signal:',\n disabled=False,\n style = {'description_width': 'initial','button_width':'180px'})\nbuttons_input.observe(buttons_input_clicked)\n \nKp_widget = widgets.IntSlider(value=10,min=1,max=50,step=1,description=r'\\(K_p\\)',\n 
disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.1d')\nTi_widget = widgets.FloatLogSlider(value=1.,min=-3,max=1.1,step=.001,description=r'\\(T_{i} \\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.3f')\nTd_widget = widgets.FloatLogSlider(value=1.,min=-3,max=1.1,step=.001,description=r'\\(T_{d} \\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.3f')\n\ntime_span_widget = widgets.FloatSlider(value=10.,min=.5,max=50.,step=0.1,description=r'\\(t_{max} \\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.1f')\n\ntransfer_func(buttons_controller.options[buttons_controller.index])\ntransfer_func_obj(buttons_object.options[buttons_object.index])\n\ndisplay(buttons_input)\ndisplay(buttons_object)\ndisplay(buttons_controller)\n\ninteract(update_plot, KP=Kp_widget, TI=Ti_widget, TD=Td_widget, Time_span=time_span_widget);\n```\n\n\n \n\n\n\n\n\n\n\n ToggleButtons(description='Izberi vstopni signal:', options=('enotska sko\u010dna funkcija', 'enotska impulzna funk\u2026\n\n\n\n ToggleButtons(description='Izberi tip objekta:', options=('P0', 'P1', 'I0', 'I1'), style=ToggleButtonsStyle(de\u2026\n\n\n\n ToggleButtons(description='Izberi tip krmilnega algoritma:', options=('P', 'PI', 'PD', 'PID'), style=ToggleBut\u2026\n\n\n\n interactive(children=(IntSlider(value=10, description='\\\\(K_p\\\\)', max=50, min=1, readout_format='.1d'), Float\u2026\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "10b74bc8bf806bd5614ae12990e3db847ee274ad", "size": 135643, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ICCT_si/examples/02/.ipynb_checkpoints/TD-16-PID_krmilnik_zaprtozancni_sistem-checkpoint.ipynb", "max_stars_repo_name": "ICCTerasmus/ICCT", "max_stars_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, 
"max_stars_repo_stars_event_min_datetime": "2021-05-22T18:42:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-03T14:10:22.000Z", "max_issues_repo_path": "ICCT_si/examples/02/TD-16-PID_krmilnik_zaprtozancni_sistem.ipynb", "max_issues_repo_name": "ICCTerasmus/ICCT", "max_issues_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICCT_si/examples/02/TD-16-PID_krmilnik_zaprtozancni_sistem.ipynb", "max_forks_repo_name": "ICCTerasmus/ICCT", "max_forks_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-24T11:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-29T16:36:18.000Z", "avg_line_length": 112.194375517, "max_line_length": 83027, "alphanum_fraction": 0.7885921131, "converted": true, "num_tokens": 3644, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.40356685373537454, "lm_q2_score": 0.28776782797747225, "lm_q1q2_score": 0.11613355694313096}} {"text": "[Link to this document's Jupyter Notebook](./0316--Parallel_Python_pre-class-assignment.ipynb)\n\nIn order to successfully complete this assignment you must do the required reading, watch the provided videos and complete all instructions. The embedded survey form must be entirely filled out and submitted on or before **11:59pm on Tuesday March 16**. Students must come to class the next day prepared to discuss the material covered in this assignment. \n\n---\n\n\n\n# Pre-Class Assignment: Parallel Python\n\n### Goals for today's pre-class assignment \n\n1. [Matrix Multiply Example](#Matrix-Multiply-Example)\n2. [Parallel Python example](#Parallel-Python-example)\n3. 
[The Python GIL (Global Interpreter Lock)](#The-Python-GIL-(Global-Interpreter-Lock))\n4. [Getting around the GIL](#Getting-around-the-GIL)\n5. [Assignment wrap up](#Assignment-wrap-up)\n\n\n\n\n---\n\n# 1. Matrix Multiply Example\n\n\n\n The following is a simple implementation of a matrix multiply written in Python. Review the code and try to understand what it is doing.\n\n\n```python\n%matplotlib inline\nimport matplotlib.pylab as plt\nimport numpy as np\nimport sympy as sp\nimport random\nimport time\nsp.init_printing(use_unicode=True)\n```\n\n\n```python\n# simple matrix multiply (no numpy)\ndef multiply(m1, m2):\n    m = len(m1)\n    d = len(m2)\n    n = len(m2[0])\n    if len(m1[0]) != d:\n        print(\"ERROR - inner dimensions not equal\")\n    # the result must be an m x n matrix (m rows of n columns)\n    result = [[0 for j in range(n)] for i in range(m)]\n    for i in range(0, m):\n        for j in range(0, n):\n            for k in range(0, d):\n                result[i][j] = result[i][j] + m1[i][k] * m2[k][j]\n    return result\n```\n\n\n```python\n# Randomly generated 2d lists of lists that can be multiplied \nm = 4\nd = 10\nn = 4\n\nA = [[random.random() for i in range(d)] for j in range(m)]\nB = [[random.random() for i in range(n)] for j in range(d)]\n```\n\n\n```python\n# Compute matrix multiply using your function\nstart = time.time()\n\nsimple_answer = multiply(A, B)\nsimple_time = time.time()-start\n\nprint('simple_answer =',simple_time,'seconds')\n```\n\nLet's compare this to the numpy result:\n\n\n```python\n# Compare to numpy result\nstart = time.time()\n\nnp_answer = np.matrix(A)*np.matrix(B)\nnp_time = time.time()-start\n\nprint('np_answer =',np_time,'seconds')\n\n```\n\nFor this example, the numpy result is most likely slower than the simple result. Think about why this might be. We will discuss this later. \n\n✅ **DO THIS:** See if you can write a loop to do a scaling study for the above code. Loop over the value of $n$ such that $n$ is 4, 16, 32, 64, 128 and 256. For each iteration generate two random matrices (as above) with $m = d = n$. 
Then time the matrix multiply for the provided function and again for the numpy function. Graph the results as size of $n$ vs time. \n\n\n```python\n# Put your code here\n```\n\n✅ **DO THIS:** Explore the Internet for ways to speed up Python (there are a lot of them). Save some of your search results in the cell below and come to class prepared to discuss what you found.\n\nPut your search results here.\n\n\n\n---\n\n# 2. Parallel Python example\n\n\n\nHere is an example of running parallel Python using the ```multiprocessing``` library. \n\nhttps://stackoverflow.com/questions/10415028/how-can-i-recover-the-return-value-of-a-function-passed-to-multiprocessing-proce\n\n\n\n```python\nimport multiprocessing\nnum_procs = multiprocessing.cpu_count()\nprint('You have', num_procs, 'processors')\n\ndef worker(procnum, return_dict):\n    '''worker function'''\n    print(str(procnum) + ' represent!')\n    return_dict[procnum] = procnum\n\n\nif __name__ == '__main__':\n    manager = multiprocessing.Manager()\n    return_dict = manager.dict()\n    jobs = []\n    for i in range(5):\n        p = multiprocessing.Process(target=worker, args=(i,return_dict))\n        jobs.append(p)\n        p.start()\n\n    for proc in jobs:\n        proc.join()\n    print(return_dict.values())\n```\n\n### Let's try to make a parallel matrix multiply\n\nThe following is the instructor's attempt at using multiprocessing to do a matrix multiply. 
First let's start with a serial method.\n\n\n```python\n%matplotlib inline\nimport matplotlib.pylab as plt\nimport numpy as np\nimport sympy as sp\nimport random\nimport time\nsp.init_printing(use_unicode=True)\n```\n\n\n```python\n# simple matrix multiply (no numpy)\ndef multiply(m1, m2):\n    m = len(m1)\n    d = len(m2)\n    n = len(m2[0])\n    if len(m1[0]) != d:\n        print(\"ERROR - inner dimensions not equal\")\n    # the result must be an m x n matrix (m rows of n columns)\n    result = [[0 for j in range(n)] for i in range(m)]\n    for i in range(0, m):\n        for j in range(0, n):\n            for k in range(0, d):\n                result[i][j] = result[i][j] + m1[i][k] * m2[k][j]\n    return result\n```\n\n\n```python\n# Randomly generated 2d lists of lists that can be multiplied \nm = 4\nd = 10\nn = 4\n\nA = [[random.random() for i in range(d)] for j in range(m)]\nB = [[random.random() for i in range(n)] for j in range(d)]\n```\n\n\n```python\n# Compute matrix multiply using your function\n\n\nstart = time.time()\n\nsimple_answer = multiply(A, B)\nsimple_time = time.time()-start\n\nprint('simple_answer =',simple_time,'seconds')\n```\n\nLet's compare this to the numpy result:\n\n\n```python\n# Compare to numpy result\nstart = time.time()\n\nnp_answer = np.matrix(A)*np.matrix(B)\nnp_time = time.time()-start\n\nprint('np_answer =',np_time,'seconds')\n\n```\n\n\n```python\n# Compare to the numpy result when the np.matrix conversion is done ahead of time\nA_ = np.matrix(A)\nB_ = np.matrix(B)\n\nstart = time.time()\n\nnp_answer = A_*B_\nnp_time = time.time()-start\n\nprint('np_answer =',np_time,'seconds')\n```\n\n\n```python\nnp.allclose(simple_answer,np_answer)\n```\n\nOn some systems the numpy result may be slower than the simple result. Think about why this might be. We will discuss this later. 
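One guess worth checking (our own follow-up experiment, not part of the original assignment) is that most of numpy's time at this tiny problem size is spent converting the lists of lists into arrays rather than doing the multiplication. A quick sketch that times the two steps separately:

```python
import random
import time

import numpy as np

# Separate the cost of converting the lists of lists into numpy arrays from
# the cost of the multiplication itself, to see where the time actually goes
# at this tiny problem size.
m, d, n = 4, 10, 4
A = [[random.random() for i in range(d)] for j in range(m)]
B = [[random.random() for i in range(n)] for j in range(d)]

start = time.time()
A_ = np.array(A)            # conversion only
B_ = np.array(B)
convert_time = time.time() - start

start = time.time()
C = A_ @ B_                 # multiplication only
multiply_time = time.time() - start

print('conversion:', convert_time, 'seconds, multiply:', multiply_time, 'seconds')
```

If the conversion time dominates, the overhead of wrapping the Python lists, not the multiply itself, explains why numpy can lose to the simple loop on a 4x4 problem.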
\n\n### Now let's use multiprocessing to try a parallel method\n\n\n```python\n# Attempt at a parallel multiply\ndef parallel_multiply(m1, m2):\n    m = len(m1)\n    d = len(m2)\n    n = len(m2[0])\n\n    def dot_worker(row, col):\n        \"\"\"process worker function: compute one dot product\"\"\"\n        temp = 0\n        for k in range(len(m2)):\n            temp = temp + m1[row][k] * m2[k][col]\n        return_dict[(row, col)] = temp\n        return \n\n    jobs = []\n    manager = multiprocessing.Manager()\n    return_dict = manager.dict()\n\n    for i in range(m):\n        for j in range(n):\n            p = multiprocessing.Process(target=dot_worker, args=(i,j,))\n            jobs.append(p)\n            p.start()\n\n    for proc in jobs:\n        proc.join()\n    \n    print('Used',len(jobs),'processes in calculation.')\n    \n    # Collect the results in row-major order; the manager dict is filled in\n    # whatever order the workers happen to finish.\n    C = np.matrix([[return_dict[(i, j)] for j in range(n)] for i in range(m)])\n    return C\n```\n\n\n```python\n# Compute matrix multiply using the parallel function\nstart = time.time()\n\nparallel_answer = parallel_multiply(A, B)\nparallel_time = time.time()-start\n\nprint('parallel_answer=',parallel_time,'seconds')\n```\n\n\n```python\nnp.allclose(parallel_answer,np_answer)\n```\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n \nobjects = ('Simple', 'Numpy', 'Parallel')\ny_pos = np.arange(len(objects))\nperformance = [simple_time,np_time,parallel_time]\n \nplt.bar(y_pos, performance, align='center', alpha=0.5)\nplt.xticks(y_pos, objects)\nplt.ylabel('Time (seconds)')\nplt.yscale('log')\nplt.title('Matrix multiply timing comparison')\n \n```\n\n✅ **QUESTION:** Why do you think the parallel version was so much slower than the serial Python version?\n\nPut your answer to the above question here.\n\n\n\n---\n\n# 3. 
The Python GIL (Global Interpreter Lock)\n\n\n\n✅ **DO THIS:** Read the following wiki page and answer the questions: https://wiki.python.org/moin/GlobalInterpreterLock\n\n✅ **QUESTION:** Why was the GIL introduced to the Python programming language?\n\nPut your answer to the above question here.\n\n✅ **QUESTION:** How does the GIL help avoid race conditions?\n\nPut your answer to the above question here.\n\n✅ **QUESTION:** How does the GIL help avoid deadlock?\n\nPut your answer to the above question here.\n\n✅ **QUESTION:** Why is the GIL problematic for parallel libraries like the \"thread\" and \"multiprocessing\" libraries?\n\nPut your answer to the above question here.\n\n\n\n---\n\n# 4. Getting around the GIL\n\n\nFortunately there are ways to get around the GIL. In fact, Python has libraries that do shared memory parallelization, shared network parallelization and GPU acceleration. Do some research and answer the following questions:\n\n\n✅ **QUESTION:** Some of the ```numpy``` library can run in parallel. How does ```numpy``` get around the GIL? \n\nPut your answer to the above question here.\n\n✅ **QUESTION:** The ```numba``` library can also run in parallel. How does ```numba``` get around the GIL? \n\nPut your answer to the above question here.\n\n✅ **QUESTION:** What Python library can be used to program GPUs?\n\nPut your answer to the above question here.\n\n✅ **QUESTION:** What Python library can be used to run shared network parallelization such as the Message Passing Interface (MPI)?\n\nPut your answer to the above question here.\n\n✅ **QUESTION:** There seem to be a lot of solutions for running Python in parallel. Provide an argument (or arguments) as to why you would still bother with an \"older\" language such as C/C++ or Fortran. \n\nPut your answer to the above question here.\n\n----\n\n\n# 5. Assignment wrap-up\n\nPlease fill out the form that appears when you run the code below. 
**You must completely fill this out in order to receive credits for the assignment!**\n\n[Direct Link to Google Form](https://cmse.msu.edu/cmse401-pc-survey)\n\n\nIf you have trouble with the embedded form, please make sure you log on with your MSU google account at [googleapps.msu.edu](https://googleapps.msu.edu) and then click on the direct link above.\n\nPut your answer to the above question here\n\n✅ **QUESTION:** Summarize what you did in this assignment.\n\nPut your answer to the above question here\n\n✅ **QUESTION:** What questions do you have, if any, about any of the topics discussed in this assignment after working through the jupyter notebook?\n\nPut your answer to the above question here\n\n✅ **QUESTION:** How well do you feel this assignment helped you to achieve a better understanding of the above mentioned topic(s)?\n\nPut your answer to the above question here\n\n✅ **QUESTION:** What was the **most** challenging part of this assignment for you? \n\nPut your answer to the above question here\n\n✅ **QUESTION:** What was the **least** challenging part of this assignment for you? 
\n\nPut your answer to the above question here\n\n✅ **QUESTION:** What kind of additional questions or support, if any, do you feel you need to have a better understanding of the content in this assignment?\n\nPut your answer to the above question here\n\n✅ **QUESTION:** Do you have any further questions or comments about this material, or anything else that's going on in class?\n\nPut your answer to the above question here\n\n✅ **QUESTION:** Approximately how long did this pre-class assignment take?\n\nPut your answer to the above question here\n\n\n```python\nfrom IPython.display import HTML\nHTML(\n\"\"\"\n\n\"\"\"\n)\n```\n\n\n\n---------\n### Congratulations, we're done!\n\nTo get credit for this assignment you must fill out and submit the above survey from on or before the assignment due date.\n\n### Course Resources:\n\n\n - [Website](https://msu-cmse-courses.github.io/cmse802-f20-student/)\n - [ZOOM](https://msu.zoom.us/j/98207034052)\n - [JargonJar](https://docs.google.com/document/d/1ahg48CCFhRzUL-QIHzlt_KEf1XqsCasFBU4iePHhcug/edit#)\n - [GIT](https://gitlab.msu.edu/colbrydi/cmse401-s21.git)\n\n\n\nWritten by Dr. Dirk Colbry, Michigan State University\n
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.\n\n----\n\n----\n", "meta": {"hexsha": "bdd796e39c9f1fce3c90a02de66fe79d8108713b", "size": 22769, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "assignments/0316--Parallel_Python_pre-class-assignment.ipynb", "max_stars_repo_name": "msu-cmse-courses/cmse401-S21-student", "max_stars_repo_head_hexsha": "e7407d5f7860149606d9ea770eeafe61e93122c6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assignments/0316--Parallel_Python_pre-class-assignment.ipynb", "max_issues_repo_name": "msu-cmse-courses/cmse401-S21-student", "max_issues_repo_head_hexsha": "e7407d5f7860149606d9ea770eeafe61e93122c6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assignments/0316--Parallel_Python_pre-class-assignment.ipynb", "max_forks_repo_name": "msu-cmse-courses/cmse401-S21-student", "max_forks_repo_head_hexsha": "e7407d5f7860149606d9ea770eeafe61e93122c6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-01-23T18:15:41.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-08T21:18:44.000Z", "avg_line_length": 25.9920091324, "max_line_length": 405, "alphanum_fraction": 0.5456541789, "converted": true, "num_tokens": 3191, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4960938294709195, "lm_q2_score": 0.23370636225126956, "lm_q1q2_score": 0.11594028422095026}} {"text": "Probabilistic Programming and Bayesian Methods for Hackers \n========\n\nWelcome to *Bayesian Methods for Hackers*. 
The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\n#### Looking for a printed version of Bayesian Methods for Hackers?\n\n_Bayesian Methods for Hackers_ is now a published book by Addison-Wesley, available on [Amazon](http://www.amazon.com/Bayesian-Methods-Hackers-Probabilistic-Addison-Wesley/dp/0133902838)! \n\n\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. 
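The code-testing story above can be made concrete with a toy calculation (our own illustration; the prior of 0.2 and the 50% per-test detection rate are invented numbers, not from the text): if a bug, when present, makes each independent test fail with probability 0.5, then every passing test shifts our belief toward "bug-free" — quickly at first, but never all the way to certainty.

```python
# Toy Bayesian update of our belief that a piece of code is bug-free.
# Assumed numbers (illustrative only): prior P(bug-free) = 0.2, and a bug,
# when present, makes any single test fail with probability 0.5.
prior_bugfree = 0.2
p_pass_given_bugfree = 1.0   # bug-free code always passes a test
p_pass_given_buggy = 0.5     # buggy code still slips past a test half the time

belief = prior_bugfree
beliefs = [belief]
for _ in range(10):          # observe 10 passing tests, one at a time
    numerator = p_pass_given_bugfree * belief
    denominator = numerator + p_pass_given_buggy * (1 - belief)
    belief = numerator / denominator
    beliefs.append(belief)

print(beliefs)  # the belief climbs toward 1 but never reaches it
```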
\n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. \n\nThe Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nFor this to be clearer, we consider an alternative interpretation of probability: *Frequentist*, known as the more *classical* version of statistics, assumes that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that, across all these realities, the frequency of occurrences defines the probability. 
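The long-run-frequency definition can be illustrated with a short simulation (our own sketch, not part of the original text): the observed fraction of Heads in a growing sequence of fair-coin flips wobbles at first and then settles toward 1/2.

```python
import numpy as np

# The frequentist "long-run frequency" of Heads for a simulated fair coin,
# tracked as the number of flips grows.
rng = np.random.default_rng(0)
flips = rng.integers(0, 2, size=100_000)   # 0 = Tails, 1 = Heads
running_freq = np.cumsum(flips) / np.arange(1, flips.size + 1)

print('after 10 flips:', running_freq[9], 'after 100,000 flips:', running_freq[-1])
```

After only a handful of flips the estimate can be far from 1/2; it is the limit of this running frequency that the frequentist calls the probability.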
This definition agrees with the plane-accident probability example: having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you that candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probability is natural to humans. 
We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. \n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). 
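This re-weighting can be sketched numerically. Below is a minimal sketch of the doctor example; the three candidate diseases, their prior beliefs, and the test likelihoods are all invented for illustration:

```python
import numpy as np

# Toy sketch of re-weighting a prior (all numbers invented for illustration).
# Three candidate diseases with prior beliefs, then a blood test X whose
# likelihood P(X | disease) differs across the candidates.
prior = np.array([0.5, 0.3, 0.2])        # P(disease)
likelihood = np.array([0.1, 0.6, 0.3])   # P(X | disease), assumed values

weighted = likelihood * prior            # re-weight the prior by the evidence
posterior = weighted / weighted.sum()    # normalize so beliefs sum to 1

print(posterior.round(3))
```

Belief in the second disease rises from 0.3 to about 0.62 because the evidence weighs most heavily in its favor, yet no belief is discarded outright: the prior is re-weighted, not replaced.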
\n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. 
For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. 
Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead in the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\" )\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using an argument similar to Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to } )\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. 
Bayesian inference merely uses it to connect prior probabilities $P(A)$ with updated posterior probabilities $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?\n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).\n\n\n```python\n\"\"\"\nThe book uses a custom matplotlibrc file, which provides the unique styles for\nmatplotlib plots. If executing this book, and you wish to use the book's\nstyling, provided are two options:\n 1. Overwrite your own matplotlibrc file with the rc-file provided in the\n book's styles/ dir. See http://matplotlib.org/users/customizing.html\n 2. Also in the styles is bmh_matplotlibrc.json file. This can be used to\n update the styles in only this notebook. Try running the following code:\n\n import json, matplotlib\n s = json.load( open(\"../styles/bmh_matplotlibrc.json\") )\n matplotlib.rcParams.update(s)\n\n\"\"\"\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. 
LOOK AT PICTURE, MICHAEL!\n%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 1000]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj. prior.\nfor k, N in enumerate(n_trials):\n sx = plt.subplot(len(n_trials) // 2, 2, k + 1)\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials) - 1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason they should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). 
As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$ pass. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
\n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2 * p / (1 + p), color=\"#348ABD\", lw=3)\n# plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2 * (0.2) / 1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Is my code bug-free?\")\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. \n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1. / 3, 2. 
/ 3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=3, edgecolor=colours[0])\n\nplt.bar([0 + 0.25, .7 + 0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=3, edgecolor=colours[1])\n\nplt.ylim(0, 1)\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes' rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. Things like populations, movie ratings, and number of votes are all discrete random variables. 
Discrete random variables become clearer when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. it is a combination of the above two categories. \n\n#### Expected Value\nExpected value (EV) is one of the most important concepts in probability. The EV for a given probability distribution can be described as \"the mean value in the long run for many repeated samples from that distribution.\" To borrow a metaphor from physics, a distribution's EV acts like its \"center of mass.\" Imagine repeating the same experiment many times over, and taking the average over each outcome. The more you repeat the experiment, the closer this average will become to the distribution's EV. (Side note: as the number of repeated experiments goes to infinity, the difference between the average outcome and the EV becomes arbitrarily small.)\n\n### Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots, \\; \\; \\lambda \\in \\mathbb{R}_{>0} $$\n\n$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. 
For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. \n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass function for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [0.1, 3, 6, 7]\ncolours = [\"#348ABD\", \"#A60628\", \"#A01212\", \"#B15555\"]\nfor i, j in zip(lambda_, colours):\n plt.bar(a, poi.pmf(a, i), color=j,\n label=\"$\\lambda = %.1f$\" % i, alpha=0.60,\n edgecolor=j, lw=3)\n\nplt.xticks(a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\")\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. 
This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of a continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. But unlike a Poisson variable, the exponential can take on *any* non-negative value, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows probability density functions for several different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1, 3]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1. / l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1. / l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0, 1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. 
The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? 
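One crude, non-Bayesian warm-up is to compare the average count early in the period against the average late in the period. The sketch below uses synthetic Poisson counts rather than the book's `data/txtdata.csv`, with an invented rate jump from 18 to 23 at day 45:

```python
import numpy as np

# Synthetic stand-in for count_data: daily Poisson counts whose rate is
# assumed to jump from 18 to 23 at day 45 (all values invented).
rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(18, size=45), rng.poisson(23, size=29)])

half = len(counts) // 2
print("early mean: %.1f" % counts[:half].mean())
print("late mean:  %.1f" % counts[half:].mean())
```

A gap between the two halves hints at a change, but it says nothing about *when* the change happened or how confident we should be in it; quantifying exactly that is what the model developed next is for.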
\n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. 
But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=1}^{N} \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. 
Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC\n-----\n\nPyMC is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why PyMC is so cool.\n\nWe will model the problem above using PyMC. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC code is easy to read. 
The only novel thing should be the syntax, and I will interrupt the code to explain individual sections. Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables:\n\n\n```python\nimport pymc as pm\n\nalpha = 1.0 / count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\nlambda_1 = pm.Exponential(\"lambda_1\", alpha)\nlambda_2 = pm.Exponential(\"lambda_2\", alpha)\n\ntau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data)\n```\n\nIn the code above, we create the PyMC variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC's *stochastic variables*, so-called because they are treated by the back end as random number generators. We can demonstrate this fact by calling their built-in `random()` methods.\n\n\n```python\nprint(\"Random output:\", tau.random(), tau.random(), tau.random())\n```\n\n Random output: 52 12 49\n\n\n\n```python\n@pm.deterministic\ndef lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):\n out = np.zeros(n_count_data)\n out[:tau] = lambda_1 # lambda before tau is lambda1\n out[tau:] = lambda_2 # lambda after (and including) tau is lambda2\n return out\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n`@pm.deterministic` is a decorator that tells PyMC this is a deterministic function. That is, if the arguments were deterministic (which they are not), the output would be deterministic as well. Deterministic functions will be covered in Chapter 2. 
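The piecewise construction inside `lambda_` can be checked on its own with plain NumPy. In this minimal sketch, the values for `tau`, `lambda_1`, and `lambda_2` are made-up stand-ins for a single draw of the random variables:

```python
import numpy as np

# Hypothetical fixed values standing in for one draw of the random variables
n_count_data = 10
tau, lambda_1, lambda_2 = 4, 18.0, 23.0

out = np.zeros(n_count_data)
out[:tau] = lambda_1   # lambda before tau is lambda_1
out[tau:] = lambda_2   # lambda at and after tau is lambda_2

print(out)  # first 4 entries are 18.0, the remaining 6 are 23.0
```

In the model itself we never pick these values by hand; the sampler supplies fresh draws of `tau`, `lambda_1`, and `lambda_2` every time it evaluates `lambda_`.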
\n\n\n```python\nobservation = pm.Poisson(\"obs\", lambda_, value=count_data, observed=True)\n\nmodel = pm.Model([observation, lambda_1, lambda_2, tau])\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `value` keyword. We also set `observed = True` to tell PyMC that this should stay fixed in our analysis. Finally, PyMC wants us to collect all the variables of interest and create a `Model` instance out of them. This makes our life easier when we retrieve the results.\n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. 
Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```python\n# Mysterious code to be explained in Chapter 3.\nmcmc = pm.MCMC(model)\nmcmc.sample(40000, 10000, 1)\n```\n\n [-----------------100%-----------------] 40000 of 40000 complete in 4.7 sec\n\n\n```python\nlambda_1_samples = mcmc.trace('lambda_1')[:]\nlambda_2_samples = mcmc.trace('lambda_2')[:]\ntau_samples = mcmc.trace('tau')[:]\n```\n\n\n```python\nfigsize(12.5, 10)\n# histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", density=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", density=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data) - 20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\")\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. 
We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. 
For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. \n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of 
text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\n# type your code here.\nprint(\"{} {} \".format(lambda_1_samples.mean(), lambda_2_samples.mean()))\n```\n\n 17.750529345875638 22.709824130540305 \n\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```python\n# type your code here.\n(1 - np.mean(lambda_1_samples/lambda_2_samples)) * 100\n```\n\n\n\n\n 21.71608702288387\n\n\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45? That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\n# type your code here.\n```\n\n### References\n\n\n- [1] Gelman, Andrew. 2005. 
[N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg/).\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Patil, A., D. Huard and C.J. Fonnesbeck. 2010. PyMC: Bayesian Stochastic Modelling in Python. Journal of Statistical Software, 35(4), pp. 1-81.\n- [4] Lin, Jimmy and Alek Kolcz. 2012. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pp. 793-804, Scottsdale, Arizona.\n- [5] Cronin, Beau. 2013. Why Probabilistic Programming Matters. Online posting, 24 Mar 2013.\n\n\n```python\nfrom IPython.core.display import HTML\n\n\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n **Chapter 3: [Imaging](Ch3-Imaging.ipynb)** \n\n\n
\n\n# Energy-Loss Near-Edge Structure \n\npart of \n\n **[Analysis of Transmission Electron Microscope Data](_Analysis_of_Transmission_Electron_Microscope_Data.ipynb)**\n\n\nby Gerd Duscher, 2019\n\nMicroscopy Facilities
\nJoint Institute of Advanced Materials
\nThe University of Tennessee, Knoxville\n\nModel based analysis and quantification of data acquired with transmission electron microscopes\n\n\n## Content\n\n- Retrieving and Plotting of reference EELS spectra from the [EELS database](https://eelsdb.eu/spectra/)\n- Discussion of the energy-loss near-edge structure (ELNES) of specific edges.\n\n## Load important packages\n\n### Check Installed Packages\n\n\n```python\nimport sys\nfrom pkg_resources import get_distribution, DistributionNotFound\n\ndef test_package(package_name):\n \"\"\"Test if package exists and returns version or -1\"\"\"\n try:\n version = get_distribution(package_name).version\n except (DistributionNotFound, ImportError) as err:\n version = '-1'\n return version\n\n# Colab setup ------------------\nif 'google.colab' in sys.modules:\n !pip install pyTEMlib -q\n# pyTEMlib setup ------------------\nelse:\n if test_package('sidpy') < '0.0.5':\n print('installing sidpy')\n !{sys.executable} -m pip install --upgrade pyTEMlib -q\n if test_package('pyTEMlib') < '0.2021.4.20':\n print('installing pyTEMlib')\n !{sys.executable} -m pip install --upgrade pyTEMlib -q\n# ------------------------------\nprint('done')\n```\n\n done\n\n\n### Import all relevant libraries\n\nPlease note that the EELS_tools package from pyTEMlib is essential.\n\n\n```python\nimport sys\nif 'google.colab' in sys.modules:\n %pylab --no-import-all inline\nelse: \n %pylab --no-import-all notebook\n %gui qt\n \nimport warnings\nwarnings.filterwarnings('ignore')\n\nfrom scipy.ndimage.filters import gaussian_filter\n\n## import the configuration files of pyTEMlib (we need access to the data folder)\nimport pyTEMlib\nimport pyTEMlib.file_tools as ft\nimport pyTEMlib.eels_tools as eels\n\n# For archiving reasons it is a good idea to print the version numbers out at this point\nprint('pyTEM version: ',pyTEMlib.__version__)\n\n```\n\n Populating the interactive namespace from numpy and matplotlib\n pyTEM version: 0.2021.04.20\n\n\n## 
Chemical Shift\n\nThe chemical shift is the first feature that we discuss with respect to the shape or appearance of the ionization edges: the energy-loss near-edge structure (ELNES).\nThis section and the following one explain how to do a simple analysis of near-edge features. \n\n\nThe chemical shift refers to small changes (up to a few eV) of the edge onset, and how this shift depends on the bonding of an element in a solid. \nGoing back to the figure in the [Introduction to Core-Loss Spectra](CH4_07-Introduction_Core_Loss.ipynb), we see that such a change can be caused by a change of the band gap (the final states move) or by a movement of the core levels (the initial states). \n\nPlease note that this explanation is a simplification; what we measure is the energy difference between an excited atom and one in the ground state. In the excited atom all states react to the new electronic configuration, not only the final and initial states. In fact, to calculate the energy difference, one cannot use the difference between the core levels and the bottom of the conduction band. 
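As a rough numerical illustration of how a chemical shift can be read off, the sketch below estimates edge onsets from two synthetic, sigmoid-shaped edges. The onset energies (99.8 eV for Si, shifted by about 6.5 eV for SiO$_2$, as quoted later in this section) and the 10% threshold criterion are illustrative choices, not pyTEMlib functionality:

```python
import numpy as np

def edge_onset(energy, counts, fraction=0.1):
    """Estimate an edge onset as the first energy where the intensity
    rises above a given fraction of the edge maximum (a crude but
    common first guess when comparing chemical shifts)."""
    threshold = fraction * counts.max()
    index = np.argmax(counts > threshold)   # index of the first True
    return energy[index]

# Synthetic step-like edges (sigmoids), standing in for real spectra
energy = np.linspace(95.0, 115.0, 401)
si_edge = 1.0 / (1.0 + np.exp(-(energy - 99.8) / 0.3))    # pure Si
sio2_edge = 1.0 / (1.0 + np.exp(-(energy - 106.3) / 0.3))  # a-SiO2, shifted up

shift = edge_onset(energy, sio2_edge) - edge_onset(energy, si_edge)
print(f"chemical shift: {shift:.1f} eV")
```

For real data the threshold criterion is sensitive to noise and background subtraction, which is why onsets are often quoted together with the method used to determine them.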
\n\nHowever, we want to know which of the above effects (band gap changes or core-level shift) is the major one, so that we can draw conclusions about the bonding of the element in question.\n\n\n\n\n\nAs an example of the chemical shift we look at reference data of the silicon L$_{2,3}$ edge.\n\n### Load reference data\n\n\n```python\nSi_L_reference_spectra = eels.get_spectrum_eels_db(element='Si',edge='L2,3')\n```\n\n a_SiO2_Si_L3_S_Schamm_63\n a_SiO2_Si_L3_S_Schamm_64\n Si_Si_L3_S_Schamm_58\n Si3N4_Si_L3_Lingyang_Li_166\n Si3N4_(alpha)_Si_L3_S_Schamm_61\n Si3N4_(alpha)_Si_L3_S_Schamm_62\n SiC(6H)_Si_L3_S_Schamm_65\n SiC(6H)_Si_L3_S_Schamm_66\n found 8 spectra in EELS database)\n\n\n### Plot silicon spectra\n\n\n```python\nplt.figure()\nfor name, spectrum in Si_L_reference_spectra.items(): \n if 'Core' in spectrum['TITLE'] or 'L3' in spectrum['TITLE']:\n #plt.plot(spectrum['enery_scale'],spectrum['data']/np.max(spectrum['data']), label = spectrum['TITLE'])\n pass\n \nfor name, spectrum in Si_L_reference_spectra.items(): \n if 'a_SiO2_Si_L3_S_Schamm_63' in spectrum['TITLE']:\n plt.plot(spectrum['enery_scale'],spectrum['data']/np.max(spectrum['data']), label = 'Si-L$_{2,3}$: a-SiO$_2$')\n if 'Si_Si_L3_S_Schamm_58' in spectrum['TITLE']:\n plt.plot(spectrum['enery_scale'],spectrum['data']/np.max(spectrum['data']), label = 'Si-L$_{2,3}$: Si')\n if 'Si3N4_(alpha)_Si_L3_S_Schamm_62' in spectrum['TITLE']:\n plt.plot(spectrum['enery_scale'],spectrum['data']/np.max(spectrum['data']), label = 'Si-L$_{2,3}$: Si$_3$N$_4$')\n if 'SiC(6H)_Si_L3_S_Schamm_66' in spectrum['TITLE']:\n plt.plot(spectrum['enery_scale'],spectrum['data']/np.max(spectrum['data']), label = 'Si-L$_{2,3}$: SiC')\n\nplt.legend();\n\n```\n\n\n \n\n\n\n\n\n\nThe shift of the edges as above can be caused by the initial and/or the final states.\n\n### Band gap \nThe band gap changes are treated in the solid state theory of band structure and are, therefore, well covered in other textbooks. 
The trend is that with increased oxidation (or, more generally, increased electronegativity of the reaction partner, as in the series B, C, N, O), the band gap opens and the edge shifts to higher energies. \nThis is seen in the figure above, where the onset of the Si-L$_{2,3}$ edge shifts to higher energies with increasing Pauling electronegativity of the reaction partner.\n\n\nIn fact, one can monitor band gap changes with stoichiometry at interfaces by the shift of the edge.\nPlease be aware that we see only the shift of the conduction band bottom and not the whole band gap change. This effect of the band gap is obvious between Si and SiO$_2$, where the edge shifts by about 6.5 eV.\n\n### Core-level shift\nThe initial state, the ``core-level``, can also shift, for example after oxidation. Some electrons will transfer to an anion (for example oxygen) and fewer electrons are available to fill the band structure. This is shown below for the case of Cu and its two oxides Cu$_2$O and CuO.\n\n\nThe more electrons transfer to oxygen for the ionic bonding of these materials, the more the edges shift to lower energies, even though a band gap opens up. The opening up of the band gap will cause a shift to higher energies and counteracts the effect of ionization. Due to the lower electron density at the Cu atoms in the oxides, the core levels are assumed to shift to higher energies (see below) and compensate a little for the effect.\n\n\nThe core-level shift is generally a small effect. These core states react to an increase of the electron density at the atom site with a decrease in energy, and vice versa. 
Simplified, we can think of the core level electrons getting repulsed by an increased electron density (through the Coulomb interaction) and pushed closer (lower in energy) to the core.\n\n\n\n```python\nCu_L_reference_spectra = eels.get_spectrum_eels_db(element='Cu',edge='L2,3')\n```\n\n Cu_Cu_L3_Y_Kihn_124\n Cu2O_Cu_L3_Y_Kihn_129\n Profile Of Cu4O3-Cu L23\n CuO_Cu_L3_Y_Kihn_127\n found 4 spectra in EELS database)\n\n\n\n```python\nplt.figure()\n\nfor name, spectrum in Cu_L_reference_spectra.items(): \n if 'Cu_Cu_L3_Y_Kihn_124' in spectrum['TITLE']:\n plt.plot(spectrum['enery_scale'],spectrum['data']/np.max(spectrum['data']), label = 'Cu-L$_{2,3}$: Cu')\n if 'CuO_Cu_L3_Y_Kihn_127' in spectrum['TITLE']:\n plt.plot(spectrum['enery_scale'],spectrum['data']/np.max(spectrum['data']), label = 'Cu-L$_{2,3}$: CuO')\n if 'Cu2O_Cu_L3_Y_Kihn_129' in spectrum['TITLE']:\n plt.plot(spectrum['enery_scale'],spectrum['data']/np.max(spectrum['data']), label = 'Cu-L$_{2,3}$: Cu$_2$O')\n\nplt.legend();\nplt.xlim(910, 980)\n\n```\n\n\n \n\n\n\n\n\n\n\n\n\n (910.0, 980.0)\n\n\n\nIn the case of oxidized Cu, the slightly oxidized Cu$_2$O shifts to slightly lower energies compared to pure Cu: the shift to lower energies due to ionic bonding is a little larger than the opening of the band gap and the core-level shift to higher energies (further away from the nucleus) caused by the reduced Coulomb repulsion from the valence electrons.\n\nThis effect is even more pronounced for CuO, which has a larger band gap than Cu$_2$O.\n\nIn the figure below we see that the Si-L$_{3,2}$ edge shifts to higher energies with increased Ge content, when the spectra are taken from different locations at the diffuse Si-Ge interface. Intuitively, we would expect a shift to lower energies, because the band gap of SiGe alloys and of Ge is smaller than that of Si.\n\n\n\n\n\n*We see that, as we acquire spectra at the diffuse interface between Si and Ge, the Si-L$_{3,2}$ edge shifts to higher energies. 
This is surprising as SiGe and Ge possess a smaller band gap than Si, and one would expect the opposite.*\n\nThis shift can be explained by a shift of the core levels. All-electron calculations can determine the 2p$_{3/2}$ core levels of an atom in a compound, as shown for Si-Ge and Si-GaAs alloys in the figure below. The calculations show that there is a core level shift to lower energies with increased Ge and GaAs content. Ge and GaAs add additional electrons to the electronic structure, and the Coulomb repulsion between core level electrons and valence electrons increases, pushing the core levels to lower energies. \n\n\n\n*All-electron ab initio calculations of the core level states for Si-Ge and Si-GaAs alloys with different composition. The calculations show a 2p$_{3/2}$ core level shift to lower energies with deviation of the composition from pure Si (on the left).*\n\n\nThe shift of the core levels to lower energies will increase the distance between the core level and the conduction band bottom, which, in a simple picture, results in a shift to higher energies. We see that for pure Si, the 2p$_{3/2}$ core level is at about 89 eV but the edge is at 99.8 eV. The difference in energy is caused by relaxation of the valence and core electrons. Effectively, we measure with the EELS edge onset the energy difference between an excited atom and an atom in its ground state.\n\nAll electrons will relax according to the overall electron density at the atom sites, and the calculated core-level shifts cannot be used for predicting the edge shifts. 
However, these calculations can explain the physical origin of the edge shift.\n\n### Conclusion\nIn summary, we can say that the following effects (strongest first, weakest last) cause a chemical shift:\n\n- band gap opening\n- ionic bonding\n- core level shift\n\nAll of these effects can be present at once, but usually only one dominates the chemical shift.\n\n\n## White Line\n\nIn this section, we try to analyze a distinct feature of the transition metal elements. The d-states of transition metal elements form a very flat band in the band structure. This flat band creates a strong peak in the density of states. This analysis is based on the following simplification:\n\nIn the figure below, we see an s or p free-electron-like band in the general shape of a parabola. This parabola gives rise to a saw-tooth-like feature in the density of states (DOS), because flat bands have a higher density of states than steep ones. The DOS of the conduction band (above the Fermi level) is closely related to our EELS spectra. A flat d-band will cause a very prominent peak, a so-called white line (in the age of photographic recording, these peaks appeared as white lines).\n\n\n\n *A schematic of the relationship between the density of states (DOS, on the left) and the band structure of a transition metal element (on the right). The s and p free-electron-like bands (parabola) give rise to a saw-tooth-like feature in the DOS, and the flat d bands (red) cause a sharp peak in the DOS.*\n \nSince these d-bands are so prominent, we can easily separate them from the rest. In the figure below we use the calculated cross-section as a model of the s and p free-electron-like states. After a subtraction, we get the pure d-band contribution.\n\n\n*We use the cross section of the Ti-L$_{2,3}$ edge (green) as a model for the free electron gas and subtract it from the experimental Ti-L$_{2,3}$ edge (red). The residual peaks (blue) can be analyzed as pure d-states. 
The two double peaks of the Ti-L$_{2,3}$ edge indicate that there is some structure to the d-bands (here crystal field splitting).*\n\nA simple analysis of the white line ratios of Ti-L$_3$ to Ti-L$_2$ of SrTiO$_3$ yields an intensity ratio of 242 / 314 = 0.8. However, just considering the initial states (and assuming the transition probability, or more accurately the transition matrix elements, are the same for both edges) with 4 electrons in p$_{3/2}$ and 2 electrons in p$_{1/2}$ would let us expect a ratio of 2 to 1. \n\n>Please note that both the Ti-L$_3$ and Ti-L$_2$ edges are split in two. We will discuss this crystal field splitting as an ELNES feature in a later section. Here we just consider the sum over the whole Ti-L$_3$ and/or Ti-L$_2$ and ignore this splitting.\n\nThe deviation from the 2 : 1 white line ratio is assumed to be caused by J-J coupling, and is, therefore, symmetry dependent. The anomalous white line ratios have been used to determine the valency of transition elements in compounds. Physically this approach is on shaky ground, because we do not know all the reasons for the change in the ratios; it has, however, been shown to be reliable for binary metallic alloys.\n\nFortunately, there is an easier method (from the physical point of view). We compare the total amount of white line intensity (which corresponds to the empty d-states) and normalize it by the free-electron-gas-like intensity beyond the white lines. \n\nWe use the method of Okamoto et al. (Okamoto and Disko, 1992). \n\n\nThe energy window for the free-electron-like part of the edge can be chosen arbitrarily, but it must be used consistently.\nFollowing Okamoto et al., a 50 eV integration window should be used 50 eV beyond the edge onset. This allows a comparison of the results to values in the literature. 
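The bookkeeping of this normalization is simple enough to sketch with a synthetic, background-subtracted spectrum. All numbers below (onset, peak positions, integration windows) are invented for illustration; a real analysis would use measured data and a proper continuum model:

```python
import numpy as np

# Synthetic, background-subtracted edge on a 0.2 eV/channel energy scale
de = 0.2
energy = np.arange(440.0, 580.0, de)
onset = 456.0

continuum = np.where(energy >= onset, 1.0, 0.0)               # free-electron-like step
white_lines = (6.0 * np.exp(-((energy - 458.0) / 1.0)**2)     # L3-like white line
               + 3.0 * np.exp(-((energy - 463.5) / 1.0)**2))  # L2-like white line
spectrum = continuum + white_lines

# White-line intensity: what remains after subtracting the continuum model
wl_window = (energy >= onset) & (energy < onset + 20.0)
white_line_sum = (spectrum - continuum)[wl_window].sum() * de

# Normalization: a 50 eV wide window starting 50 eV past the onset
norm_window = (energy >= onset + 50.0) & (energy < onset + 100.0)
norm_sum = spectrum[norm_window].sum() * de

ratio = white_line_sum / norm_sum
print(f"white line / free electron ratio: {ratio:.2f}")
```

The ratio itself is only meaningful when compared against spectra processed with the same windows and the same continuum model, which is the point of fixing the 50 eV conventions.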
\nThe edge has to be taken in very thin areas and, if possible, corrected for the single scattering distribution, because otherwise the free-electron-like part contains plasmon contributions, which change the analysis.\n\nFor the spectrum in the figure above, we get for the white line / free electron gas ratio (50 eV - 50 eV beyond edge onset) = 556 / 974 = 0.57. Ti in SrTiO$_3$ can be considered as Ti$^{4+}$ with no electrons in the d-bands, but using this ratio in the paper of Okamoto et al. would yield a d-band occupancy of 4, as in a metal. The difference may lie in the usage of a Hartree-Slater cross-section for the analysis, while Okamoto et al. use a hydrogenic one. Also, the SrTiO$_3$ spectrum was presumably taken under completely different acquisition conditions than Okamoto's spectra.\nFor example, the SrTiO$_3$ spectrum was not corrected for convergence angle, even though it was acquired in Z-contrast mode. Another source of error is, of course, the background fit, which could change especially the free electron integration result. The fact that the SrTiO$_3$ spectrum was not corrected for the single scattering distribution may also overestimate the free electron gas contribution, even though the spectrum was taken in a very thin area.\n\nFor the TiO$_2$ spectrum of the core-loss atlas I get for the white line / free electron gas ratio 256 / 494 = 0.52. TiO$_2$ also contains only Ti$^{4+}$. This is the level of agreement we can expect if we use two spectra with completely different acquisition parameters. \n\n\nIn the plot of the Cu-L edges above, we can see that Cu has no empty d-states, but with oxidation the d-bands become unoccupied and white lines appear. 
The more electrons get transferred to the oxygen neighbors, the more empty d-states and the more prominent the white lines will appear.\n\nThis analysis of the occupancy of d-states is extremely important for magnetic materials, where the strength depends on the unpaired (d-band or f-band) electrons.\n\nThe same analysis can be done for the empty f bands of M-edges, which are also rather flat. Usually, the M$_{4,5}$ and the M$_{2,3}$ edges form doublets of white lines.\n\n\n\n\n## ELNES\n\nSo far, we have only interpreted distinct features of the shape of the ionization edges. A general approach is to look at the shape of the edges in total and use this shape as a kind of fingerprint for the interpretation. Another one is to try to understand the different features by means of electronic structure calculations of various sophistication. \n\nIn order to understand the different approaches (and their level of confidence in the results), we will discuss the most important edges one by one.\n\nThe shape of the ELNES is closely related to the density of states of the conduction band. The next chapters discuss the basics for an electronic structure interpretation of the ELNES.\n\n\n\n### Transition matrix and electronic structure\nThe single scattering intensity of an ionization edge $J_k^1(E)$ is related to the band structure through Fermi's Golden Rule: the transition rate is proportional to the density of final states $N(E)$ multiplied by the square of an atomic transition matrix $M(E)$\n \n \\begin{equation} \\Large\n J_k^1(E) \\propto |M(E)|^2 N(E)\n \\end{equation} \n \n The transition matrix describes the transition probability between the core states and the final states (given by $N(E)$). Because the transition probability generally decreases with higher energies above the edge threshold, the transition matrix gives the overall shape of the edge (sawtooth) and can be determined by atomic physics. 
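The structure of Fermi's Golden Rule above can be illustrated numerically: multiplying a slowly decaying model $|M(E)|^2$ by a model density of states that has a sharp d-band-like peak reproduces the typical edge shape, a sawtooth envelope with a white line on top. All functional forms and numbers here are invented for illustration:

```python
import numpy as np

E = np.linspace(0.0, 40.0, 401)          # energy above the edge onset (eV)

# |M(E)|^2: atomic transition matrix squared, slowly decaying with energy
M2 = 1.0 / (1.0 + E / 15.0)**2

# N(E): flat continuum plus a sharp (d-band-like) peak 2 eV above the onset
N = 1.0 + 4.0 * np.exp(-((E - 2.0) / 0.8)**2)

# Fermi's Golden Rule for the single-scattering edge intensity
J = M2 * N

peak_position = E[J.argmax()]
print(f"white-line peak sits near {peak_position:.1f} eV above the onset")
```

The peak of $N(E)$ survives almost unchanged in $J(E)$, while the matrix element suppresses the intensity far above the onset; this is why the near-edge region carries the density-of-states information.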
\n \n\n The density of final states (conduction band) $N(E)$ expresses the chemical environment and its symmetry. \n Because the core levels are highly localized, the final states $N(E)$ represent the local density of states. This localization causes a different shape for different elements in the same compound, even if they are nearest neighbors (with a distance of only a few Angstrom). The $N(E)$ will of course be different for elements in materials with different (local) symmetry, coordination or chemical composition.\n \n### Life-time broadening\n For arbitrary excitations, the $N(E)$ is the joint density of states, which means a convolution of the initial and the final states. The density of final states $N(E)$ is broadened in the spectrum by the energy resolution of the experimental setup $\\delta E$ and the width of the initial state $\\Gamma_i$. $\\Gamma_i$ can be approximated with the uncertainty principle: \n \n \\begin{equation} \\Large\n \\Gamma_i \\pi_h \\approx \\hbar\n \\end{equation}\n\n The lifetime of the core-hole $\\pi_h$ is determined by how fast the core-hole is filled and the additional energy is dissipated through emission of Auger electrons (for light elements) or X-ray photons (heavier atoms). The value of $\\Gamma_i$ depends on the threshold energy of the edge and is calculated to be between 0.1 and 2 eV for K-edges of the first 40 elements.\n \n\n Further broadening of the $N(E)$ is induced by the lifetime of the final states $\\pi_f$. The inelastic mean free path of the ejected electron is only a few nm (assuming a kinetic energy of less than 50 eV). 
 \n Using the free electron approximation ($E_{kin} = m_0 v^2 / 2$), we get for the energy broadening of the final states:\n \\begin{equation} \\Large\n \\Gamma_f \\approx \\frac {\\hbar}{\\pi_f} = \\frac{\\hbar v}{\\lambda_i } = \\frac{\\hbar}{\\lambda_i} \\sqrt{\\frac{2E_{kin}}{m_0}} \n \\end{equation}\n \n \n Since the inelastic mean free path $\\lambda_i$ varies inversely with kinetic energy $E_{kin}$ below 50 eV (and rises only slightly above 50 eV), the observed density-of-states structure is broadened more and more with increasing distance from the edge onset. \n \n The next two chapters discuss the density of final states $N(E)$ and the transition matrix $M(E)$ in detail.\n \n\n### Dipole-selection rule\n\nAssuming a single electron approximation (and almost no electronic structure theory solves the many-particle problem fully) for the excitation, we can replace the many electron transition matrix elements with single electron matrix elements $M(\\vec{q},E)$:\n\n\\begin{equation} \\Large\nM(\\vec{q},E) = \\langle f | e^{i\\vec{q}\\cdot\\vec{r}} | i \\rangle\n \\end{equation}\n \n with the initial wave function $|i\\rangle = \\phi_i$ and the complex conjugated final wave function $\\langle f| = \\phi_f^*$. For small scattering vectors $\\vec{q}$, the exponential can be expanded as $e^{i\\vec{q}\\cdot\\vec{r}} \\approx 1 + i\\vec{q}\\cdot\\vec{r}$, and the surviving dipole term only allows transitions that change the angular momentum quantum number by $\\Delta l = \\pm 1$: the dipole selection rule.\n\n*(figure: C-K edge ELNES of graphite and diamond, 275--310 eV)*\n\nLooking at the bonding of carbon in molecular orbital theory or (its predecessor) Ligand--Field theory, the non--hybridized p electron in graphite will form an occupied $\\pi$ bond and an unoccupied $\\pi^*$ bond. In figure \\ref{fig:C-K} we see that the unoccupied $\\pi^*$ state is visible in the graphite spectrum. \nIn diamond, there is no molecule-like p electron and consequently there is no $\\pi$ or $\\pi^*$ bond. \nThe appearance of the $\\pi^*$ state in a carbon spectrum is used as a fingerprint for $sp_2$ hybridization. 
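This $\pi^*$ fingerprint can be made semi-quantitative by comparing the integrated intensity in a window around the $\pi^*$ peak with that of a $\sigma^*$ window. The sketch below does this on synthetic spectra; the window limits and the Gaussian peak shapes are illustrative assumptions, and in practice the ratio is normalized against a 100 % $sp_2$ reference such as graphite:

```python
import numpy as np

def pi_star_fraction(energy, counts, pi_win=(284.0, 289.0), sigma_win=(290.0, 305.0)):
    """Fraction of intensity in the pi* window -- a crude sp2 fingerprint."""
    energy = np.asarray(energy, dtype=float)
    counts = np.asarray(counts, dtype=float)

    def window_intensity(lo, hi):
        mask = (energy >= lo) & (energy < hi)
        return float(counts[mask].sum())

    i_pi = window_intensity(*pi_win)
    i_sigma = window_intensity(*sigma_win)
    return i_pi / (i_pi + i_sigma)

# Synthetic spectra: the 'graphite-like' one has a pi* peak at 285 eV,
# the 'diamond-like' one only the broad sigma* feature.
E = np.linspace(280.0, 310.0, 301)
sigma_star = np.exp(-0.5 * ((E - 297.0) / 5.0) ** 2)
pi_star = np.exp(-0.5 * ((E - 285.0) / 0.8) ** 2)
graphite_like = sigma_star + 0.5 * pi_star
diamond_like = sigma_star
```

On these toy spectra the graphite-like curve yields a clearly larger $\pi^*$ fraction than the diamond-like one, mirroring the qualitative fingerprint argument above.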
In the case of so-called diamond-like carbon, an amorphous carbon with $sp_2$ and $sp_3$ bonds, the quality (amount of $sp_3$ bonds) of the diamond-like carbon can be assessed by the intensity of the $\\pi^*$ peak (or rather the lack of it).\n\nBoth spectra have a $\\sigma_1^*$ and a $\\sigma_2^*$ peak, which are associated with the molecule-like s states.\nThe C-K edge should show only the p density of states due to the dipole selection rule. The $\\sigma^*$ states show up in the C-K edge because these states are already related to the s-p like free electron gas density of states (s-p hybridization) above the edge. The $\\sigma^*$ states are the antibonding states of the ($sp_2$ or $sp_3$) hybridized states and are, therefore, present in any carbon compound.\n\nThe quantification of $sp_2$ versus $sp_3$ hybridization is also important in polymers (where the non-hybridized p electron in a $sp_2$ configuration forms the conducting double bonds). In Buckminster fullerenes (buckyballs) and carbon nanotubes, the $sp_3$ hybridization is always associated with a defect (dislocation-like), where a carbon atom now has 4 nearest neighbors.\n\n\n### Silicon\n\nThe calculation of the transition matrix $M(E)$ for \nthe Si-L$_{3,2}$ edge shows that the intensity of the ELNES consists almost exclusively of d-states. Less than 5\\% of the intensity stems from the likewise dipole--allowed s-DOS. \n\nWe can, therefore, assume that only d-states form the Si-L$_{3,2}$ ELNES.\nThe spin orbit splitting of the initial p states is 0.7 eV, which means that the L$_3$ and the L$_2$ edges are separated by 0.7 eV; this cannot be resolved with most instrumental setups. \nTo the calculated (local) d-DOS, the same DOS, shifted by 0.7 eV, has to be added with a weight ratio of about 2:1.\n \n\n*Comparison of experimental and theoretical data. 
While an effective-mass exciton would explain the sharp rise, the effect is too small; the electronic structure calculation without core-hole effects, placed at the correct onset, does not agree with the experiment.*\n\t\nThe edge onset of the Si-L$_{3,2}$ of pure Si should be at 100 eV without core-hole effects. A d-DOS calculated without the influence of a core--hole is shown in figure \\ref{fig:Si-L-pure}, beginning at this value. We can clearly see that this density of states cannot reproduce the experimental ELNES. From this disagreement between experiment and the DOS without core-hole effects, we conclude that the \n\tcore-hole effects must be included.\n\t\nThe main feature of the Si-L$_{3,2}$ of pure Si is the extremely sharp rise of the edge at the onset.\nThis feature cannot be explained by the d-DOS calculated without core--hole effects, which does not rise as steeply as the experimental ELNES.\n\t\nThis steep rise is another indication of the core--hole and must have its origin in an excitonic effect (an interaction of the excess electron in the conduction band and the hole in the core state).\nIn the figure above, the calculation for an effective-mass electron (due to a state that is created just below the conduction band) is compared to the experimental ELNES. Such an effective-mass electron must be considered delocalized. We see that the rise is steep enough to explain the experimental rise, but we also see that the effect (intensity) is too small to change the ELNES.\n\n\t\nOnly the explicit inclusion of the core--hole or the Z+1 calculations in figure \\ref{fig:Si-L-pure2} can explain this steep onset. We can, therefore, conclude that there is a localized excitonic enhancement of the states at the bottom of the conduction band. 
This is a rather localized excitonic effect.\n\nWe can also see in the comparison of the explicit inclusion of the core-hole and the Z+1 approximation that both simulations lead to the same ELNES; however, only the explicit core-hole calculation can predict the exact intensity (cross section) of the Si-L$_{3,2}$ edge.\n\nThe same calculations are also successful for SiO$_2$ (quartz), as can be seen in figure \\ref{fig:Si-L-sio2}. The experimental data show the spin--orbit splitting in the first peak; all other features are too smeared out to show a clear distinction between transitions originating from $2p_{3/2}$ and $2p_{1/2}$.\nDue to the simple addition of the shifted spectra, the splitting in the first peak is reproduced rather easily and cannot be used for further analysis. Again, this edge is completely dominated by the local d-DOS. \n \n\n### Oxygen and Nitrogen\n\nOxygen and nitrogen edges are usually very similar. Here we will discuss mostly the oxygen edge, but the discussion can be easily transferred to nitrogen.\n\nThe Si-SiO$_2$ interface shows oxygen deficiency in the oxide at the interface. In the following, I will show that the oxygen K edge ELNES cannot be used to probe the oxygen deficiency. Experimentally, the oxygen K edge shows a chemical shift of about 1 eV. The structure of the edge is washed out at the interface, as shown in the figure. Higher spatial resolution experiments by Muller (Nature, 2003) show a completely structureless O-K edge. Simulations of the O-K edge show that this shift and the featureless structure are due to the dimer-like structure (Si-O-Si), which is not avoidable at any Si-SiO$_2$ interface.\n\nAnother approach is the so-called \"finger-print\" method. In this method, one compares edges from different but known materials and hopes that similar features are conclusive for different coordinations within the unknown material. 
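A minimal version of such a fingerprint comparison is sketched below: each spectrum is normalized and the unknown is matched to the reference with the highest cosine similarity. The spectra here are synthetic Gaussians and the material names are made up; a real comparison would additionally require aligned energy scales and background-subtracted edges:

```python
import numpy as np

def best_fingerprint_match(unknown, references):
    """Return the reference key whose spectrum is most similar (cosine) to `unknown`."""
    def unit(s):
        s = np.asarray(s, dtype=float)
        return s / np.linalg.norm(s)

    u = unit(unknown)
    scores = {name: float(unit(spec) @ u) for name, spec in references.items()}
    return max(scores, key=scores.get), scores

# Two synthetic O-K-like reference shapes on a 525-570 eV grid (illustrative only)
E = np.linspace(525.0, 570.0, 226)
references = {
    'oxide_A': np.exp(-0.5 * ((E - 532.0) / 1.5) ** 2)
               + 0.6 * np.exp(-0.5 * ((E - 545.0) / 4.0) ** 2),
    'oxide_B': np.exp(-0.5 * ((E - 538.0) / 2.5) ** 2),
}
rng = np.random.default_rng(0)
unknown = references['oxide_A'] + 0.05 * rng.normal(size=E.size)  # noisy "measurement"
match, scores = best_fingerprint_match(unknown, references)
```

Even with noise added, the unknown spectrum is correctly matched to the reference it was derived from, which is the essence of the fingerprint idea.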
This approach can be improved by using simulations of the ELNES of the edges, as seen in the plot below.\n\n\n```python\nO_reference_spectra = eels.get_spectrum_eels_db(element='O',edge='K')\n```\n\n [output truncated: titles of the retrieved reference spectra, e.g. SrTiO3_O_K_imhoff_232, MgO_O_K_Giovanni_Bertoni_112, ZnO_O_K_Wilfried_Sigle_169, Cr2O3_(alpha)_O_K_Rik_Brydson_158, ...]\n found 55 spectra in EELS database\n\n\n```python\nO_reference_titles = ['SrTiO3_O_K_imhoff_232',\n 'MgO_O_K_Giovanni_Bertoni_112',\n 'ZnO_O_K_Wilfried_Sigle_169',\n 'Cr2O3_(alpha)_O_K_Rik_Brydson_158'\n ]\nO_reference_materials = ['SrTiO$_3$', 'MgO', 'ZnO', 'Cr$_2$O$_3$']\nplt.figure()\nfor name, spectrum in O_reference_spectra.items():\n if spectrum['TITLE'] in O_reference_titles:\n # Match each label to its title, independent of the iteration order\n idx = O_reference_titles.index(spectrum['TITLE'])\n plt.plot(spectrum['enery_scale'], spectrum['data']/np.max(spectrum['data']), label='O-K: '+O_reference_materials[idx])\n\nplt.legend()\nplt.xlim(525, 570)\n```\n\n*(plot: normalized O-K edge reference spectra of SrTiO$_3$, MgO, ZnO and Cr$_2$O$_3$, 525--570 eV)*\n\nA comparison shows that the cubic oxides MgO and MgAl$_2$O$_4$ (spinel structure) have a similar ELNES, which differs from the ELNES of the rhombohedral oxide Al$_2$O$_3$.\n\n\nCorrelation effects of valence electrons cause the so-called Hubbard band. 
These bands dominate the electronic structure in high T$_c$ superconductors, giant magnetoresistors and other materials with strong correlations.\n\nIn the figure below, we see that such a correlation effect takes place in the giant magnetoresistor LaMnO$_3$ but not in the perovskite LaAlO$_3$, which does not show this splitting of the d bands (Al has no d electrons, but Mn does), a precondition for the formation of a Hubbard band.\n\t\n\n\n\n*The O-K edge in LaMnO$_3$ has a pre-peak that is interpreted as a Hubbard band; it is not present in LaAlO$_3$, even though it has the same structure.*\n\n\t\nThe same Hubbard band is visible in the high T$_c$ superconductor YBa$_2$Cu$_3$O$_{7-\\delta}$.\nIn figure \\ref{fig:O-K-YBCO-DL}, we see the probing of this Hubbard band at a dislocation in YBa$_2$Cu$_3$O$_{7-\\delta}$.\n\n\n\n*The O-K edge at a dislocation in YBa$_2$Cu$_3$O$_{7-\\delta}$ has less of the signature of a Hubbard band than far away in the bulk. This lack of holes in the dislocation disturbs the superconductivity and is utilized in grain boundaries for Josephson junctions but is detrimental in polycrystalline high T$_c$ wires.*\n\n\nThe prepeak that is caused by the Hubbard band is reduced in the vicinity of the dislocation and vanishes completely within the dislocation core. 
This lack of holes in the dislocation disturbs the superconductivity; this effect is utilized in grain boundaries for Josephson junctions but is detrimental in polycrystalline high T$_c$ wires.\n \n\n## Spatial Resolution in EELS\n% images/spat-dif-resolution.jpg\n\n### Methods to achieve spatial resolution\n- Spot Mode\n- Area Mode = Spatial Difference\n- Line Scan Mode\n - Each Spot of the Line: one Spectrum\n - Each Segment of the Line: one Spectrum\n - Each Area of the Line: one Spectrum\n- Spectrum Imaging\n\n### Spot mode\n% images/spat-dif-spot.jpg\n\n### Spatial difference\n% images/spat-dif-spatdif1.jpg\n% images/spat-dif-spatdif2.jpg\n% images/spat-dif-spatdif3.jpg\n*EELS at a Bi-doped Cu grain boundary*\n% images/spat-dif-spatdif4.jpg\n% images/spat-dif-ls1.jpg\n*As segregation at a Si/SiO$_2$ interface*\n% images/spat-dif-ls2.jpg\n*As segregation at a Si/SiO$_2$ interface*\n\n### Energy Filtered Imaging (EFTEM)\n% images/spat-dif-eftem1.jpg\n% images/spat-dif-eftem2.jpg\n% images/spat-dif-eftem3.jpg\n\n## Summary\n\nThe core--loss part of the electron energy--loss spectrum allows us to determine:\n- chemical composition\n- bonding\n- magnetic moment through ionic charge\n\n>\n>\twith high spatial resolution!!!\n>\n\n## Navigation\n- **Up Chapter 4: [Spectroscopy](CH4_00-Spectroscopy.ipynb)** \n- **Back: [Analysis of Core-Loss](CH4_09-Analysis_Core_Loss.ipynb)** \n- **List of Content: [Front](../_MSE672_Intro_TEM.ipynb)** \n\n\n```python\n\n```\n\n# [Math-Bot] Siamese LSTM: Detecting duplicates\n\n\n\nAuthor: Alin-Andrei Georgescu 2021\n\nWelcome to my notebook! It explores Siamese networks applied to natural language processing. The model is intended to detect duplicates, in other words, to check whether two sentences are similar.\nThe model uses \"Long short-term memory\" (LSTM) neural networks, which are artificial recurrent neural networks (RNNs).
This version uses a custom embedding layer with weights based on GloVe pretrained vectors.\n\n## Outline\n\n- [Overview](#0)\n- [Part 1: Importing the Data](#1)\n - [1.1 Loading in the data](#1.1)\n - [1.2 Converting a sentence to a tensor](#1.2)\n - [1.3 Understanding and building the iterator](#1.3)\n- [Part 2: Defining the Siamese model](#2)\n - [2.1 Understanding and building the Siamese Network](#2.1)\n - [2.2 Implementing Hard Negative Mining](#2.2)\n- [Part 3: Training](#3)\n- [Part 4: Evaluation](#4)\n- [Part 5: Making predictions](#5)\n\n\n### Overview\n\nGeneral ideas:\n- Designing a Siamese network model\n- Implementing the triplet loss\n- Evaluating accuracy\n- Using cosine similarity between the model's outputted vectors\n- Working with the Trax and NumPy libraries in Python 3\n\nThe LSTM cell's architecture (source: https://www.researchgate.net/figure/The-structure-of-the-LSTM-unit_fig2_331421650):\n\n\n\n\nI will start by preprocessing the data, then I will build a classifier that will identify whether two sentences are the same or not. \n\n\nI tokenized the data, then split the dataset into training and testing sets. I loaded pretrained GloVe word embeddings and built a sentence's vector by averaging the composing words' vectors. The model takes in the two sentence embeddings, runs them through an LSTM, and then compares the outputs of the two subnetworks using cosine similarity.\n\nThis notebook has been built based on Coursera's Natural Language Processing Specialization.\n\n\n\n# Part 1: Importing the Data\n\n### 1.1 Loading in the data\n\nThe first step in building a model is building a dataset. I used three datasets in building my model:\n- the Quora Question Pairs dataset\n- an edited SICK dataset\n- a custom Maths duplicates dataset\n\nRun the cell below to import some of the needed packages. 
\n\n\n```python\nimport os\nimport re\nimport nltk\nfrom nltk.corpus import stopwords\nnltk.download('punkt')\nnltk.download('stopwords')\nnltk.download('wordnet')\n\nimport numpy as np\nimport pandas as pd\nimport random as rnd\n\n!pip install textcleaner\nimport textcleaner as tc\n\n!pip install trax\nimport trax\nfrom trax import layers as tl\nfrom trax.supervised import training\nfrom trax.fastmath import numpy as fastnp\n\nfrom collections import defaultdict\n\n# set random seeds\nrnd.seed(34)\n```\n\n**Notice that in this notebook Trax's numpy is referred to as `fastnp`, while regular numpy is referred to as `np`.**\n\nNow the dataset and word embeddings will get loaded and the data processed.\n\n\n```python\ndata = pd.read_csv(\"data/merged_dataset.csv\", encoding=\"utf-8\")\n\nN = len(data)\nprint(\"Number of sentence pairs: \", N)\n\ndata.head()\n\n!wget -O data/glove.840B.300d.zip nlp.stanford.edu/data/glove.840B.300d.zip\n!unzip -d data data/glove.840B.300d.zip\n!rm data/glove.840B.300d.zip\n\n!wget -O data/glove.6B.zip nlp.stanford.edu/data/glove.6B.zip\n!unzip -d data data/glove.6B.zip\n!rm data/glove.6B.zip\n!rm data/glove.6B.200d.txt\n!rm data/glove.6B.100d.txt\n!rm data/glove.6B.50d.txt\n\n# load the whole embedding into memory\nembeddings_index = defaultdict(lambda: 1)\nembeddings = dict()\nindex = 0\n\npad = \"\"\nembeddings[pad] = np.zeros(300)\nembeddings_index[pad] = index\nindex += 1\n\nunk = \"\"\nembeddings[unk] = np.asarray([0.22418134, -0.28881392, 0.13854356, 0.00365387, -0.12870757, 0.10243822, 0.061626635, 0.07318011, -0.061350107, -1.3477012, 0.42037755, -0.063593924, -0.09683349, 0.18086134, 0.23704372, 0.014126852, 0.170096, -1.1491593, 0.31497982, 0.06622181, 0.024687296, 0.076693475, 0.13851812, 0.021302193, -0.06640582, -0.010336159, 0.13523154, -0.042144544, -0.11938788, 0.006948221, 0.13333307, -0.18276379, 0.052385733, 0.008943111, -0.23957317, 0.08500333, -0.006894406, 0.0015864656, 0.063391194, 0.19177166, -0.13113557, 
-0.11295479, -0.14276934, 0.03413971, -0.034278486, -0.051366422, 0.18891625, -0.16673574, -0.057783455, 0.036823478, 0.08078679, 0.022949161, 0.033298038, 0.011784158, 0.05643189, -0.042776518, 0.011959623, 0.011552498, -0.0007971594, 0.11300405, -0.031369694, -0.0061559738, -0.009043574, -0.415336, -0.18870236, 0.13708843, 0.005911723, -0.113035575, -0.030096142, -0.23908928, -0.05354085, -0.044904727, -0.20228513, 0.0065645403, -0.09578946, -0.07391877, -0.06487607, 0.111740574, -0.048649278, -0.16565254, -0.052037314, -0.078968436, 0.13684988, 0.0757494, -0.006275573, 0.28693774, 0.52017444, -0.0877165, -0.33010918, -0.1359622, 0.114895485, -0.09744406, 0.06269521, 0.12118575, -0.08026362, 0.35256687, -0.060017522, -0.04889904, -0.06828978, 0.088740796, 0.003964443, -0.0766291, 0.1263925, 0.07809314, -0.023164088, -0.5680669, -0.037892066, -0.1350967, -0.11351585, -0.111434504, -0.0905027, 0.25174105, -0.14841858, 0.034635577, -0.07334565, 0.06320108, -0.038343467, -0.05413284, 0.042197507, -0.090380974, -0.070528865, -0.009174437, 0.009069661, 0.1405178, 0.02958134, -0.036431845, -0.08625681, 0.042951006, 0.08230793, 0.0903314, -0.12279937, -0.013899368, 0.048119213, 0.08678239, -0.14450377, -0.04424887, 0.018319942, 0.015026873, -0.100526, 0.06021201, 0.74059093, -0.0016333034, -0.24960588, -0.023739101, 0.016396184, 0.11928964, 0.13950661, -0.031624354, -0.01645025, 0.14079992, -0.0002824564, -0.08052984, -0.0021310581, -0.025350995, 0.086938225, 0.14308536, 0.17146006, -0.13943303, 0.048792403, 0.09274929, -0.053167373, 0.031103406, 0.012354865, 0.21057427, 0.32618305, 0.18015954, -0.15881181, 0.15322933, -0.22558987, -0.04200665, 0.0084689725, 0.038156632, 0.15188617, 0.13274793, 0.113756925, -0.095273495, -0.049490947, -0.10265804, -0.27064866, -0.034567792, -0.018810693, -0.0010360252, 0.10340131, 0.13883452, 0.21131058, -0.01981019, 0.1833468, -0.10751636, -0.03128868, 0.02518242, 0.23232952, 0.042052146, 0.11731903, -0.15506615, 0.0063580726, 
-0.15429358, 0.1511722, 0.12745973, 0.2576985, -0.25486213, -0.0709463, 0.17983761, 0.054027, -0.09884228, -0.24595179, -0.093028545, -0.028203879, 0.094398156, 0.09233813, 0.029291354, 0.13110267, 0.15682974, -0.016919162, 0.23927948, -0.1343307, -0.22422817, 0.14634751, -0.064993896, 0.4703685, -0.027190214, 0.06224946, -0.091360025, 0.21490277, -0.19562101, -0.10032754, -0.09056772, -0.06203493, -0.18876675, -0.10963594, -0.27734384, 0.12616494, -0.02217992, -0.16058226, -0.080475815, 0.026953284, 0.110732645, 0.014894041, 0.09416802, 0.14299914, -0.1594008, -0.066080004, -0.007995227, -0.11668856, -0.13081996, -0.09237365, 0.14741232, 0.09180138, 0.081735, 0.3211204, -0.0036552632, -0.047030564, -0.02311798, 0.048961394, 0.08669574, -0.06766279, -0.50028914, -0.048515294, 0.14144728, -0.032994404, -0.11954345, -0.14929578, -0.2388355, -0.019883996, -0.15917352, -0.052084364, 0.2801028, -0.0029121689, -0.054581646, -0.47385484, 0.17112483, -0.12066923, -0.042173345, 0.1395337, 0.26115036, 0.012869649, 0.009291686, -0.0026459037, -0.075331464, 0.017840583, -0.26869613, -0.21820338, -0.17084768, -0.1022808, -0.055290595, 0.13513643, 0.12362477, -0.10980586, 0.13980341, -0.20233242, 0.08813751, 0.3849736, -0.10653763, -0.06199595, 0.028849555, 0.03230154, 0.023856193, 0.069950655, 0.19310954, -0.077677034, -0.144811], dtype='float32')\nembeddings_index[unk] = index\nindex += 1\n\nf = open(\"data/glove.6B.300d.txt\")\nfor line in f:\n values = line.split()\n word = values[0]\n coefs = np.asarray(values[1:], dtype=\"float32\")\n embeddings_index[word] = index\n index += 1\n embeddings[word] = coefs\nf.close()\n\nvocab_size = index + 1\nprint(\"Loaded %s word vectors.\" % vocab_size)\n```\n\nThen I split the data into a train and test set. 
The test set will be used later to evaluate the model.\n\n\n```python\nN_dups = len(data[data.is_duplicate == 1])\n\n# Take 90% of the duplicates for the train set\nN_train = int(N_dups * 0.9)\nprint(N_train)\n\n# Take the rest of the duplicates for the test set + an equal number of non-duplicates\nN_test = (N_dups - N_train) * 2\nprint(N_test)\n\ndata_train = data[: N_train]\n# Shuffle the train set\ndata_train = data_train.sample(frac=1)\n\ndata_test = data[N_train : N_train + N_test]\n# Shuffle the test set\ndata_test = data_test.sample(frac=1)\n\nprint(\"Train set: \", len(data_train), \"; Test set: \", len(data_test))\n\n# Remove the unneeded data to free some memory\ndel(data)\n```\n\n\n```python\nS1_train_words = np.array(data_train[\"sentence1\"])\nS2_train_words = np.array(data_train[\"sentence2\"])\n\nS1_test_words = np.array(data_test[\"sentence1\"])\nS2_test_words = np.array(data_test[\"sentence2\"])\ny_test = np.array(data_test[\"is_duplicate\"])\n\ndel(data_train)\ndel(data_test)\n```\n\nAbove, you have seen that the model only takes the duplicated sentences for training.\nAll this has a purpose, as the data generator will produce batches $([s1_1, s1_2, s1_3, ...]$, $[s2_1, s2_2, s2_3, ...])$, where $s1_i$ and $s2_k$ are duplicates if and only if $i = k$.\n\nAn example of how the data looks is shown below.\n\n\n```python\nprint(\"TRAINING SENTENCES:\\n\")\nprint(\"Sentence 1: \", S1_train_words[0])\nprint(\"Sentence 2: \", S2_train_words[0], \"\\n\")\nprint(\"Sentence 1: \", S1_train_words[5])\nprint(\"Sentence 2: \", S2_train_words[5], \"\\n\")\n\nprint(\"TESTING SENTENCES:\\n\")\nprint(\"Sentence 1: \", S1_test_words[0])\nprint(\"Sentence 2: \", S2_test_words[0], \"\\n\")\nprint(\"is_duplicate =\", y_test[0], \"\\n\")\n```\n\nThe first step is to tokenize the sentences using a custom tokenizer, defined below.\n\n\n```python\n# Create arrays\nS1_train = np.empty_like(S1_train_words)\nS2_train = np.empty_like(S2_train_words)\n\nS1_test = 
np.empty_like(S1_test_words)\nS2_test = np.empty_like(S2_test_words)\n```\n\n\n```python\ndef data_tokenizer(sentence):\n    \"\"\"Tokenizer function - cleans and tokenizes the data\n\n    Args:\n        sentence (str): The input sentence.\n    Returns:\n        list: The transformed input sentence.\n    \"\"\"\n    \n    if sentence == \"\":\n        return \"\"\n\n    sentence = tc.lower_all(sentence)[0]\n\n    # Replace runs of tabs and underscores with a space\n    sentence = re.sub(r\"\\t+_+\", \" \", sentence)\n    # Expand short forms\n    sentence = re.sub(r\"\\'ve\", \" have\", sentence)\n    sentence = re.sub(r\"(can\\'t|can not)\", \"cannot\", sentence)\n    sentence = re.sub(r\"n\\'t\", \" not\", sentence)\n    sentence = re.sub(r\"I\\'m\", \"I am\", sentence)\n    sentence = re.sub(r\" m \", \" am \", sentence)\n    sentence = re.sub(r\"(\\'re| r )\", \" are \", sentence)\n    sentence = re.sub(r\"\\'d\", \" would \", sentence)\n    sentence = re.sub(r\"\\'ll\", \" will \", sentence)\n    sentence = re.sub(r\"(\\d+)(k)\", r\"\\g<1>000\", sentence)\n    # Make word separations around operators (note: \\g<1> is the backreference syntax; \"$1\" would insert a literal $1)\n    sentence = re.sub(r\"(\\+|-|\\*|\\/|\\^|\\.)\", r\" \\g<1> \", sentence)\n    # Remove irrelevant stuff, nonprintable characters and spaces\n    sentence = re.sub(r\"(\\'s|\\'S|\\'|\\\"|,|[^ -~]+)\", \"\", sentence)\n    sentence = tc.strip_all(sentence)[0]\n\n    if sentence == \"\":\n        return \"\"\n\n    return tc.token_it(tc.lemming(sentence))[0]\n```\n\n\n```python\nfor idx in range(len(S1_train_words)):\n    S1_train[idx] = data_tokenizer(S1_train_words[idx])\n\nfor idx in range(len(S2_train_words)):\n    S2_train[idx] = data_tokenizer(S2_train_words[idx])\n    \nfor idx in range(len(S1_test_words)): \n    S1_test[idx] = data_tokenizer(S1_test_words[idx])\n\nfor idx in range(len(S2_test_words)): \n    S2_test[idx] = data_tokenizer(S2_test_words[idx])\n```\n\n\n### 1.2 Converting a sentence to a tensor\n\nThe next step is to convert every sentence to a tensor, or an array of numbers, using the word embeddings loaded above.\n\n\n```python\n# Converting sentences to arrays of integers\nsentence_of_indexes = []\n\nfor i 
in range(len(S1_train)):\n    for word in S1_train[i]:\n        sentence_of_indexes += [embeddings_index[word]] if word in embeddings_index else [embeddings_index[unk]]\n    \n    S1_train[i] = sentence_of_indexes\n    sentence_of_indexes = []\n\nfor i in range(len(S2_train)):\n    for word in S2_train[i]:\n        sentence_of_indexes += [embeddings_index[word]] if word in embeddings_index else [embeddings_index[unk]]\n    \n    S2_train[i] = sentence_of_indexes\n    sentence_of_indexes = []\n\nfor i in range(len(S1_test)):\n    for word in S1_test[i]:\n        sentence_of_indexes += [embeddings_index[word]] if word in embeddings_index else [embeddings_index[unk]]\n    \n    S1_test[i] = sentence_of_indexes\n    sentence_of_indexes = []\n\nfor i in range(len(S2_test)):\n    for word in S2_test[i]:\n        sentence_of_indexes += [embeddings_index[word]] if word in embeddings_index else [embeddings_index[unk]]\n    \n    S2_test[i] = sentence_of_indexes\n    sentence_of_indexes = []\n```\n\n\n```python\nprint(\"FIRST SENTENCE IN TRAIN SET:\\n\")\nprint(S1_train_words[0], \"\\n\") \nprint(\"ENCODED VERSION:\")\nprint(S1_train[0],\"\\n\")\ndel(S1_train_words)\ndel(S2_train_words)\n\nprint(\"FIRST SENTENCE IN TEST SET:\\n\")\nprint(S1_test_words[0], \"\\n\")\nprint(\"ENCODED VERSION:\")\nprint(S1_test[0])\ndel(S1_test_words)\ndel(S2_test_words)\n```\n\nNow, the train set must be split into a training/validation set so that it can be used to train and evaluate the Siamese model.\n\n\n```python\n# Splitting the data\ncut_off = int(len(S1_train) * .8)\ntrain_S1, train_S2 = S1_train[: cut_off], S2_train[: cut_off]\nval_S1, val_S2 = S1_train[cut_off :], S2_train[cut_off :]\nprint(\"Number of duplicate sentences: \", len(S1_train))\nprint(\"The length of the training set is: \", len(train_S1))\nprint(\"The length of the validation set is: \", len(val_S1))\n```\n\n\n### 1.3 Understanding and building the iterator \n\nGiven the computational limits, we need to split our data into batches. 
            In this notebook, I built a data generator that takes in $S1$ and $S2$ and returns a batch of size `batch_size` in the following format: $([s1_1, s1_2, s1_3, ...]$, $[s2_1, s2_2, s2_3, ...])$. The tuple consists of two arrays, and each array holds `batch_size` sentences. Again, $s1_i$ and $s2_i$ are duplicates, but they are not duplicates of any other element in the batch. \n\nThe command `next(data_generator)` returns the next batch. This iterator returns a pair of arrays of sentences, which will later be fed to the model.\n\n**Key ideas:** \n- The generator returns shuffled batches of data. To achieve this without modifying the actual sentence lists, a list containing the indexes of the sentences is created. This list can be shuffled and used to get random batches every time the index is reset.\n- Elements of $S1$ and $S2$ are appended to `input1` and `input2` respectively.\n\n\n```python\ndef data_generator(S1, S2, batch_size, pad=1, shuffle=False):\n    \"\"\"Generator function that yields batches of data\n\n    Args:\n        S1 (list): List of transformed (to tensor) sentences.\n        S2 (list): List of transformed (to tensor) sentences.\n        batch_size (int): Number of elements per batch.\n        pad (int, optional): Pad character from the vocab. Defaults to 1.\n        shuffle (bool, optional): Whether the batches should be randomized. Defaults to False.\n    Yields:\n        tuple: Of the form (input1, input2) with types (numpy.ndarray, numpy.ndarray)\n        NOTE: input1: inputs to your model [s1a, s2a, s3a, ...] i.e. (s1a,s1b) are duplicates\n              input2: targets to your model [s1b, s2b, s3b, ...] i.e. (s1a,s2i) i!=a are not duplicates\n    \"\"\"\n\n    input1 = []\n    input2 = []\n    idx = 0\n    len_s = len(S1)\n    sentence_indexes = [*range(len_s)]\n\n    if shuffle:\n        rnd.shuffle(sentence_indexes)\n\n    while True:\n        if idx >= len_s:\n            # If idx is greater than or equal to len_s, reset it\n            idx = 0\n            # Shuffle to get random batches if shuffle is set to True\n            if shuffle:\n                rnd.shuffle(sentence_indexes)\n\n        s1 = S1[sentence_indexes[idx]]\n        s2 = S2[sentence_indexes[idx]]\n\n        idx += 1\n\n        input1.append(s1)\n        input2.append(s2)\n\n        if len(input1) == batch_size:\n            # Determine max_len as the longest sentence in input1 & input2\n            max_len = max(max([len(s) for s in input1]), max([len(s) for s in input2]))\n            # Round max_len up to the next power of 2\n            max_len = 2 ** int(np.ceil(np.log2(max_len)))\n\n            b1 = []\n            b2 = []\n            for s1, s2 in zip(input1, input2):\n                # Add [pad] to s1 until it reaches max_len\n                s1 = s1 + [pad] * (max_len - len(s1))\n                # Add [pad] to s2 until it reaches max_len\n                s2 = s2 + [pad] * (max_len - len(s2))\n\n                b1.append(s1)\n                b2.append(s2)\n\n            # Yield the padded batch\n            yield np.array(b1), np.array(b2)\n\n            # Reset the batches\n            input1, input2 = [], []\n```\n\n\n```python\nbatch_size = 2\nres1, res2 = next(data_generator(train_S1, train_S2, batch_size))\nprint(\"First sentences :\\n\", res1, \"\\n Shape: \", res1.shape)\nprint(\"Second sentences :\\n\", res2, \"\\n Shape: \", res2.shape)\n```\n\nNow that we have a data generator, we can go ahead and start building the neural network.\n\n\n# Part 2: Defining the Siamese model\n\n\n\n### 2.1 Understanding and building the Siamese Network \n\nA Siamese network is a neural network that uses the same weights while working in tandem on two different input vectors to compute comparable output vectors. 
            
            The Siamese network model proposed in this notebook looks like this:\n\n\n\nThe sentences' embeddings are passed to an LSTM layer; the output vectors, $v_1$ and $v_2$, are normalized, and finally a triplet loss over the cosine similarity of each pair of sentences is computed. The triplet loss makes use of a baseline (anchor) input that is compared to a positive (truthy) input and a negative (falsy) input. The distance from the baseline (anchor) input to the positive (truthy) input is minimized, and the distance from the baseline (anchor) input to the negative (falsy) input is maximized. In equations, the following loss is minimized:\n\n$$\\mathcal{L}(A, P, N)=\\max \\left(\\|\\mathrm{f}(A)-\\mathrm{f}(P)\\|^{2}-\\|\\mathrm{f}(A)-\\mathrm{f}(N)\\|^{2}+\\alpha, 0\\right)$$\n\n$A$ is the anchor input, for example $s1_1$, $P$ the duplicate input, for example, $s2_1$, and $N$ the negative input (the non-duplicate sentence), for example $s2_2$.
            
            \n$\\alpha$ is a margin that controls how far the duplicates are pushed away from the non-duplicates. \n
            
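            To make the formula concrete, the plain triplet loss can be evaluated directly in NumPy; the vectors and the margin below are arbitrary toy values chosen for illustration, not taken from the data set:

            ```python
            import numpy as np

            def triplet_loss(A, P, N, alpha=0.25):
                """Plain triplet loss: max(||f(A)-f(P)||^2 - ||f(A)-f(N)||^2 + alpha, 0)."""
                pos_dist = np.sum((A - P) ** 2)  # squared distance anchor -> positive
                neg_dist = np.sum((A - N) ** 2)  # squared distance anchor -> negative
                return max(pos_dist - neg_dist + alpha, 0.0)

            anchor   = np.array([1.0, 0.0])
            positive = np.array([0.9, 0.1])   # close to the anchor
            negative = np.array([-1.0, 0.0])  # far from the anchor
            print(triplet_loss(anchor, positive, negative))  # 0.0
            print(triplet_loss(anchor, positive, positive))  # 0.25
            ```

            Training drives the anchor-positive distance down and the anchor-negative distance up until the loss reaches zero for every triplet; the second call shows that a negative which is as close as the positive still incurs the full margin.
            
            
            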
            \n\n**Key ideas:** \n- The Trax library is used to implement the model.\n- `tl.Serial`: a combinator that applies layers serially (by function composition), allowing us to set up the overall structure of the feed-forward network.\n- `tl.LSTM`: the LSTM layer. \n- `tl.Mean`: computes the mean across a desired axis. Mean uses one tensor axis to form groups of values and replaces each group with the mean value of that group.\n- `tl.Fn`: a layer with no weights that applies the function f, vector normalization in this case.\n- `tl.Parallel`: a combinator layer (like `Serial`) that applies a list of layers in parallel to its inputs.\n\n\n\n```python\ndef Siamese(vocab_size=vocab_size, d_model=128):\n    \"\"\"Returns a Siamese model.\n\n    Args:\n        vocab_size (int, optional): Length of the vocabulary. Defaults to the global vocab_size.\n        d_model (int, optional): Depth of the model. Defaults to 128.\n\n    Returns:\n        trax.layers.combinators.Parallel: A Siamese model. 
            
\n \"\"\"\n \n global embedding_layer\n\n def normalize(x): # normalizes the vectors to have L2 norm 1\n return x / fastnp.sqrt(fastnp.sum(x * x, axis=-1, keepdims=True))\n\n embedding_layer = tl.Embedding(vocab_size, d_model)\n\n s_processor = tl.Serial( # Processor will run on S1 and S2.\n embedding_layer,\n tl.LSTM(d_model), # LSTM layer\n tl.Mean(axis=1), # Mean over columns\n tl.Fn('Normalize', lambda x: normalize(x)) # Apply normalize function\n ) # Returns one vector of shape [batch_size, d_model].\n \n # Run on S1 and S2 in parallel.\n model = tl.Parallel(s_processor, s_processor)\n return model\n```\n\nSetup the Siamese network model.\n\n\n```python\n# Check the model\nmodel = Siamese()\nembedding_layer.weights = np.asarray(list(embeddings.values()))\nprint(model)\n```\n\n\n\n### 2.2 Implementing Hard Negative Mining\n\n\nNow it's the time to implement the `TripletLoss`.\nAs explained earlier, loss is composed of two terms. One term utilizes the mean of all the non duplicates, the second utilizes the *closest negative*. The loss expression is then:\n \n\\begin{align}\n \\mathcal{Loss_1(A,P,N)} &=\\max \\left( -cos(A,P) + mean_{neg} +\\alpha, 0\\right) \\\\\n \\mathcal{Loss_2(A,P,N)} &=\\max \\left( -cos(A,P) + closest_{neg} +\\alpha, 0\\right) \\\\\n\\mathcal{Loss(A,P,N)} &= mean(Loss_1 + Loss_2) \\\\\n\\end{align}\n\n\n```python\ndef TripletLossFn(v1, v2, margin=0.25):\n \"\"\"Custom Loss function.\n\n Args:\n v1 (numpy.ndarray): Array with dimension (batch_size, model_dimension) associated to S1.\n v2 (numpy.ndarray): Array with dimension (batch_size, model_dimension) associated to S2.\n margin (float, optional): Desired margin. 
Defaults to 0.25.\n\n Returns:\n jax.interpreters.xla.DeviceArray: Triplet Loss.\n \"\"\"\n\n scores = fastnp.dot(v1, v2.T) # pairwise cosine sim\n batch_size = len(scores)\n\n positive = fastnp.diagonal(scores) # the positive ones (duplicates)\n negative_without_positive = scores - 2.0 * fastnp.eye(batch_size)\n\n closest_negative = fastnp.max(negative_without_positive, axis=1)\n negative_zero_on_duplicate = (1.0 - fastnp.eye(batch_size)) * scores\n mean_negative = fastnp.sum(negative_zero_on_duplicate, axis=1) / (batch_size - 1)\n\n triplet_loss1 = fastnp.maximum(0.0, margin - positive + closest_negative)\n triplet_loss2 = fastnp.maximum(0.0, margin - positive + mean_negative)\n triplet_loss = fastnp.mean(triplet_loss1 + triplet_loss2)\n \n return triplet_loss\n```\n\n\n```python\nv1 = np.array([[0.26726124, 0.53452248, 0.80178373],[0.5178918 , 0.57543534, 0.63297887]])\nv2 = np.array([[ 0.26726124, 0.53452248, 0.80178373],[-0.5178918 , -0.57543534, -0.63297887]])\nTripletLossFn(v2,v1)\nprint(\"Triplet Loss:\", TripletLossFn(v2,v1))\n```\n\n**Expected Output:**\n```CPP\nTriplet Loss: 0.5\n``` \n\n\n```python\nfrom functools import partial\ndef TripletLoss(margin=1):\n # Trax layer creation\n triplet_loss_fn = partial(TripletLossFn, margin=margin)\n return tl.Fn(\"TripletLoss\", triplet_loss_fn)\n```\n\n\n\n# Part 3: Training\n\nThe next step is model training - defining the cost function and the optimizer, feeding in the built model. But first I will define the data generators used in the model.\n\n\n```python\nbatch_size = 512\ntrain_generator = data_generator(train_S1, train_S2, batch_size)\nval_generator = data_generator(val_S1, val_S2, batch_size)\nprint(\"train_S1.shape \", train_S1.shape)\nprint(\"val_S1.shape \", val_S1.shape)\n```\n\nNow, I will define the training step. 
            Training proceeds in `epoch`s; each epoch is one full pass over all the training data, using the training iterator.\n\n**Key ideas:**\n- Two tasks are needed: `TrainTask` and `EvalTask`.\n- The training runs in a Trax loop, `trax.supervised.training.Loop`.\n- The remaining parameters are passed to the loop.\n\n\n```python\ndef train_model(Siamese, TripletLoss, lr_schedule, train_generator=train_generator, val_generator=val_generator, output_dir=\"trax_model/\"):\n    \"\"\"Training the Siamese Model\n\n    Args:\n        Siamese (function): Function that returns the Siamese model.\n        TripletLoss (function): Function that defines the TripletLoss loss function.\n        lr_schedule (function): Trax multifactor schedule function.\n        train_generator (generator, optional): Training generator. Defaults to train_generator.\n        val_generator (generator, optional): Validation generator. Defaults to val_generator.\n        output_dir (str, optional): Path to save the model to. Defaults to \"trax_model/\".\n\n    Returns:\n        trax.supervised.training.Loop: Training loop for the model.\n    \"\"\"\n\n    output_dir = os.path.expanduser(output_dir)\n\n    train_task = training.TrainTask(\n        labeled_data=train_generator,\n        loss_layer=TripletLoss(),\n        optimizer=trax.optimizers.Adam(0.01),\n        lr_schedule=lr_schedule\n    )\n\n    eval_task = training.EvalTask(\n        labeled_data=val_generator,\n        metrics=[TripletLoss()]\n    )\n\n    training_loop = training.Loop(Siamese(),\n                                  train_task,\n                                  eval_tasks=[eval_task],\n                                  output_dir=output_dir,\n                                  random_seed=34)\n\n    return training_loop\n```\n\n\n```python\ntrain_steps = 1500\nlr_schedule = trax.lr.warmup_and_rsqrt_decay(400, 0.01)\ntraining_loop = train_model(Siamese, TripletLoss, lr_schedule)\ntraining_loop.run(train_steps)\n```\n\n\n\n# Part 4: Evaluation\n\nTo determine the accuracy of the model, the test set that was configured earlier is used. 
            
            While the training used only positive examples, the test data, `S1_test`, `S2_test` and `y_test`, is set up as pairs of sentences, some of which are duplicates and some of which are not. \nThis routine runs all the test sentence pairs through the model, computes the cosine similarity of each pair, thresholds it, and compares the result to `y_test`, the correct response from the data set. The results are accumulated to produce an accuracy.\n\n**Key ideas:** \n - The model loops through the incoming data in `batch_size` chunks.\n - The output vectors are computed and their cosine similarity is thresholded.\n\n\n```python\ndef classify(test_S1, test_S2, y, threshold, model, data_generator=data_generator, batch_size=64):\n    \"\"\"Function to test the model. Calculates some metrics, such as precision, accuracy, recall and F1 score.\n\n    Args:\n        test_S1 (numpy.ndarray): Array of S1 sentences.\n        test_S2 (numpy.ndarray): Array of S2 sentences.\n        y (numpy.ndarray): Array of actual targets.\n        threshold (float): Desired threshold.\n        model (trax.layers.combinators.Parallel): The Siamese model.\n        data_generator (function): Data generator function. Defaults to data_generator.\n        batch_size (int, optional): Size of the batches. Defaults to 64.\n\n    Returns:\n        (float, float, float, float): Accuracy, precision, recall and F1 score of the model.\n    \"\"\"\n\n    true_pos = 0\n    true_neg = 0\n    false_pos = 0\n    false_neg = 0\n\n    for i in range(0, len(test_S1), batch_size):\n        to_process = len(test_S1) - i\n\n        if to_process < batch_size:\n            batch_size = to_process\n\n        s1, s2 = next(data_generator(test_S1[i : i + batch_size], test_S2[i : i + batch_size], batch_size, shuffle=False))\n        y_test = y[i : i + batch_size]\n\n        v1, v2 = model((s1, s2))\n\n        for j in range(batch_size):\n            d = np.dot(v1[j], v2[j].T)\n            res = d > threshold\n\n            if res == 1:\n                if y_test[j] == res:\n                    true_pos += 1\n                else:\n                    false_pos += 1\n            else:\n                if y_test[j] == res:\n                    true_neg += 1\n                else:\n                    false_neg += 1\n\n    accuracy = (true_pos + true_neg) / (true_pos + true_neg + false_pos + false_neg)\n    precision = true_pos / (true_pos + false_pos)\n    recall = true_pos / (true_pos + false_neg)\n    f1_score = 2 * precision * recall / (precision + recall)\n\n    print(\"fn = \" + str(false_neg) + \" fp = \" + str(false_pos) + \" tn = \" + str(true_neg) + \" tp = \" + str(true_pos))\n\n    return (accuracy, precision, recall, f1_score)\n```\n\n\n```python\nprint(len(S1_test))\n```\n\n\n```python\n# Loading in the saved model\nmodel = Siamese()\nmodel.init_from_file(\"trax_model/model.pkl.gz\")\n# Evaluating it\naccuracy, precision, recall, f1_score = classify(S1_test, S2_test, y_test, 0.7, model, batch_size=512)\nprint(\"Accuracy\", accuracy)\nprint(\"Precision\", precision)\nprint(\"Recall\", recall)\nprint(\"F1 score\", f1_score)\n```\n\n\n\n# Part 5: Making predictions\n\nIn this section the model will be put to work. It will be wrapped in a function called `predict` which takes two sentences as input and returns `True` or `False`, depending on whether the pair is a duplicate or not. 
            
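            At its core, this decision is simply a thresholded cosine similarity: because the model L2-normalizes its output vectors, the cosine similarity of a pair is just the dot product of the two vectors. A minimal NumPy sketch of the decision rule (the unit vectors and the threshold here are illustrative):

            ```python
            import numpy as np

            def is_duplicate(v1, v2, threshold=0.7):
                """Return True if the cosine similarity of two unit vectors exceeds the threshold."""
                return float(np.dot(v1, v2)) > threshold

            v = np.array([0.6, 0.8])    # a unit vector
            w = np.array([0.8, 0.6])    # cos(v, w) = 0.96 -> duplicate
            u = np.array([-0.6, -0.8])  # cos(v, u) = -1.0 -> not a duplicate
            print(is_duplicate(v, w), is_duplicate(v, u))  # True False
            ```

            The threshold trades precision against recall: raising it makes the "duplicate" verdict stricter, which is exactly what the threshold parameter of `classify` and `predict` controls.
            
            
            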
            \n\nBut first, we need to embed the sentences.\n\n\n```python\ndef predict(sentence1, sentence2, threshold, model, vocab, data_generator=data_generator, verbose=False):\n    \"\"\"Function for predicting if two sentences are duplicates.\n\n    Args:\n        sentence1 (str): First sentence.\n        sentence2 (str): Second sentence.\n        threshold (float): Desired threshold.\n        model (trax.layers.combinators.Parallel): The Siamese model.\n        vocab (collections.defaultdict): The vocabulary used.\n        data_generator (function): Data generator function. Defaults to data_generator.\n        verbose (bool, optional): If the results should be printed out. Defaults to False.\n\n    Returns:\n        bool: True if the sentences are duplicates, False otherwise.\n    \"\"\"\n\n    s1 = data_tokenizer(sentence1)  # tokenize\n    s2 = data_tokenizer(sentence2)  # tokenize\n\n    # Encode both sentences, falling back to the  token for unknown words.\n    S1 = [embeddings_index[word] if word in embeddings_index else embeddings_index[unk] for word in s1]\n    S2 = [embeddings_index[word] if word in embeddings_index else embeddings_index[unk] for word in s2]\n\n    S1, S2 = next(data_generator([S1], [S2], 1, vocab[\"\"]))\n\n    v1, v2 = model((S1, S2))\n    d = np.dot(v1[0], v2[0].T)\n    res = d > threshold\n\n    if verbose:\n        print(\"S1 = \", S1, \"\\nS2 = \", S2)\n        print(\"d = \", d)\n        print(\"res = \", res)\n\n    return res\n```\n\nNow we can test the model's ability to make predictions.\n\n\n```python\nsentence1 = \"I love running in the park.\"\nsentence2 = \"I like running in park?\"\n# True means the pair is a duplicate, False otherwise\npredict(sentence1, sentence2, 0.7, model, vocab, verbose=True)\n```\n\nThe Siamese network is capable of capturing complicated structures. 
            
            Concretely, it can identify sentence duplicates although the sentences do not have many words in common.\n\n|
            

Name

|

Date

|\n| ---------------------------------------------------| ------------------------------------- |\n|

Diaaeldin SHALABY

            | 07.05.2021 |\n\n
            

Hands-on AI II

\n

Unit 2 — The Vanishing Gradient Problem (Assignment)

\n\nAuthors: S. Lehner, J. Brandstetter, B. Sch\u00e4fl
\nDate: 16-04-2021\n\nThis file is part of the \"Hands-on AI II\" lecture material. The following copyright statement applies to all code within this file.\n\nCopyright statement:
\nThis material, no matter whether in printed or electronic form, may be used for personal and non-commercial educational use only. Any reproduction of this manuscript, no matter whether as a whole or in parts, no matter whether in printed or in electronic form, requires explicit prior acceptance of the authors.\n\n

Table of contents

\n
    \n
  1. Definition of Auxiliaries
  2. \n
      \n
    1. Loading and visualizing
    2. \n
    3. Downprojecting and interpreting
    4. \n
    5. Loading and preparing
    6. \n
    \n
  3. Training of a Neural Network
  4. \n
      \n
    1. Constructing an FNN
    2. \n
    3. Forward pass
    4. \n
    5. Backward pass
    6. \n
    \n
  5. Analyzing Gradients
  6. \n
      \n
    1. Collecting and visualizing
    2. \n
      \n
    3. Countermeasure and re-train
    4. \n
    5. Comparing gradients
    6. \n
    \n
  7. Deriving Derivatives
  8. \n
      \n
    1. Case hardsigmoid
    2. \n
      \n
    3. Case leaky_relu
    4. \n
    \n
\n\n

How to use this notebook

            \nThis notebook is designed to run from start to finish. There are different tasks (displayed in orange boxes) which require your contribution (in the form of code, plain text, ...). Most of the supplied functions are imported from the file u2_utils.py, which can be seen and treated as a black box. However, for further understanding, you can look at the implementations of the helper functions. In order to run this notebook, the packages which are imported at the beginning of u2_utils.py need to be installed.\n\n\n```python\n# Import pre-defined utilities specific to this notebook.\nimport u2_utils as u2\n\n# Import additional utilities needed in this notebook.\nimport math\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport torch\n\nfrom typing import Dict, Sequence\n\n# Set up the Jupyter notebook (warning: this may affect all Jupyter notebooks running on the same Jupyter server).\nu2.setup_jupyter()\n```\n\n\n\n\n\n\n
            

Setting up notebook ... finished.

\n\n\n\n\n

Module versions

            \nAs mentioned in the introductory slides, specific minimum versions of Python itself as well as of the used modules are recommended.\n\n```python\nu2.check_module_versions()\n```\n\n Installed Python version: 3.8 (\u2713)\n Installed numpy version: 1.19.1 (\u2713)\n Installed pandas version: 1.1.3 (\u2713)\n Installed PyTorch version: 1.7.1 (\u2713)\n Installed scikit-learn version: 0.23.2 (\u2713)\n Installed scipy version: 1.5.0 (\u2713)\n Installed matplotlib version: 3.3.1 (\u2713)\n Installed seaborn version: 0.11.0 (\u2713)\n Installed PIL version: 8.0.0 (\u2713)\n\n\n
            

Definition of Auxiliaries

\n

In this exercise you will be working with a data set composed of images of various handwritten digits. It is probably the most prominent data set in the domain of machine learning: the MNIST data set. The data set distinguishes ten different classes, one for each digit (zero to nine). For curious minds, more information regarding this data set can be found at:\n\n

\n LeCun, Y., 1998. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.\n
            \n \nBefore analyzing and tackling the vanishing gradient problem, the data set needs to be inspected.
            

\n\n
\n Execute the notebook until here and try to solve the following tasks:\n
    \n
  • Load the MNIST data set using the appropriate function as supplied by us.
  • \n
  • Divide the data set between the training set and the test set in a ratio of $7:1$.
  • \n
  • Visualize the MNIST training set in tabular form. What is the size of both subsets with respect to sample and feature counts?
  • \n
\n
\n\n\n```python\ndata_mnist = u2.load_mnist()\ndata_mnist\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
PX_0PX_1PX_2PX_3PX_4PX_5PX_6PX_7PX_8PX_9...PX_775PX_776PX_777PX_778PX_779PX_780PX_781PX_782PX_783digit
00.00.00.00.00.00.00.00.00.00.0...0.00.00.00.00.00.00.00.00.05
10.00.00.00.00.00.00.00.00.00.0...0.00.00.00.00.00.00.00.00.00
20.00.00.00.00.00.00.00.00.00.0...0.00.00.00.00.00.00.00.00.04
30.00.00.00.00.00.00.00.00.00.0...0.00.00.00.00.00.00.00.00.01
40.00.00.00.00.00.00.00.00.00.0...0.00.00.00.00.00.00.00.00.09
..................................................................
699950.00.00.00.00.00.00.00.00.00.0...0.00.00.00.00.00.00.00.00.02
699960.00.00.00.00.00.00.00.00.00.0...0.00.00.00.00.00.00.00.00.03
699970.00.00.00.00.00.00.00.00.00.0...0.00.00.00.00.00.00.00.00.04
699980.00.00.00.00.00.00.00.00.00.0...0.00.00.00.00.00.00.00.00.05
699990.00.00.00.00.00.00.00.00.00.0...0.00.00.00.00.00.00.00.00.06
\n

70000 rows \u00d7 785 columns

\n
            \n
            \n\n\n```python\n# Set default plotting style and random seed for reproducibility.\nsns.set()\nnp.random.seed(seed=42)\n\n# Split the MNIST data set into training as well as test set and print their respective sizes.\ndata_mnist_train, data_mnist_test = u2.split_data(data_mnist, test_size=1.0 / 8.0)\nprint(f'{\"Full data set is of size:\":>27} {data_mnist.shape[0]:>5}')\nprint(f'Training subset is of size: {data_mnist_train.shape[0]:>5}')\nprint(f'{\"Testing subset is of size:\":>27} {data_mnist_test.shape[0]:>5}')\n```\n\n Full data set is of size: 70000\n Training subset is of size: 61250\n Testing subset is of size: 8750\n\n\n
            
\n Execute the notebook until here and try to solve the following tasks:\n
    \n
  • Reduce the dimensionality of the MNIST training set using PCA and visualize the downprojection.
  • \n
  • Comment on the separability of the MNIST training set with respect to the downprojection.
  • \n
\n
            \n\n\n```python\n# Set default plotting style and random seed for reproducibility.\nsns.set()\nnp.random.seed(seed=42)\n\ndata_mnist_pca = u2.apply_pca(data=data_mnist, n_components=2)\nu2.plot_points_2d(data=data_mnist_pca, figsize=(14, 7))\n```\n\n
            
\n Execute the notebook until here and try to solve the following tasks:\n
    \n
  • Create a corresponding TensorDataset for the training as well as the test set.
  • \n
  • Wrap the previously defined TensorDataset instances in separate DataLoader instances with a batch size of $64$ (shuffle the training data set).
  • \n
  • Scale the features of the training as well as test set by a factor of $1\\,/\\,255$.
  • \n
\n
            \n\n\n```python\n# Set random seed for reproducibility.\nnp.random.seed(seed=42)\ntorch.manual_seed(seed=42)\n\n# Create data loader for iterating the MNIST training data set.\nloader_mnist_train = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(\n    torch.from_numpy(data_mnist_train.drop(columns=[r'digit']).values.astype(\n        dtype=np.float32) / 255.0).unsqueeze(1).reshape(len(data_mnist_train), 28 * 28),\n    torch.from_numpy(data_mnist_train[r'digit'].values.astype(dtype=np.int64))\n), batch_size=64, shuffle=True, drop_last=False)\n\n# Create data loader for iterating the MNIST test data set.\nloader_mnist_test = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(\n    torch.from_numpy(data_mnist_test.drop(columns=[r'digit']).values.astype(\n        dtype=np.float32) / 255.0).unsqueeze(1).reshape(len(data_mnist_test), 28 * 28),\n    torch.from_numpy(data_mnist_test[r'digit'].values.astype(dtype=np.int64))\n), batch_size=64, shuffle=False, drop_last=False)\n```\n\n
            
\n The following code snippet is taken from the accompanying exercise notebook. You do not need to modify it for this assignment.\n
            \n\n\n```python\ndef train_and_evaluate(model: torch.nn.Module, optimizer: torch.optim.Optimizer,\n                       device: torch.device, num_epochs: int,\n                       loader_train: torch.utils.data.DataLoader,\n                       loader_test: torch.utils.data.DataLoader) -> None:\n    \"\"\"\n    Auxiliary function for training and evaluating a corresponding model.\n\n    :param model: model instance to train and evaluate\n    :param optimizer: optimizer to use for model training\n    :param device: device to use for model training and evaluation\n    :param num_epochs: amount of epochs for model training\n    :param loader_train: data loader supplying the training samples\n    :param loader_test: data loader supplying the test samples\n    :return: None\n    \"\"\"\n    for epoch in range(num_epochs):\n\n        # Train model instance for one epoch.\n        u2.train_network(\n            model=model, data_loader=loader_train, device=device, optimizer=optimizer)\n\n        # Evaluate current model instance.\n        performance = u2.test_network(\n            model=model, data_loader=loader_train, device=device)\n\n        # Print result of current epoch to standard out.\n        print(f'Epoch: {str(epoch + 1).zfill(len(str(num_epochs)))} ' +\n              f'/ Loss: {performance[0]:.4f} / Accuracy: {performance[1]:.4f}')\n\n    # Evaluate final model on test data set.\n    performance = u2.test_network(model=model, data_loader=loader_test, device=device)\n    print(f'\\nFinal loss: {performance[0]:.4f} / Final accuracy: {performance[1]:.4f}')\n```\n\n
            

Training of a Neural Network

\n

Loading and inspecting a new data set is always an exciting moment, but even more exciting is the implementation of a corresponding neural network and applying it to said data set. Hence, in this section you will have to implement and train an appropriate neural network model and revisit your knowledge about the forward as well as the backward pass.

\n\n
\n Execute the notebook until here and try to solve the following tasks:\n
    \n
  • Implement a class FNN_0 with the following architecture:
  • \n
\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
PositionElementComment
0input-
1fully connectedsquare weight matrix
2sigmoid-
2fully connectedsquare weight matrix
3sigmoid-
4fully connectedsquare weight matrix
5sigmoid-
6fully connectedsquare weight matrix
7sigmoid-
8fully connectedsquare weight matrix
9sigmoid-
10fully connected$10$ output features
11output-
\n
    \n
  • Create an instance of FNN_0 as well as of a corresponding Adam optimizer with a learning rate of $0.0001$.
  • \n
  • Print the resulting model and verify the architecture by inspecting the output.
  • \n
  • Train an FNN_0 network for $5$ epochs, print the training accuracy as well as the loss per epoch and report the final test set loss and accuracy.
  • \n
\n
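            Before looking at the implementation, it helps to realize how many parameters the table above implies: five square $784 \times 784$ hidden layers plus a final $784 \to 10$ layer, each with a bias vector. The quick count below is only a sanity check and not part of the assignment utilities:

            ```python
            d = 28 * 28  # 784 input features, one per MNIST pixel

            # Five hidden fully connected layers with square weight matrices, plus bias vectors.
            hidden_params = 5 * (d * d + d)
            # Output layer mapping 784 features to 10 class logits, plus its bias.
            output_params = d * 10 + 10

            print(hidden_params)                  # 3077200
            print(output_params)                  # 7850
            print(hidden_params + output_params)  # 3085050
            ```

            So roughly three million trainable parameters sit behind the five sigmoid activations, which makes the gradient analysis in the next section all the more relevant.
            
            
            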
            \n\n\n```python\nclass FNN_0(torch.nn.Module):\n\n    def __init__(self):\n        super(FNN_0, self).__init__()\n        self.fc1 = torch.nn.Linear(28 * 28, 28 * 28)\n        self.ac1 = torch.nn.Sigmoid()\n        self.fc2 = torch.nn.Linear(self.fc1.out_features, self.fc1.out_features)\n        self.ac2 = torch.nn.Sigmoid()\n        self.fc3 = torch.nn.Linear(self.fc2.out_features, self.fc2.out_features)\n        self.ac3 = torch.nn.Sigmoid()\n        self.fc4 = torch.nn.Linear(self.fc3.out_features, self.fc3.out_features)\n        self.ac4 = torch.nn.Sigmoid()\n        self.fc5 = torch.nn.Linear(self.fc4.out_features, self.fc4.out_features)\n        self.ac5 = torch.nn.Sigmoid()\n\n        self.fc6 = torch.nn.Linear(self.fc5.out_features, 10)\n\n    def forward(self, x: torch.Tensor) -> torch.Tensor:\n        x = self.fc1(x)\n        x = self.ac1(x)\n        x = self.fc2(x)\n        x = self.ac2(x)\n        x = self.fc3(x)\n        x = self.ac3(x)\n        x = self.fc4(x)\n        x = self.ac4(x)\n        x = self.fc5(x)\n        x = self.ac5(x)\n        return self.fc6(x)\n```\n\n\n```python\n# Set random seed for reproducibility.\nnp.random.seed(seed=42)\ntorch.manual_seed(seed=42)\n\n# Create FNN_0 instance and the corresponding optimizer to use.\ntarget_device = torch.device(r'cpu')\nfnn_model = FNN_0().to(target_device)\noptimizer = torch.optim.Adam(fnn_model.parameters(), lr=1e-4)\n\n# Show the architecture of the FNN_0 model.\nprint(fnn_model)\n```\n\n FNN_0(\n (fc1): Linear(in_features=784, out_features=784, bias=True)\n (ac1): Sigmoid()\n (fc2): Linear(in_features=784, out_features=784, bias=True)\n (ac2): Sigmoid()\n (fc3): Linear(in_features=784, out_features=784, bias=True)\n (ac3): Sigmoid()\n (fc4): Linear(in_features=784, out_features=784, bias=True)\n (ac4): Sigmoid()\n (fc5): Linear(in_features=784, out_features=784, bias=True)\n (ac5): Sigmoid()\n (fc6): Linear(in_features=784, out_features=10, bias=True)\n )\n\n\n\n```python\n# Set random seed for reproducibility.\nnp.random.seed(seed=42)\ntorch.manual_seed(seed=42)\n\n# Train and evaluate FNN_0 instance on the MNIST training set.\ntrain_and_evaluate(\n    model=fnn_model,\n    optimizer=optimizer,\n    device=target_device,\n    num_epochs=5,\n    loader_train=loader_mnist_train,\n    loader_test=loader_mnist_test)\n```\n\n Epoch: 1 / Loss: 0.0149 / Accuracy: 0.6675\n Epoch: 2 / Loss: 0.0089 / Accuracy: 0.8292\n Epoch: 3 / Loss: 0.0067 / Accuracy: 0.8728\n Epoch: 4 / Loss: 0.0056 / Accuracy: 0.8949\n Epoch: 5 / Loss: 0.0053 / Accuracy: 0.8997\n \n Final loss: 0.0055 / Final accuracy: 0.8951\n\n\n
            
\n Execute the notebook until here and try to solve the following tasks:\n
    \n
  • Write down a formula for the corresponding forward pass of FNN_0. Use the same notation as presented during the exercise.
  • \n
\n
\n\n
            \n \\begin{equation}\n \\hat{y} = f\\left(h_6(h_5(h_4(h_3(h_2(h_1(\\mathbf{x};\\mathbf{W}_1);\\mathbf{W}_2);\\mathbf{W}_3);\\mathbf{W}_4);\\mathbf{W}_5);\\mathbf{W}_6)\\right)\n \\end{equation}\n
            
\n\n\n
\n Execute the notebook until here and try to solve the following tasks:\n
    \n
  • Write down a formula for the corresponding backward pass of FNN_0. Use the same notation as presented during the exercise.
  • \n
\n
\n\n
            \n \\begin{align*}\n \\mathbf{W}_6 & \\leftarrow \\mathbf{W}_6 - \\eta \\frac{\\partial L}{\\partial \\mathbf{W}_6} \\\\\n \\mathbf{W}_5 & \\leftarrow \\mathbf{W}_5 - \\eta \\frac{\\partial L}{\\partial h_5}\\frac{\\partial h_5}{\\partial \\mathbf{W}_5} \\\\\n \\mathbf{W}_4 & \\leftarrow \\mathbf{W}_4 - \\eta \\frac{\\partial L}{\\partial h_5}\\frac{\\partial h_5}{\\partial h_4}\\frac{\\partial h_4}{\\partial \\mathbf{W}_4} \\\\\n \\mathbf{W}_3 & \\leftarrow \\mathbf{W}_3 - \\eta \\frac{\\partial L}{\\partial h_5}\\frac{\\partial h_5}{\\partial h_4}\\frac{\\partial h_4}{\\partial h_3}\\frac{\\partial h_3}{\\partial \\mathbf{W}_3} \\\\\n \\mathbf{W}_2 & \\leftarrow \\mathbf{W}_2 - \\eta \\frac{\\partial L}{\\partial h_5}\\frac{\\partial h_5}{\\partial h_4}\\frac{\\partial h_4}{\\partial h_3}\\frac{\\partial h_3}{\\partial h_2}\\frac{\\partial h_2}{\\partial \\mathbf{W}_2} \\\\\n \\mathbf{W}_1 & \\leftarrow \\mathbf{W}_1 - \\eta \\frac{\\partial L}{\\partial h_5}\\frac{\\partial h_5}{\\partial h_4}\\frac{\\partial h_4}{\\partial h_3}\\frac{\\partial h_3}{\\partial h_2}\\frac{\\partial h_2}{\\partial h_1}\\frac{\\partial h_1}{\\partial \\mathbf{W}_1} \\\\\n \\end{align*}\n
            
\n\n
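To make the chain rule behind these update rules concrete, here is a standalone sketch. The two-parameter scalar "network" with a squared-error loss is an illustrative assumption (it is not FNN_0 itself); the analytic gradients are checked against central finite differences:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w1, w2, x, y):
    # h1 = w1 * x, h2 = sigmoid(w2 * h1), L = (h2 - y)^2
    return (sigmoid(w2 * (w1 * x)) - y) ** 2

x, y, w1, w2 = 0.5, 0.2, 0.8, -1.3

# Analytic gradients via the chain rule, mirroring the update rules above.
h1 = w1 * x
h2 = sigmoid(w2 * h1)
dL_dh2 = 2.0 * (h2 - y)
d_sig = h2 * (1.0 - h2)              # sigmoid'(w2 * h1)
dL_dw2 = dL_dh2 * d_sig * h1         # corresponds to dL/dh2 * dh2/dW2
dL_dw1 = dL_dh2 * d_sig * w2 * x     # one more chain-rule factor for the earlier layer

# Central finite-difference check of both gradients.
eps = 1e-6
num_w1 = (loss(w1 + eps, w2, x, y) - loss(w1 - eps, w2, x, y)) / (2 * eps)
num_w2 = (loss(w1, w2 + eps, x, y) - loss(w1, w2 - eps, x, y)) / (2 * eps)
print(abs(dL_dw1 - num_w1), abs(dL_dw2 - num_w2))   # both differences are tiny
```

Note how the gradient for the earlier parameter `w1` reuses the factors already computed for `w2` and multiplies in one more local derivative, exactly as in the layer-by-layer updates above.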

## Analyzing Gradients

Actually, the results of FNN_0 do not look that bad. Is there really a problem with a vanishing gradient? This is exactly the point you are going to figure out in this exercise. As a first step, the gradients of a freshly initialized model need to be collected and analyzed. Afterwards, in case of a vanishing gradient problem, countermeasures need to be deployed.

\n\n
The following code snippet is taken from the accompanying exercise notebook. You do not need to modify it for this assignment.

```python
def collect_gradients(model: torch.nn.Module, device: torch.device,
                      loader: torch.utils.data.DataLoader) -> Sequence[Dict[str, np.array]]:
    """
    Auxiliary function for collecting gradients of a corresponding model.

    :param model: model instance to be used for collecting gradients
    :param device: device to use for gradient collection
    :param loader: data loader supplying the samples used for collecting gradients
    :return: sequence of parameter names and gradients, averaged over all parameter elements
    """
    model_state = model.training
    model.train()
    model.zero_grad()

    # Iterating over the data set and computing the corresponding gradients.
    gradients = {}
    criterion = torch.nn.CrossEntropyLoss()
    for batch_index, (data, target) in enumerate(loader):
        data, target = data.float().to(device), target.long().to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()

        # Collecting the gradients from the current model.
        for name, parameter in model.named_parameters():
            if parameter.grad is not None:
                gradients.setdefault(name, []).append(parameter.grad.view(-1).abs().mean().item())
        model.zero_grad()

    # Reset model state and return collected gradients.
    model.train(mode=model_state)
    return gradients
```

Execute the notebook until here and try to solve the following tasks:

- Create a fresh instance of FNN_0 and collect its gradients using the MNIST training set.
- Visualize the gradients of each weight parameter accordingly.
- Do the gradients vanish?


```python
fnn_model = FNN_0()
gradient = collect_gradients(
    model=fnn_model, device=target_device, loader=loader_mnist_train
)
```

```python
# Set default plotting style.
sns.set()

# Prepare collected gradients for plotting.
gradient_data = pd.DataFrame([
    v for k, v in sorted(gradient.items(), key=lambda _: _[0]) if r'weight' in k
]).transpose().rename(columns=lambda _: f'Layer {_}')
gradient_data = pd.melt(gradient_data, value_vars=gradient_data.columns)
gradient_data[r'Model'] = type(fnn_model).__name__

# Combine all gradients in a single data frame.
gradient_data = gradient_data.rename(
    columns={r'variable': r'Layer', r'value': r'Gradient Magnitude'})

# Define plotting figure and corresponding attributes.
fig, ax = plt.subplots(1, 1, figsize=(18, 7))
ax.set(yscale=r'log')

# Plot pre-processed gradients.
_ = sns.boxplot(x=r'Model', y=r'Gradient Magnitude', hue=r'Layer', data=gradient_data, ax=ax)
```

As shown by the graph, the gradient does vanish.

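A back-of-the-envelope check of why this happens (an illustration, not part of the exercise): the sigmoid derivative is $\sigma'(z) = \sigma(z)(1-\sigma(z)) \le 1/4$, so every layer multiplies the backpropagated signal by a factor of at most $0.25$. The pre-activation distribution below is an assumed standard normal, purely for illustration:

```python
import numpy as np

def sigmoid_d(z):
    # Derivative of the logistic sigmoid: sigma(z) * (1 - sigma(z)) <= 1/4.
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)

rng = np.random.default_rng(0)
z = rng.normal(size=(6, 1000))          # illustrative pre-activations for 6 layers
per_layer = sigmoid_d(z).mean(axis=1)   # average derivative factor per layer

print(per_layer.max())                  # never exceeds 0.25
print(np.prod(per_layer))               # product over 6 layers decays geometrically
```

The product of six such factors is already on the order of $10^{-4}$, which matches the orders-of-magnitude gap between the early and late layers in the box plot.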
Execute the notebook until here and try to solve the following tasks:

- Assume a vanishing gradient. Apply the countermeasure presented during the accompanying exercise by implementing a corresponding FNN_1.
- Create an instance of FNN_1 as well as a corresponding Adam optimizer with a learning rate of $0.0001$.
- Print the resulting model and verify the architecture by inspecting the output.
- Train an FNN_1 network for $5$ epochs, print the training accuracy as well as the loss per epoch, and report the final test set loss and accuracy.


```python
class FNN_1(torch.nn.Module):

    def __init__(self):
        super(FNN_1, self).__init__()
        self.fc1 = torch.nn.Linear(28 * 28, 28 * 28)
        self.ac1 = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(self.fc1.out_features, self.fc1.out_features)
        self.ac2 = torch.nn.ReLU()
        self.fc3 = torch.nn.Linear(self.fc2.out_features, self.fc2.out_features)
        self.ac3 = torch.nn.ReLU()
        self.fc4 = torch.nn.Linear(self.fc3.out_features, self.fc3.out_features)
        self.ac4 = torch.nn.ReLU()
        self.fc5 = torch.nn.Linear(self.fc4.out_features, self.fc4.out_features)
        self.ac5 = torch.nn.ReLU()
        self.fc6 = torch.nn.Linear(self.fc5.out_features, 10)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.fc1(x)
        x = self.ac1(x)
        x = self.fc2(x)
        x = self.ac2(x)
        x = self.fc3(x)
        x = self.ac3(x)
        x = self.fc4(x)
        x = self.ac4(x)
        x = self.fc5(x)
        x = self.ac5(x)
        return self.fc6(x)
```

```python
# Set random seed for reproducibility.
np.random.seed(seed=42)
torch.manual_seed(seed=42)

# Create FNN_1 instance and the corresponding optimizer to use.
target_device = torch.device(r'cpu')
fnn_model = FNN_1().to(target_device)
optimizer = torch.optim.Adam(fnn_model.parameters(), lr=1e-4)

# Show the architecture of the FNN_1 model.
print(fnn_model)
```

    FNN_1(
      (fc1): Linear(in_features=784, out_features=784, bias=True)
      (ac1): ReLU()
      (fc2): Linear(in_features=784, out_features=784, bias=True)
      (ac2): ReLU()
      (fc3): Linear(in_features=784, out_features=784, bias=True)
      (ac3): ReLU()
      (fc4): Linear(in_features=784, out_features=784, bias=True)
      (ac4): ReLU()
      (fc5): Linear(in_features=784, out_features=784, bias=True)
      (ac5): ReLU()
      (fc6): Linear(in_features=784, out_features=10, bias=True)
    )

```python
# Set random seed for reproducibility.
np.random.seed(seed=42)
torch.manual_seed(seed=42)

# Train and evaluate FNN_1 instance on the MNIST training set.
train_and_evaluate(
    model=fnn_model,
    optimizer=optimizer,
    device=target_device,
    num_epochs=5,
    loader_train=loader_mnist_train,
    loader_test=loader_mnist_test)
```

    Epoch: 1 / Loss: 0.0032 / Accuracy: 0.9373
    Epoch: 2 / Loss: 0.0016 / Accuracy: 0.9679
    Epoch: 3 / Loss: 0.0009 / Accuracy: 0.9820
    Epoch: 4 / Loss: 0.0006 / Accuracy: 0.9877
    Epoch: 5 / Loss: 0.0009 / Accuracy: 0.9813

    Final loss: 0.0020 / Final accuracy: 0.9635

Execute the notebook until here and try to solve the following tasks:

- Create a fresh instance of FNN_1 and collect its gradients using the MNIST training set.
- Visualize the gradients of each weight parameter accordingly (include the gradient visualization of FNN_0).
- Do the gradients vanish?


```python
fnn_model = FNN_1()
gradient = collect_gradients(
    model=fnn_model, device=target_device, loader=loader_mnist_train
)
```

```python
# Set default plotting style.
sns.set()

# Prepare collected gradients for plotting.
gradient_data = pd.DataFrame([
    v for k, v in sorted(gradient.items(), key=lambda _: _[0]) if r'weight' in k
]).transpose().rename(columns=lambda _: f'Layer {_}')
gradient_data = pd.melt(gradient_data, value_vars=gradient_data.columns)
gradient_data[r'Model'] = type(fnn_model).__name__

# Combine all gradients in a single data frame.
gradient_data = gradient_data.rename(
    columns={r'variable': r'Layer', r'value': r'Gradient Magnitude'})

# Define plotting figure and corresponding attributes.
fig, ax = plt.subplots(1, 1, figsize=(18, 7))
ax.set(yscale=r'log')

# Plot pre-processed gradients.
_ = sns.boxplot(x=r'Model', y=r'Gradient Magnitude', hue=r'Layer', data=gradient_data, ax=ax)
```

As shown by the graph above, the differences between the gradient magnitudes of the individual layers are much smaller after replacing the sigmoid activations with ReLU. The accuracies after each epoch and the final accuracy have also improved drastically. Hence, we can say that this technique solved, or at least mitigated, the vanishing gradient problem.

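The same back-of-the-envelope comparison as before makes the difference plausible (illustrative sketch; the standard-normal pre-activations are an assumption, not measured values): ReLU'(z) is exactly $1$ for every active unit, so active paths pass the gradient through unattenuated, whereas every sigmoid layer shrinks it by a factor of at most $1/4$:

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(size=(6, 1000))                    # illustrative pre-activations for 6 layers

s = 1.0 / (1.0 + np.exp(-z))
sigmoid_factors = (s * (1.0 - s)).mean(axis=1)    # sigmoid' factors, each <= 1/4
relu_factors = (z > 0).mean(axis=1)               # ReLU'(z) is 1 for z > 0, else 0

print(np.prod(sigmoid_factors))                   # tiny: the signal vanishes with depth
print(np.prod(relu_factors))                      # roughly 0.5 ** 6: orders of magnitude larger
```

The ReLU product stays orders of magnitude larger over the same depth, which is consistent with the much flatter per-layer gradient magnitudes in the box plot.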

## Deriving Derivatives

It is already known from the lecture as well as the exercise that activation functions are the primary culprit of the vanishing gradient problem. Hence, it is important to know how a chosen activation function transforms its input and, consequently, what its derivative looks like.

\n\n
Execute the notebook until here and try to solve the following tasks (hint: have a look at the official PyTorch documentation):

- Implement the hardsigmoid activation function as it was done for relu in the exercise.
- Implement the derivative of the hardsigmoid activation function accordingly.
- Find $3$ different inputs showing the value range of the hardsigmoid activation function.
- Plot the hardsigmoid activation function including its derivative for the input range $[-6; 6]$.


```python
def hardsigmoid(x: float) -> float:
    """
    Compute the hardsigmoid function.

    :param x: the input on which to apply the hardsigmoid function
    :return: the result of the hardsigmoid function applied to its input
    """
    if x <= -3:
        return 0
    elif x >= 3:
        return 1
    else:
        return (x / 6) + 0.5


def hardsigmoid_d(x: float) -> float:
    """
    Compute the derivative of the hardsigmoid function.

    :param x: the input to the hardsigmoid function for computing its derivative
    :return: the derivative of the hardsigmoid function with respect to its input
    """
    if x <= -3:
        return 0
    elif x >= 3:
        return 0
    else:
        return 1 / 6


# Crudely check the value range of the hardsigmoid function and its derivative.
print(f'hardsigmoid(-10): {hardsigmoid(-10.0):.4f} | hardsigmoid\'(-10): {hardsigmoid_d(-10.0):.4f}')
print(f'hardsigmoid( 0): {hardsigmoid(0.0):.4f} | hardsigmoid\'( 0): {hardsigmoid_d(0.0):.4f}')
print(f'hardsigmoid(+10): {hardsigmoid(10.0):.4f} | hardsigmoid\'(+10): {hardsigmoid_d(10.0):.4f}')
```

    hardsigmoid(-10): 0.0000 | hardsigmoid'(-10): 0.0000
    hardsigmoid( 0): 0.5000 | hardsigmoid'( 0): 0.1667
    hardsigmoid(+10): 1.0000 | hardsigmoid'(+10): 0.0000

```python
def plot_hardsigmoid_with_derivative(x_min: float = -6.0, x_max: float = 6.0, granularity: int = 100) -> None:
    """
    Plot the hardsigmoid function including its derivative.

    :param x_min: minimum value of the input value range
    :param x_max: maximum value of the input value range
    :param granularity: granularity controlling the step size of the input value range
    :return: None
    """
    data = np.linspace(x_min, x_max, granularity)

    fig, ax = plt.subplots(figsize=(14, 7))
    ax.spines[r'left'].set_position(r'center')
    ax.spines[r'right'].set_color(None)
    ax.spines[r'top'].set_color(None)

    plt.plot(data, tuple(map(hardsigmoid, data)), color=r'#307EC7', linewidth=3, label=r'hardsigmoid')
    plt.plot(data, tuple(map(hardsigmoid_d, data)), color=r'#accbe8', linewidth=3, label=r"hardsigmoid'")
    plt.locator_params(axis=r'y', nbins=6)
    plt.title(r'Hard Sigmoid function', fontsize=20)
    plt.legend(prop={'size': 15})
    plt.show()
```

```python
# Set default plotting style.
sns.set()

# Plot the hardsigmoid function including its derivative.
plot_hardsigmoid_with_derivative()
```

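As a quick sanity check (a standalone sketch that re-defines local copies of the functions rather than reusing the cell above), the piecewise derivative can be validated against central finite differences at points away from the kinks at $\pm 3$:

```python
def hardsigmoid_local(x: float) -> float:
    # Local copy of the piecewise definition: 0 below -3, 1 above 3, linear in between.
    if x <= -3:
        return 0.0
    if x >= 3:
        return 1.0
    return x / 6 + 0.5


def hardsigmoid_d_local(x: float) -> float:
    # Piecewise-constant derivative: 1/6 on the linear segment, 0 on the flat parts.
    return 1 / 6 if -3 < x < 3 else 0.0


eps = 1e-6
for x in (-5.0, -1.0, 0.0, 2.0, 5.0):   # probe points away from the kinks at +-3
    numeric = (hardsigmoid_local(x + eps) - hardsigmoid_local(x - eps)) / (2 * eps)
    print(x, abs(numeric - hardsigmoid_d_local(x)) < 1e-6)
```

At the kinks themselves the function is not differentiable, which is why the probe points avoid $x = \pm 3$.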
Execute the notebook until here and try to solve the following tasks (hint: have a look at the official PyTorch documentation):

- Implement the leaky_relu activation function as it was done for relu in the exercise. Use a negative slope of $0.37$.
- Implement the derivative of the leaky_relu activation function accordingly.
- Find $3$ different inputs showing the value range of the leaky_relu activation function.
- Plot the leaky_relu activation function including its derivative for the input range $[-6; 6]$.


```python
def leakyReLU(x: float) -> float:
    """
    Compute the leakyReLU function with a negative slope of 0.37.

    :param x: the input on which to apply the leakyReLU function
    :return: the result of the leakyReLU function applied to its input
    """
    return x if x >= 0 else 0.37 * x


def leakyReLU_d(x: float) -> float:
    """
    Compute the derivative of the leakyReLU function.

    :param x: the input to the leakyReLU function for computing its derivative
    :return: the derivative of the leakyReLU function with respect to its input
    """
    return 1 if x >= 0 else 0.37


# Crudely check the value range of the leakyReLU function and its derivative.
print(f'leakyReLU(-10): {leakyReLU(-10.0):.4f} | leakyReLU\'(-10): {leakyReLU_d(-10.0):.4f}')
print(f'leakyReLU( 0): {leakyReLU(0.0):.4f} | leakyReLU\'( 0): {leakyReLU_d(0.0):.4f}')
print(f'leakyReLU(+10): {leakyReLU(10.0):.4f} | leakyReLU\'(+10): {leakyReLU_d(10.0):.4f}')
```

    leakyReLU(-10): -3.7000 | leakyReLU'(-10): 0.3700
    leakyReLU( 0): 0.0000 | leakyReLU'( 0): 1.0000
    leakyReLU(+10): 10.0000 | leakyReLU'(+10): 1.0000

```python
def plot_leakyReLU_with_derivative(x_min: float = -6.0, x_max: float = 6.0, granularity: int = 100) -> None:
    """
    Plot the leakyReLU function including its derivative.

    :param x_min: minimum value of the input value range
    :param x_max: maximum value of the input value range
    :param granularity: granularity controlling the step size of the input value range
    :return: None
    """
    data = np.linspace(x_min, x_max, granularity)

    fig, ax = plt.subplots(figsize=(14, 7))
    ax.spines[r'left'].set_position(r'center')
    ax.spines[r'right'].set_color(None)
    ax.spines[r'top'].set_color(None)

    plt.plot(data, tuple(map(leakyReLU, data)), color=r'#307EC7', linewidth=3, label=r'leakyReLU')
    plt.plot(data, tuple(map(leakyReLU_d, data)), color=r'#accbe8', linewidth=3, label=r"leakyReLU'")
    plt.locator_params(axis=r'y', nbins=6)
    plt.title(r'Leaky ReLU function', fontsize=20)
    plt.legend(prop={'size': 15})
    plt.show()
```

```python
# Set default plotting style.
sns.set()

# Plot the leakyReLU function including its derivative.
plot_leakyReLU_with_derivative()
```

# 9. Quantum cryptography

The advent of quantum computation, which introduces the possibility of using quantum mechanics for information processing, gave rise to the following question: can quantum information be shared more securely than classical information?

In 1982, a very interesting property of quantum states was discovered [1,2]. This is the so-called "no-cloning theorem", which proves that the laws of quantum mechanics prohibit copying an unknown quantum state. Therefore, the no-cloning theorem assures us that qubits can hide quantum information better than classical bits.

This has important implications, for example for secure communication, where it allows for the sharing of private keys which cannot be eavesdropped on by a third party. We consider the first such protocol, the BB84 protocol, which exploits the quantum mechanical properties of qubits for the secure exchange of a secret key between two parties.

## 9.1 No-cloning theorem

Let us prove the no-cloning theorem, the fact that an unknown quantum state cannot be copied. First let us clearly state our problem:

We have a qubit in an unknown quantum state $\lvert \psi \rangle$ and we wish to copy its state onto another qubit initialized to the state $\lvert s \rangle$.
Therefore, we want to implement the following quantum gate:

\begin{equation}
U\lvert \psi \rangle \lvert s \rangle = \lvert \psi \rangle \lvert \psi \rangle
\end{equation}

Let us take the unknown quantum state to be

\begin{equation}
\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle
\end{equation}

where the amplitudes $\alpha$ and $\beta$ are unknown. Therefore we have:

\begin{equation}
U\lvert \psi \rangle \lvert s \rangle = \lvert \psi \rangle \lvert \psi \rangle = (\alpha \lvert 0\rangle + \beta \lvert 1\rangle)(\alpha \lvert 0\rangle + \beta \lvert 1\rangle) = \alpha^2 \lvert 0\rangle \lvert 0\rangle + \alpha \beta \lvert 0\rangle \lvert 1\rangle + \beta \alpha \lvert 1 \rangle \lvert 0\rangle + \beta^2 \lvert 1\rangle \lvert 1\rangle
\tag{1}
\end{equation}

Because of the linearity of operators, we can equivalently write:

\begin{equation}
U\lvert \psi \rangle \lvert s \rangle = U(\alpha \lvert 0\rangle + \beta \lvert 1\rangle )\lvert s\rangle = U(\alpha \lvert 0\rangle \lvert s\rangle + \beta \lvert 1\rangle \lvert s\rangle ) = \alpha \lvert 00\rangle + \beta \lvert 11\rangle
\tag{2}
\end{equation}

Comparing Eqs. (1) and (2), one can see that we arrive at a contradiction! Thus, the operation $U$ which copies an unknown quantum state of a qubit onto another qubit is not possible.

## 9.2 BB84 protocol

$$\text{1. BB84 protocol overview.}$$

In Ref. [3], the first protocol for the distribution of a secret quantum key between two parties is described.

First, let us assume that Alice and Bob may exchange qubits and classical information.
Also, Alice can prepare a qubit in the $\lvert 0 \rangle$, $\lvert 1 \rangle$, $\lvert + \rangle = \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle + \lvert 1 \rangle\right)$ and $\lvert - \rangle = \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle\right)$ states, and Bob can measure in the standard (Z) $\left\{ \lvert 0 \rangle, \lvert 1 \rangle \right\}$ basis and in the Hadamard (H) $\left\{ \lvert + \rangle, \lvert - \rangle \right\}$ basis. Note that the states of one basis are non-orthogonal to the states of the other. Measuring in the $\left\{ \lvert + \rangle, \lvert - \rangle \right\}$ basis means that before the standard measurement in the $\left\{ \lvert 0 \rangle, \lvert 1 \rangle \right\}$ basis, Bob applies the Hadamard gate to the qubit. Thus

\begin{equation}
\lvert + \rangle = \frac{1}{\sqrt{2}}(\lvert 0\rangle + \lvert 1\rangle )
\end{equation}

gives $\lvert 0 \rangle$ when measured in the Hadamard basis, and

\begin{equation}
\lvert - \rangle = \frac{1}{\sqrt{2}}(\lvert 0\rangle - \lvert 1\rangle )
\end{equation}

gives $\lvert 1 \rangle$ when measured in the Hadamard basis.

The protocol then works in the following way. Alice picks the bit that she wants to transmit to Bob, either $0$ or $1$. She then prepares a qubit in the corresponding state $\lvert 0 \rangle$ or $\lvert 1 \rangle$, respectively. After that, she randomly decides whether or not to transform her qubit from the standard (Z) basis to the Hadamard (H) basis by applying (or not applying) the Hadamard gate to her qubit, thus preparing the state $\lvert + \rangle$ or $\lvert - \rangle$.

Then Alice sends her first qubit to Bob. Bob receives Alice's qubit, selects one of the measurement bases at random and measures it. After that, Alice and Bob tell each other which basis they used through a classical communication channel.
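The classical bookkeeping of this exchange can be sketched with a short simulation (an illustration only — it simulates the basis choices and sifting statistics, not the quantum states themselves; the seed and sample size are arbitrary):

```python
import random

random.seed(7)

n = 10_000          # number of transmitted qubits (illustrative)
kept = 0
for _ in range(n):
    alice_basis = random.randint(0, 1)   # 0 = standard (Z), 1 = Hadamard (H)
    bob_basis = random.randint(0, 1)     # Bob picks his measurement basis at random
    if alice_basis == bob_basis:
        # Matching bases: Bob's measurement outcome equals Alice's bit, so it is kept.
        kept += 1
    # Differing bases: the outcome would be uniformly random, so the bit is discarded.

print(kept / n)     # close to 1/2: an n-bit key costs about 2n transmitted qubits
```

Since the two independent basis choices agree with probability $1/2$, roughly half of the transmitted qubits survive the sifting step.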
In general, for every qubit Alice sends to Bob there are four possible scenarios:

1. Both Alice and Bob used the Hadamard basis.
2. They both used the standard basis.
3. Alice transformed to the Hadamard basis, and Bob measured in the standard basis.
4. Alice used the standard basis, and Bob the Hadamard basis.

When Alice and Bob agree on the same basis, they keep the transferred bit. When they disagree, they discard it. Thus, it is possible for Alice and Bob to securely communicate an $n$-bit private key using $2n$ qubits.

#### Example

For example, let us consider the case where Alice wants to send the bit $0$. She prepares her qubit in the $\lvert 0 \rangle$ state and then randomly selects whether or not she applies the Hadamard gate to it. Let's say she does apply the Hadamard gate to her qubit, obtaining the $\lvert + \rangle$ state.

Then consider the case where Bob measures the qubit in the standard basis. After Bob's measurement, Alice and Bob communicate through the classical channel. Alice tells Bob that she applied the Hadamard gate to her qubit, and Bob tells Alice that he measured it in the standard basis. So, they discard the first bit.

$$\text{2. Example of one application of the BB84 protocol. In this case, Alice and Bob will discard this bit.}$$

Next, Alice picks a second bit, $1$, encodes it into a qubit and selects at random whether or not to apply the Hadamard gate. Let us now assume that she does not apply the Hadamard gate. Thus, the qubit is in the state $\lvert 1\rangle$. Alice then sends her qubit to Bob. Bob selects at random one of his two measurement bases. Let us consider in this case that he measures in the standard basis. As the qubit is in the state $\lvert 1\rangle$, the outcome of the measurement will be $1$. Thus, Bob chooses the value $1$ for his second classical bit, the same as Alice did. Finally, Alice tells Bob that she did not apply the Hadamard gate, and Bob tells Alice that he measured in the standard basis. So, both Alice and Bob will use the bit with the value $1$ as the first bit in their secret key.

$$\text{3.
Example of another application of the BB84 protocol.} \\ \text{In this case, Alice and Bob successfully communicate the value of a bit.}$$

### QISKit: BB84 protocol

#### 1) Show the communication of one bit

```python
from initialize import *
import random

# Initialize quantum program.
my_alg = initialize(circuit_name='bb84', qubit_number=1, bit_number=1, backend='local_qasm_simulator', shots=1)

# Add gates to the circuit.

# Alice encodes the bit 1 into a qubit.
my_alg.q_circuit.x(my_alg.q_reg[0])

# Alice randomly applies the Hadamard gate to go to the Hadamard basis.
a = random.randint(0, 1)
if a == 1:
    my_alg.q_circuit.h(my_alg.q_reg[0])

# Bob randomly applies the Hadamard gate to go to the Hadamard basis.
b = random.randint(0, 1)
if b == 1:
    my_alg.q_circuit.h(my_alg.q_reg[0])

my_alg.q_circuit.measure(my_alg.q_reg[0], my_alg.c_reg[0])  # measure the first qubit

# Print the list of gates in the circuit.
print('List of gates:')
for circuit in my_alg.q_circuit:
    print(circuit.name)

# Execute the quantum algorithm.
result = my_alg.Q_program.execute(my_alg.circ_name, backend=my_alg.backend, shots=my_alg.shots)

# Show the results obtained from the quantum algorithm.
counts = result.get_counts(my_alg.circ_name)

print('\nThe measured outcomes of the circuits are:', counts)

if a == b:
    print('Alice and Bob agree on the basis, thus they keep the bit')
else:
    print("Alice and Bob don't agree on the basis, thus they discard the bit")
```

    List of gates:
    x
    h
    measure

    The measured outcomes of the circuits are: {'0': 1}
    Alice and Bob don't agree on the basis, thus they discard the bit

    /anaconda/lib/python3.6/site-packages/qiskit/backends/local/qasm_simulator_cpp.py:89: DeprecationWarning: The behavior of getting statevector from simulators by setting shots=1 is deprecated and will be removed. Use the local_statevector_simulator instead, or place explicit snapshot instructions.
      DeprecationWarning)

## Problems

1. Alice wants to send Bob the following private key:

   \begin{equation}
   101011
   \end{equation}

   She encodes those bits into the corresponding states of qubits and applies the gates H-H-I-I-I-H to the six qubits, respectively. Bob measures the qubits in the following bases: Z-H-H-Z-H-H.

   1. Find the possible outcomes of Bob's measurements.
   2. Find the bits of the private key accepted by Alice and Bob.

2. Imagine that a third party, Eve, intercepts Alice's qubits. She measures each intercepted qubit by randomly selecting either the Hadamard or the standard basis and then forwards the qubits to Bob.

   1. Is it possible for Eve to find out the bit that Alice is sending to Bob without being discovered?
   2. What is the probability that Eve successfully finds out the value of a bit?

3. Write a QISKit program for the transmission of a 1024-bit private key between Alice and Bob.

## References

[1] D. Dieks, Physics Letters A 92, 271 (1982).

[2] W. K. Wootters and W. H. Zurek, Nature 299, 802 (1982).

[3] C. H. Bennett and G. Brassard, in Proceedings of the IEEE International Conference on Computers, Systems and Signal Processing, volume 175, page 8, New York, 1984.

##### Copyright 2020 The TensorFlow Authors.

```python
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

# Quantum data

Building off of the comparisons made in the [MNIST](https://www.tensorflow.org/quantum/tutorials/mnist) tutorial, this tutorial explores the recent work of [Huang et al.](https://arxiv.org/abs/2011.01938), which shows how different datasets affect performance comparisons. In that work, the authors seek to understand how and when classical machine learning models can learn as well as (or better than) quantum models. The work also showcases an empirical performance separation between classical and quantum machine learning models via a carefully crafted dataset. You will:

1. Prepare a reduced-dimension Fashion-MNIST dataset.
2. Use quantum circuits to re-label the dataset and compute Projected Quantum Kernel (PQK) features.
3. Train a classical neural network on the re-labeled dataset and compare the performance with a model that has access to the PQK features.

## Setup

```python
!pip install tensorflow==2.7.0 tensorflow-quantum
```

\n\nBuilding off of the comparisons made in the [MNIST](https://www.tensorflow.org/quantum/tutorials/mnist) tutorial, this tutorial explores the recent work of [Huang et al.](https://arxiv.org/abs/2011.01938) that shows how different datasets affect performance comparisons. In the work, the authors seek to understand how and when classical machine learning models can learn as well as (or better than) quantum models. The work also showcases an empirical performance separation between classical and quantum machine learning models via a carefully crafted dataset. You will:\n\n1. Prepare a reduced-dimension Fashion-MNIST dataset.\n2. Use quantum circuits to re-label the dataset and compute Projected Quantum Kernel (PQK) features.\n3. Train a classical neural network on the re-labeled dataset and compare its performance with a model that has access to the PQK features.\n\n## Setup\n\n\n```python\n!pip install tensorflow==2.7.0 tensorflow-quantum\n```\n\n    Successfully installed backports.cached-property-1.0.1 cirq-core-0.14.0 cirq-google-0.14.0 duet-0.2.5 gast-0.4.0 google-api-core-1.21.0 google-auth-1.18.0 googleapis-common-protos-1.52.0 keras-2.7.0 sympy-1.8 tensorflow-2.7.0 tensorflow-estimator-2.7.0 tensorflow-quantum-0.6.1 typing-extensions-3.10.0.0\n\n\n\n```python\n# Update package resources to account for version changes.\nimport importlib, pkg_resources\nimportlib.reload(pkg_resources)\n```\n\n\n```python\nimport cirq\nimport sympy\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_quantum as tfq\n\n# visualization tools\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom cirq.contrib.svg import SVGCircuit\nnp.random.seed(1234)\n```\n\n## 1. Data preparation\n\nYou will begin by preparing the Fashion-MNIST dataset for running on a quantum computer.\n\n### 1.1 Download Fashion-MNIST\n\nThe first step is to get the traditional Fashion-MNIST dataset.
This can be done using the `tf.keras.datasets` module.\n\n\n```python\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()\n\n# Rescale the images from [0,255] to the [0.0,1.0] range.\nx_train, x_test = x_train/255.0, x_test/255.0\n\nprint(\"Number of original training examples:\", len(x_train))\nprint(\"Number of original test examples:\", len(x_test))\n```\n\n    Number of original training examples: 60000\n    Number of original test examples: 10000\n\n\nFilter the dataset to keep just the T-shirts/tops and dresses, removing the other classes.
At the same time, convert the label, `y`, to boolean: True for 0 and False for 3.\n\n\n```python\ndef filter_03(x, y):\n    keep = (y == 0) | (y == 3)\n    x, y = x[keep], y[keep]\n    y = y == 0\n    return x, y\n```\n\n\n```python\nx_train, y_train = filter_03(x_train, y_train)\nx_test, y_test = filter_03(x_test, y_test)\n\nprint(\"Number of filtered training examples:\", len(x_train))\nprint(\"Number of filtered test examples:\", len(x_test))\n```\n\n    Number of filtered training examples: 12000\n    Number of filtered test examples: 2000\n\n\n\n```python\nprint(y_train[0])\n\nplt.imshow(x_train[0, :, :])\nplt.colorbar()\n```\n\n### 1.2 Downscale the images\n\nJust like the MNIST example, you will need to downscale these images in order to be within the boundaries of current quantum computers. This time, however, you will use a PCA transformation to reduce the dimensions instead of a `tf.image.resize` operation.\n\n\n```python\ndef truncate_x(x_train, x_test, n_components=10):\n    \"\"\"Perform PCA on image dataset keeping the top `n_components` components.\"\"\"\n    n_points_train = tf.gather(tf.shape(x_train), 0)\n    n_points_test = tf.gather(tf.shape(x_test), 0)\n\n    # Flatten to 1D\n    x_train = tf.reshape(x_train, [n_points_train, -1])\n    x_test = tf.reshape(x_test, [n_points_test, -1])\n\n    # Normalize.\n    feature_mean = tf.reduce_mean(x_train, axis=0)\n    x_train_normalized = x_train - feature_mean\n    x_test_normalized = x_test - feature_mean\n\n    # Truncate.\n    e_values, e_vectors = tf.linalg.eigh(\n        tf.einsum('ji,jk->ik', x_train_normalized, x_train_normalized))\n    return tf.einsum('ij,jk->ik', x_train_normalized, e_vectors[:,-n_components:]), \\\n        tf.einsum('ij,jk->ik', x_test_normalized, e_vectors[:, -n_components:])\n```\n\n\n```python\nDATASET_DIM = 10\nx_train, x_test = truncate_x(x_train, x_test, n_components=DATASET_DIM)\nprint('New datapoint dimension:', len(x_train[0]))\n```\n\n    New datapoint dimension: 10\n\n\nThe last step is to reduce the size of the dataset to just 1000
training datapoints and 200 testing datapoints.\n\n\n```python\nN_TRAIN = 1000\nN_TEST = 200\nx_train, x_test = x_train[:N_TRAIN], x_test[:N_TEST]\ny_train, y_test = y_train[:N_TRAIN], y_test[:N_TEST]\n```\n\n\n```python\nprint(\"New number of training examples:\", len(x_train))\nprint(\"New number of test examples:\", len(x_test))\n```\n\n    New number of training examples: 1000\n    New number of test examples: 200\n\n\n## 2. Relabeling and computing PQK features\n\nYou will now prepare a \"stilted\" quantum dataset by incorporating quantum components and re-labeling the truncated Fashion-MNIST dataset you've created above. In order to get the most separation between quantum and classical methods, you will first prepare the PQK features and then relabel outputs based on their values.\n\n### 2.1 Quantum encoding and PQK features\nYou will create a new set of features, based on `x_train`, `y_train`, `x_test` and `y_test`, defined to be the 1-RDM on all qubits of:\n\n$V(x_{\\text{train}} / n_{\\text{trotter}}) ^ {n_{\\text{trotter}}} U_{\\text{1qb}} | 0 \\rangle$\n\nwhere $U_\\text{1qb}$ is a wall of single qubit rotations and $V(\\hat{\\theta}) = e^{-i\\sum_i \\hat{\\theta_i} (X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1})}$.\n\nFirst, you can generate the wall of single qubit rotations:\n\n\n```python\ndef single_qubit_wall(qubits, rotations):\n    \"\"\"Prepare a single qubit X,Y,Z rotation wall on `qubits`.\"\"\"\n    wall_circuit = cirq.Circuit()\n    for i, qubit in enumerate(qubits):\n        for j, gate in enumerate([cirq.X, cirq.Y, cirq.Z]):\n            wall_circuit.append(gate(qubit) ** rotations[i][j])\n\n    return wall_circuit\n```\n\nYou can quickly verify this works by looking at the circuit:\n\n\n```python\nSVGCircuit(single_qubit_wall(\n    cirq.GridQubit.rect(1,4), np.random.uniform(size=(4, 3))))\n```\n\n\nNext you can prepare $V(\\hat{\\theta})$ with the help of `tfq.util.exponential`, which can exponentiate any commuting `cirq.PauliSum` objects:\n\n\n```python\ndef v_theta(qubits):\n    \"\"\"Prepares a circuit that generates V(\\theta).\"\"\"\n    ref_paulis = [\n        cirq.X(q0) * cirq.X(q1) + \\\n        cirq.Y(q0) * cirq.Y(q1) + \\\n        cirq.Z(q0) * cirq.Z(q1) for q0, q1 in zip(qubits, qubits[1:])\n    ]\n    exp_symbols = list(sympy.symbols('ref_0:'+str(len(ref_paulis))))\n    return tfq.util.exponential(ref_paulis, exp_symbols), exp_symbols\n```\n\nThis circuit might be a little bit harder to verify by looking at, but you can still examine a two-qubit case to see what is happening:\n\n\n```python\ntest_circuit, test_symbols = v_theta(cirq.GridQubit.rect(1, 2))\nprint(f'Symbols found in circuit:{test_symbols}')\nSVGCircuit(test_circuit)\n```\n\n    Symbols found in circuit:[ref_0]\n\n\nNow you have all the building blocks you need to put your full encoding circuits together:\n\n\n```python\ndef prepare_pqk_circuits(qubits, classical_source, n_trotter=10):\n    \"\"\"Prepare the pqk feature circuits around a dataset.\"\"\"\n    n_qubits = len(qubits)\n    n_points = len(classical_source)\n\n    # Prepare random single qubit rotation wall.\n    random_rots = np.random.uniform(-2, 2, size=(n_qubits, 3))\n    initial_U = single_qubit_wall(qubits, random_rots)\n\n    # Prepare parametrized V\n    V_circuit, symbols = v_theta(qubits)\n    exp_circuit = cirq.Circuit(V_circuit for t in range(n_trotter))\n\n    # Convert to `tf.Tensor`\n    initial_U_tensor = tfq.convert_to_tensor([initial_U])\n    initial_U_splat = tf.tile(initial_U_tensor, [n_points])\n\n    full_circuits = tfq.layers.AddCircuit()(\n        initial_U_splat, append=exp_circuit)\n    # Replace placeholders in circuits with values from `classical_source`.\n    return tfq.resolve_parameters(\n        full_circuits, tf.convert_to_tensor([str(x) for x in symbols]),\n
        tf.convert_to_tensor(classical_source*(n_qubits/3)/n_trotter))\n```\n\nChoose some qubits and prepare the data encoding circuits:\n\n\n```python\nqubits = cirq.GridQubit.rect(1, DATASET_DIM + 1)\nq_x_train_circuits = prepare_pqk_circuits(qubits, x_train)\nq_x_test_circuits = prepare_pqk_circuits(qubits, x_test)\n```\n\nNext, compute the PQK features based on the 1-RDM of the dataset circuits above and store the results in `rdm`, a `tf.Tensor` with shape `[n_points, n_qubits, 3]`. The entries are `rdm[i][j][k]` = $\\langle \\psi_i | OP^k_j | \\psi_i \\rangle$, where `i` indexes over datapoints, `j` indexes over qubits and `k` indexes over $\\lbrace \\hat{X}, \\hat{Y}, \\hat{Z} \\rbrace$.\n\n\n```python\ndef get_pqk_features(qubits, data_batch):\n    \"\"\"Get PQK features based on above construction.\"\"\"\n    ops = [[cirq.X(q), cirq.Y(q), cirq.Z(q)] for q in qubits]\n    ops_tensor = tf.expand_dims(tf.reshape(tfq.convert_to_tensor(ops), -1), 0)\n    batch_dim = tf.gather(tf.shape(data_batch), 0)\n    ops_splat = tf.tile(ops_tensor, [batch_dim, 1])\n    exp_vals = tfq.layers.Expectation()(data_batch, operators=ops_splat)\n    rdm = tf.reshape(exp_vals, [batch_dim, len(qubits), -1])\n    return rdm\n```\n\n\n```python\nx_train_pqk = get_pqk_features(qubits, q_x_train_circuits)\nx_test_pqk = get_pqk_features(qubits, q_x_test_circuits)\nprint('New PQK training dataset has shape:', x_train_pqk.shape)\nprint('New PQK testing dataset has shape:', x_test_pqk.shape)\n```\n\n    New PQK training dataset has shape: (1000, 11, 3)\n    New PQK testing dataset has shape: (200, 11, 3)\n\n\n### 2.2 Re-labeling based on PQK features\nNow that you have these quantum generated features in `x_train_pqk` and `x_test_pqk`, it is time to re-label the dataset.
To achieve maximum separation between quantum and classical performance you can re-label the dataset based on the spectrum information found in `x_train_pqk` and `x_test_pqk`.\n\nNote: This preparation of your dataset to explicitly maximize the separation in performance between the classical and quantum models might feel like cheating, but it provides a **very** important proof of existence for datasets that are hard for classical computers and easy for quantum computers to model. There would be no point in searching for quantum advantage in QML if you couldn't first create something like this to demonstrate advantage.\n\n\n```python\ndef compute_kernel_matrix(vecs, gamma):\n    \"\"\"Computes d[i][j] = e^ -gamma * (vecs[i] - vecs[j]) ** 2 \"\"\"\n    scaled_gamma = gamma / (\n        tf.cast(tf.gather(tf.shape(vecs), 1), tf.float32) * tf.math.reduce_std(vecs))\n    return scaled_gamma * tf.einsum('ijk->ij', (vecs[:,None,:] - vecs) ** 2)\n\ndef get_spectrum(datapoints, gamma=1.0):\n    \"\"\"Compute the eigenvalues and eigenvectors of the kernel of datapoints.\"\"\"\n    KC_qs = compute_kernel_matrix(datapoints, gamma)\n    S, V = tf.linalg.eigh(KC_qs)\n    S = tf.math.abs(S)\n    return S, V\n```\n\n\n```python\nS_pqk, V_pqk = get_spectrum(\n    tf.reshape(tf.concat([x_train_pqk, x_test_pqk], 0), [-1, len(qubits) * 3]))\n\nS_original, V_original = get_spectrum(\n    tf.cast(tf.concat([x_train, x_test], 0), tf.float32), gamma=0.005)\n\nprint('Eigenvectors of pqk kernel matrix:', V_pqk)\nprint('Eigenvectors of original kernel matrix:', V_original)\n```\n\n    Eigenvectors of pqk kernel matrix: tf.Tensor(\n    [[ 0.02095697  0.01059745  0.02166322 ...  0.09526508  0.00300356  0.02826785]\n     ...\n     [-0.05860277  0.00584422  0.00264832 ... -0.04459745 -0.01932838  0.03299437]], shape=(1200, 1200), dtype=float32)\n    Eigenvectors of original kernel matrix: tf.Tensor(\n    [[ 3.8356818e-02  2.8347293e-02 -1.1697864e-02 ... -4.0755421e-02  2.0624822e-02  3.2069720e-02]\n     ...\n     [-1.6657291e-02 -8.1861708e-03 -4.3234091e-02 ... -3.2867838e-04  9.1463570e-03  1.8750878e-02]], shape=(1200, 1200), dtype=float32)\n\n\nNow you have everything you need to re-label the dataset! You can consult the flowchart to better understand how to maximize the performance separation when re-labeling the dataset:\n\n\n\nIn order to maximize the separation between quantum and classical models, you will attempt to maximize the geometric difference between the original dataset and the PQK feature kernel matrices, $g(K_1 || K_2) = \\sqrt{ || \\sqrt{K_2} K_1^{-1} \\sqrt{K_2} || _\\infty}$, using `S_pqk, V_pqk` and `S_original, V_original`. A large value of $g$ ensures that you initially move to the right in the flowchart down towards a prediction advantage in the quantum case.\n\nNote: Computing quantities for $s$ and $d$ is also very useful when looking to better understand performance separations.
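The geometric difference $g$ is used here as a guide rather than computed explicitly. As a sanity check, here is a minimal NumPy sketch of it; the `geometric_difference` helper name and the small ridge term `reg` (added before inverting $K_1$ for numerical stability) are assumptions of this sketch, not part of the original workflow:

```python
import numpy as np

def geometric_difference(k1, k2, reg=1e-6):
    """Sketch of g(K1 || K2) = sqrt(|| sqrt(K2) K1^{-1} sqrt(K2) ||_inf)."""
    # Symmetric matrix square root of K2 via its eigendecomposition.
    s2, v2 = np.linalg.eigh(k2)
    sqrt_k2 = v2 @ np.diag(np.sqrt(np.abs(s2))) @ v2.T

    # Small ridge term keeps the inverse well conditioned (assumption).
    k1_inv = np.linalg.inv(k1 + reg * np.eye(k1.shape[0]))
    m = sqrt_k2 @ k1_inv @ sqrt_k2
    # The norm here is the spectral norm: the largest eigenvalue
    # magnitude of the symmetric matrix m.
    return float(np.sqrt(np.max(np.abs(np.linalg.eigvalsh(m)))))

# Toy positive-definite kernel matrices; g(K || K) is close to 1.
rng = np.random.default_rng(0)
a = rng.normal(size=(5, 5))
k1 = a @ a.T + np.eye(5)
k2 = k1 + 0.1 * np.eye(5)
print(geometric_difference(k1, k2))
```

In the notebook you would build the kernel matrices from the PQK and original features rather than random data.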
In this case, ensuring a large $g$ value is enough to see the performance separation.\n\n\n```python\ndef get_stilted_dataset(S, V, S_2, V_2, lambdav=1.1):\n    \"\"\"Prepare new labels that maximize geometric distance between kernels.\"\"\"\n    S_diag = tf.linalg.diag(S ** 0.5)\n    S_2_diag = tf.linalg.diag(S_2 / (S_2 + lambdav) ** 2)\n    scaling = S_diag @ tf.transpose(V) @ \\\n        V_2 @ S_2_diag @ tf.transpose(V_2) @ \\\n        V @ S_diag\n\n    # Generate new labels using the largest eigenvector.\n    _, vecs = tf.linalg.eig(scaling)\n    new_labels = tf.math.real(\n        tf.einsum('ij,j->i', tf.cast(V @ S_diag, tf.complex64), vecs[-1])).numpy()\n    # Create new labels and add some small amount of noise.\n    final_y = new_labels > np.median(new_labels)\n    noisy_y = (final_y ^ (np.random.uniform(size=final_y.shape) > 0.95))\n    return noisy_y\n```\n\n\n```python\ny_relabel = get_stilted_dataset(S_pqk, V_pqk, S_original, V_original)\ny_train_new, y_test_new = y_relabel[:N_TRAIN], y_relabel[N_TRAIN:]\n```\n\n## 3. Comparing models\nNow that you have prepared your dataset, it is time to compare model performance.
You will create two small feedforward neural networks and compare performance when they are given access to the PQK features found in `x_train_pqk`.\n\n### 3.1 Create PQK enhanced model\nUsing standard `tf.keras` library features you can now create and train a model on the `x_train_pqk` and `y_train_new` datapoints:\n\n\n```python\n#docs_infra: no_execute\ndef create_pqk_model():\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(32, activation='sigmoid', input_shape=[len(qubits) * 3,]))\n model.add(tf.keras.layers.Dense(16, activation='sigmoid'))\n model.add(tf.keras.layers.Dense(1))\n return model\n\npqk_model = create_pqk_model()\npqk_model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n optimizer=tf.keras.optimizers.Adam(learning_rate=0.003),\n metrics=['accuracy'])\n\npqk_model.summary()\n```\n\n Model: \"sequential\"\n _________________________________________________________________\n Layer (type) Output Shape Param # \n =================================================================\n dense (Dense) (None, 32) 1088 \n \n dense_1 (Dense) (None, 16) 528 \n \n dense_2 (Dense) (None, 1) 17 \n \n =================================================================\n Total params: 1,633\n Trainable params: 1,633\n Non-trainable params: 0\n _________________________________________________________________\n\n\n\n```python\n#docs_infra: no_execute\npqk_history = pqk_model.fit(tf.reshape(x_train_pqk, [N_TRAIN, -1]),\n y_train_new,\n batch_size=32,\n epochs=1000,\n verbose=0,\n validation_data=(tf.reshape(x_test_pqk, [N_TEST, -1]), y_test_new))\n```\n\n### 3.2 Create a classical model\nSimilar to the code above you can now also create a classical model that doesn't have access to the PQK features in your stilted dataset. 
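As an aside, the parameter counts in the PQK model summary above are a quick sanity check on the layer sizes: a `Dense` layer holds `inputs * units + units` parameters, so the 1088 figure implies the flattened PQK input has 33 features, i.e. `len(qubits) * 3 = 33` (one (X, Y, Z) expectation triple per qubit). A tiny sketch, independent of TensorFlow:

```python
def dense_params(n_inputs, n_units):
    # weight matrix (n_inputs * n_units) plus one bias per unit
    return n_inputs * n_units + n_units

print(dense_params(33, 32))  # 1088, first Dense layer of the PQK model
print(dense_params(32, 16))  # 528
print(dense_params(16, 1))   # 17
```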
This model can be trained using `x_train` and `y_train_new`.\n\n\n```python\n#docs_infra: no_execute\ndef create_fair_classical_model():\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(32, activation='sigmoid', input_shape=[DATASET_DIM,]))\n model.add(tf.keras.layers.Dense(16, activation='sigmoid'))\n model.add(tf.keras.layers.Dense(1))\n return model\n\nmodel = create_fair_classical_model()\nmodel.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n optimizer=tf.keras.optimizers.Adam(learning_rate=0.03),\n metrics=['accuracy'])\n\nmodel.summary()\n```\n\n Model: \"sequential_1\"\n _________________________________________________________________\n Layer (type) Output Shape Param # \n =================================================================\n dense_3 (Dense) (None, 32) 352 \n \n dense_4 (Dense) (None, 16) 528 \n \n dense_5 (Dense) (None, 1) 17 \n \n =================================================================\n Total params: 897\n Trainable params: 897\n Non-trainable params: 0\n _________________________________________________________________\n\n\n\n```python\n#docs_infra: no_execute\nclassical_history = model.fit(x_train,\n y_train_new,\n batch_size=32,\n epochs=1000,\n verbose=0,\n validation_data=(x_test, y_test_new))\n```\n\n### 3.3 Compare performance\nNow that you have trained the two models you can quickly plot the performance gaps in the validation data between the two. Typically both models will achieve > 0.9 accuracy on the training data. 
However on the validation data it becomes clear that only the information found in the PQK features is enough to make the model generalize well to unseen instances.\n\n\n```python\n#docs_infra: no_execute\nplt.figure(figsize=(10,5))\nplt.plot(classical_history.history['accuracy'], label='accuracy_classical')\nplt.plot(classical_history.history['val_accuracy'], label='val_accuracy_classical')\nplt.plot(pqk_history.history['accuracy'], label='accuracy_quantum')\nplt.plot(pqk_history.history['val_accuracy'], label='val_accuracy_quantum')\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.legend()\n```\n\nSuccess: You have engineered a stilted quantum dataset that can intentionally defeat classical models in a fair (but contrived) setting. Try comparing results using other types of classical models. The next step is to try and see if you can find new and interesting datasets that can defeat classical models without needing to engineer them yourself!\n\n## 4. Important conclusions\n\nThere are several important conclusions you can draw from this and the [MNIST](https://www.tensorflow.org/quantum/tutorials/mnist) experiments:\n\n1. It's very unlikely that the quantum models of today will beat classical model performance on classical data, especially on today's classical datasets, which can have upwards of a million datapoints.\n\n2. Just because the data might come from a quantum circuit that is hard to simulate classically doesn't necessarily make the data hard for a classical model to learn.\n\n3. 
Datasets (ultimately quantum in nature) that are easy for quantum models to learn and hard for classical models to learn do exist, regardless of model architecture or training algorithms used.\n", "meta": {"hexsha": "ba5114f57719b79e7396a52e301020cd8e4efa19", "size": 143809, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/tutorials/quantum_data.ipynb", "max_stars_repo_name": "artiseza/quantum", "max_stars_repo_head_hexsha": "72f6e5bae843c841117426a0c8e1ee1d6557995e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/tutorials/quantum_data.ipynb", "max_issues_repo_name": "artiseza/quantum", "max_issues_repo_head_hexsha": "72f6e5bae843c841117426a0c8e1ee1d6557995e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/tutorials/quantum_data.ipynb", "max_forks_repo_name": "artiseza/quantum", "max_forks_repo_head_hexsha": "72f6e5bae843c841117426a0c8e1ee1d6557995e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 106.2889874353, "max_line_length": 61545, "alphanum_fraction": 0.7531587036, "converted": true, "num_tokens": 11648, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4882833952958347, "lm_q2_score": 0.23651624720889433, "lm_q1q2_score": 0.11548695622978791}} {"text": "```python\nimport numpy as np\nimport pandas as pd\nimport linearsolve as ls\nimport matplotlib.pyplot as plt\nplt.style.use('classic')\n%matplotlib inline\n```\n\n# Homework 7\n\n**Instructions:** Complete the notebook below. Download the completed notebook in HTML format. Upload assignment using Canvas.\n\n**Due:** Feb. 
23 at **2pm.**\n\n## Exercise: The Labor-Leisure Tradeoff\n\n\begin{align}\n\frac{\varphi}{1-L_t} & = \frac{(1-\alpha)A_tK_t^{\alpha}L_t^{-\alpha}}{C_t} \tag{1}\n\end{align}\n\n**Questions** \n\n1. Explain in words why the left-hand side of equation (1) represents the marginal cost to the household of working. A complete answer will make use of the term *marginal utility*.\n2. Explain in words why the right-hand side of equation (1) represents the marginal benefit to the household of supplying labor (i.e., working). A complete answer will make use of the terms *marginal utility* and *marginal product*.\n3. Holding everything else constant, according to equation (1), what effect will an increase in TFP have on equilibrium labor? Explain the economic intuition behind your answer.\n4. Holding everything else constant, according to equation (1), what effect will an increase in household consumption have on equilibrium labor? Explain the economic intuition behind your answer.\n\n**Answers**\n\n1. \n\n2. \n\n3. \n\n4. \n\n## Exercise: The Euler Equation\n\n\begin{align}\n\frac{1}{C_t} & = \beta \left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha-1}L_{t+1}^{1-\alpha} +1-\delta }{C_{t+1}}\right]\tag{2}\n\end{align}\n\n**Questions** \n\n1. Explain in words why the left-hand side of equation (2) represents the marginal cost to the household of saving (i.e., building new capital). A complete answer will make use of the term *marginal utility*.\n2. Explain in words why the right-hand side of equation (2) represents the marginal benefit to the household of saving. A complete answer will make use of the terms *marginal utility* and *marginal product*.\n3. Holding everything else constant, according to equation (2), what effect will an increase in TFP in period $t+1$ have on the household's choice for capital in period $t+1$? Explain the economic intuition behind your answer.\n4. 
Holding everything else constant, according to equation (2), what effect will an increase in consumption in period $t$ have on the household's choice for capital in period $t+1$? Explain the economic intuition behind your answer.\n5. Holding everything else constant, according to equation (2), what effect will an increase in consumption in period $t+1$ have on the household's choice for capital in period $t+1$? Explain the economic intuition behind your answer.\n\n**Answers**\n\n1. \n\n2. \n\n3. \n\n4. \n\n5. \n", "meta": {"hexsha": "695da548aaa5247ce7c1907179fadfd4d281e21f", "size": 4092, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Homework Notebooks/Econ126_Winter2021_Homework_07_blank.ipynb", "max_stars_repo_name": "letsgoexploring/econ126", "max_stars_repo_head_hexsha": "05f50d2392dd1c7c38b14950cb8d7eff7ff775ee", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-12-12T16:28:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-24T12:11:04.000Z", "max_issues_repo_path": "Homework Notebooks/Econ126_Winter2021_Homework_07_blank.ipynb", "max_issues_repo_name": "letsgoexploring/econ126", "max_issues_repo_head_hexsha": "05f50d2392dd1c7c38b14950cb8d7eff7ff775ee", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-04-29T08:50:41.000Z", "max_issues_repo_issues_event_max_datetime": "2019-04-29T08:51:05.000Z", "max_forks_repo_path": "Homework Notebooks/Econ126_Winter2021_Homework_07_blank.ipynb", "max_forks_repo_name": "letsgoexploring/econ126", "max_forks_repo_head_hexsha": "05f50d2392dd1c7c38b14950cb8d7eff7ff775ee", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2019-03-08T18:49:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-07T23:27:16.000Z", "avg_line_length": 34.1, "max_line_length": 241, "alphanum_fraction": 0.605083089, "converted": true, 
"num_tokens": 670, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4882833952958347, "lm_q2_score": 0.2365162364457076, "lm_q1q2_score": 0.11548695097430255}} {"text": "```python\nname = \"Ricardo Hideki Hangai Kojo\" # write YOUR NAME\n\nhonorPledge = \"I affirm that I have not given or received any unauthorized \" \\\n \"help on this assignment, and that this work is my own.\\n\"\n\n\nprint(\"\\nName: \", name)\nprint(\"\\nHonor pledge: \", honorPledge)\n```\n\n \n Name: Ricardo Hideki Hangai Kojo\n \n Honor pledge: I affirm that I have not given or received any unauthorized help on this assignment, and that this work is my own.\n \n\n\n# MAC0460 / MAC5832 (2021)\n
\n\n# EP2: Linear regression - analytic solution\n\n### Objectives:\n\n- to implement and test the analytic solution for the linear regression task (see, for instance, Slides of Lecture 03 and Lecture 03 of *Learning from Data*)\n- to understand the core idea (*optimization of a loss or cost function*) for parameter adjustment in machine learning\n
\n\n# Linear regression\n\nGiven a dataset $\\{(\\mathbf{x}^{(1)}, y^{(1)}), \\dots ,(\\mathbf{x}^{(N)}, y^{(N)})\\}$ with $\\mathbf{x}^{(i)} \\in \\mathbb{R}^{d}$ and $y^{(i)} \\in \\mathbb{R}$, we would like to approximate the unknown function $f:\\mathbb{R}^{d} \\rightarrow \\mathbb{R}$ (recall that $y^{(i)} =f(\\mathbf{x}^{(i)})$) by means of a linear model $h$:\n$$\nh(\\mathbf{x}^{(i)}; \\mathbf{w}, b) = \\mathbf{w}^\\top \\mathbf{x}^{(i)} + b\n$$\n\nNote that $h(\\mathbf{x}^{(i)}; \\mathbf{w}, b)$ is, in fact, an [affine transformation](https://en.wikipedia.org/wiki/Affine_transformation) of $\\mathbf{x}^{(i)}$. As commonly done, we will use the term \"linear\" to refer to an affine transformation.\n\nThe output of $h$ is a linear transformation of $\\mathbf{x}^{(i)}$. We use the notation $h(\\mathbf{x}^{(i)}; \\mathbf{w}, b)$ to make clear that $h$ is a parametric model, i.e., the transformation $h$ is defined by the parameters $\\mathbf{w}$ and $b$. We can view vector $\\mathbf{w}$ as a *weight* vector that controls the effect of each *feature* in the prediction.\n\nBy adding one component with value equal to 1 to the observations $\\mathbf{x}$ (an artificial coordinate), we have:\n\n$$\\tilde{\\mathbf{x}} = (1, x_1, \\ldots, x_d) \\in \\mathbb{R}^{1+d}$$\n\nand then we can simplify the notation:\n$$\nh(\\mathbf{x}^{(i)}; \\mathbf{w}) = \\hat{y}^{(i)} = \\mathbf{w}^\\top \\tilde{\\mathbf{x}}^{(i)}\n$$\n\nWe would like to determine the optimal parameters $\\mathbf{w}$ such that prediction $\\hat{y}^{(i)}$ is as closest as possible to $y^{(i)}$ according to some error metric. 
Adopting the *mean square error* as such metric we have the following cost function:\n\n\\begin{equation}\nJ(\\mathbf{w}) = \\frac{1}{N}\\sum_{i=1}^{N}\\big(\\hat{y}^{(i)} - y^{(i)}\\big)^{2}\n\\end{equation}\n\nThus, the task of determining a function $h$ that is closest to $f$ is reduced to the task of finding the values $\\mathbf{w}$ that minimize $J(\\mathbf{w})$.\n\n**Now we will explore this model, starting with a simple dataset.**\n\n\n### Auxiliary functions\n\n\n```python\n# some imports\nimport numpy as np\nimport time\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n```\n\n\n```python\n# An auxiliary function\ndef get_housing_prices_data(N, verbose=True):\n \"\"\"\n Generates artificial linear data,\n where x = square meter, y = house price\n\n :param N: data set size\n :type N: int\n \n :param verbose: param to control print\n :type verbose: bool\n :return: design matrix, regression targets\n :rtype: np.array, np.array\n \"\"\"\n cond = False\n while not cond:\n x = np.linspace(90, 1200, N)\n gamma = np.random.normal(30, 10, x.size)\n y = 50 * x + gamma * 400\n x = x.astype(\"float32\")\n x = x.reshape((x.shape[0], 1))\n y = y.astype(\"float32\")\n y = y.reshape((y.shape[0], 1))\n cond = min(y) > 0\n \n xmean, xsdt, xmax, xmin = np.mean(x), np.std(x), np.max(x), np.min(x)\n ymean, ysdt, ymax, ymin = np.mean(y), np.std(y), np.max(y), np.min(y)\n if verbose:\n print(\"\\nX shape = {}\".format(x.shape))\n print(\"y shape = {}\\n\".format(y.shape))\n print(\"X: mean {}, sdt {:.2f}, max {:.2f}, min {:.2f}\".format(xmean,\n xsdt,\n xmax,\n xmin))\n print(\"y: mean {:.2f}, sdt {:.2f}, max {:.2f}, min {:.2f}\".format(ymean,\n ysdt,\n ymax,\n ymin))\n return x, y\n```\n\n\n```python\n# Another auxiliary function\ndef plot_points_regression(x,\n y,\n title,\n xlabel,\n ylabel,\n prediction=None,\n legend=False,\n r_squared=None,\n position=(90, 100)):\n \"\"\"\n Plots the data points and the prediction,\n if there is one.\n\n :param x: design matrix\n 
:type x: np.array\n :param y: regression targets\n :type y: np.array\n :param title: plot's title\n :type title: str\n :param xlabel: x axis label\n :type xlabel: str\n :param ylabel: y axis label\n :type ylabel: str\n :param prediction: model's prediction\n :type prediction: np.array\n :param legend: param to control print legends\n :type legend: bool\n :param r_squared: r^2 value\n :type r_squared: float\n :param position: text position\n :type position: tuple\n \"\"\"\n fig, ax = plt.subplots(1, 1, figsize=(8, 8))\n line1, = ax.plot(x, y, 'bo', label='Real data')\n if prediction is not None:\n line2, = ax.plot(x, prediction, 'r', label='Predicted data')\n if legend:\n plt.legend(handles=[line1, line2], loc=2)\n ax.set_title(title,\n fontsize=20,\n fontweight='bold')\n if r_squared is not None:\n bbox_props = dict(boxstyle=\"square,pad=0.3\",\n fc=\"white\", ec=\"black\", lw=0.2)\n t = ax.text(position[0], position[1], \"$R^2 ={:.4f}$\".format(r_squared),\n size=15, bbox=bbox_props)\n\n ax.set_xlabel(xlabel, fontsize=20)\n ax.set_ylabel(ylabel, fontsize=20)\n plt.show()\n\n```\n\n### The dataset \n\nThe first dataset we will use is a toy dataset. We will generate $N=100$ observations with only one *feature* and a real value associated to each of them. We can view these observations as being pairs *(area of a real estate property in square meters, price of the property)*. 
Our task is to construct a model that is able to predict the price of a real estate property, given its area.\n\n\n```python\nX, y = get_housing_prices_data(N=100)\n```\n\n \n X shape = (100, 1)\n y shape = (100, 1)\n \n X: mean 645.0, sdt 323.65, max 1200.00, min 90.00\n y: mean 44221.50, sdt 16674.25, max 79182.43, min 12196.20\n\n\n### Plotting the data\n\n\n```python\nplot_points_regression(X,\n y,\n title='Real estate prices prediction',\n xlabel=\"m\u00b2\",\n ylabel='$')\n```\n\n### The solution\n\nGiven $f:\mathbb{R}^{n\times m} \rightarrow \mathbb{R}$ and $\mathbf{A} \in \mathbb{R}^{n\times m}$, we define the gradient of $f$ with respect to $\mathbf{A}$ as:\n\n\begin{equation*}\n\nabla_{\mathbf{A}}f = \frac{\partial f}{\partial \mathbf{A}} = \begin{bmatrix}\n\frac{\partial f}{\partial \mathbf{A}_{1,1}} & \dots & \frac{\partial f}{\partial \mathbf{A}_{1,m}} \\\n\vdots & \ddots & \vdots \\\n\frac{\partial f}{\partial \mathbf{A}_{n,1}} & \dots & \frac{\partial f}{\partial \mathbf{A}_{n,m}}\n\end{bmatrix}\n\end{equation*}\n\nLet $\mathbf{X} \in \mathbb{R}^{N\times d}$ be a matrix (sometimes also called the *design matrix*) whose rows are the observations of the dataset and let $\mathbf{y} \in \mathbb{R}^{N}$ be the vector consisting of all values $y^{(i)}$ (i.e., $\mathbf{X}^{(i,:)} = \mathbf{x}^{(i)}$ and $\mathbf{y}^{(i)} = y^{(i)}$). 
It can be verified that: \n\n\\begin{equation}\nJ(\\mathbf{w}) = \\frac{1}{N}(\\mathbf{X}\\mathbf{w} - \\mathbf{y})^{T}(\\mathbf{X}\\mathbf{w} - \\mathbf{y})\n\\end{equation}\n\nUsing basic matrix derivative concepts we can compute the gradient of $J(\\mathbf{w})$ with respect to $\\mathbf{w}$:\n\n\\begin{equation}\n\\nabla_{\\mathbf{w}}J(\\mathbf{w}) = \\frac{2}{N} (\\mathbf{X}^{T}\\mathbf{X}\\mathbf{w} -\\mathbf{X}^{T}\\mathbf{y}) \n\\end{equation}\n\nThus, when $\\nabla_{\\mathbf{w}}J(\\mathbf{w}) = 0$ we have \n\n\\begin{equation}\n\\mathbf{X}^{T}\\mathbf{X}\\mathbf{w} = \\mathbf{X}^{T}\\mathbf{y}\n\\end{equation}\n\nHence,\n\n\\begin{equation}\n\\mathbf{w} = (\\mathbf{X}^{T}\\mathbf{X})^{-1}\\mathbf{X}^{T}\\mathbf{y}\n\\end{equation}\n\nNote that this solution has a high computational cost. As the number of variables (*features*) increases, the cost for matrix inversion becomes prohibitive. See [this text](https://sgfin.github.io/files/notes/CS229_Lecture_Notes.pdf) for more details.\n\n# Exercise 1\nUsing only **NumPy** (a quick introduction to this library can be found [here](http://cs231n.github.io/python-numpy-tutorial/)), complete the two functions below. Recall that $\\mathbf{X} \\in \\mathbb{R}^{N\\times d}$; thus you will need to add a component of value 1 to each of the observations in $\\mathbf{X}$ before performing the computation described above.\n\nNOTE: Although the dataset above has data of dimension $d=1$, your code must be generic (it should work for $d\\geq1$)\n\n## 1.1. 
Weight computation function\n\n\n```python\ndef normal_equation_weights(X, y):\n \"\"\"\n Calculates the weights of a linear function using the normal equation method.\n You should add into X a new column with 1s.\n\n :param X: design matrix\n :type X: np.ndarray(shape=(N, d))\n :param y: regression targets\n :type y: np.ndarray(shape=(N, 1))\n :return: weight vector\n :rtype: np.ndarray(shape=(d+1, 1))\n \"\"\"\n \n # START OF YOUR CODE:\n N = X.shape[0]\n X_tilde = np.column_stack((np.ones((N, 1)), X))\n X_cross = np.dot(np.linalg.inv(np.dot(X_tilde.T, X_tilde)), X_tilde.T)\n w = np.dot(X_cross, y)\n \n return w\n # END OF YOUR CODE\n```\n\n\n```python\n# test of function normal_equation_weights()\n\nw = 0 # this is not necessary\nw = normal_equation_weights(X, y)\nprint(\"Estimated w =\\n\", w)\n```\n\n Estimated w =\n [[12043.47818971]\n [ 49.88841379]]\n\n\n## 1.2. Prediction function\n\n\n```python\ndef normal_equation_prediction(X, w):\n \"\"\"\n Calculates the prediction over a set of observations X using the linear function\n characterized by the weight vector w.\n You should add into X a new column with 1s.\n\n :param X: design matrix\n :type X: np.ndarray(shape=(N, d))\n :param w: weight vector\n :type w: np.ndarray(shape=(d+1, 1))\n :return: regression prediction\n :rtype: np.ndarray(shape=(N, 1))\n \"\"\"\n \n # START OF YOUR CODE:\n N = X.shape[0]\n X_tilde = np.column_stack((np.ones((N, 1)), X))\n y = np.dot(X_tilde, w)\n \n return y\n # END OF YOUR CODE\n```\n\n## 1.3. 
Coefficient of determination\nWe can use the [$R^2$](https://pt.wikipedia.org/wiki/R%C2%B2) metric (Coefficient of determination) to evaluate how well the linear model fits the data.\n\n**Which $R^2$ value would you expect to observe?**\n\n\n```python\nfrom sklearn.metrics import r2_score\n\n# test of function normal_equation_prediction()\nprediction = normal_equation_prediction(X, w)\n\n# compute the R2 score using the r2_score function from sklearn\n# Replace 0 with an appropriate call of the function\n\n# START OF YOUR CODE:\nr_2 = r2_score(y, prediction)\n# END OF YOUR CODE\n\nplot_points_regression(X,\n y,\n title='Real estate prices prediction',\n xlabel=\"m\u00b2\",\n ylabel='$',\n prediction=prediction,\n legend=True,\n r_squared=r_2)\n```\n\n## Additional tests\n\nLet us compute a prediction for $x=650$.\n\n\n\n```python\n# Let us use the prediction function\nx = np.asarray([650]).reshape(1,1)\nprediction = normal_equation_prediction(x, w)\nprint(\"Area = %.2f Predicted price = %.4f\" %(x[0], prediction))\n```\n\n Area = 650.00 Predicted price = 44470.9472\n\n\n## 1.4. 
Processing time\n\nExperiment with different numbers of samples $N$ and observe how processing time varies.\n\nBe careful not to use too large a value; it may make Jupyter freeze.\n\n\n```python\n# Add other values for N\n# START OF YOUR CODE:\nN = [100, 200, 500, 1000, 5000, 10000, 100000, 10000000]\n# END OF YOUR CODE\n\nfor i in N:\n X, y = get_housing_prices_data(N=i)\n init = time.time()\n w = normal_equation_weights(X, y)\n prediction = normal_equation_prediction(X,w)\n init = time.time() - init\n \n print(\"\\nExecution time = {:.8f}(s)\\n\".format(init))\n```\n\n \n X shape = (100, 1)\n y shape = (100, 1)\n \n X: mean 645.0, sdt 323.65, max 1200.00, min 90.00\n y: mean 44131.31, sdt 16462.71, max 77546.66, min 10220.15\n \n Execution time = 0.00058222(s)\n \n \n X shape = (200, 1)\n y shape = (200, 1)\n \n X: mean 645.0, sdt 322.04, max 1200.00, min 90.00\n y: mean 44643.07, sdt 16741.32, max 77416.36, min 11328.75\n \n Execution time = 0.00007701(s)\n \n \n X shape = (500, 1)\n y shape = (500, 1)\n \n X: mean 645.0, sdt 321.07, max 1200.00, min 90.00\n y: mean 44582.73, sdt 16791.18, max 83074.05, min 9172.27\n \n Execution time = 0.00033569(s)\n \n \n X shape = (1000, 1)\n y shape = (1000, 1)\n \n X: mean 645.0, sdt 320.75, max 1200.00, min 90.00\n y: mean 44441.20, sdt 16539.07, max 79329.80, min 10420.06\n \n Execution time = 0.00034523(s)\n \n \n X shape = (5000, 1)\n y shape = (5000, 1)\n \n X: mean 645.0, sdt 320.49, max 1200.00, min 90.00\n y: mean 44266.45, sdt 16549.26, max 80368.79, min 5516.63\n \n Execution time = 0.00048804(s)\n \n \n X shape = (10000, 1)\n y shape = (10000, 1)\n \n X: mean 645.0000610351562, sdt 320.46, max 1200.00, min 90.00\n y: mean 44271.82, sdt 16555.38, max 85085.10, min 3211.80\n \n Execution time = 0.00039077(s)\n \n \n X shape = (100000, 1)\n y shape = (100000, 1)\n \n X: mean 645.0000610351562, sdt 320.43, max 1200.00, min 90.00\n y: mean 44233.35, sdt 16525.27, max 85116.39, min 2534.62\n \n Execution time = 
0.00301242(s)\n \n \n X shape = (10000000, 1)\n y shape = (10000000, 1)\n \n X: mean 644.9998779296875, sdt 320.43, max 1200.00, min 90.00\n y: mean 44250.33, sdt 16514.00, max 89948.86, min 199.12\n \n Execution time = 0.29421997(s)\n \n\n\n# Exercise 2\n\nLet us test the code with $d>1$. \nWe will use the data we have collected in our first class. The [file](https://edisciplinas.usp.br/pluginfile.php/5982803/course/section/6115454/QT1data.csv) can be found on e-disciplinas. \n\nLet us try to predict the weight based on one or more features.\n\n\n```python\nimport pandas as pd\n\n# load the dataset\ndf = pd.read_csv('QT1data.csv')\ndf.head()\n```\n\n\n\n\n<div>\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
SexAgeHeightWeightShoe numberTrouser number
0Female53154593640
1Male23170564038
2Female23167633740
3Male21178784040
4Female25153583638
\n
\n\n\n\n\n```python\ndf.describe()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
AgeHeightWeightShoe number
count130.000000130.000000130.000000130.000000
mean28.238462170.68461570.23846239.507692
std12.38704211.56849115.5348092.973386
min3.000000100.00000015.00000024.000000
25%21.000000164.25000060.00000038.000000
50%23.000000172.00000069.50000040.000000
75%29.000000178.00000080.00000041.000000
max62.000000194.000000130.00000046.000000
\n
\n\n\n\n\n```python\n# Our target variable is the weight\ny = df.pop('Weight').values\ny\n```\n\n\n\n\n array([ 59, 56, 63, 78, 58, 89, 68, 83, 70, 56, 65, 66, 78,\n 75, 47, 68, 65, 99, 80, 62, 60, 84, 91, 60, 15, 85,\n 56, 62, 69, 78, 60, 48, 66, 85, 101, 74, 52, 52, 80,\n 72, 75, 78, 61, 74, 70, 90, 66, 79, 80, 65, 90, 69,\n 58, 63, 62, 73, 55, 65, 62, 75, 48, 59, 74, 80, 51,\n 90, 58, 117, 77, 75, 56, 50, 67, 93, 70, 76, 85, 50,\n 86, 96, 63, 56, 90, 95, 130, 70, 83, 70, 64, 57, 54,\n 69, 53, 28, 62, 68, 73, 54, 75, 85, 62, 69, 55, 82,\n 84, 52, 64, 73, 86, 77, 64, 65, 55, 50, 98, 77, 51,\n 66, 83, 61, 80, 81, 76, 78, 70, 75, 72, 80, 90, 53])\n\n\n\n## 2.1. One feature ($d=1$)\n\nWe will use 'Height' as the input feature and predict the weight\n\n\n```python\nfeature_cols = ['Height']\nX = df.loc[:, feature_cols]\nX.shape\n```\n\n\n\n\n (130, 1)\n\n\n\nWrite the code for computing the following\n- compute the regression weights using $\\mathbf{X}$ and $\\mathbf{y}$\n- compute the prediction\n- compute the $R^2$ value\n- plot the regression graph (use appropriate values for the parameters of function plot_points_regression())\n\n\n```python\n# START OF YOUR CODE:\nw = normal_equation_weights(X, y)\nprediction = normal_equation_prediction(X, w)\nr_2 = r2_score(y, prediction)\nplot_points_regression(X,\n y,\n title='Weight based on height prediction',\n xlabel=\"Height\",\n ylabel='Weight',\n prediction=prediction,\n legend=True,\n r_squared=r_2)\n# END OF YOUR CODE\n```\n\n## 2.2 - Two input features ($d=2$)\n\nNow repeat the exercise with using as input the features 'Height' and 'Shoe number'\n\n- compute the regression weights using $\\mathbf{X}$ and $\\mathbf{y}$\n- compute the prediction\n- compute and print the $R^2$ value\n\nNote that our plotting function can not be used. 
There is no need to do plotting here.\n\n\n```python\n# START OF YOUR CODE:\nfeature_cols = ['Height', 'Shoe number']\nX = df.loc[:, feature_cols]\n\nw = normal_equation_weights(X, y)\nprediction = normal_equation_prediction(X, w)\nr_2 = r2_score(y, prediction)\n\nprint(r_2)\n# END OF YOUR CODE\n```\n\n 0.45381183096658584\n\n\n## 2.3 - Three input features ($d=3$)\n\nNow try with three features. There is no need to do plotting here.\n- compute the regression weights using $\mathbf{X}$ and $\mathbf{y}$\n- compute the prediction\n- compute and print the $R^2$ value\n\n\n```python\n# START OF YOUR CODE:\nfeature_cols = ['Height', 'Shoe number', 'Age']\nX = df.loc[:, feature_cols]\n\nw = normal_equation_weights(X, y)\nprediction = normal_equation_prediction(X, w)\nr_2 = r2_score(y, prediction)\n\nprint(r_2)\n# END OF YOUR CODE\n```\n\n 0.4776499498669615\n\n\n## 2.4 - Your comments\n\nDid you observe anything interesting with varying values of $d$? Comment on it.\n\nYOUR COMMENT BELOW:\n\nAs you increase $d$, the $R^2$ score increases slightly.\n", "meta": {"hexsha": "18fa75b149185901c8be8c3504bacc9e16afd532", "size": 116719, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ep02_linreg_analytic.ipynb", "max_stars_repo_name": "ricardokojo/MAC0460-2021", "max_stars_repo_head_hexsha": "b5d6c85c1b35e9dba9f5443f218a85b0e0845b32", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ep02_linreg_analytic.ipynb", "max_issues_repo_name": "ricardokojo/MAC0460-2021", "max_issues_repo_head_hexsha": "b5d6c85c1b35e9dba9f5443f218a85b0e0845b32", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ep02_linreg_analytic.ipynb", "max_forks_repo_name": "ricardokojo/MAC0460-2021", 
"max_forks_repo_head_hexsha": "b5d6c85c1b35e9dba9f5443f218a85b0e0845b32", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 109.4924953096, "max_line_length": 34588, "alphanum_fraction": 0.816850727, "converted": true, "num_tokens": 6908, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3451052709578724, "lm_q2_score": 0.33458944125318607, "lm_q1q2_score": 0.11546857978332391}} {"text": "\n*This notebook contains course material from [CBE30338](https://jckantor.github.io/CBE30338)\nby Jeffrey Kantor (jeff at nd.edu); the content is available [on Github](https://github.com/jckantor/CBE30338.git).\nThe text is released under the [CC-BY-NC-ND-4.0 license](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode),\nand code is released under the [MIT license](https://opensource.org/licenses/MIT).*\n\n\n< [Realizable PID Control](http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/04.05-Realizable-PID-Control.ipynb) | [Contents](toc.ipynb) | [PID Control - Laboratory](http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/04.10-PID-Control.ipynb) >

\n\n# PID Controller Tuning\n\nWe have previously discussed many of the features that should be included in any practical implementation of PID control. The notebook addresses the core issue of how to find appropriate values for the control constants $K_P$, $K_I$, and $K_D$ in the non-interacting model for PID control\n\n\\begin{align}\nMV & = \\overline{MV} + K_P(\\beta\\ SP - PV) + K_I \\int^{t} (SP - PV)\\ dt + K_D \\frac{d(\\gamma\\ SP - PV)}{dt}\n\\end{align}\n\nwhere we have include setpoint weights $\\beta$ and $\\gamma$ for the proportional and derivative terms, respectively. In the case where the PID model is given in the standard ISA form\n\n\\begin{align}\nMV & = \\overline{MV} + K_c\\left[(\\beta\\ SP - PV) + \\frac{1}{\\tau_I}\\int^{t} (SP - PV)\\ dt + \\tau_D \\frac{d(\\gamma\\ SP - PV)}{dt}\\right]\n\\end{align}\n\nthe equivalent task is to find values for the control gain $K_c$, the integral time constant $\\tau_I$, and derivative time constant $\\tau_D$. The equivalence of these models is established by the following relationships among the parameters\n\n\\begin{align}\n\\begin{array}{ccc}\n\\mbox{ISA} \\rightarrow \\mbox{Non-interacting} & & \\mbox{Non-interacting} \\rightarrow \\mbox{ISA} \\\\\nK_P = K_c & & K_c = K_P\\\\\nK_I = \\frac{K_c}{\\tau_I} & & \\tau_I = \\frac{K_P}{K_I}\\\\\nK_D = K_c\\tau_D & & \\tau_D = \\frac{K_D}{K_P}\n\\end{array}\n\\end{align}\n\n### Empirical Methods\n\nDetermining PID control parameters is complicated by the general absence of process models for most applications. Typically the control implementation takes place in three steps:\n\n1. **Idenfication.** A prescribed experiment is performed to create an empirical model for the response of the process to the manipulated input.\n2. **Control Design.** Given an empirical model, find PID control parameters that provide setpoint response, disturbance rejection, and robustness to modeling errors.\n3. 
**Validation.** Perform a series of test to validate control performance under normal and extreme conditions.\n\nIdentification is normally limited to procedures that can be completed with minimal equipment downtime, and without extensive support from \n\n\u00c5str\u00f6m, Karl J., and Tore H\u00e4gglund. \"Advanced PID control.\" The Instrumentation Systems and Automation Society. 2006.\n\n\u00c5str\u00f6m, Karl J., and Tore H\u00e4gglund. \"Revisiting the Ziegler\u2013Nichols step response method for PID control.\" Journal of process control 14, no. 6 (2004): 635-650.\n\nGarpinger, Olaf, Tore H\u00e4gglund, and Karl J. \u00c5str\u00f6m. \"Performance and robustness trade-offs in PID control.\" Journal of process control 24(2004): 568-577.\n\nThe basic approach to tuning PID controllers is to:\n\n1. Perform a specified experiment to extract key parameters that specify process behavior.\n3. From the process parameters, use formula to determine control constants.\n4. Test the resulting controller for setpoint tracking and disturbance rejection.\n\nThe methods we'll be discussing differ in the type of experiment to be performed, the parameters extracted from experimental results, and the assumptions underlying the choice of control parameters. The methods we'll cover are commonly used in industry, and should be in the toolkit of most chemical engineers.\n\n1. AMIGO\n2. Ziegler-Nichols\n3. 
Relay Tuning\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef fopdt(t, K, tau, theta):\n return K*(1 - np.exp(-(t-theta)/tau))*(t > theta)\n\n\nt = np.linspace(0,600,400)\n\ny = fopdt(t, 2, .1, 10)\n\nplt.plot(t, y)\n```\n\n## PID Reference Implementation\n\n\n```python\ndef PID(Kp, Ki, Kd, MV_bar=0, beta=1, gamma=0):\n # initialize stored data\n t_prev = -100\n P = 0\n I = 0\n D = 0\n S = 0\n N = 5\n \n # initial control\n MV = MV_bar\n \n while True:\n # yield MV, wait for new t, SP, PV, TR\n data = yield MV\n \n # see if a tracking data is being supplied\n if len(data) < 4:\n t, SP, PV = data\n else:\n t, SP, PV, TR = data\n I = TR - MV_bar - P - D\n \n # PID calculations\n P = Kp*(beta*SP - PV)\n I = I + Ki*(SP - PV)*(t - t_prev)\n eD = gamma*SP - PV\n D = N*Kp*(Kd*eD - S)/(Kd + N*Kp*(t - t_prev))\n MV = MV_bar + P + I + D\n \n # Constrain MV to range 0 to 100 for anti-reset windup\n MV = 0 if MV < 0 else 100 if MV > 100 else MV\n I = MV - MV_bar - P - D\n \n # update stored data for next iteration\n S = D*(t - t_prev) + S\n t_prev = t\n```\n\n## AMIGO Tuning\n\n### KLT Model - First Order with Dead Time\n\nAMIGO tuning assumes a so-called KLT process model with three parameters\n\n\\begin{align}\n\\tau \\frac{d y}{dt} + y = K u(t-\\theta)\n\\end{align}\n\nwhere $y$ is the deviation of the process variable from a nominal steady-state value ($PV - \\overline{PV}$), $u$ is a deviation in the manipulated manipulated variable from a nominal value ($MV - \\overline{MV}$), and the parameters have the following descriptions.\n\n| | |\n| :-: | :-: |\n|$K$| static gain |\n|$\\tau$| first-order time constant (T, or Time constant)\n|$\\theta$| time-delay (L, or Lag)\n\nThese parameters can be determined from step testing.\n\n### Tuning Rules\n\nThe AMIGO tuning rules provide values for the PID parameters $K_c$, $\\tau_I$, $\\tau_D$ in addition to setpoint weights $\\beta$ and $\\gamma$.\n\n\\begin{align}\nK_c & = 
\\frac{1}{K}\\left(0.2 + 0.45\\frac{\\tau}{\\theta}\\right) \\\\\n\\\\\n\\tau_I & = \\frac{0.4\\theta + 0.8\\tau}{\\theta + 0.1\\tau}\\theta \\\\\n\\\\\n\\tau_D & = \\frac{0.5\\theta\\tau}{0.3\\theta + \\tau} \\\\\n\\\\\n\\beta & = \\begin{cases} 0 & \\theta \\lt \\tau \\\\ 1 & \\theta \\gt \\tau \\end{cases} \\\\\n\\\\\n\\gamma & = 0\n\\end{align}\n\nFor proportional-integral (PI) control, the tuning rules are\n\n\\begin{align}\nK_c & = \\frac{1}{K}\\left(0.15 + 0.35\\frac{\\tau}{\\theta} - \\frac{\\tau^2}{(\\theta + \\tau)^2}\\right) \\\\\n\\\\\n\\tau_I & = \\left(0.35 + \\frac{13\\tau^2}{\\tau^2 + 12\\theta\\tau + 7\\theta^2} \\right)\\theta \\\\\n\\\\\n\\beta & = \\begin{cases} 0 & \\theta \\lt \\tau \\\\ 1 & \\theta \\gt \\tau \\end{cases} \\\\\n\\\\\n\\gamma & = 0\n\\end{align}\n\nBased on extensive simulation studies, the AMIGO tuning rules generally provide good performance for systems dynamics that are dominated by time lag $\\theta > \\tau$. The tuning rules are generally found to be overly conservative for $\\theta < \\tau$.\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nr = np.linspace(0.05,10)\nkr = 0.2 + 0.45/r\nir = r*(0.4*r + 0.8)/(r + 0.1)\ndr = 0.5*r/(0.3*r + 1)\nplt.figure(figsize=(8,6))\nplt.loglog(r,kr,r,ir,r,dr)\nplt.legend(['K K_c','Ti / T','Td / T'])\nplt.xlabel('theta / T')\nplt.grid(True, which='both')\n```\n\n## Ziegler Nichols Tuning\n\n## Relay Tuning\n\n\n```python\ndef relay(SP, a = 5):\n MV = 0\n while True:\n PV = yield MV\n MV_prev = MV\n MV = 100 if PV < SP - a else 0 if PV > SP + a else MV_prev\n```\n\n\n```python\n%matplotlib inline\nfrom tclab import clock, setup, Historian, Plotter\n\nTCLab = setup(connected=False, speedup=20)\n\ntfinal = 1200\nMV_bar = 50\nhMV = 20\n\nwith TCLab() as lab:\n h = Historian([('SP', lambda: SP), ('T1', lambda: lab.T1), ('Q1', lab.Q1)])\n p = Plotter(h, 200)\n T1 = lab.T1\n for t in clock(tfinal, 1):\n if t < 600:\n SP = lab.T1\n MV = MV_bar\n else:\n MV 
= (MV_bar - hMV) if (lab.T1 > SP) else (MV_bar + hMV)\n lab.Q1(MV)\n p.update(t) # update information display\n```\n\n\n```python\nPu = 140\nh = 20\na = 1.5\n\nKu = 4*h/a/3.14\n\n```\n\n\n```python\nKu\n```\n\n\n\n\n 16.985138004246284\n\n\n\n\n```python\nKp = Ku/2\nTi = Pu/2\nTd = Pu/8\n\nKi = Kp/Ti\nKd = Kp*Td\nprint(Kp, Ki, Kd)\n```\n\n 8.492569002123142 0.12132241431604489 148.619957537155\n\n\n\n```python\n%matplotlib inline\nfrom tclab import clock, setup, Historian, Plotter\n\nTCLab = setup(connected=False, speedup=10)\n\ncontroller = PID(Kp, Ki, Kd, beta=1, gamma=1) # create pid control\ncontroller.send(None) # initialize\n\ntfinal = 600\n\nwith TCLab() as lab:\n h = Historian([('SP', lambda: SP), ('T1', lambda: lab.T1), ('MV', lambda: MV), ('Q1', lab.Q1)])\n p = Plotter(h, tfinal)\n T1 = lab.T1\n for t in clock(tfinal, 2):\n SP = T1 if t < 50 else 50 # get setpoint\n PV = lab.T1 # get measurement\n MV = controller.send([t, SP, PV]) # compute manipulated variable\n lab.Q1(MV) # apply \n p.update(t) # update information display\n```\n\n## Exercise: Compare the performance of these tuning rules.\n\nTo be written.\n\n\n```python\n\n```\n\n\n< [Realizable PID Control](http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/04.05-Realizable-PID-Control.ipynb) | [Contents](toc.ipynb) | [PID Control - Laboratory](http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/04.10-PID-Control.ipynb) >

\n", "meta": {"hexsha": "9d580f57197e33076c3af0fe5ec261bbb3352068", "size": 143115, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Mathematics/Mathematical Modeling/04.06-PID-Controller-Tuning.ipynb", "max_stars_repo_name": "okara83/Becoming-a-Data-Scientist", "max_stars_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Mathematics/Mathematical Modeling/04.06-PID-Controller-Tuning.ipynb", "max_issues_repo_name": "okara83/Becoming-a-Data-Scientist", "max_issues_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematics/Mathematical Modeling/04.06-PID-Controller-Tuning.ipynb", "max_forks_repo_name": "okara83/Becoming-a-Data-Scientist", "max_forks_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T15:41:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:47:40.000Z", "avg_line_length": 272.6, "max_line_length": 31280, "alphanum_fraction": 0.9145931593, "converted": true, "num_tokens": 3019, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4493926344647597, "lm_q2_score": 0.2568319970758679, "lm_q1q2_score": 0.11541840778076973}} {"text": "Probabilistic Programming and Bayesian Methods for Hackers \n========\n\nWelcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). 
The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\n#### Looking for a printed version of Bayesian Methods for Hackers?\n\n_Bayesian Methods for Hackers_ is now a published book by Addison-Wesley, available on [Amazon](http://www.amazon.com/Bayesian-Methods-Hackers-Probabilistic-Addison-Wesley/dp/0133902838)! \n\n\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. 
\n\nThe Bayesian world-view interprets probability as measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nFor this to be clearer, we consider an alternative interpretation of probability: *Frequentist*, known as the more *classical* version of statistics, assumes that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying across all these realities, the frequency of occurrences defines the probability. \n\nBayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. 
Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. 
\n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. 
This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. 
As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. 
Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\" )\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to } )\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. 
Bayesian inference merely uses it to connect prior probabilities $P(A)$ with an updated posterior probabilities $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example, I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data? More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data. \n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).\n\n\n```python\n\"\"\"\nThe book uses a custom matplotlibrc file, which provides the unique styles for\nmatplotlib plots. If executing this book, and you wish to use the book's\nstyling, provided are two options:\n 1. Overwrite your own matplotlibrc file with the rc-file provided in the\n book's styles/ dir. See http://matplotlib.org/users/customizing.html\n 2. Also in the styles is bmh_matplotlibrc.json file. This can be used to\n update the styles in only this notebook. Try running the following code:\n\n import json, matplotlib\n s = json.load( open(\"../styles/bmh_matplotlibrc.json\") )\n matplotlib.rcParams.update(s)\n\n\"\"\"\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. 
LOOK AT PICTURE, MICHAEL!\n%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj. prior.\nfor k, N in enumerate(n_trials):\n sx = plt.subplot(len(n_trials) / 2, 2, k + 1)\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials) - 1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head). 
As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$ pass. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
\n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2 * p / (1 + p), color=\"#348ABD\", lw=3)\n# plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2 * (0.2) / 1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Is my code bug-free?\")\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. \n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1. / 3, 2. 
/ 3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0 + 0.25, .7 + 0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.ylim(0,1)\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. Things like populations, movie ratings, and number of votes are all discrete random variables. 
Discrete random variables become more clear when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variable can take on arbitrarily exact values. For example, temperature, speed, time, color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. it is a combination of the above two categories. \n\n#### Expected Value\nExpected value (EV) is one of the most important concepts in probability. The EV for a given probability distribution can be described as \"the mean value in the long run for many repeated samples from that distribution.\" To borrow a metaphor from physics, a distribution's EV acts like its \"center of mass.\" Imagine repeating the same experiment many times over, and taking the average over each outcome. The more you repeat the experiment, the closer this average will become to the distributions EV. (side note: as the number of repeated experiments goes to infinity, the difference between the average outcome and the EV becomes arbitrarily small.)\n\n### Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots, \\; \\; \\lambda \\in \\mathbb{R}_{>0} $$\n\n$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. 
For the Poisson distribution, $\\lambda$ can be any positive number. By increasing $\\lambda$, we add more probability to larger values, and conversely by decreasing $\\lambda$ we add more probability to smaller values. One can describe $\\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. \n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. 
They assign positive probability to every non-negative integer.\n\n\n```python\nfrom IPython.core.pylabtools import figsize  # figsize() sets the default figure size\nimport scipy.stats as stats\n\nfigsize(12.5, 4)\n\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\")\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of a continuous random variable is one with an *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. 
\n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1. / l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1. / l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0, 1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. 
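Before moving on, the problem of "going backwards" from $Z$ to $\lambda$ can be made concrete with a quick simulation. (This snippet is not part of the original text; it is a minimal sketch that assumes only NumPy, and the seed, sample size, and value of $\lambda$ are arbitrary choices.)

```python
import numpy as np

# Simulate the situation described above: nature knows lambda, the analyst only sees Z.
rng = np.random.default_rng(0)
true_lambda = 4.25                      # hidden from the analyst in the real world
z = rng.poisson(true_lambda, size=10_000)

# Since E[Z | lambda] = lambda, the sample mean is a natural point estimate of lambda.
lambda_hat = z.mean()
print(round(lambda_hat, 2))
```

A point estimate like `lambda_hat` is a single number with no statement of uncertainty attached; the Bayesian view instead treats $\lambda$ itself as a quantity with a probability distribution over its possible values.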
\n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. 
So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. 
Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC\n-----\n\nPyMC is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why PyMC is so cool.\n\nWe will model the problem above using PyMC. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC framework. \n\nB. 
Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC code is easy to read. The only novel thing should be the syntax, and I will interrupt the code to explain individual sections. Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables:\n\n\n```python\nimport pymc as pm\n\nalpha = 1.0 / count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\n\nwith pm.Model() as model:\n lambda_1 = pm.Exponential(\"lambda_1\", alpha)\n lambda_2 = pm.Exponential(\"lambda_2\", alpha)\n tau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data)\n```\n\nIn the code above, we create the PyMC variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC's *stochastic variables*, so-called because they are treated by the back end as random number generators. 
We can demonstrate this fact by evaluating them a few times; each call to `eval()` returns a fresh random draw.\n\n\n```python\nprint(\"Random output:\", tau.eval(), tau.eval(), tau.eval())\n```\n\n Random output: 46 26 32\n\n\n\n```python\n@pm.deterministic\ndef lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):\n out = np.zeros(n_count_data)\n out[:tau] = lambda_1 # lambda before tau is lambda1\n out[tau:] = lambda_2 # lambda after (and including) tau is lambda2\n return out\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n`@pm.deterministic` is a decorator that tells PyMC this is a deterministic function. That is, if the arguments were deterministic (which they are not), the output would be deterministic as well. Deterministic functions will be covered in Chapter 2. \n\n\n```python\nobservation = pm.Poisson(\"obs\", lambda_, value=count_data, observed=True)\n\nmodel = pm.Model([observation, lambda_1, lambda_2, tau])\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `value` keyword. We also set `observed = True` to tell PyMC that this should stay fixed in our analysis. Finally, PyMC wants us to collect all the variables of interest and create a `Model` instance out of them. This makes our life easier when we retrieve the results.\n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. 
We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```python\n# Mysterious code to be explained in Chapter 3.\nmcmc = pm.MCMC(model)\nmcmc.sample(40000, 10000, 1)\n```\n\n [-----------------100%-----------------] 40000 of 40000 complete in 6.5 sec\n\n\n```python\nlambda_1_samples = mcmc.trace('lambda_1')[:]\nlambda_2_samples = mcmc.trace('lambda_2')[:]\ntau_samples = mcmc.trace('tau')[:]\n```\n\n\n```python\nfigsize(12.5, 10)\n# histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", density=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", density=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data) - 20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? 
Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. 
For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. \n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of 
text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\n# type your code here.\n```\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```python\n# type your code here.\n```\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45. That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC part. Just consider all instances where `tau_samples < 45`.)\n\n\n```python\n# type your code here.\n```\n\n### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg/).\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Patil, A., D. Huard and C.J. Fonnesbeck. 2010. 
\nPyMC: Bayesian Stochastic Modelling in Python. Journal of Statistical Software, 35(4), pp. 1-81.\n- [4] Lin, Jimmy and Alek Kolcz. 2012. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, Scottsdale, Arizona.\n- [5] Cronin, Beau. 2013. \"Why Probabilistic Programming Matters.\" Online posting, 24 Mar 2013.\n\n\n```python\nfrom IPython.core.display import HTML\n\n\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n# Notebook contents: \n\nThis notebook contains a lecture. The code for generating the plots is found at the end of the notebook. Links below.\n\n- [presentation](#Session-11:)\n- [code for plots](#Code-for-plots)\n\n# Session 11:\n## Linear regression and regularization\n\n*Andreas Bjerre-Nielsen*\n\n## Vaaaamos\n\n\n```python\nimport warnings\nfrom sklearn.exceptions import ConvergenceWarning\nwarnings.filterwarnings(action='ignore', category=ConvergenceWarning)\n\nimport matplotlib.pyplot as plt\nimport numpy as np \nimport pandas as pd \nimport seaborn as sns\n```\n\n# Introduction\n\n## It sucks not being able to complete all the exercises...\n\n- We know, we feel sorry, we have been there. The exercises help you grow. It is a bonus if you feel bored.\n\n> #### Hadley Wickham\n\n> The bad news is that whenever you learn a new skill you’re going to suck. It’s going to be frustrating. The good news is that this is typical and happens to everyone and it is only temporary. You can’t go from knowing nothing to becoming an expert without going through a period of great frustration and great suckiness.\n\n> #### Kosuke Imai\n\n> One can learn data analysis only by doing, not by reading.\n\n## Supervised problems (1)\n*How do we distinguish between problems?*\n\n\n```python\nf_identify_question\n```\n\n## Supervised problems (2)\n*The two canonical problems*\n\n\n```python\nf_identify_answer\n```\n\n## Supervised problems (3)\n*Which models have we seen for classification?*\n\n- perceptron\n\n- adaline \n\n- logistic regression\n\n## Agenda\n1. [What is prediction](#What-is-prediction)\n1. 
[Modelling data: overfitting vs underfitting](#Modelling-data:-overfitting-vs-underfitting)\n1. [Linear regression models: exact vs. approximate](#Regression-models:-exact-vs.-approximate)\n1. [The curse of overfitting and regularization](#The-curse-of-overfitting-and-regularization)\n1. [Implementation details](#Implementation-details)\n\n# What is prediction\n\n## Two agendas (1)\n\nWhat are the objectives of empirical research? \n\n1. *causation*: what is the effect of a particular variable on an outcome? \n2. *prediction*: find some function that provides a good prediction of $y$ as a function of $x$\n\n## Two agendas (2)\n\nHow might we express the agendas in a model?\n\n$$ y = \\alpha + \\beta x + \\varepsilon $$\n\n- *causation*: interested in $\\hat{\\beta}$ \n\n- *prediction*: interested in $\\hat{y}$ \n\n\n## Two agendas (3)\n\nMight these two agendas be related at a deeper level? \n\nCan prediction quality inform us about how to make causal models?\n\n# Modelling data: overfitting vs underfitting\n\n## Model complexity (1)\n*What does a model of low complexity look like in regression problems?*\n\n\n```python\nf_complexity[0]\n```\n\n## Model complexity (2)\n*What does medium model complexity look like?*\n\n\n```python\nf_complexity[1]\n```\n\n## Model complexity (3)\n*What does high model complexity look like?*\n\n\n```python\nf_complexity[2]\n```\n\n## Model fitting (1)\n*Quiz (1 min.): Which model fitted the data best?*\n\n\n```python\nf_bias_var['regression'][2]\n```\n\n## Model fitting (2)\n*What does underfitting and overfitting look like for classification?*\n\n\n```python\nf_bias_var['classification'][2]\n```\n\n# Regression models: exact vs. 
approximate\n\n## Estimation (1)\n*Do we already know some ways to estimate regression models?*\n\n- Social scientists know all about the Ordinary Least Squares (OLS).\n- Some properties of OLS\n - Is applied to solve linear models.\n - Estimates both parameters and their standard deviation.\n - Is the best linear unbiased estimator under regularity conditions. \n \n\n*How is OLS estimated?*\n\n- $\\beta=(\\textbf{X}^T\\textbf{X})^{-1}\\textbf{X}^T\\textbf{y}$ \n - derived by solving for $\\beta$ in the FOC: $ X'y=X'X\\beta$ \n - note: equivalent to: $ X'\\varepsilon=0$\n\n- computation requires no perfect multicollinearity.\n\n## Estimation (2)\n*How might we estimate a linear regression model?*\n\n- first order methods (e.g. gradient descent)\n- second order methods (Newton, quasi-Newton)\n - often faster, but may not always work\n- what about local minima?\n - not a big problem in this course as we use linear models only\n - we can do a grid search over random starting values\n\n*So what the hell was gradient descent?*\n\n- repeat the following: compute errors, multiply with features, and update coefficients\n\n## Estimation (3)\n*Can you explain that in detail?*\n\n- Yes, like with Adaline, we minimize the sum of squared errors (SSE): \n\\begin{align}SSE&=\\boldsymbol{e}^{T}\\boldsymbol{e}\\\\\\boldsymbol{e}&=\\textbf{y}-\\textbf{X}\\textbf{w}\\end{align}\n\n\n```python\nX = np.random.normal(size=(3,2))\ny = np.random.normal(size=(3))\nw = np.random.normal(size=(3))\n\ne = y-(w[0]+X.dot(w[1:]))\nSSE = e.T.dot(e)\n```\n\n## Estimation (4)\n*And what about the updating..? 
What is it about the first-order derivative?*\n\n\\begin{align}\n\\frac{\\partial SSE}{\\partial\\hat{\\textbf{w}}}&=-2\\cdot\\textbf{X}^T\\textbf{e}\\qquad\\text{(the gradient)}\\\\\n \\Delta\\hat{\\textbf{w}}&=\\eta\\cdot\\textbf{X}^T\\textbf{e}\\qquad\\text{(gradient descent; the factor 2 is absorbed into }\\eta\\text{)}\\\\\n &=\\eta\\cdot\\textbf{X}^T(\\textbf{y}-\\hat{\\textbf{y}})\\\\ \n &=\\eta\\cdot\\textbf{X}^T(\\textbf{y}-\\textbf{X}\\hat{\\textbf{w}})\n\\end{align}\n\n\n```python\neta = 0.001 # learning rate\nfod = X.T.dot(e)\nupdate_vars = eta*fod\nupdate_bias = eta*e.sum()\n```\n\n## Estimation (5)\n*What are some computational advantages relative to OLS?*\n\n- OLS \n - only works on linear models\n - Quadratic scaling in number of variables ($K$) is slow!\n - Computation complexity $\\mathcal{O}(K^2N)$ ([read more](https://math.stackexchange.com/questions/84495/computational-complexity-of-least-square-regression-operation))\n\n- Approximate methods: e.g. gradient descent \n - Works despite high multicollinearity\n - Scales well: can be applied in subsets to very large datasets \n - We only need a subset in memory \n - Note: not guaranteed convergence time!\n - Works on non-linear problems, e.g. neural networks.\n\n## Fitting a polynomial (1)\nPolynomial: $f(x) = 2+x^4$\n\nTry models of increasing order polynomials. \n\n- Split data into train and test (50/50)\n- For polynomial order 0 to 15:\n - Iteration n: $y = \\sum_{k=0}^{n}(\\beta_k\\cdot x^k)+\\varepsilon$. 
(Taylor expansion)\n - Estimate order n model on training data\n - Evaluate on test data with MSE ($=SSE/n$, plotted on a log scale)\n\n## Fitting a polynomial (2)\nWe generate samples of data from the true model (fourth order polynomial).\n\n\n```python\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.linear_model import LinearRegression\n\ndef true_fct(X):\n return 2+X**4\n\nn_samples = 25\nnp.random.seed(0)\n\nX_train = np.random.normal(size=(n_samples,1))\ny_train = true_fct(X_train).reshape(-1) + np.random.randn(n_samples) \n\nX_test = np.random.normal(size=(n_samples,1))\ny_test = true_fct(X_test).reshape(-1) + np.random.randn(n_samples)\n```\n\n## Fitting a polynomial (3)\nWe estimate the polynomials and store the MSE for train and test:\n\n\n```python\nfrom sklearn.metrics import mean_squared_error as mse\n\ntest_mse = []\ntrain_mse = []\nparameters = []\n\nmax_degree = 15\ndegrees = range(max_degree+1)\n\nfor p in degrees:\n X_train_p = PolynomialFeatures(degree=p).fit_transform(X_train)\n X_test_p = PolynomialFeatures(degree=p).fit_transform(X_test)\n reg = LinearRegression().fit(X_train_p, y_train)\n train_mse += [mse(reg.predict(X_train_p),y_train)] \n test_mse += [mse(reg.predict(X_test_p),y_test)] \n parameters.append(reg.coef_)\n```\n\n## Fitting a polynomial (4)\n*So what happens to the model performance in- and out-of-sample?*\n\n\n```python\ndegree_index = pd.Index(degrees,name='Polynomial degree ~ model complexity')\n\nax = pd.DataFrame({'Train set':train_mse, 'Test set':test_mse})\\\n .set_index(degree_index).plot(figsize=(14,5), logy=True)\nax.set_ylabel('Mean squared error')\n```\n\n## Fitting a polynomial (5)\n*Quiz: Why does it go wrong on the test data?*\n\n- more spurious parameters \n - (we include variables beyond those in the true model, i.e. 
$x^4$ and the bias term)\n- the coefficient size increases (next slide)\n\n## Fitting a polynomial (6)\n*What do you mean by coefficient size increase?*\n\nPlot of mean coefficient/weight sizes.\n\n\n```python\norder_idx = pd.Index(degrees,name='Polynomial order')\nax = pd.DataFrame(parameters,index=order_idx)\\n.abs().mean(1).plot(figsize=(14,5),logy=True)\nax.set_ylabel('Mean parameter size')\n```\n\n## Fitting a polynomial (7)\n*How else could we visualize this problem?*\n\n\n```python\nf_bias_var['regression'][2]\n```\n\n# The curse of overfitting and regularization\n\n## Looking for a remedy\n*How might we solve the overfitting problem?*\n\n- too many variables (spurious relations)\n- excessive magnitude of the coefficient size of variables \n\nCould we incorporate these two issues in our optimization problem?\n\n## Regularization (1)\n\n*Why do we regularize?*\n\n- To mitigate overfitting > better model predictions\n\n*How do we regularize?*\n\n- We make models which are less complex:\n - reducing the **number** of coefficients;\n - reducing the **size** of the coefficients.\n\n## Regularization (2)\n\n*What does regularization look like?*\n\nWe add a penalty term to our optimization procedure:\n \n$$ \text{arg min}_\beta \, \underset{\text{MSE=SSE/n}}{\underbrace{E[(y_0 - \hat{f}(x_0))^2]}} + \underset{\text{penalty}}{\underbrace{\lambda \cdot R(\beta)}}$$\n\nIntroduction of penalties implies that increased model complexity has to be met with large increases in the precision of estimates.\n\n## Regularization (3)\n\n*What are some commonly used penalty functions?*\n\nThe two most common penalty functions are L1 and L2 regularization.\n\n- L1 regularization (***Lasso***): $R(\beta)=\sum_{j=1}^{p}|\beta_j|$ \n - Makes coefficients sparse, i.e. 
selects variables by removing some (if $\\lambda$ is high)\n \n \n- L2 regularization (***Ridge***): $R(\\beta)=\\sum_{j=1}^{p}\\beta_j^2$\n - Reduce coefficient size\n - Fast due to analytical solution\n \n*To note:* The *Elastic Net* uses a combination of L1 and L2 regularization.\n\n## Regularization (4)\n\n*How the Lasso (L1 reg.) deviates from OLS*\n\n

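The figure for this slide is not included here. As a substitute sketch (a toy example of my own, not from the original slides), the L1 penalty can be seen driving spurious coefficients exactly to zero while OLS keeps all of them nonzero:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 10))
# Only the first two features enter the true model
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.5).fit(X, y)  # alpha plays the role of lambda

# OLS assigns every feature a (small) nonzero weight; the L1 penalty
# zeroes out the spurious ones, typically keeping only the true two.
print(np.sum(np.abs(ols.coef_) > 1e-8), np.sum(np.abs(lasso.coef_) > 1e-8))
```

Note that the surviving Lasso coefficients are also shrunk towards zero relative to OLS; that is the price paid for the variable selection.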
\n\n## Regularization (5)\n\n*How the Ridge regression (L2 reg.) deviates from OLS*\n\n
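The figure for this slide is not included here either. As a substitute sketch (again my own toy example), the L2 penalty can be seen taming the exploding weights that OLS produces under near-perfect multicollinearity:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.RandomState(1)
x = rng.normal(size=(50, 1))
# Two nearly collinear copies of the same feature
X = np.hstack([x, x + rng.normal(scale=1e-6, size=(50, 1))])
y = x[:, 0] + rng.normal(scale=0.1, size=50)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)  # alpha is the lambda penalty weight

# OLS weights explode under collinearity; the L2 penalty keeps them
# small, roughly splitting the true unit weight between the two copies.
print(np.abs(ols.coef_).max(), np.abs(ridge.coef_).max())
```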
\n\n## Regularization (6)\n\n*How might we describe the $\lambda$ of Lasso and Ridge?*\n\nThese are hyperparameters that we can optimize over. \n\n- More about this tomorrow.\n\n## Regularization (7)\n\n*Is there a generalization of Lasso and Ridge?*\n\nYes, the elastic net allows both types of regularization. Therefore, it has two hyperparameters.\n\n# Implementation details\n\n## Underfitting remedies\n*Is it possible to solve the underfitting problem?*\n\nYes, there are in general two ways.\n- Using polynomial interactions of all features.\n - This is known as Taylor expansion\n - Note: we need to use regularization to curb the impact of overfitting!\n- Using non-linear models that can capture all patterns.\n - These are called universal approximators\n - We return to an overview of these in Session 14.\n\n## Underfitting remedies (2)\n*Some of the models we see here, e.g. Perceptrons, seem too simple - are they ever useful?*\n\n- No, not for serious machine learning. \n- But for exposition (your learning), yes.\n- However, the perceptron and related models are building blocks for building neural networks.\n\n\n## The devil's in the details (1)\n\n*So we just run regularization?*\n\n# NO\n\nWe need to rescale our features:\n- convert to zero mean\n- standardize to unit standard deviation\n\nCompute in Python:\n- option 1: `StandardScaler` in `sklearn` (RECOMMENDED)\n- option 2: `(X - np.mean(X)) / np.std(X)`\n\n\n\n## The devil's in the details (2)\n*So we just scale our test and train?*\n\n# NO\n\nFit to the distribution in the **training data first**, then rescale train and test! 
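A minimal sketch of the right order of operations (my own example data):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X_train = rng.normal(loc=5, scale=2, size=(100, 3))
X_test = rng.normal(loc=5, scale=2, size=(50, 3))

scaler = StandardScaler().fit(X_train)  # learn mean/std on the TRAIN set only
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)     # reuse the train statistics on test

# The train set is now exactly standardized; the test set is only
# approximately so, because it was scaled with the train statistics.
print(X_train_s.mean(axis=0), X_train_s.std(axis=0))
```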
See more [here](https://stats.stackexchange.com/questions/174823/how-to-apply-standardization-normalization-to-train-and-testset-if-prediction-i).\n\n## The devil's in the details (3)\n*So we just rescale before using polynomial features?*\n\n# NO\n\nOtherwise the interacted variables are not Gaussian distributed.\n\n## The devil's in the details (4)\n*Does sklearn's `PolynomialFeatures` work for more than one variable?*\n\n# YES!\n\n# The end\n[Return to agenda](#Agenda)\n\n# Code for plots\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport requests\nimport seaborn as sns\n\nplt.style.use('ggplot')\n%matplotlib inline\n\nSMALL_SIZE = 16\nMEDIUM_SIZE = 18\nBIGGER_SIZE = 20\n\nplt.rc('font', size=SMALL_SIZE) # controls default text sizes\nplt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title\nplt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels\nplt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels\nplt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels\nplt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize\nplt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title\n```\n\n\n```python\n%run ../ML_plots.ipynb\n```\n", "meta": {"hexsha": "8e6a0cf1e48d9dd743cf3588a198b62d1acbe289", "size": 603600, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "teaching_material/session_11/module_11_slides.ipynb", "max_stars_repo_name": "lukasodgaard/ISDS", "max_stars_repo_head_hexsha": "22f26cddb7817b0e539fd8a225387de8e1112198", "max_stars_repo_licenses": ["MIT", "Unlicense"], "max_stars_count": 66, "max_stars_repo_stars_event_min_datetime": "2020-06-27T18:09:18.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-30T12:57:12.000Z", "max_issues_repo_path": "teaching_material/session_11/module_11_slides.ipynb", "max_issues_repo_name": "lukasodgaard/ISDS", "max_issues_repo_head_hexsha": "22f26cddb7817b0e539fd8a225387de8e1112198", 
"max_issues_repo_licenses": ["MIT", "Unlicense"], "max_issues_count": 50, "max_issues_repo_issues_event_min_datetime": "2020-07-12T20:24:54.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-27T19:36:24.000Z", "max_forks_repo_path": "teaching_material/session_11/module_11_slides.ipynb", "max_forks_repo_name": "lukasodgaard/ISDS", "max_forks_repo_head_hexsha": "22f26cddb7817b0e539fd8a225387de8e1112198", "max_forks_repo_licenses": ["MIT", "Unlicense"], "max_forks_count": 93, "max_forks_repo_forks_event_min_datetime": "2020-07-05T11:28:45.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-09T12:31:51.000Z", "avg_line_length": 414.5604395604, "max_line_length": 77868, "alphanum_fraction": 0.9426507621, "converted": true, "num_tokens": 3411, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4921881357207955, "lm_q2_score": 0.23370635157681108, "lm_q1q2_score": 0.11502749348869944}} {"text": "# Agents: Lab 1\n\n\n```python\nfrom IPython.core.display import HTML\ncss_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'\nHTML(url=css_file)\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n## Conway's Game of Life\n\nA simple agent model is [Conway's Game of Life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life), which is an example of [Cellular automota](https://en.wikipedia.org/wiki/Cellular_automaton). A two-dimensional square grid of cells are either \"dead\" or \"alive\". 
At each iteration, each cell checks its neighbours (including diagonals: each cell has 8 neighbours).\n\n* Any live cell with fewer than two live neighbours dies (\"under-population\")\n* Any live cell with two or three neighbours lives (\"survival\")\n* Any live cell with four or more neighbours dies (\"over-population\")\n* Any dead cell with *exactly* three neighbours lives (\"reproduction\")\n\nAt the boundaries of the grid periodic boundary conditions are imposed.\n\nWrite a function that takes a `numpy` array representing the grid. Test it on some of the [standard example patterns](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life#Examples_of_patterns). The `matplotlib` `imshow` function, and the [matplotlib `FuncAnimation`](http://matplotlib.org/examples/animation/dynamic_image.html) function may help; if running in the notebook, the [instructions on installing and using ffmpeg and html5](https://github.com/numerical-mooc/numerical-mooc/blob/master/lessons/02_spacetime/02_03_1DDiffusion.ipynb) may also be useful.\n\n\n```python\nfrom __future__ import division\n\n%matplotlib inline\nimport numpy\nfrom matplotlib import pyplot, animation\n\nfrom matplotlib import rcParams\nrcParams['font.family'] = 'serif'\nrcParams['font.size'] = 16\nrcParams['figure.figsize'] = (12,6)\n```\n\n\n```python\ndef conway_iteration(grid):\n \"\"\"\n Take one iteration of Conway's game of life.\n \n Parameters\n ----------\n \n grid : array\n (N+2) x (N+2) numpy array representing the grid (1: live, 0: dead)\n \n \"\"\"\n \n # One possible implementation: the outer ring of cells acts as a\n # ghost layer that imposes the periodic boundary conditions.\n grid[0, :] = grid[-2, :]\n grid[-1, :] = grid[1, :]\n grid[:, 0] = grid[:, -2]\n grid[:, -1] = grid[:, 1]\n # Count the 8 neighbours of every interior cell\n neighbours = sum(grid[1+dx:grid.shape[0]-1+dx, 1+dy:grid.shape[1]-1+dy] for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))\n live = grid[1:-1, 1:-1] == 1\n # Survival with two or three neighbours; reproduction with exactly three\n grid[1:-1, 1:-1] = numpy.where((live & ((neighbours == 2) | (neighbours == 3))) | (~live & (neighbours == 3)), 1, 0)\n \n return grid\n```\n\n\n```python\n# Try the loaf - this is static\n\ngrid_loaf = numpy.array([[0,0,0,0,0,0,0,0],\n [0,0,0,0,0,0,0,0],\n [0,0,0,1,1,0,0,0],\n [0,0,1,0,0,1,0,0],\n [0,0,0,1,0,1,0,0],\n [0,0,0,0,1,0,0,0],\n [0,0,0,0,0,0,0,0],\n [0,0,0,0,0,0,0,0]])\n\nfig = pyplot.figure()\nim = pyplot.imshow(grid_loaf[1:-1,1:-1], cmap=pyplot.get_cmap('gray'))\n\ndef init():\n im.set_array(grid_loaf[1:-1,1:-1])\n return im,\n\ndef 
animate(i):\n conway_iteration(grid_loaf)\n im.set_array(grid_loaf[1:-1,1:-1])\n return im,\n\n# This will only work if you have ffmpeg installed\n\nanim = animation.FuncAnimation(fig, animate, init_func=init, interval=50, frames=10, blit=True)\n```\n\n\n```python\nHTML(anim.to_html5_video())\n```\n\n\n\n\n\n\n\n\nCreate some random $256 \\times 256$ grids and see what behaviour results.\n\n\n```python\n\n```\n\n## Cellular Automata\n\nThe Game of Life is an example of *cellular automata*: a \"grid\" containing cells representing some model is updated in discrete timesteps according to some rules, usually involving neighbouring cells. Each cell can be thought of as an independent player of the game - an agent - that interacts through its neighbours in order to evolve.\n\nAs an example of a cellular automata model, consider traffic flow. A road is modelled as a grid with one spatial dimension containing $N$ cells. The cell either contains a car (has value $1$) or doesn't (has value $0$). If the space in front of the car is empty it moves forwards; if not, it stays where it is. Periodic boundary conditions are used. We phrase this in terms of \"road locations\" $R_i^n$, so that $R_i^{n+1} = 0$ except for:\n\n\\begin{align}\n R_i^{n+1} & = 1 & \\text{if $R_i^n=1$ and $R_{i+1}^n=1$ (car does not move), or} \\\\\n R_i^{n+1} & = 1 & \\text{if $R_{i-1}^n=1$ and $R_i^n=0$ (car moves forwards)}.\n\\end{align}\n\nA useful diagnostic quantity for this model is the *average velocity*; the number of cars that moved in one step divided by the total number of cars on the road.\n\n### Initial data and density\n\nFor initial data, we choose the *density* of cars on the road to be between $0$ and $1$. Then, for each grid cell, compute uniform random numbers for each cell and initialize the cell according to the density:\n\n### Evolution\n\nConstruct \"roads\" as above and evolve according to the update rule. See how the average velocity behaves. Test the limiting cases. 
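One possible sketch of the update rule and the average-velocity diagnostic (the helper name `road_step` is my own), using `numpy.roll` to implement the periodic boundary:

```python
import numpy

def road_step(road):
    """Advance every car one timestep; return (new_road, average_velocity)."""
    ahead = numpy.roll(road, -1)        # periodic: the cell in front of each cell
    moves = (road == 1) & (ahead == 0)  # cars with an empty cell in front
    new_road = road.copy()
    new_road[moves] = 0                 # moving cars leave their cells...
    new_road[numpy.roll(moves, 1)] = 1  # ...and arrive one cell forwards
    n_cars = road.sum()
    avg_velocity = moves.sum() / n_cars if n_cars > 0 else 0.0
    return new_road, avg_velocity

# A single car is never blocked: average velocity 1
road, v = road_step(numpy.array([1, 0, 0, 0]))
print(road, v)   # [0 1 0 0] 1.0

# A completely full road is jammed: average velocity 0
road, v = road_step(numpy.array([1, 1, 1, 1]))
print(v)         # 0.0
```

Initializing a road from a chosen density is then, e.g., `road = (numpy.random.uniform(size=N) < density).astype(int)`.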
Plot the behaviour of the average velocity against the density: can you understand why it behaves this way? If needed, plot the late time locations of the \"cars\".\n\n\n```python\n\n```\n\n### Making it more complex\n\nConsider adding another lane to the road. The grid becomes an $N \times 2$ array. The \"fast\" lane can take one additional step every $k$ steps (e.g., if the fast lane is going $10$% faster, then every tenth step the fast lane takes two timesteps instead of one).\n\nWe now need to add rules to change lane. Denote cells in the fast lane by $F^n_i$, and in the slow lane by $S^n_i$. Consider a \"polite\" overtaking move: if $S^n_i=1$ and $S^n_{i+1}=1$ then the car at $S^n_i$ will overtake (move to $F^{n+1}_{i+1}$) *only* if there are no cars in its way ($F^n_{i} = 0 = F^n_{i+1}$) *and* if it will not block a car in the fast lane ($F^n_{i-1}=0$). It will also move back into the slow lane in the same circumstances: if a car in the fast lane at $F^n_{i}=1$ is not blocked by, or blocking, any slow lane cars, i.e. $0 = S^n_i = S^n_{i+1} = S^n_{i-1}$.\n\nInitialize the slow lane only with a certain density of cars. 
Investigate what the average *density* of cars in the fast and slow lanes looks like, depending on the initial density.\n\n\n```python\n\n```\n", "meta": {"hexsha": "89fac60eaafa465b4286a3a8bb5d3e6b4743ebc3", "size": 53882, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "FEEG6016 Simulation and Modelling/05-Agents-Lab-1.ipynb", "max_stars_repo_name": "ngcm/training-public", "max_stars_repo_head_hexsha": "e5a0d8830df4292315c8879c4b571eef722fdefb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2015-06-23T05:50:49.000Z", "max_stars_repo_stars_event_max_datetime": "2016-06-22T10:29:53.000Z", "max_issues_repo_path": "FEEG6016 Simulation and Modelling/05-Agents-Lab-1.ipynb", "max_issues_repo_name": "Jhongesell/training-public", "max_issues_repo_head_hexsha": "e5a0d8830df4292315c8879c4b571eef722fdefb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-11-28T08:29:55.000Z", "max_issues_repo_issues_event_max_datetime": "2017-11-28T08:29:55.000Z", "max_forks_repo_path": "FEEG6016 Simulation and Modelling/05-Agents-Lab-1.ipynb", "max_forks_repo_name": "Jhongesell/training-public", "max_forks_repo_head_hexsha": "e5a0d8830df4292315c8879c4b571eef722fdefb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 24, "max_forks_repo_forks_event_min_datetime": "2015-04-18T21:44:48.000Z", "max_forks_repo_forks_event_max_datetime": "2019-01-09T17:35:58.000Z", "avg_line_length": 103.0248565966, "max_line_length": 31022, "alphanum_fraction": 0.8187520879, "converted": true, "num_tokens": 2521, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.4301473485858429, "lm_q2_score": 0.26588047891687405, "lm_q1q2_score": 0.11436778304682749}} {"text": "```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('fivethirtyeight')\n```\n\n# Definitions in Dynamics: kinematics and kinetics\n\nDynamics (and Applied Mechanics II) is the study of motion and interacting objects. Dynamic models have two sets of definitions:\n\n1. Kinematics: the study of the geometry of motion\n2. Kinetics: the study of forces, work, and impulse\n\n## Kinematics: the geometry of motion\n\n### Position\n\nClassical physics describes the position of an object using three\nindependent coordinates e.g. \n\n$$\mathbf{r}_{P/O} = x\hat{i} + y\hat{j} + z\hat{k}$$\n\nwhere $\mathbf{r}_{P/O}$ is the position of point $P$ _with respect to the point\nof origin_ $O$, $x,~y,~z$ are magnitudes of distance along a\nCartesian coordinate system and $\hat{i},~\hat{j}$ and $\hat{k}$ are\nunit vectors that describe three directions. \n\nA __unit vector__ is a unitless vector with a magnitude of 1. It only describes a direction. In a 3D coordinate system, you can describe the x-, y-, and z-axes using $\hat{i},~\hat{j},~and~\hat{k}$. \n\nIn the figure below, the three unit vectors are plotted as blue arrows and a vector, $\mathbf{v}=-1\hat{i}+1\hat{j}+1\hat{k}$ is plotted in red. \n\n- the values, $[-1,~1,~1]$ are the _components_ of the vector $\mathbf{v}$\n- _components_ depend upon the coordinate system. 
\n- the vector $\mathbf{v}$ describes a magnitude, $|\mathbf{v}| = \sqrt{1^2+1^2+1^2}=\sqrt{3}$ and direction, $\hat{e} = -\frac{1}{\sqrt{3}}\hat{i}+\frac{1}{\sqrt{3}}\hat{j}+\frac{1}{\sqrt{3}}\hat{k}$\n- _unit vectors_ help to quantify the magnitude and direction, but the vector is independent of the _unit vectors_\n\n\n```python\nax = plt.figure().add_subplot(projection='3d')\nax.quiver([0, 0, 0], [0, 0, 0], [0, 0, 0], \n [1, 0, 0], [0, 1, 0], [0, 0, 1], label = 'unit vectors')\nax.quiver(0, 0, 0,\n -1, 1, 1, colors='red', label = 'v = -1i+1j+1k')\nax.set_xlim((-2,2))\nax.set_ylim((2,-2))\nax.set_zlim((-2,2))\nax.legend()\nax.set_xlabel('x-axis')\nax.set_ylabel('y-axis')\nax.set_zlabel('z-axis')\nax.set_title('Unit vectors for x-y-z fixed coordinate system');\n```\n\n### Velocity\n\nThe velocity of an object is the change in position per length of time.\n\n\begin{equation}\n\mathbf{v}_{P/O} = \frac{d\mathbf{r}_{P/O}}{dt} = \dot{x}\hat{i} + \dot{y}\hat{j} +\n\dot{z}\hat{k}\n\end{equation}\n\n> __Note:__ The notation $\dot{x}$ and $\ddot{x}$ is short-hand for writing out\n> $\frac{dx}{dt}$ and $\frac{d^2x}{dt^2}$, respectively.\n\nThe definition of velocity depends upon the change in position of all\nthree independent coordinates, where\n$\frac{d}{dt}(x\hat{i})=\dot{x}\hat{i}$. \n\n### Example - velocity given position\nYou can find velocity based upon position, but you can only find changes\nin position with velocity. Consider tracking the motion of a car driving\ndown a road using GPS. 
You determine its motion and create the position,\n\n$\mathbf{r} = x\hat{i} +y\hat{j}~miles$, where\n\n- $x(t) = t^2 + 3~miles$ \n- $y(t) = 3t - 1~miles$\n- and $t$ is measured in hours\n\nTo get the velocity, calculate $\mathbf{v} = \dot{\mathbf{r}}$\n\n$\mathbf{v} = (2t)\hat{i} +3 \hat{j}$\n\n\n```python\nt = np.linspace(0, 6, 7)\nx = t**2 + 3\ny = 3*t -1\nplt.plot(x,y,'o')\nplt.quiver(x,y,2*t, 3)\nplt.title('Position of car on road every hour'+\n '\nvelocity shown as arrow')\nplt.axis('equal')\nplt.xlabel('x-position (miles)')\nplt.ylabel('y-position (miles)');\n```\n\n## Speed\n\nThe speed of an object is the\n[magnitude](https://www.mathsisfun.com/algebra/vectors.html) of the\nvelocity, \n\n$|\mathbf{v}_{P/O}| = \sqrt{\mathbf{v}\cdot\mathbf{v}} =\n\sqrt{\dot{x}^2 + \dot{y}^2 + \dot{z}^2}$\n\nIn the example above, the speed of the car, $v$, is given by\n\n$v = |\mathbf{v}| = \sqrt{\dot{x}^2+\dot{y}^2} = \sqrt{4t^2 +9}$\n\n## Acceleration\n\nThe acceleration of an object is the change in velocity per length of\ntime. \n\n$\mathbf{a}_{P/O} = \frac{d \mathbf{v}_{P/O} }{dt} = \ddot{x}\hat{i} +\n\ddot{y}\hat{j} + \ddot{z}\hat{k}$\n\nwhere $\ddot{x}=\frac{d^2 x}{dt^2}$ and $\mathbf{a}_{P/O}$ is the\nacceleration of point $P$ _with respect to the point of origin_ $O$. \n\n## Rotation and Orientation\n\nThe definitions of position, velocity, and acceleration all describe a\nsingle point, but dynamic engineering systems are composed of rigid\nbodies, so the orientation of the body is also needed to describe the position of an object. \n\n\n\n_In the figure above, the center of the block is located at\n$r_{P/O}=x\hat{i}+y\hat{j}$ in both the left and right images, but the\ntwo locations are not the same. The orientation of the block is\nimportant for determining the position of all the material points._\n\nIn general, a rigid body has a _pitch_, _yaw_, and _roll_ that describe\nits rotational orientation, as seen in the animation below. We will\nrevisit 3D motion in Module_05. 
For now, we will limit our description of \nmotion to _planar motion_. \n\n\n```python\nfrom IPython.display import YouTubeVideo\nvid = YouTubeVideo(\"li7t--8UZms?loop=1\")\ndisplay(vid)\n```\n\n## Planar motion: angular velocity\n\nPlanar motion means that an object's motion is described using three components:\n\n- x-position\n- y-position\n- orientation ($\theta$)\n\nThe x-y-positions describe a point (such as the center of mass) in the object and $\theta$ describes its rotation. The change in angle over time is called __angular velocity__\n\n- __angular velocity__ has a magnitude and direction, $\mathbf{\omega} = \dot{\theta}\hat{k}$\n- the derivative of angular velocity is __angular acceleration__, $\mathbf{\alpha}=\frac{d\mathbf{\omega}}{dt} = \ddot{\theta}\hat{k}$\n\nNow, you can describe a planar object's motion with\n\n- position, $\mathbf{r}=x\hat{i}+y\hat{j}$, and orientation $\theta$\n- velocity, $\mathbf{v}=\dot{x}\hat{i}+\dot{y}\hat{j}$, and angular velocity $\mathbf{\omega}=\dot{\theta}\hat{k}$\n- acceleration, $\mathbf{a}=\ddot{x}\hat{i}+\ddot{y}\hat{j}$, and angular acceleration $\mathbf{\alpha}=\ddot{\theta}\hat{k}$ \n\n## Kinetics - forces, work-energy, impulse-momentum\n\n### Newton-Euler equations\n\nIn this course, you will use the Newton-Euler equations to relate motion to applied forces and moments\n\n- $\mathbf{F} = m\mathbf{a}$\n- $\mathbf{M} = I\mathbf{\alpha}$\n\nThe Newton-Euler equations describe forces in terms of acceleration and angular acceleration. The $m$ is the mass of the object and the $I$ is the [moment of inertia](https://en.wikipedia.org/wiki/Moment_of_inertia) of the object. You can think of _moment of inertia_ as a way to measure distribution of mass; it's measured in terms of $[kg\cdot m^2]$, in SI units. \n\n### Example - changing a tire\n\nWhen you change a tire, you either need to engage the brakes or leave the tire in contact with the ground. 
If not, you can calculate how quickly the tire will accelerate for an applied moment. \n\nIn this example, an $m=10~kg$ tire with moment of inertia $I =1~kg\cdot m^2$ is able to rotate freely. The tire iron has a 10-N force on both sides, each 0.1-m from the center of the tire. \n\n- $\sum F_x = 0 + 0 = m\ddot{x}$\n- $\sum F_y = 10 - 10 = m\ddot{y}$\n- $\sum M = 0.1\cdot 10 + 0.1 \cdot 10 = I\ddot{\theta}$\n\nBoth the acceleration components are $\ddot{x} = \ddot{y} = 0~m/s^2$. The angular acceleration is $\ddot{\theta}\hat{k} = \frac{2~N\cdot m}{1~kg\cdot m^2} = 2~\frac{rad}{s^2}\hat{k}$. \n\n### Work-Energy equations\n\n__Work__ is defined as a force acting through a distance and/or a moment acting over a given rotation, $dW=F\cdot d\mathbf{r}+Md\theta$. When mechanical __work__ is added to a dynamic system, the energy of the system has to change. If you _ignore changes in temperature_, then there are two possible conversions\n\n- kinetic energy, $T = \frac{1}{2}mv^2 + \frac{1}{2}I\dot{\theta}^2$\n- potential energy, $V$\n\nThe kinetic energy quantifies the speed and mass of an object. For a solid object, like the tire from the last example, this includes the _translational kinetic energy_ $\frac{1}{2}mv^2$ and the _rotational kinetic energy_, $\frac{1}{2}I\dot{\theta}^2$. The potential energy can describe work done by _conservative forces_, such as\n\n- springs, $V_{spring} = \frac{1}{2}kx^2$, where $x$ is the distance the spring is stretched and $k$ is the spring stiffness\n- gravity _near Earth surface_, $V_{gravity} = mgh$, where $h$ is the change in height of the object\n- gravity _general_, $V_{gravity} = \frac{GMm}{r}$, where $G=6.674\times10^{-11} \frac{m^3}{kg \cdot s^2}$, $M$ is mass of Earth, and $r$ is distance from Earth's center\n- ...\n\nPotential energy is independent of velocity and the path an object takes. If you raise a 5-kg bowling ball 1 meter, it has $V=mgh=(5~kg)(9.81~\frac{m}{s^2})(1~m) = 49~J$ of potential energy. 
It does not matter if it was raised slowly or quickly or if it reached the height on a vertical path or an angled path. \n\nThe work-energy equation satisfies the [first law of thermodynamics](https://en.wikipedia.org/wiki/First_law_of_thermodynamics): energy can neither be created nor destroyed\n\n$T_1 + V_1 + W_{1\rightarrow 2} = T_2+V_2$\n\nThis equation states that the mechanical work, $W_{1\rightarrow 2}$, will create a change in the total energy of the system, $T+V$. The subscripts $1~and~2$ describe a starting and ending point. \n\n### Example - elevator motor selection\n\nYou are an engineer asked to select an electric motor for a 1000-kg elevator. The elevator has a 1000-kg counterweight as shown below. What information do you need to select the motor?\n\n\n\nFirst, consider the work done to go from 0-30 m. \n\n$T_1 + V_1 +W_{1\rightarrow 2} = T_2 + V_2$\n\n- $T_1 = T_2 = 0$ since the elevator and counterweight stop at 30 m and 0 m, respectively\n- $V_1 = (1000~kg)(9.81~m/s^2)(30~m)$, the counterweight starting 30 m up\n- $V_2 = (1000~kg)(9.81~m/s^2)(30~m)$, the elevator ending 30 m up\n\nThis means, $W_{1\rightarrow 2}=V_2-V_1=0~J$. How can this be?\n\nThe counterweight is there to balance the weight of the elevator, so no matter what height the elevator reaches, the work done to overcome gravity is $0~J$. Does this mean you can move the elevator with a 0-N-m motor? _not quite_. \n\nThe motor might not need to overcome gravity, but if you want to reach the top floor in a certain amount of time, you need to add kinetic energy to the system. A 30-m building is 10 stories tall. You would want to know how fast the elevator needs to travel. Let's estimate that it takes 20 seconds to go from floor 1 to floor 10. \n\n- $T_1 = 0$\n- $v_2 = \frac{30~m}{20~s} = 1.5~m/s$\n- $T_2 = \frac{1}{2}(1000~kg)(1.5~m/s)^2+\frac{1}{2}(1000~kg)(1.5~m/s)^2$\n\nNow, you have\n\n$W_{1\rightarrow 2}=T_2-T_1=2250~J=2250~N\cdot m$\n\nThe work done to get the elevator to its cruising speed is 2.25 kJ. 
When you stop, you will have to remove 2.25 kJ using the motor or a brake. Now, how _quickly_ should you start and stop? Standard practice for comfort is to [accelerate elevators](https://www.treehugger.com/how-fast-should-elevator-go-4858555#:~:text=The%20elevator%20could%20be%20going,pushing%20the%20limits%20of%20comfort.) at $1.5~m/s^2$.\n\nSo the time it takes to reach maximum speed is $\Delta t = 1~s$. You should select a motor that creates $\dot{W}_{1\rightarrow 2} = \frac{2250~J}{1~s} = 2.25~kW$ of power. \n\n### Impulse-momentum equations\n\nNewton first described the laws of motion in terms of _momentum_, $\mathbf{p} = m\mathbf{v}$, and force. The second law was stated as the equation\n\n$\mathbf{F} = \frac{d}{dt}(m\mathbf{v})$\n\nThe applied force, $\mathbf{F}$, will change the momentum, $m\mathbf{v}$. The same equation is true for the rotation of an object\n\n$\mathbf{M} = \frac{d}{dt}\mathbf{h}$\n\nwhere $\mathbf{h}$ is the angular momentum of an object. In planar systems, $\mathbf{h} = I\dot{\theta}\hat{k}$. \n\nAn _impulse_ is a force or moment applied over a period of time. When there is an impact or explosion, you can use the _impulse-momentum_ equations as follows\n\n- $m\mathbf{v}_1 + \mathbf{F}dt = m\mathbf{v}_2$\n- $I\dot{\theta}_1 + \mathbf{M}dt = I\dot{\theta}_2$\n\nwhere $\mathbf{F}dt$ is the linear impulse, $\mathbf{M}dt$ is the moment-impulse, $m\mathbf{v}$ is linear momentum, and $I\dot{\theta}$ is angular momentum. \n\n### Example - hitting a golf ball\n\nWhen a golf ball is hit by a club, it goes from rest to 45 m/s (~100 mph) almost instantly. If you take the derivative of velocity to get acceleration, you have\n\n$a = \frac{45~m/s}{0~s} = \infty~m/s^2$\n\nwhich would mean that the applied force is [0.045~kg](https://golf.com/gear/golf-balls/how-many-dimples-on-a-golf-ball/#:~:text=But%20in%20the%20modern%20game,1.620%20ounces%2C%20or%2045.93%20grams.)$\cdot \infty~m/s^2=\infty~N$. 
It's just not helpful information, and in reality there must be some finite interval of time over which the speed changes from 0 to $45~m/s$. \n\nInstead, use the _impulse-momentum_ equation to determine the _impulse_ required to change the _momentum_ of the golf ball\n\n$\mathbf{F}dt = m\mathbf{v}_2- m\mathbf{v}_1 = (0.045~kg)(45~m/s) = 2.02~\frac{kg\cdot m}{s} = 2.02~N\cdot s$\n\n\n\n## Wrapping up\n\nIn this notebook you defined:\n\n- Kinematics - the geometry of motion\n - position, $\mathbf{r} = x\hat{i}+y\hat{j}+z\hat{k}$\n - velocity, $\mathbf{v} = \frac{d\mathbf{r}}{dt}$\n - acceleration, $\mathbf{a} = \frac{d\mathbf{v}}{dt}$\n - angular velocity, $\mathbf{\omega} = \dot{\theta}\hat{k}$ _in planar motion_\n - angular acceleration, $\mathbf{\alpha} = \ddot{\theta}\hat{k}$ _in planar motion_\n- Kinetics - the study of forces, work-energy, and impulse-momentum\n - Newton-Euler equations: $\mathbf{F} = m\mathbf{a}$ and $\mathbf{M} = \frac{d}{dt}\mathbf{h}$ \n - work-energy equation: $T_1+V_1 +W_{1\rightarrow 2} = T_2 + V_2$\n - impulse-momentum equation: $m\mathbf{v}_1 + Fdt = m\mathbf{v}_2$ and $I\omega_1 + Mdt = I\omega_2$ _in planar motion_\n \nEvery rigid-body dynamic problem is solved using these kinematic and kinetic equations. You will use a combination of algebra, geometry, and calculus to get final solutions, but the __core engineering concepts__ in dynamics are the _kinematics_ and _kinetics_ equations. 
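The worked numbers above can be double-checked with a few lines of Python (pure arithmetic, using the values quoted in the examples):

```python
# Elevator: kinetic energy added to elevator + counterweight at cruise speed
m = 1000.0             # kg, mass of the elevator and of the counterweight
v_cruise = 30 / 20     # m/s: 30 m traveled in 20 s
T2 = 0.5 * m * v_cruise**2 + 0.5 * m * v_cruise**2
print(T2)              # 2250.0 J

# Reaching 1.5 m/s at a comfortable 1.5 m/s^2 takes 1 s, so the power is
power = T2 / (v_cruise / 1.5)
print(power / 1000)    # 2.25 kW

# Golf ball: impulse needed to change the momentum, about 2.02 N*s
impulse = 0.045 * 45   # kg*m/s
print(impulse)
```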
\n", "meta": {"hexsha": "d3dddfdce206788e1836186330ab835ccd2acf56", "size": 97427, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "module_01/definitions.ipynb", "max_stars_repo_name": "UConn-Cooper/engineering-dynamics", "max_stars_repo_head_hexsha": "d89a591321634905c1c0e3522a3a9f7aab5abbd3", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-16T23:51:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-16T23:51:30.000Z", "max_issues_repo_path": "module_01/definitions.ipynb", "max_issues_repo_name": "UConn-Cooper/engineering-dynamics", "max_issues_repo_head_hexsha": "d89a591321634905c1c0e3522a3a9f7aab5abbd3", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-02-16T01:25:06.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-24T20:29:00.000Z", "max_forks_repo_path": "module_01/definitions.ipynb", "max_forks_repo_name": "UConn-Cooper/engineering-dynamics", "max_forks_repo_head_hexsha": "d89a591321634905c1c0e3522a3a9f7aab5abbd3", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-02-16T01:20:32.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-02T19:19:58.000Z", "avg_line_length": 202.9729166667, "max_line_length": 48308, "alphanum_fraction": 0.889024603, "converted": true, "num_tokens": 4405, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.38861804086755836, "lm_q2_score": 0.2942149783515162, "lm_q1q2_score": 0.11433724848085733}} {"text": "# [ATM 623: Climate Modeling](../index.ipynb)\n\n[Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany\n\n# Lecture 6: Elementary greenhouse models\n\n### About these notes:\n\nThis document uses the interactive [`Jupyter notebook`](https://jupyter.org) format. 
The notes can be accessed in several different ways:\n\n- The interactive notebooks are hosted on `github` at https://github.com/brian-rose/ClimateModeling_courseware\n- The latest versions can be viewed as static web pages [rendered on nbviewer](http://nbviewer.ipython.org/github/brian-rose/ClimateModeling_courseware/blob/master/index.ipynb)\n- A complete snapshot of the notes as of May 2017 (end of spring semester) is [available on Brian's website](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2017/Notes/index.html).\n\n[Also here is a legacy version from 2015](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/Notes/index.html).\n\nMany of these notes make use of the `climlab` package, available at https://github.com/brian-rose/climlab\n\n\n```python\n# Ensure compatibility with Python 2 and 3\nfrom __future__ import print_function, division\n```\n\n## Contents\n\n1. [A single layer atmosphere](#section1)\n2. [Introducing the two-layer grey gas model](#section2)\n3. [Tuning the grey gas model to observations](#section3)\n4. [Level of emission](#section4)\n5. [Radiative forcing in the 2-layer grey gas model](#section5)\n6. [Radiative equilibrium in the 2-layer grey gas model](#section6)\n7. [Summary](#section7)\n\n____________\n\n\n## 1. 
A single layer atmosphere\n____________\n\nWe will make our first attempt at quantifying the greenhouse effect in the simplest possible greenhouse model: a single layer of atmosphere that is able to absorb and emit longwave radiation.\n\n\n```python\nfrom IPython.display import Image\nImage('../images/1layerAtm_sketch.png')\n```\n\n### Assumptions\n\n- Atmosphere is a single layer of air at temperature $T_a$\n- Atmosphere is **completely transparent to shortwave** solar radiation.\n- The **surface** absorbs shortwave radiation $(1-\alpha) Q$\n- Atmosphere is **completely opaque to infrared** radiation\n- Both surface and atmosphere emit radiation as **blackbodies** ($\sigma T_s^4, \sigma T_a^4$)\n- Atmosphere radiates **equally up and down** ($\sigma T_a^4$)\n- There are no other heat transfer mechanisms\n\nWe can now use the concept of energy balance to ask what the temperatures need to be in order to balance the energy budgets at the surface and the atmosphere, i.e. the **radiative equilibrium temperatures**.\n\n\n### Energy balance at the surface\n\n\begin{align}\n\text{energy in} &= \text{energy out} \\\n(1-\alpha) Q + \sigma T_a^4 &= \sigma T_s^4 \\\n\end{align}\n\nThe presence of the atmosphere above means there is an additional source term: downwelling infrared radiation from the atmosphere.\n\nWe call this the **back radiation**.\n\n### Energy balance for the atmosphere\n\n\begin{align}\n\text{energy in} &= \text{energy out} \\\n\sigma T_s^4 &= A\uparrow + A\downarrow = 2 \sigma T_a^4 \\\n\end{align}\n\nwhich means that \n$$ T_s = 2^\frac{1}{4} T_a \approx 1.2 T_a $$\n\nSo we have just determined that, in order to have a purely **radiative equilibrium**, we must have $T_s > T_a$. 
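As a quick numerical check of this algebra (a sketch; it uses the observed emission temperature $T_e = 255$ K, which in this model equals $T_a$, as discussed below):

```python
sigma = 5.67E-8    # W m-2 K-4, Stefan-Boltzmann constant
Ta = 255.0         # K, the single-layer (emission) temperature
Ts = 2**0.25 * Ta  # the radiative equilibrium surface temperature
print(Ts)          # about 303 K

# The atmospheric energy balance closes: absorption equals emission up + down
print(sigma * Ts**4, 2 * sigma * Ta**4)
```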
\n\n*The surface must be warmer than the atmosphere.*\n\n### Solve for the radiative equilibrium surface temperature\n\nNow plug this into the surface equation to find\n\n$$ \frac{1}{2} \sigma T_s^4 = (1-\alpha) Q $$\n\nand use the definition of the emission temperature $T_e$ to write\n\n$$ (1-\alpha) Q = \sigma T_e^4 $$\n\n*In fact, in this model, $T_e$ is identical to the atmospheric temperature $T_a$, since all the OLR originates from this layer.*\n\nSolve for the surface temperature:\n$$ T_s = 2^\frac{1}{4} T_e $$\n\nPutting in observed numbers, $T_e = 255$ K gives a surface temperature of \n$$T_s = 303 ~\text{K}$$\n\nThis model is one small step closer to reality: the surface is warmer than the atmosphere, emissions to space are generated in the atmosphere, and the atmosphere is heated from below, helping to keep the surface warm.\n\nBUT our model now overpredicts the surface temperature by about 15 K.\n\nIdeas about why?\n\nBasically we just need to read our **list of assumptions** above and realize that none of them are very good approximations:\n\n- Atmosphere absorbs some solar radiation.\n- Atmosphere is NOT a perfect absorber of longwave radiation\n- Absorption and emission vary strongly with wavelength *(atmosphere does not behave like a blackbody)*.\n- Emissions are not determined by a single temperature $T_a$ but by the detailed *vertical profile* of air temperature.\n- Energy is redistributed in the vertical by a variety of dynamical transport mechanisms (e.g. convection and boundary layer turbulence).\n\n\n\n____________\n\n\n## 2. Introducing the two-layer grey gas model\n____________\n\nLet's generalize the above model just a little bit to build a slightly more realistic model of longwave radiative transfer.\n\nWe will address two shortcomings of our single-layer model:\n1. No vertical structure\n2. 
100% longwave opacity\n\nRelaxing these two assumptions gives us what turns out to be a very useful prototype model for **understanding how the greenhouse effect works**.\n\n### Assumptions\n\n- The atmosphere is **transparent to shortwave radiation** (still)\n- Divide the atmosphere up into **two layers of equal mass** (the dividing line is thus at the 500 hPa pressure level)\n- Each layer **absorbs only a fraction $\epsilon$** of whatever longwave radiation is incident upon it.\n- We will call the fraction $\epsilon$ the **absorptivity** of the layer.\n- Assume $\epsilon$ is the same in each layer\n\nThis is called the **grey gas** model, where grey here means the emission and absorption have no spectral dependence.\n\nWe can think of this model informally as a \"leaky greenhouse\".\n\nNote that the assumption that $\epsilon$ is the same in each layer is appropriate if the absorption is actually carried out by a gas that is **well-mixed** in the atmosphere.\n\nOf our two most important absorbers:\n\n- CO$_2$ is well mixed\n- H$_2$O is not (mostly confined to the lower troposphere due to the strong temperature dependence of the saturation vapor pressure).\n\nBut we will ignore this aspect of reality for now.\n\nIn order to build our model, we need to introduce one additional piece of physics known as **Kirchhoff's Law**:\n\n$$ \text{absorptivity} = \text{emissivity} $$\n\nSo if a layer of atmosphere at temperature $T$ absorbs a fraction $\epsilon$ of incident longwave radiation, it must emit\n\n$$ \epsilon ~\sigma ~T^4 $$\n\nboth up and down.\n\n### A sketch of the radiative fluxes in the 2-layer atmosphere\n\n\n```python\nImage('../images/2layerAtm_sketch.png')\n```\n\n- Surface temperature is $T_s$\n- Atm. 
temperatures are $T_0, T_1$ where $T_0$ is closest to the surface.\n- absorptivity of atm layers is $\\epsilon$\n- Surface emission is $\\sigma T_s^4$\n- Atm emission is $\\epsilon \\sigma T_0^4, \\epsilon \\sigma T_1^4$ (up and down)\n- Absorptivity = emissivity for atmospheric layers\n- a fraction $(1-\\epsilon)$ of the longwave beam is **transmitted** through each layer\n\n### A fun aside: symbolic math with the `sympy` package\n\nThis two-layer grey gas model is simple enough that we can work out all the details algebraically. There are three temperatures to keep track of $(T_s, T_0, T_1)$, so we will have 3x3 matrix equations.\n\nWe all know how to work these things out with pencil and paper. But it can be tedious and error-prone. \n\nSymbolic math software lets us use the computer to automate a lot of tedious algebra.\n\nThe [sympy](http://www.sympy.org/en/index.html) package is a powerful open-source symbolic math library that is well-integrated into the scientific Python ecosystem. 
\n\n\n```python\nimport sympy\n# Allow sympy to produce nice looking equations as output\nsympy.init_printing()\n# Define some symbols for mathematical quantities\n# Assume all quantities are positive (which will help simplify some expressions)\nepsilon, T_e, T_s, T_0, T_1, sigma = \\\n sympy.symbols('epsilon, T_e, T_s, T_0, T_1, sigma', positive=True)\n# So far we have just defined some symbols, e.g.\nT_s\n```\n\n\n```python\n# We have hard-coded the assumption that the temperature is positive\nsympy.ask(T_s>0)\n```\n\n\n\n\n True\n\n\n\n### Longwave emissions\n\nLet's denote the emissions from each layer as\n\\begin{align}\nE_s &= \\sigma T_s^4 \\\\\nE_0 &= \\epsilon \\sigma T_0^4 \\\\\nE_1 &= \\epsilon \\sigma T_1^4 \n\\end{align}\n\nrecognizing that $E_0$ and $E_1$ contribute to **both** the upwelling and downwelling beams.\n\n\n```python\n# Define these operations as sympy symbols \n# And display as a column vector:\nE_s = sigma*T_s**4\nE_0 = epsilon*sigma*T_0**4\nE_1 = epsilon*sigma*T_1**4\nE = sympy.Matrix([E_s, E_0, E_1])\nE\n```\n\n### Shortwave radiation\nSince we have assumed the atmosphere is transparent to shortwave, the incident beam $Q$ passes unchanged from the top to the surface, where a fraction $\\alpha$ is reflected upward out to space.\n\n\n```python\n# Define some new symbols for shortwave radiation\nQ, alpha = sympy.symbols('Q, alpha', positive=True)\n# Create a dictionary to hold our numerical values\ntuned = {}\ntuned[Q] = 341.3 # global mean insolation in W/m2\ntuned[alpha] = 101.9/Q.subs(tuned) # observed planetary albedo\ntuned[sigma] = 5.67E-8 # Stefan-Boltzmann constant in W/m2/K4\ntuned\n# Numerical value for emission temperature\n#T_e.subs(tuned)\n```\n\n### Upwelling beam\n\nLet $U$ be the upwelling flux of longwave radiation. 
\n\nThe upward flux from the surface to layer 0 is\n$$ U_0 = E_s $$\n(just the emission from the surface).\n\n\n```python\nU_0 = E_s\nU_0\n```\n\nFollowing this beam upward, we can write the upward flux from layer 0 to layer 1 as the sum of the transmitted component that originated below layer 0 and the new emissions from layer 0:\n\n$$ U_1 = (1-\epsilon) U_0 + E_0 $$\n\n\n```python\nU_1 = (1-epsilon)*U_0 + E_0\nU_1\n```\n\nContinuing to follow the same beam, the upwelling flux above layer 1 is\n$$ U_2 = (1-\epsilon) U_1 + E_1 $$\n\n\n```python\nU_2 = (1-epsilon) * U_1 + E_1\n```\n\nSince there is no more atmosphere above layer 1, this upwelling flux is our Outgoing Longwave Radiation for this model:\n\n$$ OLR = U_2 $$\n\n\n```python\nU_2\n```\n\nThe three terms in the above expression represent the **contributions to the total OLR that originate from each of the three levels**. \n\nLet's code this up explicitly for future reference:\n\n\n```python\n# Define the contributions to OLR originating from each level\nOLR_s = (1-epsilon)**2 *sigma*T_s**4\nOLR_0 = epsilon*(1-epsilon)*sigma*T_0**4\nOLR_1 = epsilon*sigma*T_1**4\n\nOLR = OLR_s + OLR_0 + OLR_1\n\nprint( 'The expression for OLR is')\nOLR\n```\n\n### Downwelling beam\n\nLet $D$ be the downwelling longwave beam. Since there is no longwave radiation coming in from space, we begin with \n\n\n```python\nfromspace = 0\nD_2 = fromspace\n```\n\nBetween layer 1 and layer 0 the beam contains emissions from layer 1:\n\n$$ D_1 = (1-\epsilon)D_2 + E_1 = E_1 $$\n\n\n```python\nD_1 = (1-epsilon)*D_2 + E_1\nD_1\n```\n\nFinally between layer 0 and the surface the beam contains a transmitted component and the emissions from layer 0:\n\n$$ D_0 = (1-\epsilon) D_1 + E_0 = \epsilon(1-\epsilon) \sigma T_1^4 + \epsilon \sigma T_0^4$$\n\n\n```python\nD_0 = (1-epsilon)*D_1 + E_0\nD_0\n```\n\nThis $D_0$ is what we call the **back radiation**, i.e. 
the longwave radiation from the atmosphere to the surface.\n\n____________\n\n\n## 3. Tuning the grey gas model to observations\n____________\n\nIn building our new model we have introduced exactly one parameter, the absorptivity $\\epsilon$. We need to choose a value for $\\epsilon$.\n\nWe will tune our model so that it **reproduces the observed global mean OLR** given **observed global mean temperatures**.\n\nTo get appropriate temperatures for $T_s, T_0, T_1$, let's revisit the [global, annual mean lapse rate plot from NCEP Reanalysis data](Lecture05 -- Radiation.ipynb) from the previous lecture.\n\n### Temperatures\n\nFirst, we set \n$$T_s = 288 \\text{ K} $$\n\nFrom the lapse rate plot, an average temperature for the layer between 1000 and 500 hPa is \n\n$$ T_0 = 275 \\text{ K}$$\n\nDefining an average temperature for the layer between 500 and 0 hPa is more ambiguous because of the lapse rate reversal at the tropopause. We will choose\n\n$$ T_1 = 230 \\text{ K}$$\n\nFrom the graph, this is approximately the observed global mean temperature at 275 hPa or about 10 km.\n\n\n```python\n# add to our dictionary of values:\ntuned[T_s] = 288.\ntuned[T_0] = 275.\ntuned[T_1] = 230.\ntuned\n```\n\n### OLR\n\nFrom the [observed global energy budget](Lecture01 -- Planetary energy budget.ipynb) we set \n\n$$ OLR = 238.5 \\text{ W m}^{-2} $$\n\n### Solving for $\\epsilon$\n\nWe wrote down the expression for OLR as a function of temperatures and absorptivity in our model above. \n\nWe just need to equate this to the observed value and solve a **quadratic equation** for $\\epsilon$.\n\nThis is where the real power of the symbolic math toolkit comes in. 
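Before doing that, it is worth seeing that nothing magical is happening: collecting powers of $\epsilon$ in the OLR expression gives an ordinary quadratic that can also be solved with plain floating-point arithmetic. A quick cross-check (my own sketch, not part of the original notebook):

```python
import math

# Cross-check of the tuning step without symbolic math (illustrative sketch).
# OLR(eps) = (1-eps)^2*Es + eps*(1-eps)*E0 + eps*E1, with E = sigma*T^4.
sigma, Ts, T0, T1 = 5.67e-8, 288., 275., 230.
Es, E0, E1 = sigma * Ts**4, sigma * T0**4, sigma * T1**4

# Collect powers of eps: a*eps^2 + b*eps + c = 0 after subtracting 238.5
a = Es - E0
b = -2 * Es + E0 + E1
c = Es - 238.5

# Quadratic formula; this sign choice picks the physically meaningful root
eps = (-b - math.sqrt(b**2 - 4 * a * c)) / (2 * a)
print(round(eps, 3))  # about 0.586; the other root is greater than 1
```

The physical root lands near 0.586, matching the tuned value quoted later in this lecture; the second root is greater than 1 and gets discarded.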
\n\nSubstitute in the numerical values we are interested in:\n\n\n```python\n# the .subs() method for a sympy symbol means\n# substitute values in the expression using the supplied dictionary\n# Here we use observed values of Ts, T0, T1 \nOLR2 = OLR.subs(tuned)\nOLR2\n```\n\nWe have a quadratic equation for $\epsilon$.\n\nNow use the `sympy.solve` function to solve the quadratic:\n\n\n```python\n# The sympy.solve method takes an expression equal to zero\n# So in this case we subtract the tuned value of OLR from our expression\neps_solution = sympy.solve(OLR2 - 238.5, epsilon)\neps_solution\n```\n\nThere are two roots, but the second one is unphysical since we must have $0 < \epsilon < 1$.\n\nJust for fun, here is a simple example of *filtering a list* using powerful Python *list comprehension* syntax:\n\n\n```python\n# Give me only the roots that are between zero and 1!\nlist_result = [eps for eps in eps_solution if 0 < eps < 1]\n# The result is a single-element list; store that value in our dictionary\ntuned[epsilon] = list_result[0]\ntuned[epsilon]\n```\n\n____________\n\n\n## 4. Level of emission\n____________\n\nEven in this very simple greenhouse model, there is **no single level** at which the OLR is generated.\n\nThe three terms in our formula for OLR tell us the contributions from each level.\n\n\n```python\nOLRterms = sympy.Matrix([OLR_s, OLR_0, OLR_1])\nOLRterms\n```\n\nNow evaluate these expressions for our tuned temperature and absorptivity:\n\n\n```python\nOLRtuned = OLRterms.subs(tuned)\nOLRtuned\n```\n\nSo we are getting about 67 W m$^{-2}$ from the surface, 79 W m$^{-2}$ from layer 0, and 93 W m$^{-2}$ from the top layer.\n\nIn terms of fractional contributions to the total OLR, we have (limiting the output to two decimal places):\n\n\n```python\nsympy.N(OLRtuned / 239., 2)\n```\n\nNotice that the largest single contribution is coming from the top layer. This is in spite of the fact that the emissions from this layer are weak, because it is so cold.\n\nComparing to observations, the actual contribution to OLR from the surface is about 22 W m$^{-2}$ (or about 9% of the total), not 67 W m$^{-2}$. 
So we certainly don't have all the details worked out yet!\n\nAs we will see later, to really understand what sets that observed 22 W m$^{-2}$, we will need to start thinking about the spectral dependence of the longwave absorptivity.\n\n____________\n\n\n## 5. Radiative forcing in the 2-layer grey gas model\n____________\n\nAdding some extra greenhouse absorbers will mean that a greater fraction of incident longwave radiation is absorbed in each layer.\n\nThus **$\epsilon$ must increase** as we add greenhouse gases.\n\nSuppose we have $\epsilon$ initially, and the absorptivity increases to $\epsilon_2 = \epsilon + \delta_\epsilon$.\n\nSuppose further that this increase happens **abruptly** so that there is no time for the temperatures to respond to this change. **We hold the temperatures fixed** in the column and ask how the radiative fluxes change.\n\n**Do you expect the OLR to increase or decrease?**\n\nLet's use our two-layer leaky greenhouse model to investigate the answer.\n\nThe components of the OLR before the perturbation are\n\n\n```python\nOLRterms\n```\n\nAfter the perturbation we have\n\n\n```python\ndelta_epsilon = sympy.symbols('delta_epsilon')\nOLRterms_pert = OLRterms.subs(epsilon, epsilon+delta_epsilon)\nOLRterms_pert\n```\n\nLet's take the difference\n\n\n```python\ndeltaOLR = OLRterms_pert - OLRterms\ndeltaOLR\n```\n\nTo make things simpler, we will neglect the terms in $\delta_\epsilon^2$. This is perfectly reasonable because we are dealing with **small perturbations** where $\delta_\epsilon \ll \epsilon$.\n\nTelling `sympy` to set the quadratic terms to zero gives us\n\n\n```python\ndeltaOLR_linear = sympy.expand(deltaOLR).subs(delta_epsilon**2, 0)\ndeltaOLR_linear\n```\n\nRecall that the three terms are the contributions to the OLR from the three different levels. 
In this case, they are the **changes** in those contributions after adding more absorbers.\n\nNow let's divide through by $\delta_\epsilon$ to get the normalized change in OLR per unit change in absorptivity:\n\n\n```python\ndeltaOLR_per_deltaepsilon = \\n    sympy.simplify(deltaOLR_linear / delta_epsilon)\ndeltaOLR_per_deltaepsilon\n```\n\nNow look at the **sign** of each term. Recall that $0 < \epsilon < 1$. **Which terms in the OLR go up and which go down?**\n\n**THIS IS VERY IMPORTANT, SO STOP AND THINK ABOUT IT.**\n\nThe contribution from the **surface** must **decrease**, while the contribution from the **top layer** must **increase**.\n\n**When we add absorbers, the average level of emission goes up!**\n\n### \"Radiative forcing\" is the change in radiative flux at TOA after adding absorbers\n\nIn this model, only the longwave flux can change, so we define the radiative forcing as\n\n$$ R = - \delta OLR $$\n\n(with the minus sign so that $R$ is positive when the climate system is gaining extra energy).\n\nWe just worked out that whenever we add some extra absorbers, the emissions to space (on average) will originate from higher levels in the atmosphere. \n\nWhat does this mean for OLR? Will it increase or decrease?\n\nTo get the answer, we just have to sum up the three contributions we wrote above:\n\n\n```python\nR = -sum(deltaOLR_per_deltaepsilon)\nR\n```\n\nIs this a positive or negative number? The key point is this:\n\n**It depends on the temperatures, i.e. on the lapse rate.**\n\n### Greenhouse effect for an isothermal atmosphere\n\nStop and think about this question:\n\nIf the **surface and atmosphere are all at the same temperature**, does the OLR go up or down when $\epsilon$ increases (i.e. 
we add more absorbers)?\n\nUnderstanding this question is key to understanding how the greenhouse effect works.\n\n#### Let's solve the isothermal case\n\nWe will just set $T_s = T_0 = T_1$ in the above expression for the radiative forcing.\n\n\n```python\nR.subs([(T_0, T_s), (T_1, T_s)])\n```\n\nwhich then simplifies to\n\n\n```python\nsympy.simplify(R.subs([(T_0, T_s), (T_1, T_s)]))\n```\n\n#### The answer is zero\n\nFor an isothermal atmosphere, there is **no change** in OLR when we add extra greenhouse absorbers. Hence, no radiative forcing and no greenhouse effect.\n\nWhy?\n\nThe level of emission still must go up. But since the temperature at the upper level is the **same** as everywhere else, the emissions are exactly the same.\n\n### The radiative forcing (change in OLR) depends on the lapse rate!\n\nFor a more realistic example of radiative forcing due to an increase in greenhouse absorbers, we can substitute in our tuned values for temperature and $\\epsilon$. \n\nWe'll express the answer in W m$^{-2}$ for a 1% increase in $\\epsilon$.\n\nThe three components of the OLR change are\n\n\n```python\ndeltaOLR_per_deltaepsilon.subs(tuned) * 0.01\n```\n\nAnd the net radiative forcing is\n\n\n```python\nR.subs(tuned) * 0.01\n```\n\nSo in our example, **the OLR decreases by 2.2 W m$^{-2}$**, or equivalently, the radiative forcing is +2.2 W m$^{-2}$.\n\nWhat we have just calculated is this:\n\n*Given the observed lapse rates, a small increase in absorbers will cause a small decrease in OLR.*\n\nThe greenhouse effect thus gets stronger, and energy will begin to accumulate in the system -- which will eventually cause temperatures to increase as the system adjusts to a new equilibrium.\n\n____________\n\n\n## 6. Radiative equilibrium in the 2-layer grey gas model\n____________\n\nIn the previous section we:\n\n- made no assumptions about the processes that actually set the temperatures. 
\n- used the model to calculate radiative fluxes, **given observed temperatures**. \n- stressed the importance of knowing the lapse rates in order to know how an increase in emission level would affect the OLR, and thus determine the radiative forcing.\n\nA key question in climate dynamics is therefore this:\n\n**What sets the lapse rate?**\n\nIt turns out that lots of different physical processes contribute to setting the lapse rate. \n\nUnderstanding how these processes act together, and how they change as the climate changes, is one of the key reasons we need more complex climate models.\n\nFor now, we will use our prototype greenhouse model to do the most basic lapse rate calculation: the **radiative equilibrium temperature**.\n\nWe assume that\n\n- the only exchange of energy between layers is longwave radiation\n- equilibrium is achieved when the **net radiative flux convergence** in each layer is zero.\n\n### Compute the radiative flux convergence\n\nFirst, the **net upwelling flux** is just the difference between flux up and flux down:\n\n\n```python\n# Upwelling and downwelling beams as matrices\nU = sympy.Matrix([U_0, U_1, U_2])\nD = sympy.Matrix([D_0, D_1, D_2])\n# Net flux, positive up\nF = U-D\nF\n```\n\n#### Net absorption is the flux convergence in each layer\n\n(difference between what's coming in the bottom and what's going out the top of each layer)\n\n\n```python\n# define a vector of absorbed radiation -- same size as emissions\nA = E.copy()\n\n# absorbed radiation at surface\nA[0] = F[0]\n# Compute the convergence\nfor n in range(2):\n    A[n+1] = -(F[n+1]-F[n])\n\nA\n```\n\n#### Radiative equilibrium means net absorption is ZERO in the atmosphere\n\nThe only other heat source is the **shortwave heating** at the **surface**.\n\nIn matrix form, here is the system of equations to be solved:\n\n\n```python\nradeq = sympy.Equality(A, sympy.Matrix([(1-alpha)*Q, 0, 0]))\nradeq\n```\n\nJust as we did for the 1-layer model, it is helpful to 
rewrite this system using the definition of the **emission temperature** $T_e$\n\n$$ (1-\alpha) Q = \sigma T_e^4 $$\n\n\n```python\nradeq2 = radeq.subs([((1-alpha)*Q, sigma*T_e**4)])\nradeq2\n```\n\nIn this form we can see that we actually have a **linear system** of equations for a set of variables $T_s^4, T_0^4, T_1^4$.\n\nWe can solve this matrix problem to get these as functions of $T_e^4$.\n\n\n```python\n# Solve for radiative equilibrium \nfourthpower = sympy.solve(radeq2, [T_s**4, T_1**4, T_0**4])\nfourthpower\n```\n\nThis produces a dictionary of solutions for the fourth power of the temperatures!\n\nA little manipulation gets us the solutions for temperatures that we want:\n\n\n```python\n# need the symbolic fourth root operation\nfrom sympy.simplify.simplify import nthroot\n\nfourthpower_list = [fourthpower[key] for key in [T_s**4, T_0**4, T_1**4]]\nsolution = sympy.Matrix([nthroot(item,4) for item in fourthpower_list])\n# Display result as matrix equation!\nT = sympy.Matrix([T_s, T_0, T_1])\nsympy.Equality(T, solution)\n```\n\nIn more familiar notation, the radiative equilibrium solution is thus\n\n\begin{align} \nT_s &= T_e \left( \frac{2+\epsilon}{2-\epsilon} \right)^{1/4} \\\nT_0 &= T_e \left( \frac{1+\epsilon}{2-\epsilon} \right)^{1/4} \\\nT_1 &= T_e \left( \frac{1}{2-\epsilon} \right)^{1/4}\n\end{align}\n\nPlugging in the tuned value $\epsilon = 0.586$ gives\n\n\n```python\nTsolution = solution.subs(tuned)\n# Display result as matrix equation!\nsympy.Equality(T, Tsolution)\n```\n\nNow we just need to know the Earth's emission temperature $T_e$!\n\n(Which we already know is about 255 K)\n\n\n```python\n# Here's how to calculate T_e from the observed values\nsympy.solve(((1-alpha)*Q - sigma*T_e**4).subs(tuned), T_e)\n```\n\n\n```python\n# Need to unpack the list\nTe_value = sympy.solve(((1-alpha)*Q - sigma*T_e**4).subs(tuned), T_e)[0]\nTe_value\n```\n\n#### Now we finally get our solution for radiative 
equilibrium\n\n\n```python\n# Output 4 significant digits\nTrad = sympy.N(Tsolution.subs([(T_e, Te_value)]), 4)\nsympy.Equality(T, Trad)\n```\n\nCompare these to the values we derived from the **observed lapse rates**:\n\n\n```python\nsympy.Equality(T, T.subs(tuned))\n```\n\nThe **radiative equilibrium** solution is substantially **warmer at the surface** and **colder in the lower troposphere** than reality.\n\nThis is a very general feature of radiative equilibrium, and we will see it again very soon in this course.\n\n____________\n\n\n## 7. Summary\n____________\n\n## Key physical lessons\n\n- Putting a **layer of longwave absorbers** above the surface keeps the **surface substantially warmer**, because of the **back radiation** from the atmosphere (greenhouse effect).\n- The **grey gas** model assumes that each layer absorbs and emits a fraction $\epsilon$ of its blackbody value, independent of wavelength.\n\n- With **incomplete absorption** ($\epsilon < 1$), there are contributions to the OLR from every level and the surface (there is no single **level of emission**)\n- Adding more absorbers means that **contributions to the OLR** from **upper levels** go **up**, while contributions from the surface go **down**.\n- This upward shift in the weighting of different levels is what we mean when we say the **level of emission goes up**.\n\n- The **radiative forcing** caused by an increase in absorbers **depends on the lapse rate**.\n- For an **isothermal atmosphere** the radiative forcing is zero and there is **no greenhouse effect**\n- The radiative forcing is positive for our atmosphere **because tropospheric temperatures tend to decrease with height**.\n- Pure **radiative equilibrium** produces a **warm surface** and **cold lower troposphere**.\n- This is unrealistic, and suggests that crucial heat transfer mechanisms are missing from our model.\n\n### And on the Python side...\n\nDid we need `sympy` to work all this out? No, of course not. 
We could have solved the 3x3 matrix problems by hand. But computer algebra can be very useful and save you a lot of time and error, so it's good to invest some effort into learning how to use it. \n\nHopefully these notes provide a useful starting point.\n\n### A follow-up assignment\n\nYou are now ready to tackle [Assignment 5](../Assignments/Assignment05 -- Radiative forcing in a grey radiation atmosphere.ipynb), where you are asked to extend this grey-gas analysis to many layers. \n\nFor more than a few layers, the analytical approach we used here is no longer very useful. You will code up a numerical solution to calculate OLR given temperatures and absorptivity, and look at how the lapse rate determines radiative forcing for a given increase in absorptivity.\n\n
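The upwelling-beam recursion used throughout this lecture generalizes directly to $N$ layers, which is the heart of that numerical solution. A minimal sketch (my own illustration, not the assignment solution; the function name and interface are made up):

```python
def grey_gas_olr(Ts, Tatm, eps, sigma=5.67e-8):
    """OLR for an N-layer grey atmosphere over a surface at temperature Ts.

    Tatm lists the layer temperatures from bottom to top; each layer
    absorbs a fraction eps of the beam and re-emits eps*sigma*T**4.
    (Illustrative sketch -- the name and interface are made up.)
    """
    U = sigma * Ts**4                             # emission from the surface
    for T in Tatm:
        U = (1 - eps) * U + eps * sigma * T**4    # transmit + re-emit
    return U

# With the tuned 2-layer values from this lecture it recovers the
# observed OLR of roughly 238.5 W/m2:
print(round(grey_gas_olr(288., [275., 230.], 0.586), 1))
```

Note that for an isothermal column the recursion leaves $\sigma T_s^4$ unchanged for any $\epsilon$, consistent with the zero-forcing result from Section 5.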
\n[Back to ATM 623 notebook home](../index.ipynb)\n
\n\n____________\n## Version information\n____________\n\n\n\n```python\n%load_ext version_information\n%version_information sympy\n```\n\n Loading extensions from ~/.ipython/extensions is deprecated. We recommend managing extensions like any other Python packages, in site-packages.\n\n\n\n\n\n
    Software   Version
    Python     3.6.2 64bit [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
    IPython    6.1.0
    OS         Darwin 16.7.0 x86_64 i386 64bit
    sympy      1.1.1
    Wed Oct 11 13:31:13 2017 EDT
\n\n\n\n____________\n\n## Credits\n\nThe author of this notebook is [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany.\n\nIt was developed in support of [ATM 623: Climate Modeling](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/), a graduate-level course in the [Department of Atmospheric and Environmental Sciences](http://www.albany.edu/atmos/index.php).\n\nDevelopment of these notes and the [climlab software](https://github.com/brian-rose/climlab) is partially supported by the National Science Foundation under award AGS-1455071 to Brian Rose. Any opinions, findings, conclusions or recommendations expressed here are mine and do not necessarily reflect the views of the National Science Foundation.\n____________\n\n\n```python\n\n```\n
"max_forks_repo_forks_event_min_datetime": "2021-02-20T03:10:31.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-20T03:10:31.000Z", "avg_line_length": 151.726142596, "max_line_length": 161360, "alphanum_fraction": 0.8697836716, "converted": true, "num_tokens": 7573, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3923368443773709, "lm_q2_score": 0.29098086621490676, "lm_q1q2_score": 0.11416251482495045}} {"text": "# Jupyter Notebook Tutorial\n> A tutorial of core functionality of Jupyter Notebooks to have an enjoyable coding experience.\n\n- toc: true \n- badges: true\n- comments: true\n- author: Isaac Flath\n- categories: [Jupyter,Getting Started]\n\n# Top 3 uses:\n1. Exploratory analysis, model creation, data science, any kind of coding that require lots of rapid experimentation and iteration.\n1. Tutorials, guides, and blogs (like this one). Because you have a great mix of text functionality with code, they work really well for tutorials and guides. Rather than having static images, or code snippets that have to get updated each iteration, the code is part of the guide and it really simplifies the process. Notebooks can be exported directly to html and be opened in any browser to give to people. With the easy conversion to html, naturally it's easy to post them on a web page.\n1. Technical presentations of results. You can have the actual code analysis done, with text explanations. Excess code can be collapsed so that if someone asks really detailed questions you can expand and have every piece of detail. Changes to the analysis are in the presentation so no need to save and put static images in other documents\n\n\n# Cell Types\n\nA cell can be 3 different types. 
The most useful are code cells and markdown cells.\n\n### Code Cells\n - Code cells run code. The next few cells are examples of code cells\n - While the most common application is Python, you can set up environments easily to use R, Swift, and other languages within Jupyter notebooks\n### Markdown Cells\n - This cell is a markdown cell. It is really nice for adding details and text explanations where a code comment is not enough\n - They have all the normal markdown functionality, plus more. For example, I can write any technical or mathy stuff using latex, or create html tables in markdown or html.\n - I can also make markdown tables.\n \n##### Latex Formulas\n\n\n$$\begin{bmatrix}w_1&w_2&w_3&w_4&w_5\\x_1&x_2&x_3&x_4&x_5\\y_1&y_2&y_3&y_4&y_5\\z_1&z_2&z_3&z_4&z_5\end{bmatrix}$$\n\n$\begin{align}\n\frac{dy}{du} &= f'(u) = e^u = e^{\sin(x^2)}, \\\n\frac{du}{dv} &= g'(v) = \cos v = \cos(x^2), \\\n\frac{dv}{dx} &= h'(x) = 2x.\n\end{align}$\n\n##### Markdown Table\n\n| This | is | a | table | for | demos |\n|------|------|-------|-------|-------|-------|\n| perc | 55% | 22% | 23% | 12% | 53% |\n| qty | 23 | 19 | 150 | 9 | 92 |\n\n\n```python\n#collapse-hide\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\n\npd.options.display.max_columns = None\npd.options.display.max_rows = None\n\n%matplotlib inline\n```\n\n# Running Code\n\nNaturally you can run code cells and print to the Jupyter Notebook.\n\n\n```python\nfor x in range(0,5):\n    print(x*10)\n```\n\n    0\n    10\n    20\n    30\n    40\n\n\n# DataFrames\n\n\n```python\niris = sns.load_dataset('iris')\niris[iris.petal_length > 6]\n```\n
         sepal_length  sepal_width  petal_length  petal_width    species
    105           7.6          3.0           6.6          2.1  virginica
    107           7.3          2.9           6.3          1.8  virginica
    109           7.2          3.6           6.1          2.5  virginica
    117           7.7          3.8           6.7          2.2  virginica
    118           7.7          2.6           6.9          2.3  virginica
    122           7.7          2.8           6.7          2.0  virginica
    130           7.4          2.8           6.1          1.9  virginica
    131           7.9          3.8           6.4          2.0  virginica
    135           7.7          3.0           6.1          2.3  virginica
\n\n# Plotting\nBelow we are going to make a few graphs to get the point across. Naturally, each graph can be accompanied with a markdown cell that gives context and explains the value of that graph.\n\n### Line Chart\n\n\n```python\n# evenly sampled time at 200ms intervals\nt = np.arange(0., 5., 0.2)\n\n# red dashes, blue squares and green triangles\nplt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')\nplt.show()\n```\n\n### Scatter Plot\n\nSometimes we will want to display a graph, but may not want all the code and details to be immediately visible. In these examples we can create a scatter plot like below, but collapse the code cell.\n\nThis is great when you want to show a graph and explain it, but the details of how the graph was created aren't that important. \n\n\n```python\n#collapse-hide\ndata = {'a': np.arange(50),\n        'c': np.random.randint(0, 50, 50),\n        'd': np.random.randn(50)}\ndata['b'] = data['a'] + 10 * np.random.randn(50)\ndata['d'] = np.abs(data['d']) * 100\n\nplt.scatter('a', 'b', c='c', s='d', data=data)\nplt.xlabel('entry a')\nplt.ylabel('entry b')\nplt.show()\n```\n\n### Categorical Plot\n\nWe can create subplots to have multiple plots show up. This can be especially helpful when showing lots of the same information, or showing how 2 different metrics are related or need to be analyzed together.\n\n\n```python\n#collapse-hide\nnames = ['group_a', 'group_b', 'group_c']\nvalues = [1, 10, 100]\n\nplt.figure(figsize=(9, 3))\n\nplt.subplot(131)\nplt.bar(names, values)\nplt.subplot(132)\nplt.scatter(names, values)\nplt.subplot(133)\nplt.plot(names, values)\nplt.suptitle('Categorical Plotting')\nplt.show()\n```\n\n# Stack Traces\n\nWhen you run into an error, by default Jupyter notebooks give you whatever the error message is, but also the entire stack trace.\n\nThere is a debug functionality, but I find that these stack traces and Jupyter cells work even better than a debugger. 
I can break my code into as many cells as I want and run things interactively. Here's a few examples of stack traces.

### Matrix Multiplication Good

Now we are going to show an example of errors where the stack trace isn't as simple. Suppose we are trying to multiply 2 arrays together (matrix multiplication).

```python
a = np.array([
    [1,2,4],
    [3,4,5],
    [5,6,7]
])
b = np.array([
    [11,12,14],
    [31,14,15],
    [23,32,23]
])
a@b
```

    array([[165, 168, 136],
           [272, 252, 217],
           [402, 368, 321]])

### Matrix Multiplication Bad

Now if it errors because the columns of matrix a don't match the rows of matrix b, we will get an error, as matrix multiplication is impossible with those matrices. We see the same idea as in the for loop above: a stack trace with the error and an arrow pointing at the line that failed.

```python
# here's another
a = np.array([
    [1,2,4],
    [3,4,5],
    [5,6,7]
])
b = np.array([
    [11,12,14],
    [31,14,15]
])
a@b
```

### Second Layer of Bad

But what if the line we call isn't what fails? What if what I run works, but the function underneath fails?

In these examples, you see the entire trace. It starts with an arrow at what you ran that errored. It then shows an arrow at the code you called that caused the error, so you can track all the way back to the source. Here's how it shows a two-step stack trace, but it can be as long as needed.

```python
def matmul(a,b):
    c = a@b
    return c
```

```python
matmul(a,b)
```

# Magic Commands

Magic commands are special commands for Jupyter Notebooks. They give you incredible functionality, and you will likely find the experience very frustrating without them. A few that I use often are:
+ `?` | put a question mark or 2 after a function or method to get the documentation. `??` gives more detail than `?`.
I can also use it to wildcard-search modules for functions.
+ shift tab | when you are writing something, holding shift + tab will open a mini popup with the documentation for that thing. It may be a function, method, or module.
+ ```%who``` or ```%whos``` or ```%who_ls``` | These are all variants that list the objects and variables. I prefer ```%whos``` most of the time.
+ ```%history``` | This allows you to look at the last pieces of code that you ran.
+ $$ | wrapping latex code in dollar signs in a markdown cell renders the latex.
+ ! | putting ! at the beginning of a line makes it run in the terminal. For example ```!ls | grep .csv```.
+ ```%time``` | I can use this to time the execution of things.

```python
np.*array*?
```

    np.array
    np.array2string
    np.array_equal
    np.array_equiv
    np.array_repr
    np.array_split
    np.array_str
    np.asanyarray
    np.asarray
    np.asarray_chkfinite
    np.ascontiguousarray
    np.asfarray
    np.asfortranarray
    np.broadcast_arrays
    np.chararray
    np.compare_chararrays
    np.get_array_wrap
    np.ndarray
    np.numarray
    np.recarray

```python
np.array_equal??
```

    Signature: np.array_equal(a1, a2)
    Source:
    @array_function_dispatch(_array_equal_dispatcher)
    def array_equal(a1, a2):
        """
        True if two arrays have the same shape and elements, False otherwise.

        Parameters
        ----------
        a1, a2 : array_like
            Input arrays.

        Returns
        -------
        b : bool
            Returns True if the arrays are equal.

        See Also
        --------
        allclose: Returns True if two arrays are element-wise equal within a
                  tolerance.
        array_equiv: Returns True if input arrays are shape consistent and all
                     elements equal.

        Examples
        --------
        >>> np.array_equal([1, 2], [1, 2])
        True
        >>> np.array_equal(np.array([1, 2]), np.array([1, 2]))
        True
        >>> np.array_equal([1, 2], [1, 2, 3])
        False
        >>> np.array_equal([1, 2], [1, 4])
        False
        """
        try:
            a1, a2 = asarray(a1), asarray(a2)
        except Exception:
            return False
        if a1.shape != a2.shape:
            return False
        return bool(asarray(a1 == a2).all())
    File: ~/opt/anaconda3/lib/python3.7/site-packages/numpy/core/numeric.py
    Type: function

```python
a = np.array(np.random.rand(512,512))
b = np.array(np.random.rand(512,512))
%time for i in range(0,20): a@b
```

    CPU times: user 198 ms, sys: 2.68 ms, total: 200 ms
    Wall time: 34.8 ms

```python
%whos
```

    Variable   Type        Data/Info
    ---------------------------------
    a          ndarray     512x512: 262144 elems, type `float64`, 2097152 bytes (2.0 Mb)
    b          ndarray     512x512: 262144 elems, type `float64`, 2097152 bytes (2.0 Mb)
    data       dict        n=4
    i          int         19
    iris       DataFrame   sepal_length  sepal_<...> 1.8  virginica
    matmul     function
    names      list        n=3
    np         module      kages/numpy/__init__.py'>
    pd         module      ages/pandas/__init__.py'>
    plt        module      es/matplotlib/pyplot.py'>
    sns        module      ges/seaborn/__init__.py'>
    t          ndarray     25: 25 elems, type `float64`, 200 bytes
    values     list        n=3
    x          int         4

```python
%history -l 5
```

    matmul(a,b)
    np.*array*?
    np.array_equal??
    a = np.array(np.random.rand(512,512))
    b = np.array(np.random.rand(512,512))
    %time for i in range(0,20): a@b
    %whos

# Jupyter Extensions

There are many extensions to Jupyter Notebooks.
After all, a Jupyter notebook is just a JSON file, so you can read the JSON in and manipulate and transform things however you want! There are many features, such as variable explorers, auto code timers, and more - but I find that most are unnecessary. About half the people I talk to don't use any, and the other half use several.

# NBDEV

NBdev is a Jupyter extension/python library that allows you to do full development projects in Jupyter Notebooks. There have been books and libraries written entirely in Jupyter notebooks, including testing frameworks and the unit tests that go with them. A common misconception is that Jupyter notebooks cannot be used for that, though many people already have.

There are many features NBdev adds. Here's a few.

Using notebooks written like this, nbdev can create and run any of the following with a single command:

+ Searchable, hyperlinked documentation; any word you surround in backticks will be automatically hyperlinked to the appropriate documentation
+ Cells in a Jupyter notebook marked with #export will be exported automatically to a python module
+ Python modules, following best practices such as automatically defining `__all__` with your exported functions, classes, and variables
+ Pip installers (uploaded to pypi for you)
+ Tests (defined directly in your notebooks, and run in parallel).
+ Navigate and edit your code in a standard text editor or IDE, and export any changes automatically back into your notebooks

I recommend checking them out for more detail: https://github.com/fastai/nbdev

# I. The Markov Property

The Markov property: the current state depends only on the state at the previous moment; any state before that previous moment is irrelevant. A process satisfying this condition is said to have the Markov property.

A state with the Markov property satisfies the following formula:

$P(S_{t+1}|S_1,S_2,...,S_t) = P(S_{t+1}|S_t)$

In other words, given the current state $S_t$, the future states no longer depend on the states before time t.

# II. Markov Chains

A Markov chain is a stochastic process with the Markov property. Within the process, given the current information, past states are irrelevant for predicting future states; this can be called the memorylessness of Markov chains.

# III. Hidden Markov Models

The Hidden Markov Model (HMM) is a statistical model that performs very well in fields such as speech recognition, activity recognition, NLP, and fault diagnosis.
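Before going into the HMM itself, the Markov chain described above can be made concrete with a minimal simulation sketch. The three states and the transition matrix below are assumptions chosen purely for illustration; the point is that each next state is drawn using only the current state, which is exactly the memorylessness just described.

```python
import numpy as np

# Assumed 3-state transition matrix: row i gives P(S_{t+1}=j | S_t=i),
# so every row sums to 1.
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

rng = np.random.default_rng(seed=0)

def simulate_chain(P, start, steps):
    """Draw a state path; the next state is sampled from the row of P
    indexed by the current state only (the Markov property)."""
    path = [start]
    for _ in range(steps):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

path = simulate_chain(P, start=0, steps=10)
print(path)  # a list of 11 states from {0, 1, 2}
```

Because only `path[-1]` enters each draw, the simulation embodies $P(S_{t+1}|S_1,...,S_t)=P(S_{t+1}|S_t)$ directly.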
HMMs are probabilistic models of time series: a Markov chain with unknown parameters generates an unobservable random sequence of states, and each state in turn generates an observation, producing the random sequence of observations.

An HMM is a doubly stochastic process: a hidden Markov chain over a set of states, together with the random observation sequence.

The random sequence of states generated by the HMM is called the state sequence; each state generates one observation, and the resulting random sequence is called the observation sequence.

An example helps to build intuition for HMMs. When we type with an input method, the characters we key in form the observation sequence, while the sentence we actually mean to write is the hidden sequence. Input method vendors all compete on how well they can guess the sentence we intend to write.

## 1. Definition

For an HMM, we first assume that Q is the set of all possible hidden states and V is the set of all possible observation states, i.e.:

$Q=\{q_1,q_2,...,q_N\}, V=\{v_1,v_2,...v_M\}$

where N is the number of possible hidden states and M is the number of possible observation states.

For a sequence of length T, I is the corresponding state sequence and 
O is the corresponding observation sequence, i.e.:

$I=\{i_1,i_2,...,i_T\}, O=\{o_1,o_2,...o_T\}$

where any hidden state $i_t \in Q$ and any observation state $o_t \in V$.

The HMM makes two very important assumptions:

1. The homogeneous Markov chain assumption: the hidden state at any time depends only on the previous hidden state (discussed in detail in the MCMC (part 2) notes on Markov chains). This assumption is admittedly somewhat extreme, because a hidden state often depends on more than just the one before it, perhaps the previous two or three. But its benefit is a simple model that is easy to solve. If the hidden state at time t is $i_t=q_i$ and the hidden state at time t+1 is $i_{t+1}=q_j$, then the HMM state transition probability $a_{ij}$ from time t to time t+1 can be written as:

$a_{ij}=P(i_{t+1}=q_j|i_t=q_i)$

The $a_{ij}$ together form the state transition matrix A of the Markov chain:

$A=[a_{ij}]_{N \times N}$

2. The observation independence assumption: the observation at any time depends only on the hidden state at that same time; this too is an assumption made to simplify the model. If the hidden state at time t is $i_t=q_j$ and the corresponding observation is $o_t=v_k$, then the probability $b_j(k)$ that observation $v_k$ is generated from hidden state $q_j$ at that time satisfies:

$b_j(k)=P(o_t=v_k|i_t=q_j)$

The $b_j(k)$ together form the observation probability matrix B:

$B=[b_j(k)]_{N \times M}$

In addition, we need the distribution $\Pi$ of hidden-state probabilities at time t=1:

$\Pi=[\pi(i)]_N$ where $\pi(i)=P(i_1=q_i)$

An HMM is therefore determined by the initial hidden-state distribution $\Pi$, the state transition matrix A, and the observation probability matrix B. $\Pi$ and A determine the state sequence; B determines the observation sequence. An HMM can thus be represented by a triple $\lambda$:

$\lambda=(A,B,\Pi)$

## 2. Problems an HMM Can Solve

1. Evaluating the probability of an observation sequence: given the model $\lambda=(A,B,\Pi)$ and an observation sequence $O=\{o_1,o_2,...o_T\}$, compute the probability $P(O|\lambda)$ of observing O under $\lambda$. Solving this requires the forward-backward algorithm.

2. Learning the model parameters: given an observation sequence $O=\{o_1,o_2,...o_T\}$, estimate the parameters of the model $\lambda=(A,B,\Pi)$ so that the conditional probability $P(O|\lambda)$ of the observation sequence under the model is maximized. Solving this requires the Baum-Welch algorithm, which is based on the EM algorithm.

3. Prediction, also known as decoding: given the model $\lambda=(A,B,\Pi)$ and an observation sequence $O=\{o_1,o_2,...o_T\}$, find the state sequence most likely to have produced the given observations. Solving this requires the Viterbi algorithm, which is based on dynamic programming. (Each predicted hidden state can take N values, each with an associated probability; in sentence segmentation, for example, the Viterbi algorithm is used to find the hidden-state path with the maximum overall hidden-state probability for the whole sentence.)

# IV. Evaluating the Probability of an Observation Sequence

## 1. The Probability of an Observation Sequence

The problem is as follows. We know the parameters $\lambda=(A,B,\Pi)$ of the HMM, where A is the matrix of hidden-state transition probabilities, B is the matrix of observation generation probabilities, and 
$\Pi$ is the initial probability distribution over hidden states. We have also obtained the observation sequence $O =\{o_1,o_2,...o_T\}$, and we now want the conditional probability $P(O|\lambda)$ that O appears under the model $\lambda$.

At first glance the problem looks simple. Since we know all the transition probabilities between hidden states and all the generation probabilities from hidden states to observations, we can solve it by brute force.

We can enumerate all possible hidden sequences of length T, $I = \{i_1,i_2,...,i_T\}$, compute in turn the joint probability distribution $P(O,I|\lambda)$ of each hidden sequence with the observation sequence $O =\{o_1,o_2,...o_T\}$, and then easily obtain the marginal distribution $P(O|\lambda)$.

Concretely, the brute force method works like this. First, the probability of any one hidden sequence $I = \{i_1,i_2,...,i_T\}$ is:

$$P(I|\lambda) = \pi_{i_1} a_{i_{1}i_{2}} a_{i_{2}i_{3}}... 
a_{i_{T-1}\;\;i_T}$$

For a fixed state sequence $I = \{i_1,i_2,...,i_T\}$, the probability of the observation sequence $O =\{o_1,o_2,...o_T\}$ we want is:

$$P(O|I, \lambda) = b_{i_1}(o_1)b_{i_2}(o_2)...b_{i_T}(o_T)$$

The joint probability of O and I is then:

$$P(O,I|\lambda) = P(I|\lambda)P(O|I, \lambda) = \pi_{i_1}b_{i_1}(o_1)a_{i_1i_2}b_{i_2}(o_2)...a_{i_{T-1}\;\;i_T}b_{i_T}(o_T)$$

Taking the marginal distribution then yields the conditional probability $P(O|\lambda)$ of the observation sequence O under the model $\lambda$; however, this requires summing the probabilities of all free combinations of the T hidden positions over the N hidden states, i.e. $N^T$ combinations:

> The marginal distribution, in probability theory and statistics, is the probability distribution of only a subset of the variables of a multidimensional random variable.\
> \
> **Definition** \
> Suppose there is a probability distribution involving two variables:\
> $P(x,y)$\
> The marginal distribution of one particular variable is then obtained by summing the conditional probability distribution over the other variable:\
> $P(x)=\sum_yP(x,y)=\sum_yP(x|y)P(y)$ \
> Interpretation: let y take all of its possible values and sum, to obtain the probability of the x variable alone. For example, if x is skin color (black, white, yellow) and y is sex (male, female), then $P(x)=P(x|\text{male})P(\text{male})+P(x|\text{female})P(\text{female})$ \
> See the example from Wikipedia: in the figure referenced there, X and Y follow the bivariate normal distribution shown inside the green contour, and the red and blue curves are the marginal distributions of the Y and X variables respectively.
>
> In the marginal distribution we obtain a probability distribution concerning a single variable only, no longer considering the influence of the other variable; in effect a dimensionality reduction has been performed. In practical applications, for example when the neurons of an artificial neural network are interrelated, the marginal distribution is used when computing each neuron's own parameters to obtain the value of one particular neuron (variable).

$$P(O|\lambda) = \sum\limits_{I}P(O,I|\lambda) = 
\sum\limits_{i_1,i_2,...i_T}\pi_{i_1}b_{i_1}(o_1)a_{i_1i_2}b_{i_2}(o_2)...a_{i_{T-1}\;\;i_T}b_{i_T}(o_T)$$

Although this method works, it becomes a problem when the number of hidden states N is large: there are $N^T$ combinations of predicted state sequences, so the algorithm's time complexity is of order $O(TN^T)$. For models with very few hidden states we can therefore use brute force to obtain the probability of the observation sequence, but with many hidden states the algorithm above is far too slow, and we need a more efficient alternative.

The forward-backward algorithm helps us solve this problem with much lower time complexity.

## 2. The Forward Algorithm for the Probability of an HMM Observation Sequence

"Forward-backward algorithm" is the collective name for the forward algorithm and the backward algorithm; either can be used to compute the probability of an HMM observation sequence. Let us first look at how the forward algorithm solves the problem.

The forward algorithm is essentially dynamic programming: we look for a recurrence over local states, and step by step extend the optimal solutions of subproblems to the optimal solution of the whole problem.

In the forward algorithm, the local state of the dynamic program is defined through the "forward probability". What is the forward probability? The definition is simple: the forward probability is the probability that at time t the hidden state is $q_i$ and the observation sequence so far is $o_1,o_2,...o_t$. We write:

$$\alpha_t(i) = P(o_1,o_2,...o_t, i_t =q_i | \lambda)\ \ \ \ 
\text{(note: the letter is } \alpha \text{, not } a)$$

Since this is dynamic programming, we need a recurrence. Suppose we have already found the forward probabilities of each hidden state at time t; we now need to derive from them the forward probabilities of each hidden state at time t+1.

From the figure above we can see that, starting from the forward probability of each hidden state at time t and multiplying by the corresponding transition probability, the product $\alpha_t(j)a_{ji}$ is the probability of observing $o_1,o_2,...o_t$ up to time t while the hidden state at time t is $q_j$ and the hidden state at time t+1 is $q_i$:

$$P(o_1,o_2,...o_t,i_t=q_j,i_{t+1}=q_i|\lambda)=\alpha_t(j)a_{ji}$$

Summing the probabilities along all the lines in the figure, $\sum\limits_{j=1}^N\alpha_t(j)a_{ji}$ is the probability of observing $o_1,o_2,...o_t$ up to time t with hidden state $q_i$ at time t+1:

$$P(o_1,o_2,...o_t,i_{t+1}=q_i|\lambda)=\sum\limits_{j=1}^N\alpha_t(j)a_{ji}$$

Going one step further, since the observation $o_{t+1}$ depends only on the hidden state $q_i$ at time t+1, 
the quantity $[\sum\limits_{j=1}^N\alpha_t(j)a_{ji}]b_i(o_{t+1})$ is the probability of observing $o_1,o_2,...o_t,o_{t+1}$ up to time t+1 with hidden state $q_i$ at time t+1:

$$P(o_1,o_2,...o_t,o_{t+1},i_{t+1}=q_i|\lambda)=[\sum\limits_{j=1}^N\alpha_t(j)a_{ji}]b_i(o_{t+1})$$

This probability is exactly the forward probability of hidden state $q_i$ at time t+1, which gives us the recurrence for forward probabilities:

$$\alpha_{t+1}(i) = \Big[\sum\limits_{j=1}^N\alpha_t(j)a_{ji}\Big]b_i(o_{t+1})$$

Our dynamic program starts at time 1 and ends at time T. Since $\alpha_T(i)$ is the probability of observing the sequence $o_1,o_2,...o_T$ with hidden state $q_i$ at time T, we simply add up the probabilities over all hidden states: $\sum\limits_{i=1}^N\alpha_T(i)$ is the probability of the observation sequence $o_1,o_2,...o_T$.

To summarize the forward algorithm:

Input: the HMM model $\lambda=(A,B,\Pi)$ and the observation sequence $O=(o_1,o_2,...o_T)$

Output: the observation sequence probability $P(O|\lambda)$

1) Compute the forward probability of each hidden state at time 1:

$$\alpha_1(i) = \pi_ib_i(o_1),\; i=1,2,...N$$

2) Recursively compute the forward probabilities at times 2, 3, ..., T (the recurrence runs from time 1 toward time T: from the forward probabilities at time t we obtain those at time t+1):

$$\alpha_{t+1}(i) = \Big[\sum\limits_{j=1}^N\alpha_t(j)a_{ji}\Big]b_i(o_{t+1}),\; i=1,2,...N$$

3) Compute the final result:

$$P(O|\lambda) = \sum\limits_{i=1}^N\alpha_T(i)$$

From the recurrence we can see that the algorithm's time complexity is $O(TN^2)$, several orders of magnitude less than the $O(TN^T)$ of the brute force solution.

## 3. A Worked Example of the HMM Forward Algorithm

Our observation set is:

$$V=\{\text{red},\text{white}\}, M=2$$

Our state set is:

$$Q=\{\text{box 1},\text{box 2},\text{box 3}\}, N=3$$

The observation sequence and the state sequence both have length 3.

The initial state distribution is:

$$\Pi = (0.2,0.4,0.4)^T$$

The state transition probability matrix is:

$$A = \left( \begin{array} {ccc} 0.5 & 0.2 & 0.3 \\ 0.3 & 0.5 & 0.2 \\ 0.2 & 0.3 &0.5 \end{array} \right)$$

The observation probability matrix is:

$$B = \left( \begin{array} {cc} 0.5 & 0.5 \\ 0.4 & 0.6 \\ 0.7 & 0.3 \end{array} 
\right)$$

The observation sequence of ball colors is:

$$O=\{\text{red},\text{white},\text{red}\}$$

First compute the forward probabilities of the three states at time 1:

At time 1 a red ball is drawn. The probability that the hidden state is box 1 is:

$$\alpha_1(1) = \pi_1b_1(o_1) = 0.2 \times 0.5 = 0.1$$

The probability that the hidden state is box 2 is:

$$\alpha_1(2) = \pi_2b_2(o_1) = 0.4 \times 0.4 = 0.16$$

The probability that the hidden state is box 3 is:

$$\alpha_1(3) = \pi_3b_3(o_1) = 0.4 \times 0.7 = 0.28$$

Now we can start the recursion, first for the forward probabilities of the three states at time 2.
At time 2 a white ball is drawn. The probability that the hidden state is box 1 is:

$$\alpha_2(1) = \Big[\sum\limits_{i=1}^3\alpha_1(i)a_{i1}\Big]b_1(o_2) = [0.1 \times 0.5+0.16 \times 0.3+0.28 \times 0.2 ] \times 0.5 = 0.077$$

The probability that the hidden state is box 2 is:

$$\alpha_2(2) = \Big[\sum\limits_{i=1}^3\alpha_1(i)a_{i2}\Big]b_2(o_2) = [0.1 \times 0.2+0.16 \times 0.5+0.28 \times 0.3 ] \times 0.6 = 0.1104$$

The probability that the hidden state is box 3 is:

$$\alpha_2(3) = \Big[\sum\limits_{i=1}^3\alpha_1(i)a_{i3}\Big]b_3(o_2) = [0.1 \times 0.3+0.16 \times 0.2+0.28 \times 0.5 ] \times 0.3 = 0.0606$$

Continuing the recursion, we now compute the forward probabilities of the three states at time 3.
At time 3 a red ball is drawn. The probability that the hidden state is box 1 is:

$$\alpha_3(1) = \Big[\sum\limits_{i=1}^3\alpha_2(i)a_{i1}\Big]b_1(o_3) = [0.077 \times 0.5+0.1104 \times 0.3+0.0606 \times 0.2 ] \times 0.5 = 
0.04187$$

The probability that the hidden state is box 2 is:

$$\alpha_3(2) = \Big[\sum\limits_{i=1}^3\alpha_2(i)a_{i2}\Big]b_2(o_3) = [0.077 \times 0.2+0.1104 \times 0.5+0.0606 \times 0.3 ] \times 0.4 = 0.03551$$

The probability that the hidden state is box 3 is:

$$\alpha_3(3) = \Big[\sum\limits_{i=1}^3\alpha_2(i)a_{i3}\Big]b_3(o_3) = [0.077 \times 0.3+0.1104 \times 0.2+0.0606 \times 0.5 ] \times 0.7 = 0.05284$$

Finally, the probability of the observation sequence $O=\{\text{red},\text{white},\text{red}\}$ is:

$$P(O|\lambda) = \sum\limits_{i=1}^3\alpha_3(i) = 0.13022$$


## 4. The Backward Algorithm for Observation Sequence Probability

The backward algorithm is very similar to the forward algorithm: both use dynamic programming, the only difference being the choice of local state. The backward algorithm uses the "backward probability". So how is the backward probability defined?

Define the backward probability at time $t$ as the probability of the observation sequence $o_{t+1},o_{t+2},...o_T$ from time $t+1$ to the final time $T$, given that the hidden state at time $t$ is $q_i$. Written as:

$$\beta_t(i) = P(o_{t+1},o_{t+2},...o_T| i_t =q_i , \lambda)$$
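Before deriving the backward recursion, the forward pass just worked through can be checked in code. Below is a minimal NumPy sketch of the forward algorithm on the box-and-ball model above; the `forward` helper name and the 0-based state/observation indexing are our own conventions, not part of the model.

```python
import numpy as np

# Model from the worked example above (box-and-ball model).
# States 0..2 are boxes 1..3; observations 0 = red, 1 = white.
A = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
B = np.array([[0.5, 0.5],
              [0.4, 0.6],
              [0.7, 0.3]])
pi = np.array([0.2, 0.4, 0.4])
O = [0, 1, 0]  # red, white, red

def forward(A, B, pi, O):
    """Return the full table of forward probabilities alpha, shape (T, N)."""
    T, N = len(O), A.shape[0]
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, O[0]]                 # step 1: alpha_1(i) = pi_i b_i(o_1)
    for t in range(T - 1):                     # step 2: recurse toward time T
        alpha[t + 1] = alpha[t] @ A * B[:, O[t + 1]]
    return alpha

alpha = forward(A, B, pi, O)
print(alpha[0])               # [0.1, 0.16, 0.28], as computed by hand
print(alpha.sum(axis=1)[-1])  # step 3: P(O|lambda) ≈ 0.13022
```

Each recursion step is one matrix-vector product, which is where the $O(TN^2)$ complexity comes from.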
The dynamic-programming recursion for the backward probabilities runs in the direction opposite to the forward one. Suppose we have already found the backward probability $\beta_{t+1}(j)$ of every hidden state at time $t+1$:

$$\beta_{t+1}(j) = P(o_{t+2},o_{t+3}...o_T| i_{t+1} =q_j , \lambda)$$

Now we need to recurse the backward probability of each hidden state at time $t$. As in the figure above, given hidden state $q_i$ at time $t$, the probability that the chain moves to $q_j$ at time $t+1$ and the observations from time $t+2$ onward are $o_{t+2},o_{t+3},...o_T$ is $a_{ij}\beta_{t+1}(j)$:

$$P(o_{t+2},o_{t+3},...o_T,i_{t+1}=q_j|i_t=q_i,\lambda)=a_{ij}\beta_{t+1}(j)$$

From this, given hidden state $q_i$ at time $t$, the probability that the chain moves to $q_j$ at time $t+1$ and the observations from time $t+1$ onward are $o_{t+1},o_{t+2},...o_T$ is $a_{ij}b_j(o_{t+1})\beta_{t+1}(j)$, 
expressed as:

$$P(o_{t+1},o_{t+2},...o_T,i_{t+1}=q_j|i_t=q_i,\lambda)=a_{ij}b_j(o_{t+1})\beta_{t+1}(j)$$

Adding up the probabilities of all these transition paths (all the lines in the figure), we obtain the probability that, given hidden state $q_i$ at time $t$, the observation sequence is $o_{t+1},o_{t+2},...o_T$, namely $\sum\limits_{j=1}^{N}a_{ij}b_j(o_{t+1})\beta_{t+1}(j)$, which is exactly the backward probability at time $t$:

$$P(o_{t+1},o_{t+2},...o_T|i_t=q_i,\lambda)=\sum\limits_{j=1}^{N}a_{ij}b_j(o_{t+1})\beta_{t+1}(j)$$

This gives the recursion for the backward probabilities:
$$\beta_{t}(i) = \sum\limits_{j=1}^{N}a_{ij}b_j(o_{t+1})\beta_{t+1}(j)$$
Let us now summarize the backward algorithm; note its similarities to and differences from the forward algorithm.

Input: HMM model $\lambda = (A, B, \Pi)$, observation sequence $O=(o_1,o_2,...o_T)$

Output: probability of the observation sequence, $P(O|\lambda)$

1) Initialize the backward probability of each hidden state at time $T$:

$$\beta_T(i) = 1,\; i=1,2,...N$$

2) Recurse the backward probabilities for times $T-1,T-2,...1$ (the recursion runs from time $T$ back toward time 1: the backward probabilities at time $t+1$ give those at time $t$):

$$\beta_{t}(i) = 
\sum\limits_{j=1}^{N}a_{ij}b_j(o_{t+1})\beta_{t+1}(j),\; i=1,2,...N$$

3) Compute the final result:

$$P(O|\lambda) = \sum\limits_{i=1}^N\pi_ib_i(o_1)\beta_1(i)$$

The time complexity of this algorithm is again $O(TN^2)$.

## 5. Computing Commonly Used HMM Probabilities

Using the forward and backward probabilities, we can derive the single-state and two-state probability formulas for an HMM.

1. Given the model $\lambda$ and the observation sequence $O$, the probability of being in state $q_i$ at time $t$ is written:

$$\gamma_t(i) = P(i_t = q_i | O,\lambda) = \frac{P(i_t = q_i ,O|\lambda)}{P(O|\lambda)}$$

So how do we compute $P(i_t = q_i ,O|\lambda)$?

First let us analyze what this probability means. It is the probability that, under the HMM parameters $\lambda$, the observation sequence is $O=(o_1,o_2,...o_T)$ and the state at time $t$ is $q_i$.

Put another way: from time 1 to time $t$, the state at time $t$ is $q_i$ and the outputs are $o_1,o_2,...o_t$, with probability $P_1$; then from time $t$ to $T$, given state $q_i$ at time $t$, the output sequence is $o_{t+1},o_{t+2},...o_T$, with probability $P_2$; the product $P_1 P_2$ is what we want.

$P_1$ matches exactly the definition of the forward probability and $P_2$ the definition of the backward probability, so $P_1=\alpha_t(i)$ and $P_2=\beta_t(i)$, and therefore:

$$P(i_t = q_i ,O|\lambda) = P_1 P_2 = \alpha_t(i)\beta_t(i)$$
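The backward pass summarized in Section 4 can be checked numerically on the same model; a minimal NumPy sketch follows (the `backward` helper name and 0-based indexing are our own conventions). It reproduces the likelihood 0.13022 obtained with the forward algorithm.

```python
import numpy as np

# Same box-and-ball model as in the forward-algorithm example.
A = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
B = np.array([[0.5, 0.5],
              [0.4, 0.6],
              [0.7, 0.3]])
pi = np.array([0.2, 0.4, 0.4])
O = [0, 1, 0]  # red, white, red

def backward(A, B, pi, O):
    """Return the table of backward probabilities beta, shape (T, N)."""
    T, N = len(O), A.shape[0]
    beta = np.ones((T, N))                 # step 1: beta_T(i) = 1
    for t in range(T - 2, -1, -1):         # step 2: recurse T-1, ..., 1
        beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])
    return beta

beta = backward(A, B, pi, O)
# Step 3: P(O|lambda) = sum_i pi_i b_i(o_1) beta_1(i)
print(pi @ (B[:, O[0]] * beta[0]))   # ≈ 0.13022, same as the forward pass
```

With both the `alpha` and `beta` tables in hand, the probabilities derived in this section come out as simple elementwise products.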
Then:

$$P(O|\lambda) = \sum\limits_{j=1}^N \alpha_t(j)\beta_t(j)$$

(We write $P(O|\lambda)$ in this form because it is convenient for the computation.)

So:

$$\gamma_t(i) = \frac{ \alpha_t(i)\beta_t(i)}{\sum\limits_{j=1}^N \alpha_t(j)\beta_t(j)}$$

2. Given the model $\lambda$ and the observation sequence $O$, the probability of being in state $q_i$ at time $t$ and in state $q_j$ at time $t+1$ is written:

$$\xi_t(i,j) = P(i_t = q_i, i_{t+1}=q_j | O,\lambda) = \frac{ P(i_t = q_i, i_{t+1}=q_j , O|\lambda)}{P(O|\lambda)}$$

So how do we compute $P(i_t = q_i, i_{t+1}=q_j , O|\lambda)$?

First let us analyze what this probability means. It is the probability that, under the HMM parameters, the observation sequence is $O$, the state at time $t$ is $q_i$, and the state at time $t+1$ is $q_j$.

This probability $= P(o_1,o_2,...o_t,i_t=q_i|\lambda) \times P(q_i \text{ transitions to } q_j \text{ at time } t+1) \times P(o_{t+2},o_{t+3},...o_T|i_{t+1}=q_j, \lambda) \times P(\text{the output at time } t+1 \text{ is } o_{t+1})$

Explaining each factor:

>$P(o_1,o_2,...o_t,i_t=q_i|\lambda)$: this is just the forward probability $\alpha_t(i)$ at time $t$.\
>$P(q_i \text{ transitions to } q_j \text{ at time } t+1)$: this is just the transition probability $a_{ij}$.\
>$P(o_{t+2},o_{t+3},...o_T|i_{t+1}=q_j, 
\lambda)$: this is just the backward probability $\beta_{t+1}(j)$ at time $t+1$.\
>(Note: the outputs after time $t+1$ are $o_{t+2},o_{t+3},...o_T$, i.e. the subscripts start at $t+2$, not $t+1$; think carefully about the definition of the backward probability.)\
>The observation at time $t+1$, $o_{t+1}$, is still missing from the factors above, so we add $P(\text{the output at time } t+1 \text{ is } o_{t+1})$, which is just $b_j(o_{t+1})$.

Therefore:

$$P(i_t = q_i, i_{t+1}=q_j , O|\lambda) = \alpha_t(i)a_{ij}b_j(o_{t+1})\beta_{t+1}(j)$$

And so:

$$P(O|\lambda)=\sum\limits_{r=1}^N\sum\limits_{s=1}^N\alpha_t(r)a_{rs}b_s(o_{t+1})\beta_{t+1}(s)$$

Which gives:

$$\xi_t(i,j) = \frac{\alpha_t(i)a_{ij}b_j(o_{t+1})\beta_{t+1}(j)}{\sum\limits_{r=1}^N\sum\limits_{s=1}^N\alpha_t(r)a_{rs}b_s(o_{t+1})\beta_{t+1}(s)}$$

3. Summing $\gamma_t(i)$ and $\xi_t(i,j)$ over the times $t$, we obtain:

The expected number of occurrences of state $i$ under observation sequence $O$: $\sum\limits_{t=1}^T\gamma_t(i)$

The expected number of transitions out of state $i$ under observation sequence $O$: $\sum\limits_{t=1}^{T-1}\gamma_t(i)$

The expected number of transitions from state $i$ to state $j$ under observation sequence $O$: $\sum\limits_{t=1}^{T-1}\xi_t(i,j)$


# V. Learning the Model Parameters

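The learning algorithm of this part repeatedly uses the $\gamma_t(i)$ and $\xi_t(i,j)$ just derived. As a check, here is a self-contained NumPy sketch computing both tables on the Section 3 model from the $\alpha$ and $\beta$ tables; the variable names and 0-based indexing are our own conventions.

```python
import numpy as np

A = np.array([[0.5, 0.2, 0.3], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]])
B = np.array([[0.5, 0.5], [0.4, 0.6], [0.7, 0.3]])
pi = np.array([0.2, 0.4, 0.4])
O = [0, 1, 0]
T, N = len(O), A.shape[0]

# Forward and backward tables, as in the earlier sketches.
alpha = np.zeros((T, N)); alpha[0] = pi * B[:, O[0]]
for t in range(T - 1):
    alpha[t + 1] = alpha[t] @ A * B[:, O[t + 1]]
beta = np.ones((T, N))
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])
prob = alpha[-1].sum()                      # P(O|lambda)

# Single-state probabilities gamma_t(i) = alpha_t(i) beta_t(i) / P(O|lambda).
gamma = alpha * beta / prob

# Two-state probabilities xi_t(i,j) for t = 1..T-1:
# xi_t(i,j) ∝ alpha_t(i) a_ij b_j(o_{t+1}) beta_{t+1}(j).
xi = np.zeros((T - 1, N, N))
for t in range(T - 1):
    xi[t] = alpha[t][:, None] * A * (B[:, O[t + 1]] * beta[t + 1])[None, :]
    xi[t] /= xi[t].sum()

print(gamma.sum(axis=1))   # each row sums to 1, as it must
```

Note the consistency check $\sum_j \xi_t(i,j) = \gamma_t(i)$, which is a useful sanity test for any implementation.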
## 1. Overview of HMM Parameter Estimation

Estimating the parameters of an HMM splits into two cases, depending on what is known.

The first case is fairly simple: we know $D$ observation sequences of length $T$ together with their corresponding hidden state sequences, i.e. $\{(O_1,I_1),(O_2,I_2),...(O_D,I_D)\}$ is known. Here we can easily estimate the model parameters by maximum likelihood.

If the sample count of transitions from hidden state $q_i$ to $q_j$ is $A_{ij}$, then the state transition matrix is estimated as:

$$A = \Big[a_{ij}\Big], \;\text{where } a_{ij} = \frac{A_{ij}}{\sum\limits_{s=1}^{N}A_{is}}$$

If the sample count of hidden state $q_j$ emitting observation $v_k$ is $B_{jk}$, then the observation probability matrix is:

$$B= \Big[b_{j}(k)\Big], \;\text{where } b_{j}(k) = \frac{B_{jk}}{\sum\limits_{s=1}^{M}B_{js}}$$

If the count of samples whose initial hidden state is $q_i$ is $C(i)$, then the initial probability distribution is:

$$\Pi = \pi(i) = 
\frac{C(i)}{\sum\limits_{s=1}^{N}C(s)}$$

So in the first case, estimating the model is straightforward. In many situations, however, we cannot obtain the hidden sequences corresponding to the observed sequences; only the $D$ observation sequences of length $T$, i.e. $\{(O_1),(O_2),...(O_D)\}$, are known. Can we still find suitable HMM parameters? This is the second case, and the focus of this article. Its most common solution is the Baum-Welch algorithm, which is really just the EM algorithm applied to HMMs; but the Baum-Welch algorithm predates the abstraction of the EM algorithm, so we keep calling it Baum-Welch here.

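The counting estimates of the first (fully observed) case above are short to implement. Below is a sketch on made-up labeled data; the toy sequences and the 0-based index convention are our own illustration, not from the text.

```python
import numpy as np

# Made-up labeled data: each pair is (observation sequence, hidden state
# sequence), with N = 2 hidden states and M = 2 observation symbols.
samples = [
    ([0, 1, 1], [0, 0, 1]),
    ([1, 1, 0], [1, 1, 0]),
    ([0, 0, 1], [0, 1, 1]),
]
N, M = 2, 2

A_cnt = np.zeros((N, N))   # A_ij: count of transitions q_i -> q_j
B_cnt = np.zeros((N, M))   # B_jk: count of state q_j emitting v_k
C_cnt = np.zeros(N)        # C(i): count of sequences starting in q_i

for O, I in samples:
    C_cnt[I[0]] += 1
    for t in range(len(I) - 1):
        A_cnt[I[t], I[t + 1]] += 1
    for o, i in zip(O, I):
        B_cnt[i, o] += 1

# Normalize each count by its row sum, as in the formulas above.
A_hat = A_cnt / A_cnt.sum(axis=1, keepdims=True)
B_hat = B_cnt / B_cnt.sum(axis=1, keepdims=True)
pi_hat = C_cnt / C_cnt.sum()
print(pi_hat)   # fraction of sequences starting in each state
```

In practice one would smooth the counts (e.g. add-one) so that unseen transitions do not get probability exactly zero.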
## 2. Principles of the Baum-Welch Algorithm

Since the Baum-Welch algorithm uses the principle of the EM (Expectation-Maximization) algorithm, in the E-step we compute the expectation of $logP(O,I|\lambda)$ under the conditional distribution $P(I|O,\overline{\lambda})$, where $\overline{\lambda}$ is the current model parameter; in the M-step we maximize this expectation to obtain an updated model parameter $\lambda$. We then keep iterating the EM steps until the parameter values converge.

First the E-step. With current parameters $\overline{\lambda}$, the expectation of $logP(O,I|\lambda)$ under $P(I|O,\overline{\lambda})$ is:

$$L(\lambda, \overline{\lambda}) = \sum\limits_{I}P(I|O,\overline{\lambda})logP(O,I|\lambda)$$

In the M-step we maximize this expression, obtaining the updated model parameters:

$$\overline{\lambda} = arg\;\max_{\lambda}\sum\limits_{I}P(I|O,\overline{\lambda})logP(O,I|\lambda)$$

We iterate the E-step and M-step until $\overline{\lambda}$ converges. Let us now walk through the derivation of the Baum-Welch algorithm.

## 
3. Derivation of the Baum-Welch Algorithm

Our training data is $\{(O_1, I_1), (O_2, I_2), ...(O_D, I_D)\}$, where any observation sequence is $O_d = \{o_1^{(d)}, o_2^{(d)}, ... o_T^{(d)}\}$ and its corresponding unknown hidden state sequence is $I_d = \{i_1^{(d)}, i_2^{(d)}, ... i_T^{(d)}\}$.

First, the E-step of the Baum-Welch algorithm needs the expression for the joint distribution $P(O,I|\lambda)$:

$$P(O,I|\lambda) = \prod_{d=1}^D \pi_{i_1^{(d)}}\ b_{i_1^{(d)}}\ (o_1^{(d)})\ a_{i_1^{(d)}\ i_2^{(d)}}\ b_{i_2^{(d)}}\ (o_2^{(d)}) ... a_{i_{T-1}^{(d)} \ \ i_T^{(d)}}\ b_{i_T^{(d)}}\ \ (o_T^{(d)})$$

The E-step expectation is:

$$L(\lambda, \overline{\lambda}) = \sum\limits_{I}P(I|O,\overline{\lambda})logP(O,I|\lambda)$$

In the M-step we maximize it. Since $P(I|O,\overline{\lambda}) = \frac{P(I,O|\overline{\lambda})}{P(O|\overline{\lambda})}$ and $P(O|\overline{\lambda})$ is a constant, the quantity to maximize is equivalent to:

$$\overline{\lambda} = arg\;\max_{\lambda}\sum\limits_{I}P(O,I|\overline{\lambda})logP(O,I|\lambda)$$

Substituting the expression for $P(O,I|\lambda)$ above into the maximization, we get:

$$\overline{\lambda} = arg\;\max_{\lambda}\sum\limits_{d=1}^D\sum\limits_{I}P(O,I|\overline{\lambda})(log\pi_{i_1} + \sum\limits_{t=1}^{T-1}log\;a_{i_t,i_{t+1}} + \sum\limits_{t=1}^Tlog 
b_{i_t}(o_t))$$

Our hidden model parameters are $\lambda =(A,B,\Pi)$, so we just need to optimize the expression above with respect to $A$, $B$ and $\Pi$ separately to obtain the updated parameters $\overline{\lambda}$.

First, the update for $\Pi$. Since $\Pi$ appears only in the first term inside the parentheses above, the maximization problem for $\Pi$ is:

$$\overline{\pi_i} = arg\;\max_{\pi_{i_1}} \sum\limits_{d=1}^D\sum\limits_{I}P(O,I|\overline{\lambda})log\pi_{i_1} = arg\;\max_{\pi_{i}} \sum\limits_{d=1}^D\sum\limits_{i=1}^NP(O,i_1^{(d)} =i|\overline{\lambda})log\pi_{i}$$

Since $\pi_i$ also satisfies $\sum\limits_{i=1}^N\pi_i =1$, by the method of Lagrange multipliers the Lagrangian to maximize for $\pi_i$ is:

$$arg\;\max_{\pi_{i}}\sum\limits_{d=1}^D\sum\limits_{i=1}^NP(O,i_1^{(d)} =i|\overline{\lambda})log\pi_{i} + \gamma(\sum\limits_{i=1}^N\pi_i -1)$$

where $\gamma$ is the Lagrange multiplier. Taking the partial derivative with respect to $\pi_i$ and setting it to zero, we get:

$$\sum\limits_{d=1}^DP(O,i_1^{(d)} =i|\overline{\lambda}) + \gamma\pi_i = 0$$

Letting $i$ range from 1 to $N$ gives $N$ such equations; summing them yields:

$$\sum\limits_{d=1}^DP(O|\overline{\lambda}) + \gamma = 
0$$

Eliminating $\gamma$ from the two equations above gives the expression for $\pi_i$:

$$\pi_i =\frac{\sum\limits_{d=1}^DP(O,i_1^{(d)} =i|\overline{\lambda})}{\sum\limits_{d=1}^DP(O|\overline{\lambda})} = \frac{\sum\limits_{d=1}^DP(O,i_1^{(d)} =i|\overline{\lambda})}{DP(O|\overline{\lambda})} = \frac{\sum\limits_{d=1}^DP(i_1^{(d)} =i|O, \overline{\lambda})}{D} = \frac{\sum\limits_{d=1}^DP(i_1^{(d)} =i|O^{(d)}, \overline{\lambda})}{D}$$

By the definition of the single-state probability $\gamma_t(i)$, we have:

$$P(i_1^{(d)} =i|O^{(d)}, \overline{\lambda}) = \gamma_1^{(d)}(i)$$

So finally the M-step update for $\pi_i$ is:

$$\pi_i = \frac{\sum\limits_{d=1}^D\gamma_1^{(d)}(i)}{D}$$

Now for the update of $A$; the method is analogous to that for $\Pi$. $A$ appears only in the second term inside the parentheses of the maximization, and that term can be rearranged as:

$$\sum\limits_{d=1}^D\sum\limits_{I}\sum\limits_{t=1}^{T-1}P(O,I|\overline{\lambda})log\;a_{i_t,i_{t+1}} = \sum\limits_{d=1}^D\sum\limits_{i=1}^N\sum\limits_{j=1}^N\sum\limits_{t=1}^{T-1}P(O,i_t^{(d)} = i, i_{t+1}^{(d)} = j|\overline{\lambda})log\;a_{ij}$$

Since $a_{ij}$ also satisfies $\sum\limits_{j=1}^Na_{ij} =1$, as with $\pi_i$ we can introduce a Lagrange multiplier, differentiate with respect to $a_{ij}$, and set the result to zero, obtaining the update:

$$a_{ij} = 
\frac{\sum\limits_{d=1}^D\sum\limits_{t=1}^{T-1}P(O^{(d)}, i_t^{(d)} = i, i_{t+1}^{(d)} = j|\overline{\lambda})}{\sum\limits_{d=1}^D\sum\limits_{t=1}^{T-1}P(O^{(d)}, i_t^{(d)} = i|\overline{\lambda})}$$

Using the definitions of $\gamma_t(i)$ and $\xi_t(i,j)$, the M-step update for $a_{ij}$ is:

$$a_{ij} = \frac{\sum\limits_{d=1}^D\sum\limits_{t=1}^{T-1}\xi_t^{(d)}(i,j)}{\sum\limits_{d=1}^D\sum\limits_{t=1}^{T-1}\gamma_t^{(d)}(i)}$$

Now for the update of $B$; again the method is analogous. $B$ appears only in the third term inside the parentheses, which can be rearranged as:

$$\sum\limits_{d=1}^D\sum\limits_{I}\sum\limits_{t=1}^{T}P(O,I|\overline{\lambda})log\;b_{i_t}(o_t) = \sum\limits_{d=1}^D\sum\limits_{j=1}^N\sum\limits_{t=1}^{T}P(O,i_t^{(d)} = j|\overline{\lambda})log\;b_{j}(o_t)$$

Since $b_j(o_t)$ also satisfies $\sum\limits_{k=1}^Mb_j(o_t=v_k) =1$, as with $\pi_i$ we can introduce a Lagrange multiplier, differentiate with respect to $b_j(k)$, and set the result to zero, obtaining the update:

$$b_{j}(k) = \frac{\sum\limits_{d=1}^D\sum\limits_{t=1}^{T}P(O,i_t^{(d)} = j|\overline{\lambda})I(o_t^{(d)}=v_k)}{\sum\limits_{d=1}^D\sum\limits_{t=1}^{T}P(O,i_t^{(d)} = 
j|\overline{\lambda})}$$

where $I(o_t^{(d)}=v_k)$ equals 1 if and only if $o_t^{(d)}=v_k$, and 0 otherwise. Using the definition of $\gamma_t(j)$, the final expression for $b_j(k)$ is:

$$b_{j}(k) = \frac{\sum\limits_{d=1}^D\sum\limits_{t=1, o_t^{(d)}=v_k}^{T}\gamma_t^{(d)}(j)}{\sum\limits_{d=1}^D\sum\limits_{t=1}^{T}\gamma_t^{(d)}(j)}$$

With the update formulas for $\pi_i$, $a_{ij}$ and $b_j(k)$, we can solve for the HMM parameters iteratively.

## 4. Summary of the Baum-Welch Algorithm

To recap the flow of the Baum-Welch algorithm:

**Input:** $D$ observation sequence samples $\{(O_1),(O_2),...(O_D)\}$

**Output:** HMM model parameters

1) Randomly initialize all $\pi_i$, $a_{ij}$, $b_j(k)$

2) For each sample $d=1,2,...D$, compute $\gamma_t^{(d)}(i)$ and $\xi_t^{(d)}(i,j)$, $t =1,2...T$, with the forward-backward algorithm

3) Update the model parameters:

$$\pi_i = \frac{\sum\limits_{d=1}^D\gamma_1^{(d)}(i)}{D}$$

$$a_{ij} = \frac{\sum\limits_{d=1}^D\sum\limits_{t=1}^{T-1}\xi_t^{(d)}(i,j)}{\sum\limits_{d=1}^D\sum\limits_{t=1}^{T-1}\gamma_t^{(d)}(i)}$$

$$b_{j}(k) = \frac{\sum\limits_{d=1}^D\sum\limits_{t=1, 
o_t^{(d)}=v_k}^{T}\gamma_t^{(d)}(j)}{\sum\limits_{d=1}^D\sum\limits_{t=1}^{T}\gamma_t^{(d)}(j)}$$

4) If the values of $\pi_i$, $a_{ij}$, $b_j(k)$ have converged, stop; otherwise return to step 2) and keep iterating.

# VI. The Prediction Problem

## 1. Overview of Finding the Most Likely Hidden State Sequence

In the HMM decoding problem, given the model $\lambda =(A,B,\Pi)$ and the observation sequence $O=\{o_1,o_2,...o_T\}$, we seek the state sequence $I^*= \{i_1^*,i_2^*,...i_T^*\}$ most likely to have produced the observations, i.e. the one maximizing $P(I^*|O)$.

One possible approximate solution is to find, for each time $t$, the individually most likely hidden state $i_t^*$ of the observation sequence $O$, and string these together into an approximate hidden state sequence $I^*= \{i_1^*,i_2^*,...i_T^*\}$. This approximation is easy to compute. Recall the definition from the forward-backward evaluation algorithm: given the model $\lambda$ and observation sequence $O$, the probability of being in state $q_i$ at time $t$ is $\gamma_t(i)$, which the forward and backward passes give us. Thus we have:

$$i_t^* = arg \max_{1 \leq i \leq N}[\gamma_t(i)], \; t =1,2,...T$$
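This approximate decoder is a one-liner once $\gamma$ is available. A self-contained NumPy sketch on the box-and-ball model (0-based indices, our own convention):

```python
import numpy as np

# Approximate decoding: pick, at each time t, the state maximizing
# gamma_t(i) individually, ignoring how the states chain together.
A = np.array([[0.5, 0.2, 0.3], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]])
B = np.array([[0.5, 0.5], [0.4, 0.6], [0.7, 0.3]])
pi = np.array([0.2, 0.4, 0.4])
O = [0, 1, 0]
T, N = len(O), A.shape[0]

alpha = np.zeros((T, N)); alpha[0] = pi * B[:, O[0]]
for t in range(T - 1):
    alpha[t + 1] = alpha[t] @ A * B[:, O[t + 1]]
beta = np.ones((T, N))
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])

gamma = alpha * beta / alpha[-1].sum()
print(gamma.argmax(axis=1))   # individually most likely state at each t
```

On this model the per-time argmax need not agree with the jointly most likely path found by the Viterbi algorithm of the next section, which illustrates the limitation discussed below.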
The approximate algorithm is simple, but it cannot guarantee that the predicted state sequence is the most likely sequence as a whole, because some adjacent hidden states in the predicted sequence may have transition probability 0 (that is, this way of solving ignores the transition probabilities).

The Viterbi algorithm, by contrast, treats the HMM state sequence as a whole, avoiding the approximate algorithm's problem (it takes the transition probabilities into account). Let us look at how the Viterbi algorithm decodes an HMM.

## 2. Overview of the Viterbi Algorithm

The Viterbi algorithm is a general-purpose decoding algorithm: a dynamic-programming method for finding the shortest path through a sequence.

Being a dynamic-programming algorithm, it needs suitable local states and a recursion for them. In an HMM, the Viterbi algorithm defines two local states for the recursion.

The first local state is the maximum probability among all state transition paths $i_1,i_2,...i_t$ that end in hidden state $i$ at time $t$, written $\delta_t(i)$:

$$\delta_t(i) = \max_{i_1,i_2,...i_{t-1}}\;P(i_t=i, 
i_1,i_2,...i_{t-1},o_t,o_{t-1},...o_1|\lambda),\; i =1,2,...N$$

From the definition of $\delta_t(i)$ we obtain the recursion for $\delta$:

$$
\begin{align} 
\delta_{t+1}(i) & = \max_{i_1,i_2,...i_{t}}\;P(i_{t+1}=i, i_1,i_2,...i_{t},o_{t+1},o_{t},...o_1|\lambda) \tag{1}\\ 
& = \max_{1 \leq j \leq N}\;[\delta_t(j)a_{ji}]b_i(o_{t+1}) \tag{2}
\end{align}
$$

The second local state is obtained by recursion from the first. We define $\Psi_t(i)$ as the hidden state of the $(t-1)$-th node on the most probable of all single-step transition paths $(i_1,i_2,...,i_{t-1},i)$ ending in hidden state $i$ at time $t$. Its recursion is:

$$\Psi_t(i) = arg \; \max_{1 \leq j \leq N}\;[\delta_{t-1}(j)a_{ji}]$$

With these two local states we can recurse from time 1 all the way to time $T$, and then backtrack using the previous most-likely state nodes recorded in $\Psi_t(i)$ until we find the optimal hidden state sequence.

## 3. Summary of the Viterbi Algorithm

To summarize the flow of the Viterbi algorithm:

**Input:** HMM model $\lambda =(A,B,\Pi)$, observation sequence $O=(o_1,o_2,...o_T)$

**Output:** the most likely hidden state sequence $I^*= 
\\{i_1^*,i_2^*,...i_T^*\\}$\n\n1\uff09\u521d\u59cb\u5316\u5c40\u90e8\u72b6\u6001\u3010\u65f6\u523b1\u7684\u5404\u4e2a\u9690\u85cf\u72b6\u6001\u524d\u5411\u6982\u7387\u548c\u4e0a\u4e00\u4e2a\u65f6\u523b\u7684\u6240\u6709\u6700\u53ef\u80fd\u7684\u9690\u72b6\u6001\u3011\uff1a\n\n$$\\delta_1(i) = \\pi_ib_i(o_1),\\;i=1,2...N$$\n\n$$\\Psi_1(i)=0,\\;i=1,2...N$$\n\n2)\u8fdb\u884c\u52a8\u6001\u89c4\u5212\u9012\u63a8\u65f6\u523b\ud835\udc61=2,3,...\ud835\udc47\u65f6\u523b\u7684\u5c40\u90e8\u72b6\u6001\uff1a\n\n$$\\delta_{t}(i) = \\max_{1 \\leq j \\leq N}\\;[\\delta_{t-1}(j)a_{ji}]b_i(0_{t}),\\;i=1,2...N$$\n\n$$\\Psi_t(i) = arg \\; \\max_{1 \\leq j \\leq N}\\;[\\delta_{t-1}(j)a_{ji}],\\;i=1,2...N$$\n\n3)\u8ba1\u7b97\u65f6\u523b\ud835\udc47\u6700\u5927\u7684$\ud835\udeff_\ud835\udc47(\ud835\udc56)$,\u5373\u4e3a\u6700\u53ef\u80fd\u9690\u85cf\u72b6\u6001\u5e8f\u5217\u51fa\u73b0\u7684\u6982\u7387\u3002\u8ba1\u7b97\u65f6\u523b\ud835\udc47\u6700\u5927\u7684$\u03a8_\ud835\udc61(\ud835\udc56)$,\u5373\u4e3a\u65f6\u523b\ud835\udc47\u6700\u53ef\u80fd\u7684\u9690\u85cf\u72b6\u6001\u3002\n\n$$P* = \\max_{1 \\leq j \\leq N}\\delta_{T}(i)$$\n\n$$i_T^* = arg \\; \\max_{1 \\leq j \\leq N}\\;[\\delta_{T}(i)]$$\n\n4)\u5229\u7528\u5c40\u90e8\u72b6\u6001\u03a8(\ud835\udc56)\u5f00\u59cb\u56de\u6eaf\u3002\u5bf9\u4e8e\ud835\udc61=\ud835\udc47\u22121,\ud835\udc47\u22122,...,1\uff1a\n\n$$i_t^* = \\Psi_{t+1}(i_{t+1}^*)$$\n\n\u6700\u7ec8\u5f97\u5230\u6700\u6709\u53ef\u80fd\u7684\u9690\u85cf\u72b6\u6001\u5e8f\u5217$I^*= \\{i_1^*,i_2^*,...i_T^*\\}$\n\n## 
4\u3001HMM\u7ef4\u7279\u6bd4\u7b97\u6cd5\u6c42\u89e3\u5b9e\u4f8b\n\n\u6211\u4eec\u7684\u89c2\u5bdf\u96c6\u5408\u662f:\n\n$$\ud835\udc49={\u7ea2\uff0c\u767d}\uff0c\ud835\udc40=2$$\n\n\u6211\u4eec\u7684\u72b6\u6001\u96c6\u5408\u662f\uff1a\n\n$$\ud835\udc44=\\{\u76d2\u5b501\uff0c\u76d2\u5b502\uff0c\u76d2\u5b503\\}\uff0c\ud835\udc41=3$$\n\n\u800c\u89c2\u5bdf\u5e8f\u5217\u548c\u72b6\u6001\u5e8f\u5217\u7684\u957f\u5ea6\u4e3a3.\n\n\u521d\u59cb\u72b6\u6001\u5206\u5e03\u4e3a\uff1a\n\n$$\\Pi = (0.2,0.4,0.4)^T$$\n\n\u72b6\u6001\u8f6c\u79fb\u6982\u7387\u5206\u5e03\u77e9\u9635\u4e3a\uff1a\n\n$$A = \\left( \\begin{array} {ccc} 0.5 & 0.2 & 0.3 \\\\ 0.3 & 0.5 & 0.2 \\\\ 0.2 & 0.3 &0.5 \\end{array} \\right)$$\n\n\u89c2\u6d4b\u72b6\u6001\u6982\u7387\u77e9\u9635\u4e3a\uff1a\n\n$$B = \\left( \\begin{array} {ccc} 0.5 & 0.5 \\\\ 0.4 & 0.6 \\\\ 0.7 & 0.3 \\end{array} \\right)$$\n\n\u7403\u7684\u989c\u8272\u7684\u89c2\u6d4b\u5e8f\u5217:\n\n$$O=\\{\u7ea2\uff0c\u767d\uff0c\u7ea2\\}$$\n\n\u9996\u5148\u8ba1\u7b97\u65f6\u523b1\u4e09\u4e2a\u72b6\u6001\u7684\u524d\u5411\u6982\u7387\uff1a\n\n\n\n\u9996\u5148\u9700\u8981\u5f97\u5230\u4e09\u4e2a\u9690\u85cf\u72b6\u6001\u5728\u65f6\u523b1\u65f6\u5bf9\u5e94\u7684\u5404\u81ea\u4e24\u4e2a\u5c40\u90e8\u72b6\u6001\uff0c\u6b64\u65f6\u89c2\u6d4b\u72b6\u6001\u4e3a1\uff1a\n\n$$\ud835\udeff_1(1)=\ud835\udf0b_1\ud835\udc4f_1(\ud835\udc5c_1)=0.2\u00d70.5=0.1$$\n$$\ud835\udeff_1(2)=\ud835\udf0b_2\ud835\udc4f_2(\ud835\udc5c_1)=0.4\u00d70.4=0.16$$\n$$\ud835\udeff_1(3)=\ud835\udf0b_3\ud835\udc4f_3(\ud835\udc5c_1)=0.4\u00d70.7=0.28$$\n$$\u03a8_1(1)=\u03a8_1(2)=\u03a8_1(3)=0$$\n\n\u73b0\u5728\u5f00\u59cb\u9012\u63a8\u4e09\u4e2a\u9690\u85cf\u72b6\u6001\u5728\u65f6\u523b2\u65f6\u5bf9\u5e94\u7684\u5404\u81ea\u4e24\u4e2a\u5c40\u90e8\u72b6\u6001\uff0c\u6b64\u65f6\u89c2\u6d4b\u72b6\u6001\u4e3a2\uff1a\n\n$$\\delta_2(1) = \\max_{1\\leq j \\leq 3}[\\delta_1(j)a_{j1}]b_1(o_2) = \\max_{1\\leq j \\leq 3}[0.1 \\times 0.5, 0.16 \\times 0.3, 0.28\\times 0.2] \\times 0.5 = 
0.028$$\n\n$$\\Psi_2(1)=3$$\n\n$$\\delta_2(2) = \\max_{1\\leq j \\leq 3}[\\delta_1(j)a_{j2}]b_2(o_2) = \\max_{1\\leq j \\leq 3}[0.1 \\times 0.2, 0.16 \\times 0.5, 0.28\\times 0.3] \\times 0.6 = 0.0504$$\n\n$$\\Psi_2(2)=2$$\n\n$$\\delta_2(3) = \\max_{1\\leq j \\leq 3}[\\delta_1(j)a_{j3}]b_3(o_2) = \\max_{1\\leq j \\leq 3}[0.1 \\times 0.3, 0.16 \\times 0.2, 0.28\\times 0.5] \\times 0.3 = 0.042$$\n\n$$\\Psi_2(3)=3$$\n\n\u7ee7\u7eed\u9012\u63a8\u4e09\u4e2a\u9690\u85cf\u72b6\u6001\u5728\u65f6\u523b3\u65f6\u5bf9\u5e94\u7684\u5404\u81ea\u4e24\u4e2a\u5c40\u90e8\u72b6\u6001\uff0c\u6b64\u65f6\u89c2\u6d4b\u72b6\u6001\u4e3a1\uff1a\n\n$$\\delta_3(1) = \\max_{1\\leq j \\leq 3}[\\delta_2(j)a_{j1}]b_1(o_3) = \\max_{1\\leq j \\leq 3}[0.028 \\times 0.5, 0.0504 \\times 0.3, 0.042\\times 0.2] \\times 0.5 = 0.00756$$\n\n$$\\Psi_3(1)=2$$\n\n$$\\delta_3(2) = \\max_{1\\leq j \\leq 3}[\\delta_2(j)a_{j2}]b_2(o_3) = \\max_{1\\leq j \\leq 3}[0.028 \\times 0.2, 0.0504\\times 0.5, 0.042\\times 0.3] \\times 0.4 = 0.01008$$\n\n$$\\Psi_3(2)=2$$\n\n$$\\delta_3(3) = \\max_{1\\leq j \\leq 3}[\\delta_2(j)a_{j3}]b_3(o_3) = \\max_{1\\leq j \\leq 3}[0.028 \\times 0.3, 0.0504 \\times 0.2, 0.042\\times 0.5] \\times 0.7 = 0.0147$$\n\n$$\\Psi_3(3)=3$$\n\n\u6b64\u65f6\u5df2\u7ecf\u5230\u6700\u540e\u7684\u65f6\u523b\uff0c\u6211\u4eec\u5f00\u59cb\u51c6\u5907\u56de\u6eaf\u3002\u6b64\u65f6\u6700\u5927\u6982\u7387\u4e3a$\ud835\udeff_3(3)$,\u4ece\u800c\u5f97\u5230$\ud835\udc56_3^*=3$\n\n\u7531\u4e8e$\u03a8_3(3)=3$,\u6240\u4ee5$\ud835\udc56_2^*=3$, \u800c\u53c8\u7531\u4e8e$\u03a8_2(3)=3$,\u6240\u4ee5$\ud835\udc56_1^*=3$\u3002\u4ece\u800c\u5f97\u5230\u6700\u7ec8\u7684\u6700\u53ef\u80fd\u7684\u9690\u85cf\u72b6\u6001\u5e8f\u5217\u4e3a\uff1a$(3,3,3)$\n\n## 
5\u3001HMM\u6a21\u578b\u7ef4\u7279\u6bd4\u7b97\u6cd5\u603b\u7ed3\n\n\u7ef4\u7279\u6bd4\u7b97\u6cd5\u4e5f\u662f\u5bfb\u627e\u5e8f\u5217\u6700\u77ed\u8def\u5f84\u7684\u4e00\u4e2a\u901a\u7528\u65b9\u6cd5\uff0c\u548cdijkstra\u7b97\u6cd5\u6709\u4e9b\u7c7b\u4f3c\uff0c\u4f46\u662fdijkstra\u7b97\u6cd5\u5e76\u6ca1\u6709\u4f7f\u7528\u52a8\u6001\u89c4\u5212\uff0c\u800c\u662f\u8d2a\u5fc3\u7b97\u6cd5\u3002\u540c\u65f6\u7ef4\u7279\u6bd4\u7b97\u6cd5\u4ec5\u4ec5\u5c40\u9650\u4e8e\u6c42\u5e8f\u5217\u6700\u77ed\u8def\u5f84\uff0c\u800cdijkstra\u7b97\u6cd5\u662f\u901a\u7528\u7684\u6c42\u6700\u77ed\u8def\u5f84\u7684\u65b9\u6cd5\u3002\n\n\n```python\n\n```\n", "meta": {"hexsha": "f73123c0c11222990fefa056646b367eab7c2443", "size": 30991, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "algorithm/hmm/hmm.ipynb", "max_stars_repo_name": "v-smwang/AI-NLP-Tutorial", "max_stars_repo_head_hexsha": "3dbfdc7e19a025e00febab97f4948da8a3710f34", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "algorithm/hmm/hmm.ipynb", "max_issues_repo_name": "v-smwang/AI-NLP-Tutorial", "max_issues_repo_head_hexsha": "3dbfdc7e19a025e00febab97f4948da8a3710f34", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "algorithm/hmm/hmm.ipynb", "max_forks_repo_name": "v-smwang/AI-NLP-Tutorial", "max_forks_repo_head_hexsha": "3dbfdc7e19a025e00febab97f4948da8a3710f34", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.793902439, "max_line_length": 389, "alphanum_fraction": 0.521603046, "converted": true, "num_tokens": 16450, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.41869692386284973, "lm_q2_score": 0.27202455699569283, "lm_q1q2_score": 0.11389584522925103}} {"text": "\n\n
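The recursion and backtracking worked through above can be checked with a short, self-contained Python sketch of the Viterbi algorithm; the matrices below are the box-and-ball model from this section (this is a compact illustration, not a production library):

```python
import numpy as np

# Model from the worked example: A (transitions), B (emissions), pi (initial)
A = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
B = np.array([[0.5, 0.5],
              [0.4, 0.6],
              [0.7, 0.3]])
pi = np.array([0.2, 0.4, 0.4])
obs = [0, 1, 0]  # red, white, red

def viterbi(pi, A, B, obs):
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))           # delta[t, i]: best path prob ending in state i
    psi = np.zeros((T, N), dtype=int)  # psi[t, i]: best predecessor of state i at time t
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        trans = delta[t - 1][:, None] * A   # trans[j, i] = delta_{t-1}(j) * a_{ji}
        psi[t] = trans.argmax(axis=0)
        delta[t] = trans.max(axis=0) * B[:, obs[t]]
    # Backtrack from the most likely final state
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.insert(0, int(psi[t, path[0]]))
    return delta[-1].max(), [s + 1 for s in path]  # report states 1-indexed

p_star, path = viterbi(pi, A, B, obs)
```

Running this reproduces the hand computation: `p_star` is 0.0147 and `path` is `[3, 3, 3]`.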
# GEOS 639 Geodetic Imaging

## Lab 1: DEM Generation with InSAR using ISCE -- [20 Points]

**Assignment Due Date:** February 10, 2022

**Authors:** Heresh Fattahi, with modifications by Franz J Meyer

**Date:** Jan 25, 2022

**THIS NOTEBOOK INCLUDES TWO HOMEWORK ASSIGNMENTS.**

The homework assignments in this lab are indicated by markdown fields with red background. Please complete these assignments to achieve full score.

To submit your homework, please download your completed Jupyter Notebook from the server both as PDF (*.pdf) and Notebook file (*.ipynb) and submit them as a ZIP bundle via the GEOS 639 Canvas page. To download, please select the following options in the main menu of the notebook interface:

1. Save your notebook with all of its content by selecting File / Save and Checkpoint.
2. To export in Notebook format, click the radio button next to the notebook file in the main Jupyter Hub browser tab. Once clicked, a download field will appear near the top of the page.
3. To export in PDF format, right-click on your browser window and print the browser content to PDF.

Contact me at fjmeyer@alaska.edu should you run into any problems.
# Set Conda Environment

```javascript
%%javascript
var kernel = Jupyter.notebook.kernel;
var command = ["notebookUrl = ",
               "'", window.location, "'" ].join('')
kernel.execute(command)
```

```python
from IPython.display import Markdown
from IPython.display import display

user = !echo $JUPYTERHUB_USER
env = !echo $CONDA_PREFIX
if env[0] == '':
    env[0] = 'Python 3 (base)'
if env[0] != '/home/jovyan/.local/envs/unavco':
    display(Markdown(f'WARNING:'))
    display(Markdown(f'This notebook should be run using the "unavco" conda environment.'))
    display(Markdown(f'It is currently using the "{env[0].split("/")[-1]}" environment.'))
    display(Markdown(f'Select the "unavco" from the "Change Kernel" submenu of the "Kernel" menu.'))
    display(Markdown(f'If the "unavco" environment is not present, use Create_OSL_Conda_Environments.ipynb to create it.'))
    display(Markdown(f'Note that you must restart your server after creating a new environment before it is usable by notebooks.'))
```

# Intro 1: DEM Generation using InSAR

InSAR is a great approach to generate digital elevation models (DEMs), especially in areas where frequent cloud cover limits the applicability of stereo-photogrammetry, which relies on optical sensors.

In InSAR, we analyze the phase difference $\phi$ between two images acquired from slightly different vantage points. The phase differencing is needed as the phase in a single SAR image is randomized by speckle. Once the phase difference (interferometric phase) has been calculated, topographic information can be extracted.

InSAR Workflow: To generate DEMs using InSAR, we will follow the InSAR processing steps we discussed in lecture #3:
1. Image co-registration (this needs to be done very precisely to make sure that the phase noise patterns (see image) in the two InSAR partners are perfectly aligned)
2. Interferogram formation (i.e., phase difference calculation)
3. Removal of all known phase patterns $\rightarrow$ this will make phase unwrapping simpler (we will subtract the flat earth phase and an already known DEM)
4. Phase filtering - this is a spatial smoothing process to reduce phase noise $\rightarrow$ improves phase unwrapping performance
5. Phase unwrapping (using a Minimum-Cost-Flow Algorithm)
6. Geocoding and phase-to-height conversion
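Step 2 of this workflow, interferogram formation, is just a pixel-wise complex cross product of the co-registered SLCs. A minimal synthetic sketch (random speckle phase plus an assumed topographic fringe ramp, not real SAR data) shows how the differencing cancels the speckle phase that randomizes each individual SLC:

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 200, 200

# Assumed topographic fringe ramp: three 2*pi cycles down the image
topo_phase = np.linspace(0, 6 * np.pi, rows)[:, None] * np.ones((1, cols))

# SLC 1: unit-amplitude pixels with random (speckle) phase
slc1 = np.exp(1j * rng.uniform(-np.pi, np.pi, (rows, cols)))
# SLC 2: same speckle phase, shifted by the topographic phase plus small noise
noise = rng.uniform(-0.3, 0.3, (rows, cols))
slc2 = slc1 * np.exp(-1j * (topo_phase + noise))

# Interferogram: the complex cross product cancels the common speckle phase
ifg = slc1 * np.conj(slc2)
wrapped = np.angle(ifg)   # wrapped interferometric phase in (-pi, pi]
```

In the real processing chain, the flat-earth and reference-DEM phases are subtracted from the wrapped phase before filtering and unwrapping (steps 3-5 above).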
InSAR Performance Considerations: As was discussed in the lectures, DEM generation from InSAR data provides the highest quality information if the following conditions are met:
1. The InSAR pair has a sufficiently large spatial baseline.
2. The temporal separation of image partners is small enough to warrant sufficient coherence.
3. The temporal baseline is small enough to reduce potential impacts from surface deformation on the interferometric phase.
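The first condition can be quantified via the height of ambiguity, i.e., the topographic height difference that produces one full $2\pi$ fringe: larger perpendicular baselines give smaller height ambiguities and therefore more sensitivity to topography. A back-of-the-envelope calculation with assumed, typical ALOS PALSAR numbers (illustrative values, not parameters read from the pair processed below):

```python
import numpy as np

# Assumed, typical ALOS PALSAR geometry -- illustrative values only
wavelength = 0.236        # L-band radar wavelength [m]
slant_range = 870e3       # slant range to the scene [m]
incidence = np.deg2rad(38.7)
b_perp = 650.0            # perpendicular baseline [m]

# Height of ambiguity: height change corresponding to one 2*pi fringe
h_amb = wavelength * slant_range * np.sin(incidence) / (2.0 * b_perp)
print(f"height of ambiguity: {h_amb:.1f} m")
```

With these numbers one fringe corresponds to roughly 100 m of topography; with a short baseline of, say, 50 m the same pair would have a height ambiguity above a kilometer and be nearly blind to topography, which is why condition 1 matters for DEM generation.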
Outside of the TanDEM-X sensor constellation discussed in the lecture, not many satellite platforms meet these conditions well. Here we process a pair of ALOS PALSAR images acquired over Okmok volcano with the goal of DEM mapping.

The ALOS PALSAR mission is another good option as it created InSAR pairs with long spatial baselines. The L-band wavelength also ensures that the InSAR coherence is comparatively high.

That being said, a downside of ALOS PALSAR is that the temporal baselines of available InSAR pairs are often rather long. So there may be deformation information hiding in the InSAR phase.

# Intro 2: About Stripmap Data

In the conventional stripmap Synthetic Aperture Radar (SAR) imaging mode, the radar antenna is fixed to a specific direction, illuminating a single swath of the scene with a fixed squint angle (i.e., the angle between the radar beam and the cross-track direction). The imaging swath width can be increased using scanning SAR (ScanSAR) or Terrain Observation by Progressive Scans (TOPS). In this notebook we focus on interferometric processing of stripmap data using stripmapApp.py.

The stripmap mode has been used by several SAR missions, such as Envisat, ERS, Radarsat-1, Radarsat-2, ALOS-1, COSMO-SkyMed and TerraSAR-X. Although Sentinel-1 A/B and ALOS-2 are capable of acquiring SAR data in stripmap mode, their operational imaging modes are TOPS and ScanSAR respectively. Both missions have been acquiring stripmap data over certain regions.

For processing TOPS data using topsApp, please see the topsApp notebook. However, we recommend that new InSAR users start with the stripmapApp notebook first, and then try the topsApp notebook.

The detailed algorithms for stripmap processing and TOPS processing implemented in the ISCE software can be found in the following publications:

### stripmapApp:

H. Fattahi, M. Simons, and P. Agram, "InSAR Time-Series Estimation of the Ionospheric Phase Delay: An Extension of the Split Range-Spectrum Technique," IEEE Trans. Geosci. Remote Sens., vol. 55, no. 10, pp. 5984-5996, 2017. (https://ieeexplore.ieee.org/abstract/document/7987747/)

### topsApp:

H. Fattahi, P. Agram, and M. Simons, "A network-based enhanced spectral diversity approach for TOPS time-series analysis," IEEE Trans. Geosci. Remote Sens., vol. 55, no. 2, pp. 777-786, Feb. 2017. (https://ieeexplore.ieee.org/abstract/document/7637021/)

### ISCE framework:

Rosen et al, IGARSS 2018 [Complete reference here]

(Figure from Fattahi et al., 2017)

# stripmapApp, a General Overview

stripmapApp.py is an ISCE application designed for interferometric processing of SAR data acquired in stripmap mode onboard platforms with precise orbits. The main features of stripmapApp include the following:

#### a) Focusing RAW data

If processing starts from RAW data, JPL's ROI software is used for focusing the raw data to SLC (Single Look Complex) SAR images. If data are provided in an already focused SLC format, this step will be skipped.

#### b) Interferometric processing of SLCs

Interferograms will be formed from the focused SLCs using the following steps.

#### c) Coregistration using SAR acquisition geometry (Orbit + DEM)

The geometry module of ISCE is used for coregistration of SAR images, i.e., range and azimuth offsets are computed for each pixel using the SAR acquisition geometry, orbit information and an existing Digital Elevation Model (DEM). The geometrical offsets are refined with a small constant shift in the range and azimuth directions. The constant shifts are estimated using incoherent cross-correlation of the two SAR images already coregistered using pure geometrical information.

#### d) Optional, more precise coregistration

An optional step called "rubbersheeting" is available for more precise coregistration.
If \"rubbersheeting\" is requested, a dense azimuth offsets is computed using incoherent cross-correlation between the two SAR images, and is added to the geometrical offsets for more precise coregistration. Rubbersheeting may be required if SAR images are affected by ionospheric scintillation. \n\n#### e) Ionospheric phase estimation\nSplit Range-Spectrum technique and ionospheric phase estimation are available as optional processing steps.\n\n\n\n# Prepare directories, download raw data\n\n## Importing Python Libraries and Setting up Environment Variables \n\n\n```python\n# If you want to use the staged data on S3 bucket, change this flag to True. \n# when the falg is False, the notebook downloads the ALOS raw data from ASF and \n# ISCE automatically downloads an SRTM DEM for processing\nUse_Staged_Data = True\nimport os\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom osgeo import gdal \nimport shutil\nfrom tqdm import tqdm\nimport urllib.request\n\nimport isce\nimport isceobj.StripmapProc.StripmapProc as St\nfrom isceobj.Planet.Planet import Planet\n\n\n#\nASF_USER = \" \"\nASF_PASS = \" \"\n\n# the working directory:\nhome_dir = os.path.join(os.getenv(\"HOME\"), \"work\")\nPROCESS_DIR = os.path.join(home_dir, \"Okmok\")\nDATA_DIR = os.path.join(PROCESS_DIR, \"data\")\n\n\n# if the ASF user/pass is not provided above, try to read it from ~/.netrc file\nif (len(ASF_PASS)==0 | len(ASF_USER)==0) & (os.path.exists(os.path.join(os.getenv(\"HOME\"), \".netrc\"))):\n netrc_path = os.path.join(os.getenv(\"HOME\"), \".netrc\")\n print('Hello')\n count = len(open(netrc_path).readlines( ))\n if count == 1:\n file = open(os.path.join(os.getenv(\"HOME\"), \".netrc\"), \"r\")\n contents = file.read().split(\" \")\n ASF_USER = contents[3]\n ASF_PASS = contents[5]\n file.close()\n else:\n ASF_USER = np.loadtxt(os.path.join(os.getenv(\"HOME\"), \".netrc\"), skiprows=1, usecols=1, dtype=str)[0]\n ASF_PASS = np.loadtxt(os.path.join(os.getenv(\"HOME\"), \".netrc\"), 
skiprows=1, usecols=1, dtype=str)[1]\n\nif (len(ASF_PASS)==0 | len(ASF_USER)==0) | (not os.path.exists(os.path.join(os.getenv(\"HOME\"), \".netrc\"))) :\n print(\"WARNING: The ASF USER pass needs to be included in ~/.netrc file.\")\n print(\" The ~/.netrc file does not exixt or is not setup properly.\")\n print(\"Follow this link for instructions on setting up your ~/.netrc file\")\n print(\"\"\"If you wish to download the data from ASF please make sure: \n 1) you have valid earthdata login and password stored in your ~/.netrc file\n 2) make sure that you have logged into ASF vertex page and have accepted the EULA agreement.\"\"\")\n print(\"Without further actions you will still be able to run this notebook using already available data on S3 bucket.\")\n print(\"Using data on S3 bucket ...\")\n Use_Staged_Data = True\n\nif Use_Staged_Data:\n print(\"Using the staged data for this notebook has been turned on.\")\n```\n\n\n```python\ndef configure_inputs(outDir, Use_Staged_Data): \n\n \"\"\"Wraite Configuration files for ISCE2 stripmapApp to process NISAR sample products\"\"\"\n cmd_reference_config = '''\n \n [data/20080822/ALPSRP137311060-L1.0/IMG-HH-ALPSRP137311060-H1.0__A]\n \n \n [data/20080822/ALPSRP137311060-L1.0/LED-ALPSRP137311060-H1.0__A]\n \n \n 20080822\n \n'''\n\n print(\"writing reference.xml\")\n with open(os.path.join(outDir,\"reference.xml\"), \"w\") as fid:\n fid.write(cmd_reference_config)\n \n cmd_secondary_config = '''\n \n [data/20081007/ALPSRP144021060-L1.0/IMG-HH-ALPSRP144021060-H1.0__A]\n \n \n [data/20081007/ALPSRP144021060-L1.0/LED-ALPSRP144021060-H1.0__A]\n \n \n 20081007\n \n\n'''\n \n print(\"writing secondary.xml\")\n with open(os.path.join(outDir,\"secondary.xml\"), \"w\") as fid:\n fid.write(cmd_secondary_config)\n\n if Use_Staged_Data:\n cmd_stripmap_config = '''\n\n \n ALOS\n \n reference.xml\n \n \n secondary.xml\n \n\n \n demLat_N52_N55_Lon_W169_W167.dem.wgs84\n \n\n icu\n\n False\n\n False\n\n\n'''\n else:\n \n 
cmd_stripmap_config = '''\n\n \n ALOS\n \n reference.xml\n \n \n secondary.xml\n \n\n \n\n icu\n\n False\n\n False\n\n\n'''\n\n print(\"writing stripmapApp.xml\")\n with open(os.path.join(outDir,\"stripmapApp.xml\"), \"w\") as fid:\n fid.write(cmd_stripmap_config)\n```\n\nCheck if the PROCESS_DIR and DATA_DIR already exist. If they don't exist, we create them:\n\n\n```python\nif not os.path.exists(PROCESS_DIR):\n print(\"create \", PROCESS_DIR)\n os.makedirs(PROCESS_DIR)\nelse:\n print(PROCESS_DIR, \" already exists!\")\n\nif not os.path.exists(DATA_DIR):\n print(\"create \", DATA_DIR)\n os.makedirs(DATA_DIR)\nelse:\n print(DATA_DIR, \" already exists!\")\n\n\nos.chdir(DATA_DIR)\n```\n\n## Area of Interest for this Lab\n\nIn this tutorial we will process two ALOS1 PALSAR acquistions over Umnak Island in the Aleutians, Alaska. The two acquisitions cover Okmok Volcano right after its eruption in June 2008.\n\n\n## Downloading and Unzipping of SAR RAW Data\n\nDownload two ALOS-1 acquistions from ASF using the following command:\n\n\n\n```python\nif Use_Staged_Data:\n # Check if a stage file from S3 already exist, if not try and download it\n if not os.path.isfile('ALPSRP137311060-L1.0.zip'):\n !aws --region=us-east-1 --no-sign-request s3 cp s3://asf-jupyter-data/ALPSRP137311060-L1.0.zip ALPSRP137311060-L1.0.zip\n\n if not os.path.isfile('ALPSRP144021060-L1.0.zip'):\n !aws --region=us-east-1 --no-sign-request s3 cp s3://asf-jupyter-data/ALPSRP144021060-L1.0.zip ALPSRP144021060-L1.0.zip\n\nelse:\n print(\"Will not be using S3 pre-staged data, Data will be downloaded from ASF\")\n cmd = \"wget https://datapool.asf.alaska.edu/L1.0/A3/ALPSRP137311060-L1.0.zip --user={0} --password={1}\".format(ASF_USER, ASF_PASS)\n if not os.path.exists(os.path.join(DATA_DIR, \"ALPSRP137311060-L1.0.zip\")):\n os.system(cmd)\n else:\n print(\"ALPSRP137311060-L1.0.zip already exists\")\n \n cmd = \"wget https://datapool.asf.alaska.edu/L1.0/A3/ALPSRP144021060-L1.0.zip --user={0} 
--password={1}\".format(ASF_USER, ASF_PASS)\n if not os.path.exists(os.path.join(DATA_DIR, \"ALPSRP144021060-L1.0.zip\")):\n os.system(cmd)\n else:\n print(\"ALPSRP144021060-L1.0.zip already exists\")\n```\n\nunzip the downloaded files\n\n\n```python\nif not os.path.exists(os.path.join(DATA_DIR, \"ALPSRP137311060-L1.0\")):\n !unzip ALPSRP137311060-L1.0.zip\n \nif not os.path.exists(os.path.join(DATA_DIR, \"ALPSRP144021060-L1.0\")):\n !unzip ALPSRP144021060-L1.0.zip\n```\n\n looking at the unzipped directories there are multiple files:\n\n\n```python\nls ALPSRP137311060-L1.0\n```\n\nWhen you download PALSAR data from a data provider, each frame comprises an image data file and an image leader file, as well as possibly some other ancillary files that are not used by ISCE. \n\nFiles with IMG as prefix are images. \nFiles with LED as prefix are leaders. \n\nThe leader file contains parameters of the sensor that are relevant to the imaging mode, all the information necessary to process the data. The data file contains the raw data samples if Level 1.0 raw data (this is just a different name from what other satellites call Level 0) and processed imagery if Level 1.1 or 1.5 image data. 
The naming convention for these files is standardized across data archives, and has the following taxonomy:

To see the acquisition date of this PALSAR acquisition we can look at the following file:

```python
!cat ALPSRP137311060-L1.0/ALPSRP137311060.l0.workreport
```

```python
!grep Img_SceneCenterDateTime ALPSRP137311060-L1.0/ALPSRP137311060.l0.workreport
!grep Img_SceneCenterDateTime ALPSRP144021060-L1.0/ALPSRP144021060.l0.workreport
```

For clarity, let's create two directories for the two acquisition dates and move the unzipped folders there:

```python
if not os.path.exists('/home/jovyan/work/Okmok/data/20080822'):
    os.mkdir('/home/jovyan/work/Okmok/data/20080822')
if not os.path.exists('/home/jovyan/work/Okmok/data/20081007'):
    os.mkdir('/home/jovyan/work/Okmok/data/20081007')

if not os.path.exists('/home/jovyan/work/Okmok/data/20080822/ALPSRP137311060-L1.0'):
    shutil.move('/home/jovyan/work/Okmok/data/ALPSRP137311060-L1.0',
                '/home/jovyan/work/Okmok/data/20080822')
if not os.path.exists('/home/jovyan/work/Okmok/data/20081007/ALPSRP144021060-L1.0'):
    shutil.move('/home/jovyan/work/Okmok/data/ALPSRP144021060-L1.0',
                '/home/jovyan/work/Okmok/data/20081007')
```

Now that we have the data ready, let's cd to the main PROCESS directory:

```python
os.chdir(PROCESS_DIR)
```

To make sure where we are, run pwd:

```python
!pwd
```

# Setting up Input xml Files for Processing with stripmapApp

Create the input configuration files (reference.xml, secondary.xml, stripmapApp.xml) to configure the inputs and the processing parameters.
The configuration files can be created using your favorite editor or by calling the "configure_inputs" function which is defined at the top of this notebook:

```python
if Use_Staged_Data:
    # Check if a staged file from S3 already exists, if not try and download it
    if not os.path.isfile('demLat_N52_N55_Lon_W169_W167.dem.wgs84'):
        !aws --region=us-east-1 --no-sign-request s3 cp s3://asf-jupyter-data/demLat_N52_N55_Lon_W169_W167.dem.wgs84 demLat_N52_N55_Lon_W169_W167.dem.wgs84
        !aws --region=us-east-1 --no-sign-request s3 cp s3://asf-jupyter-data/demLat_N52_N55_Lon_W169_W167.dem.wgs84.vrt demLat_N52_N55_Lon_W169_W167.dem.wgs84.vrt
        !aws --region=us-east-1 --no-sign-request s3 cp s3://asf-jupyter-data/demLat_N52_N55_Lon_W169_W167.dem.wgs84.xml demLat_N52_N55_Lon_W169_W167.dem.wgs84.xml
    else:
        print("The DEM already exists")
else:
    print("Will not be using S3 pre-staged data, the DEM will be automatically downloaded by ISCE")
```

```python
configure_inputs(PROCESS_DIR, Use_Staged_Data)
```

Here is an example reference.xml file for this tutorial:

### reference.xml

```xml
[data/20080822/ALPSRP137311060-L1.0/IMG-HH-ALPSRP137311060-H1.0__A]
[data/20080822/ALPSRP137311060-L1.0/LED-ALPSRP137311060-H1.0__A]
20080822
```

### secondary.xml

```xml
[data/20081007/ALPSRP144021060-L1.0/IMG-HH-ALPSRP144021060-H1.0__A]
[data/20081007/ALPSRP144021060-L1.0/LED-ALPSRP144021060-H1.0__A]
20081007
```

### stripmapApp.xml

```xml
ALOS
reference.xml
secondary.xml
icu
```
\n
Note: In this example, demFilename is commented out in stripmapApp.xml. This means that the user has not specified a DEM; ISCE will therefore look online and download an SRTM DEM.
\n\n\nAfter downloading the data to process, and setting up the input xml files, we are ready to start processing with stripmapApp. To see a full list of the processing steps run the following command:\n\n\n```python\n!stripmapApp.py --help --steps\n```\n\n# Creating an Interferogram with stripmapApp\n\nBy default, stripmapApp includes the following processing steps to generate a geocoded interferogram from raw data or SLC images:\n\n
1. Data Preparation [Steps: startup, preprocess, cropraw]
2. SLC Formation [Steps: formslc]
3. DEM Assisted Co-Registration [Steps: verifyDEM, topo, geo2rdr, coarse_resample, misregistration, refined_resample]
4. Interferogram Formation [Steps: interferogram]
5. Phase Filtering and Phase Unwrapping [Steps: filter, unwrap]
6. Geocoding and Phase2Height Conversion [Steps: geocode]
\n
\nIn this tutorial we will process the interferogram step-by-step.\n\n
\n
At the end of each step, you will see a message showing the remaining steps:

The remaining steps are (in order): [.....]
\n\n\n
\nNote that you can process the interferogram with one command: \n\n```stripmapApp.py stripmapApp.xml --start=startup --end=endup```\n\n
## Data Preparation

### Step: preprocess

```python
!stripmapApp.py stripmapApp.xml --start=startup --end=preprocess
```

By the end of "preprocess", the following folders are created:

20080822_raw

20081007_raw

If you look into one of these folders:

```python
ls 20080822_raw
```

20080822.raw contains the raw data (the I/Q real and imaginary parts of each pulse, sampled along track (azimuth direction) at the Pulse Repetition Frequency (PRF) and across track (range direction) at the Range Sampling Frequency). stripmapApp currently only handles data acquired (or resampled) to a constant PRF.

### Step: cropraw

The "cropraw" step crops the raw data to the region of interest, if one was requested in stripmapApp.xml. The region of interest can be added to stripmapApp.xml as:

```xml
[19.0, 19.9, -155.4, -154.7]
```

Since we have not specified a region of interest, "cropraw" will be ignored and the whole frame will be processed.

```python
!stripmapApp.py stripmapApp.xml --start=cropraw --end=cropraw
```

## SLC Formation

### Step: formslc

Step "formslc" focuses SLC images from the raw data for both the reference and secondary scenes.

```python
!stripmapApp.py stripmapApp.xml --start=formslc --end=formslc
```

```python
ls 20080822_slc
```

20080822.slc: Single Look Complex image for the 20080822 acquisition.
\n\n20080822.slc.vrt: A gdal VRT file which contains the size, data type, etc.\n\n20080822.slc.xml: ISCE xml metadat file\n\nIn order to see the number of lines and pixels for an SLC image (or any data readable by GDAL):\n\n\n```python\n!gdalinfo 20080822_slc/20080822.slc\n```\n\nDisplay a subset of SLC's amplitude and phase\n\n\n```python\nds = gdal.Open(\"20080822_slc/20080822.slc\", gdal.GA_ReadOnly)\n# extract a part of the SLC to display\nx0 = 0\ny0 = 10000\nx_offset = 4000\ny_offset = 10000\nslc = ds.GetRasterBand(1).ReadAsArray(x0, y0, x_offset, y_offset)\nds = None\n\nplt.rcParams['font.size'] = '14'\nfig = plt.figure(figsize=(14, 12))\n\n# display amplitude of the slc\nax = fig.add_subplot(1,2,1)\nax.imshow(np.abs(slc), vmin = -2, vmax=2, cmap='gray')\nax.set_title(\"amplitude\")\n\n#display phase of the slc\nax = fig.add_subplot(1,2,2)\nax.imshow(np.angle(slc))\nax.set_title(\"phase\")\n\nplt.show()\n\nslc = None\n```\n\n### Step: crop SLC\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=cropslc --end=cropslc\n```\n\nSimilar to crop raw data but for SLC. Since region of interest has not been specified, the whole frame is processed.\n\n## DEM-Assisted Co-Registration \n\n### Step: verifyDEM\n\nThis step checks if the DEM was provided in the input xml file. If a DEM is not provided, then the app downloads SRTM DEM.\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=verifyDEM --end=verifyDEM\n```\n\n### Step: topo (mapping radar coordinates to geo coordinates)\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=topo --end=topo\n```\n\nAt this step, based on the SAR acquisition geometry of the reference Image (including Doppler information), platforms trajectory and an existing DEM, each pixel of the reference image is geolocated. The geolocated coordinates will be at the same coordinate system of the platforms state vectors, which are usually given in WGS84 coordinate system. 
Moreover the incidence angle and heading angles will be computed for each pixel. \n\n\n\nOutputs of the step \"topo\" are written to \"geometry\" directory:\n\n\n\n```python\n!ls geometry\n```\n\nlat.rdr.full: latitude of each pixel on the ground. \"full\" stands for full SAR image resolution grid (before multi-looking)\n\nlon.rdr.full: longitude\n\nz.rdr.full: height\n\nlos.rdr.full: incidence angle and heading angle\n\n\n```python\n# Read a bounding box of latitude\nds = gdal.Open('geometry/lat.rdr.full', gdal.GA_ReadOnly)\nlat = ds.GetRasterBand(1).ReadAsArray(0,10000,3000, 10000)\nds = None\n\n# Read a bounding box of longitude\nds = gdal.Open('geometry/lon.rdr.full', gdal.GA_ReadOnly)\nlon = ds.GetRasterBand(1).ReadAsArray(0,10000,3000, 10000)\nds = None\n\n# Read a bounding box of height\nds = gdal.Open('geometry/z.rdr.full', gdal.GA_ReadOnly)\nhgt = ds.GetRasterBand(1).ReadAsArray(0,10000,3000, 10000)\nds = None\n\nplt.rcParams['font.size'] = '14'\nfig = plt.figure(figsize=(18, 16))\n\nax = fig.add_subplot(1,3,1)\ncax=ax.imshow(lat)\nax.set_title(\"latitude\")\nax.set_axis_off()\ncbar = fig.colorbar(cax, orientation='horizontal')\n\n\nax = fig.add_subplot(1,3,2)\ncax=ax.imshow(lon)\nax.set_title(\"longitude\")\nax.set_axis_off()\ncbar = fig.colorbar(cax, orientation='horizontal')\n\n\nax = fig.add_subplot(1,3,3)\ncax=ax.imshow(hgt, vmin = -100, vmax=1000)\nax.set_title(\"height\")\nax.set_axis_off()\ncbar = fig.colorbar(cax, orientation='horizontal')\n\nplt.show()\n\nlat = None\nlon = None\nhgt = None\n```\n\n### Step: geo2rdr (mapping from geo coordinates to radar coordinates)\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=geo2rdr --end=geo2rdr\n```\n\nIn this step, given the geo-ccordinates of each pixel in the reference image (outputs of topo), the range and azimuth time (radar coordinates) is computed given the acquisition geometry and orbit information of the secondary image. 
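The idea behind geo2rdr can be illustrated with a deliberately simplified sketch: for a known ground point, find the azimuth time at which the line of sight is perpendicular to the platform velocity (the zero-Doppler condition), and take the distance at that time as the slant range. This toy uses a straight-line orbit and made-up numbers, not ISCE's actual orbit interpolation:

```python
import numpy as np

# Toy geo2rdr: straight-line orbit and a flat-Earth target (made-up numbers)
t = np.linspace(0.0, 10.0, 1001)                  # azimuth time [s]
sat = np.stack([7000.0 * t,                       # along-track position [m]
                np.zeros_like(t),
                np.full_like(t, 700e3)], axis=1)  # constant 700 km altitude
vel = np.array([7000.0, 0.0, 0.0])                # platform velocity [m/s]
target = np.array([35000.0, 500e3, 0.0])          # geolocated ground point [m]

los = target - sat                                # line-of-sight vectors over time
doppler = los @ vel                               # zero-Doppler: los perpendicular to vel
i = np.abs(doppler).argmin()
az_time = t[i]                                    # azimuth (radar) time of the target
rng_m = np.linalg.norm(los[i])                    # slant range [m] at that time
```

Repeating this search for every pixel of the geolocated reference grid yields exactly the per-pixel range and azimuth coordinates described above.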
\n\n\n\nThe computed range and azimuth time for the secondary image give the pure geometrical offsets required for resampling the secondary image to the reference image in the next step.\n\n\n\nAfter running this step, the geometrical offsets are available in the \"offsets\" folder:\n\n\n```python\n!ls offsets\n```\n\nazimuth.off: contains the offsets between the reference and secondary images in the azimuth direction\n\nrange.off: contains the offsets between the reference and secondary images in the range direction\n\n\n```python\nds = gdal.Open('offsets/azimuth.off', gdal.GA_ReadOnly)\n# extract only part of the data to display\naz_offsets = ds.GetRasterBand(1).ReadAsArray(100,100,2000,5000)\nds = None\n\nds = gdal.Open('offsets/range.off', gdal.GA_ReadOnly)\n# extract only part of the data to display\nrng_offsets = ds.GetRasterBand(1).ReadAsArray(100,100,2000,5000)\nds = None\nplt.rcParams['font.size'] = '14'\nfig = plt.figure(figsize=(14, 12))\n\nax = fig.add_subplot(1,2,1)\ncax=ax.imshow(az_offsets)\nax.set_title(\"azimuth offsets\")\nax.set_axis_off()\ncbar = fig.colorbar(cax, orientation='horizontal')\n\nax = fig.add_subplot(1,2,2)\ncax = ax.imshow(rng_offsets)\nax.set_title(\"range offsets\")\nax.set_axis_off()\ncbar = fig.colorbar(cax, orientation='horizontal')\n\nplt.show()\n\naz_offsets = None\nrng_offsets = None\n```\n\n### Step: resampling (using only geometrical offsets)\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=coarse_resample --end=coarse_resample\n```\n\nAt this step, the geometrical offsets are used to resample the secondary image to the same grid as the reference image, i.e., the secondary image is co-registered to the reference image. 
The output of this step is written to the \"coregisteredSlc\" folder.\n\n\n```python\n!ls coregisteredSlc/\n```\n\ncoarse_coreg.slc: the secondary SLC coregistered to the reference image\n\n\n```python\nds = gdal.Open(\"coregisteredSlc/coarse_coreg.slc\", gdal.GA_ReadOnly)\nslc = ds.GetRasterBand(1).ReadAsArray(0, 5000, 4000, 15000)\nds = None\nplt.rcParams['font.size'] = '14'\nfig = plt.figure(figsize=(18, 16))\nax = fig.add_subplot(1,2,1)\nax.imshow(np.abs(slc), vmin = -2, vmax=2, cmap='gray')\nax.set_title(\"amplitude\")\n\nslc = None\n```\n\n### Step: misregistration (estimating residual constant offsets in range and azimuth directions)\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=misregistration --end=misregistration\n```\n\nThe range and azimuth offsets derived from pure geometry can potentially be affected by inaccurate orbit information, inaccurate DEMs, or inaccurate SAR metadata. The currently available DEMs (e.g., SRTM DEMs) are accurate enough to estimate offsets with accuracies of 1/100 of a pixel. The orbit information of most modern SAR sensors is also precise enough to obtain the same order of accuracy. However, inaccurate metadata (such as a timing error or a constant range bias) or a range bulk delay may affect the estimated offsets. To account for such sources of error, the misregistration step is performed to estimate possible constant offsets between the coarse coregistered SLC and the reference SLC. For this purpose, an incoherent cross-correlation is performed. 
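ISCE performs this estimation with its own cross-correlation routines; the toy sketch below (plain NumPy on synthetic data, not ISCE code) illustrates the underlying idea of recovering a constant shift from an amplitude cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
amp_ref = np.abs(rng.standard_normal((64, 64)))   # stand-in reference amplitude
# secondary amplitude = reference shifted by a known constant offset
true_az, true_rg = 3, 5
amp_sec = np.roll(amp_ref, shift=(true_az, true_rg), axis=(0, 1))

# incoherent cross-correlation via FFT (correlate amplitudes, not complex phases)
xcorr = np.fft.ifft2(np.conj(np.fft.fft2(amp_ref)) * np.fft.fft2(amp_sec)).real
iy, ix = np.unravel_index(np.argmax(xcorr), xcorr.shape)

# map circular peak indices to signed offsets
az_off = iy - 64 if iy > 32 else iy
rg_off = ix - 64 if ix > 32 else ix
print(az_off, rg_off)   # recovers the known (3, 5) shift
```

The real ampcor-style estimator works on many small patches and fits the offsets to sub-pixel precision, but the peak-of-the-correlation principle is the same.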
\n\nThe results of the \"misregistration\" step are written to the \"misreg\" folder.\n\n\n```python\n!ls misreg/\n```\n\nIn order to extract the estimated misregistration offsets:\n\n\n```python\nstObj=St()\nstObj.configure()\n\naz = stObj.loadProduct(\"misreg/misreg_az.xml\")\nrng = stObj.loadProduct(\"misreg/misreg_rg.xml\")\n\nprint(\"azimuth misregistration: \", az._coeffs)\nprint(\"range misregistration: \", rng._coeffs)\n```\n\n### Step: refine_resample (resampling using geometrical offsets + misregistration)\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=refined_resample --end=refined_resample\n```\n\nAt this step, resampling is re-run to account for the misregistration estimated in the previous step. The new coregistered SLC (named refined_coreg.slc) is written to the \"coregisteredSlc\" folder.\n\n\n```python\n!ls coregisteredSlc/\n```\n\n### optional steps ('dense_offsets', 'rubber_sheet', 'fine_resample', 'split_range_spectrum', 'sub_band_resample')\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=dense_offsets --end=sub_band_resample\n```\n\nThese steps are optional and will be skipped if the user does not request them in the input xml file. We will get back to these steps in a different session where we estimate the ionospheric phase.\n\n## Interferogram Formation\n\n### Step: interferogram\n\nAt this step the reference image and refined_coreg.slc are used to generate the interferogram. The generated interferogram is multi-looked based on the user inputs in the input xml file. If the user does not specify the number of looks in the range and azimuth directions, they will be estimated based on the posting. 
The default posting is 30 m, which can also be specified in the input xml file.\n\nThe results of the interferogram step are written to the \"interferogram\" folder:\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=interferogram --end=interferogram\n```\n\n\n```python\n!ls interferogram/\n```\n\ntopophase.flat: flattened (geometrical phase removed) and multi-looked interferogram. (one band, complex64 data)\n\ntopophase.cor: coherence and magnitude for the flattened multi-looked interferogram. (two bands, float32 data)\n\ntopophase.cor.full: similar to topophase.cor but at full SAR resolution.\n\ntopophase.amp: amplitudes of the reference and secondary images. (two bands, float32) \n\n\n### Optional Step: sub-band interferogram\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=sub_band_interferogram --end=sub_band_interferogram\n```\n\nThis step will be skipped as we have not asked for ionospheric phase estimation. We will get back to this step in the ionospheric phase estimation notebook.\n\n## Phase Filtering and Phase Unwrapping\n\n### Step: filter\n\nA power spectral filter is applied to the multi-looked interferogram to reduce noise.\n\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=filter --end=filter\n```\n\nNext we visualize the interferogram before and after filtering.\n\n\n\n```python\n# reading the multi-looked wrapped interferogram\nds = gdal.Open(\"interferogram/topophase.flat\", gdal.GA_ReadOnly)\nigram = ds.GetRasterBand(1).ReadAsArray()\nds = None\n\n# reading the multi-looked filtered interferogram\nds = gdal.Open(\"interferogram/filt_topophase.flat\", gdal.GA_ReadOnly)\nfilt_igram = ds.GetRasterBand(1).ReadAsArray()\nds = None\nplt.rcParams['font.size'] = '14'\nfig = plt.figure(figsize=(18, 16))\n\nax = fig.add_subplot(1,3,1)\nax.imshow(np.abs(igram), vmin = 0 , vmax = 60.0, cmap = 'gray')\nax.set_title(\"magnitude\")\n#ax.set_axis_off()\n\nax = fig.add_subplot(1,3,2)\nax.imshow(np.angle(igram), 
cmap='jet')\nax.plot([10,1500,1500,10,10],[2500,2500,1000,1000,2500],'-k')\nax.set_title(\"multi-looked interferometric phase\")\nax.set_axis_off()\n\nax = fig.add_subplot(1,3,3)\nax.imshow(np.angle(filt_igram), cmap='jet')\nax.plot([10,1500,1500,10,10],[2500,2500,1000,1000,2500],'-k')\nax.set_title(\"multi-looked & filtered phase\")\n#ax.set_axis_off()\n\nfig = plt.figure(figsize=(18, 16))\n\nax = fig.add_subplot(1,3,1)\nax.imshow(np.abs(igram[1000:2500, 10:1500]), vmin = 0 , vmax = 60.0, cmap = 'gray')\nax.set_title(\"magnitude\")\n#ax.set_axis_off()\n\nax = fig.add_subplot(1,3,2)\nax.imshow(np.angle(igram[1000:2500, 10:1500]), cmap='jet')\nax.plot([600,1400,1400,600,600],[1400,1400,800,800,1400],'--k')\nax.set_title(\"multi-looked interferometric phase\")\nax.set_axis_off()\n\nax = fig.add_subplot(1,3,3)\nax.imshow(np.angle(filt_igram[1000:2500, 10:1500]), cmap='jet')\nax.plot([600,1400,1400,600,600],[1400,1400,800,800,1400],'--k')\nax.set_title(\"multi-looked & filtered phase\")\nax.set_axis_off()\n\nfig = plt.figure(figsize=(18, 16))\n\nax = fig.add_subplot(1,3,1)\nax.imshow(np.abs(igram[1800:2400, 610:1410]), vmin = 0 , vmax = 60.0, cmap = 'gray')\nax.set_title(\"magnitude\")\nax.set_axis_off()\n\nax = fig.add_subplot(1,3,2)\nax.imshow(np.angle(igram[1800:2400, 610:1410]), cmap='jet')\nax.set_title(\"multi-looked interferometric phase\")\nax.set_axis_off()\n\nax = fig.add_subplot(1,3,3)\nax.imshow(np.angle(filt_igram[1800:2400, 610:1410]), cmap='jet')\nax.set_title(\"multi-looked & filtered phase\")\nax.set_axis_off()\n\nfilt_igram = None\nigram = None\n\n```\n\n### Homework Assignment #1 \n\n
\n ASSIGNMENT #1: Flat Earth Phase Correction -- [5 Points] \n\nDuring Lecture 3 (InSAR), we saw that before the flat earth correction, the interferogram contains many fringes. Here are a couple of short questions related to that:\n\n
    \n
  1. Question 1.1: In addition to improving the visualization of InSAR data, one important reason why we remove the flat earth phase from interferograms is to make phase unwrapping easier. Explain in a few sentences why you think removing the flat earth phase makes phase unwrapping less complicated. To answer this question, please edit the markdown cell below. -- [3 Points]
  2. Question 1.2: It turns out that ISCE tries to reduce the unwrapping complexity even more. Instead of \"just\" removing the flat earth phase, it subtracts any already known topography from the interferogram by using an existing DEM. Once this is done, what does the residual phase information in the interferogram shown above represent? -- [2 Points]
\n\n
\n\n
\n
\n Question 1.1 [3 Points]: \n\nADD DISCUSSION HERE:\n
\n\n
\n
\n Question 1.2 [2 Points]: \n\nADD DISCUSSION HERE:\n
\n\n### Optional Steps ('filter_low_band', 'filter_high_band')\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=filter_low_band --end=filter_high_band\n```\n\nThese steps will be skipped since we have not asked for ionospheric phase estimation in the input xml file.\n\n### Step: unwrap\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=unwrap --end=unwrap\n```\n\nAt this step the wrapped phase of the filtered and multi-looked interferogram is unwrapped. The unwrapped interferogram is a two band data with magnitude and phase components.\n\n\n```python\n# reading the multi-looked wrapped interferogram\nds = gdal.Open(\"interferogram/filt_topophase.flat\", gdal.GA_ReadOnly)\nigram = ds.GetRasterBand(1).ReadAsArray()\nds = None\n\n# reading the multi-looked unwrapped interferogram\nds = gdal.Open(\"interferogram/filt_topophase.unw\", gdal.GA_ReadOnly)\nigram_unw = ds.GetRasterBand(2).ReadAsArray()\nds = None\n\n# reading the connected component file\nds = gdal.Open(\"interferogram/filt_topophase.conncomp\", gdal.GA_ReadOnly)\nconnected_components = ds.GetRasterBand(1).ReadAsArray()\nds = None\n\nplt.rcParams['font.size'] = '14'\nfig = plt.figure(figsize=(18, 6))\n\nax = fig.add_subplot(1,3,1)\ncax=ax.imshow(np.angle(igram[1000:2600, 10:1800]), cmap='jet')\nax.set_title(\"wrapped\")\n#ax.set_axis_off()\ncbar = fig.colorbar(cax, ticks=[-3.14,0,3.14],orientation='horizontal')\ncbar.ax.set_xticklabels([\"$-\\pi$\",0,\"$\\pi$\"])\n\nax = fig.add_subplot(1,3,2)\ncax = ax.imshow(igram_unw[1000:2600, 10:1800], vmin = -5 , vmax = 2.0, cmap = 'jet')\nax.set_title(\"unwrapped\")\nax.set_axis_off()\ncbar = fig.colorbar(cax, ticks=[-5,0, 2\n ], orientation='horizontal')\n\n\nax = fig.add_subplot(1,3,3)\ncax = ax.imshow(connected_components[1000:2600, 10:1800], cmap = 'jet')\nax.set_title(\"components\")\nax.set_axis_off()\ncbar = fig.colorbar(cax, ticks=[0, 1] , orientation='horizontal')\ncbar.ax.set_xticklabels([0,1])\n\n\nconnected_components = None\n```\n\n
\n
\nNote (wrapped vs unwrapped): \nNote the color scales of the wrapped and unwrapped interferograms. The wrapped interferometric phase varies from $-\pi$ to $\pi$, while the unwrapped interferogram varies from -15 to 15 radians.\n
\n\n
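The wrapped/unwrapped relationship noted above can be demonstrated with a few lines of self-contained NumPy (synthetic data, independent of the processing chain): wrapping keeps only the principal value in $(-\pi, \pi]$, and the unwrapped and wrapped phases differ by an integer number of $2\pi$ cycles.

```python
import numpy as np

# a synthetic unwrapped phase ramp spanning three fringes
unwrapped = np.linspace(0.0, 6 * np.pi, 100)

# wrapping keeps only the principal value of the phase
wrapped = np.angle(np.exp(1j * unwrapped))

# the difference is an integer number of 2*pi cycles
cycles = (unwrapped - wrapped) / (2 * np.pi)
print(np.allclose(cycles, np.round(cycles)))   # True

# with dense enough sampling, np.unwrap recovers the original ramp
print(np.allclose(np.unwrap(wrapped), unwrapped))   # True
```

Real interferograms are far harder to unwrap than this 1-D ramp because noise and undersampled fringes break the "neighboring samples differ by less than $\pi$" assumption that simple unwrapping relies on.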
\n
\nNote : \nThe connected components file is a product of the phase unwrapping. Each interferogram may have several connected components. The unwrapped phase within each component is expected to be correctly unwrapped. However, there might be $2\pi$ phase jumps between the components. Advanced ISCE users may use 2-stage unwrapping to adjust ambiguities among different components. stripmapApp currently does not support 2-stage unwrapping. Look for this option in future releases. \n
\n\n\n\n```python\nprofile_wrapped_1 = np.angle(igram[2000,1000:1500])\nprofile_unwrapped_1 = igram_unw[2000,1000:1500]\nprofile_wrapped_2 = np.angle(igram[1400,400:600])\nprofile_unwrapped_2 = igram_unw[1400,400:600]\nplt.rcParams['font.size'] = '14'\nfig = plt.figure(figsize=(20,8))\n\nax = fig.add_subplot(2,3,1)\ncax=ax.plot(profile_wrapped_1)\nax.set_title(\"wrapped\")\n\nax = fig.add_subplot(2,3,2)\ncax=ax.plot(profile_unwrapped_1)\nax.set_title(\"unwrapped\")\n\nax = fig.add_subplot(2,3,3)\ncax=ax.plot(np.round((profile_unwrapped_1-profile_wrapped_1)/2.0/np.pi))\nax.set_title(\"(unwrapped - wrapped)/(2$\\pi$)\")\n\nax = fig.add_subplot(2,3,4)\ncax=ax.plot(profile_wrapped_2)\nax.set_title(\"wrapped\")\n\nax = fig.add_subplot(2,3,5)\ncax=ax.plot(profile_unwrapped_2)\nax.set_title(\"unwrapped\")\n\nax = fig.add_subplot(2,3,6)\ncax=ax.plot((profile_unwrapped_2-profile_wrapped_2)/2.0/np.pi)\nax.set_title(\"(unwrapped - wrapped)/(2$\\pi$)\")\n\n\nigram = None\nigram_unw = None\n```\n\n### Optional Steps ('unwrap_low_band', 'unwrap_high_band', 'ionosphere')\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=unwrap_low_band --end=ionosphere\n```\n\nSince we have not asked for ionospheric phase estimation, all these steps will be skipped.\n\n## Geocoding and Phase2Height Conversion\n\n### Step: geocoding\n\n\n```python\n!stripmapApp.py stripmapApp.xml --start=geocode --end=geocode\n```\n\n\n```python\n# reading the multi-looked wrapped interferogram\nds = gdal.Open(\"interferogram/filt_topophase.unw.geo\", gdal.GA_ReadOnly)\nunw_geocoded = ds.GetRasterBand(2).ReadAsArray()\nds = None\n\nplt.rcParams['font.size'] = '14'\nfig = plt.figure(figsize=(12,10))\n\nax = fig.add_subplot(1,1,1)\ncax = ax.imshow(unw_geocoded, vmin = -5 , vmax = 2.0, cmap = 'jet')\nax.set_title(\"geocoded unwrapped\")\nax.set_axis_off()\ncbar = fig.colorbar(cax, ticks=[-5,0, 2], orientation='horizontal')\n\nplt.show()\nunw_geocoded = None\n```\n\n### Homework Assignment #2 \n\n
\n ASSIGNMENT #2: Succeed with the Geocoding of your Interferogram -- [15 Points] \n\nRun the notebook all the way to this point and compare your geocoded interferogram to the interferogram shown below. If your product looks very similar, then you succeeded with this notebook. All you need to do is download the notebook as described above, and you have completed this assignment. \n\n\n
\n\n### Step: Phase-to-Height Conversion\n\nOnce the phase is geocoded, the phase $\phi$ information has to be scaled into height $h$ using the equation\n\n\begin{equation}\nh = \frac{\lambda}{4\pi} \cdot \frac{R \cdot \sin(\theta)}{B_{\perp}} \cdot \phi\n\end{equation}\n\nWe will use the ISCE function ```imageMath.py``` for this step. \n\nThe following code cell uses ```imageMath.py``` to do the phase-to-height conversion. The parameters are as follows:\n
- -e: This argument encodes the equation we are trying to execute. We are using an interferometric baseline of $B_{\perp} = 900\,m$ here.
- -o: This argument specifies the output file we create.
- -t: Sets the format of the output file. We are setting this format to float.
- --a: Specifies the input file used in our equation.
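As a sanity check on the constant in the imageMath.py call that follows, the height-per-radian scale factor can be computed directly. The numbers below are the same assumed values used in that call (note the call approximates $\pi$ as 3.14):

```python
import numpy as np

wavelength = 0.0566     # radar wavelength [m] (value taken from the imageMath call)
slant_range = 850000.0  # slant range R [m]
sin_theta = 0.5         # sin of the look angle
b_perp = 900.0          # perpendicular baseline [m]

# height change per radian of unwrapped phase: lambda/(4*pi) * R*sin(theta)/B_perp
scale = wavelength / (4 * np.pi) * slant_range * sin_theta / b_perp
print(round(scale, 2))  # about 2.13 m of height per radian of phase
```

So one full fringe ($2\pi$ of unwrapped phase) corresponds to roughly 13 m of height with this geometry.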
\n\n\n```python\n!imageMath.py -e='a_0 ; 0.0566/(4.0 * 3.14) * 850000 * 0.5 / (900) * a_1' -o DEMupdate.geo -t float --a=interferogram/filt_topophase.unw.geo\n```\n\nNow we can plot the DEM update height map we generated:\n\n\n```python\n# reading the DEM update height map\nds = gdal.Open(\"DEMupdate.geo\", gdal.GA_ReadOnly)\nDEMupdate = ds.GetRasterBand(2).ReadAsArray()\nds = None\nDEMupdate_m = np.ma.masked_where(DEMupdate==0, DEMupdate)\nplt.rcParams['font.size'] = '14'\nfig = plt.figure(figsize=(12,10))\n\nax = fig.add_subplot(1,1,1)\ncax = ax.imshow(DEMupdate - np.mean(DEMupdate_m), vmin = -5 , vmax = 5, cmap = 'jet')\nax.set_title(\"DEM update height\")\nax.set_axis_off()\ncbar = fig.colorbar(cax, ticks=[-5,0, 5], orientation='horizontal')\n\nplt.show()\nDEMupdate = None\n```\n\n# Supplementary Information\n\n### understanding xml files\n\nThe format of this type of file may seem unfamiliar or strange to you, but with the following description of the basics of the format, it will hopefully become more familiar. The first thing to point out is that the indentations and line breaks seen above are not required and are simply used to make the structure more clear and the file more readable to humans. The xml file provides structure to data for consumption by a computer. As far as the computer is concerned the data structure is equally readable if all of the information were contained on a single very long line, but human readers would have a hard time reading it in that format. \n\nThe next thing to point out is the method by which the data are structured through the use of tags and attributes. An item enclosed in the < (less-than) and > (greater-than) symbols is referred to as a tag. The name enclosed in the < and > symbols is the name of the tag. Every tag in an xml file must have an associated closing tag that contains the same name but starts with the symbol combination </ (a less-than sign followed by a forward slash). This is the basic unit of structure given to the data. 
Data are enclosed inside of opening and closing tags that have names identifying the enclosed data. This structure is nested to any order of nesting necessary to represent the data. The Python language (in which the ISCE user interface is written) provides powerful tools to parse the xml structure into a data structure object and to very easily \u201cwalk\u201d through the structure of that object. \n\nIn the above xml file the first and last tags in the file are a tag pair: <insarApp> and </insarApp> (note again, tags must come in pairs like this). The first of these two tags, or the opening tag, marks the beginning of the contents of the tag and the second of these two tags, or the closing tag, marks the end of the contents of the tag. ISCE expects a \u201cfile tag\u201d of this nature to bracket all inputs contained in the file. The actual name of the file tag, as far as ISCE is concerned, is user selectable. In this example it is used, as a convenience to the user, to document the ISCE application, named insarApp.py, for which it is meant to provide inputs; it could have been given any other name and insarApp.py would have been equally happy, provided that the opening and closing tags matched. \n\nThe next tag is <component name=\"insarApp\">. Its closing tag </component> is located at the penultimate line of the file (one line above the </insarApp> tag). The name of this tag is component and it has an attribute called name with value \u201cinsarApp\u201d. The component tags bound a collection of information that is used by a computational element within ISCE that has the name specified by the name attribute. The name \u201cinsarApp\u201d in the first component tag tells ISCE that the enclosed information corresponds to a functional component in ISCE named \u201cinsarApp\u201d, which in this case is actually the application that is run at the command line. \n\nIn general, component tags contain information in the form of other component tags or property tags, all of which can be nested to any required level. 
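To make this concrete, here is a small self-contained example using Python's standard xml.etree module (not ISCE's own parser) that parses an insarApp-style fragment and walks its tags and attributes; the fragment's contents are illustrative:

```python
import xml.etree.ElementTree as ET

# a minimal insarApp-style input fragment (contents are illustrative)
xml_text = """
<insarApp>
  <component name="insarApp">
    <property name="sensor name">
      <value>ALOS</value>
    </property>
  </component>
</insarApp>
"""

root = ET.fromstring(xml_text)
component = root.find('component')
prop = component.find('property')

print(root.tag)                  # insarApp
print(component.get('name'))     # insarApp
print(prop.get('name'))          # sensor name
print(prop.find('value').text)   # ALOS
```

Nested components (such as the reference and secondary components) can be walked the same way with `iter()` or repeated `find()` calls.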
In this example the insarApp component contains a property tag and two other component tags.\n\nThe first tag we see in the insarApp component tag is the property tag with attribute name=\u201csensor name\u201d. The property tag contains a value tag that contains the name of the sensor, ALOS in this case. The next tag is a component tag with attribute name=\u201dreference\u201d. This tag contains a catalog tag containing reference.xml. The catalog tag in general informs ISCE to look in the named file (reference.xml in this case) for the contents of the current tag. The next component tag has the same structure, with the catalog tag containing a different file named secondary.xml.\n\n### Extra configuration parameters \n\nThe input configuration file in this tutorial only included the mandatory parameters, namely the reference and secondary images, which are enough to run the application. This means that the application is configured with default parameters hardwired in the code or computed during processing. \nFor custom processing, the user may want to set parameters in the input configuration file. In the following, a few more parameters are shown that can be added to stripmapApp.xml. \n\n### regionOfInterest\n\nTo specify a region of interest to process:\n\n```xml\n<property name=\"regionOfInterest\">[South, North, West, East]</property>\n```\n\nExample: \n\n```xml\n<property name=\"regionOfInterest\">[19.0, 19.9, -155.4, -154.7]</property>\n```\n\nDefault: Full frame is processed.\n\n### range looks\nNumber of looks in the range direction. \n\n```xml\n<property name=\"range looks\">USER_INPUT</property>\n```\n\nDefault: computed based on the posting parameter.\n\n### azimuth looks\nNumber of looks in the azimuth direction. \n\nDefault: computed based on the posting parameter.\n\n\n### posting\nInterferogram posting in meters.\n\n```xml\n<property name=\"posting\">USER_INPUT</property>\n```\n\nDefault: 30\n\n
\n
\nNote : \nIf \"range looks\" and \"azimuth looks\" have not been specified, then posting is used to compute them such that the interferogram is generated with a roughly square pixel size, with each dimension close to the \"posting\" parameter.\n\n
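That computation amounts to dividing the desired posting by each pixel dimension and rounding; a sketch with assumed example pixel sizes (not read from the actual metadata):

```python
# how "posting" can determine the multilooking factors
# (the pixel sizes below are assumed example values, not from this dataset)
posting = 30.0               # desired posting [m]
range_pixel_size = 9.4       # ground-range pixel size [m] (assumed)
azimuth_pixel_size = 3.2     # azimuth pixel size [m] (assumed)

range_looks = max(1, int(round(posting / range_pixel_size)))
azimuth_looks = max(1, int(round(posting / azimuth_pixel_size)))
print(range_looks, azimuth_looks)   # 3 range looks, 9 azimuth looks here
```

With these assumed sizes the multi-looked pixel is about 28 m by 29 m, i.e. roughly square and close to the requested posting.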
\n\n### filter strength\n\nStrength of the adaptive filter used for filtering the wrapped interferogram.\n\n```xml\n<property name=\"filter strength\">USER_INPUT</property>\n```\n\nDefault: 0.5\n\n\n### useHighResolutionDemOnly\n```xml\n<property name=\"useHighResolutionDemOnly\">True</property>\n```\n\nIf True and a dem is not specified in the input, it will only\n download the SRTM highest resolution dem if it is available\n and fill the missing portion with null values (typically -32767).\n\nDefault: False\n\n### do unwrap\n\nTo turn phase unwrapping off:\n```xml\n<property name=\"do unwrap\">False</property>\n```\n\nDefault: True\n\n\n### unwrapper name\nTo choose the phase unwrapping method, e.g., to choose \"snaphu\" for phase unwrapping:\n```xml\n<property name=\"unwrapper name\">snaphu</property>\n```\n\nDefault: \"icu\".\n\n\n\n\n### do rubbersheeting\n\nTo turn on rubbersheeting (estimating azimuth offsets caused by strong ionospheric scintillation):\n\n```xml\n<property name=\"do rubbersheeting\">True</property>\n```\nDefault : False\n\n### rubber sheet SNR Threshold\n\n```xml\n<property name=\"rubber sheet SNR Threshold\">USER_INPUT</property>\n```\nIf \"do rubbersheeting\" is turned on, then this value is used to mask out azimuth offsets with SNR less than the input threshold. \n\nDefault: 5\n\n### rubber sheet filter size\nThe size of the median filter used for filtering the azimuth offsets.\n\n```xml\n<property name=\"rubber sheet filter size\">USER_INPUT</property>\n```\n\nDefault: 8\n\n### do denseoffsets\nTurn on the dense offsets computation from cross correlation.\n\n```xml\n<property name=\"do denseoffsets\">True</property>\n```\n\nDefault: False\n\n
\n
\nNote : \n\nIf \"do rubbersheeting\" is turned on, then dense offsets computation is turned on regardless of the user input for \"do denseoffsets\"\n\n
\n\n### setting the dense offsets parameters \n\n```xml\n<property name=\"dense window width\">USER_INPUT</property>\n<property name=\"dense window height\">USER_INPUT</property>\n<property name=\"dense search width\">USER_INPUT</property>\n<property name=\"dense search height\">USER_INPUT</property>\n<property name=\"dense skip width\">USER_INPUT</property>\n<property name=\"dense skip height\">USER_INPUT</property>\n```\n\nDefault values:\n
\n dense window width = 64 \n
\n dense window height = 64\n
\n dense search width = 20\n
\n dense search height = 20\n
\n dense skip width = 32\n
\n dense skip height = 32\n\n\n### geocode list\n\nList of products to be geocoded.\n```xml\n<property name=\"geocode list\">[a list of files to geocode]</property>\n```\nDefault: multilooked, filtered wrapped and unwrapped interferograms, coherence, ionospheric phase\n\n### offset geocode list\nList of offset-specific files to geocode.\n```xml\n<property name=\"offset geocode list\">[a list of offset files to geocode]</property>\n```\n\n\n### do split spectrum\n\nTurn on split spectrum. \n\n```xml\n<property name=\"do split spectrum\">True</property>\n```\n\nDefault: False\n\n### do dispersive\nTurn on dispersive phase estimation.\n\n```xml\n<property name=\"do dispersive\">True</property>\n```\n\nDefault: False\n\n
\n
\nNote : \n\nBy turning on \"do dispersive\", the user input for \"do split spectrum\" is ignored and the split spectrum will be turned on as it is needed for dispersive phase estimation. \n\n
\n\n\n### control the filter kernel for filtering the dispersive phase \n```xml\n800\n800\n100\n100\n0\n5\ncoherence\n0.6\n \n``` \n\n\n\n### processing data from other stripmap sensors\n\nstripmapApp.py is able to process the stripmap data from the following sensors. So far it has been sucessfully tested on the following sensors: \n
\n ALOS1 (Raw and SLC)\n ALOS2 (SLC, one frame)\n COSMO-SkyMed (Raw and SLC)\n ERS\n ENVISAT\n Radarsat-1\n Radarsat-2\n TerraSAR-X\n TanDEM-X\n Sentinel-1\n\n### Sample input data xml for different sensors:\n\n#### Envisat: \n```xml\n\n\n data/ASA_IMS_1PNESA20050519_140259_000000172037_00239_16826_0000.N1\n /u/k-data/agram/sat_metadata/ENV/INS_DIR\n /u/k-data/agram/sat_metadata/ENV/Doris/VOR\n \n 20050519\n \n\n\n```\n\n
\n
\nNote : \nNote that for processing the ENVISAT data, a directory that contains the orbits is required. \n
\n\n\n### Sentinel-1 stripmap:\n```xml\n \n /u/data/sat_metadata/S1/aux_poeorb/\n 20151024\n /u/data/S1A_S1_SLC__1SSV_20151024T234201_20151024T234230_008301_00BB43_068C.zip\n \n \n /u/k-raw/sat_metadata/S1/aux_poeorb/\n 20150930\n /u/data/S1A_S1_SLC__1SSV_20150930T234200_20150930T234230_007951_00B1CC_121C.zip\n \n```\n\n
\n
\nNote : \nNote that for processing the Sentinel-1 data, a directory that contains the orbits is required. \n
\n\n### ALOS2 SLC\n```xml\n\n \n data/20141114/ALOS2025732920-141114/IMG-HH-ALOS2025732920-141114-UBSL1.1__D\n \n \n data/20141114/ALOS2025732920-141114/LED-ALOS2025732920-141114-UBSL1.1__D\n \n \n 20141114\n \n\n```\n\n### ALOS1 raw data\n``` xml\n\n \n [data/20080822/ALPSRP137311060-L1.0/IMG-HH-ALPSRP137311060-H1.0__D]\n \n \n [data/20080822/ALPSRP137311060-L1.0/LED-ALPSRP137311060-H1.0__D]\n \n \n 20080822\n \n\n```\n\n### CosmoSkyMed raw or SLC data\n\n```xml\n\n data/CSKS3_RAW_B_HI_03_HH_RD_SF_20111007021527_20111007021534.h5\n \n 20111007\n \n\n\n```\n\n### TerraSAR-X and TanDEM-X\n\n```xml\n\n PATH_TO_TSX_DATA_XML\n OUTPUT_NAME \n \n```\n\n\n### Using ISCE as a python library\n\nISCE can be used a python library. Users can develop their own workflows within ISCE framework. Here are few simple examples where we try to call isce modules:\n\n\n#### Example 1: (extract metadata, range and azimuth pixel size)\n\n\n```python\nstObj = St()\nstObj.configure()\nframe = stObj.loadProduct(\"20080822_slc.xml\")\nprint(\"Wavelength = {0} m\".format(frame.radarWavelegth))\nprint(\"Slant Range Pixel Size = {0} m\".format(frame.instrument.rangePixelSize))\n\n#For azimuth pixel size we need to multiply azimuth time interval by the platform velocity along the track\n\n# the acquisition time at the middle of the scene\nt_mid = frame.sensingMid\n\n#get the orbit for t_mid\nst_mid=frame.orbit.interpolateOrbit(t_mid)\n\n# platform velocity\nVs = st_mid.getScalarVelocity()\n\n# pulse repitition frequency\nprf = frame.instrument.PRF\n\n#Azimuth time interval \nATI = 1.0/prf\n\n#Azimuth Pixel size\naz_pixel_size = ATI*Vs\nprint(\"Azimuth Pixel Size = {0} m\".format(az_pixel_size))\n\n\n\n\n```\n\n#### Example 2: compute ground range pixels size\n\n\n```python\nr0 = frame.startingRange\nrmax = frame.getFarRange()\nrng =(r0+rmax)/2\n\nelp = Planet(pname='Earth').ellipsoid\ntmid = frame.sensingMid\n\nsv = frame.orbit.interpolateOrbit( tmid, method='hermite') #.getPosition()\nllh = 
elp.xyz_to_llh(sv.getPosition())\n\n\nhdg = frame.orbit.getENUHeading(tmid)\nelp.setSCH(llh[0], llh[1], hdg)\nsch, vsch = elp.xyzdot_to_schdot(sv.getPosition(), sv.getVelocity())\n\nRe = elp.pegRadCur\nH = sch[2]\ncos_beta_e = (Re**2 + (Re + H)**2 -rng**2)/(2*Re*(Re+H))\nsin_bet_e = np.sqrt(1 - cos_beta_e**2)\nsin_theta_i = sin_bet_e*(Re + H)/rng\nprint(\"incidence angle at the middle of the swath: \", np.arcsin(sin_theta_i)*180.0/np.pi)\ngroundRangeRes = frame.instrument.rangePixelSize/sin_theta_i\nprint(\"Ground range pixel size: {0} m \".format(groundRangeRes))\n\n```\n\n
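The law-of-cosines geometry in Example 2 can be checked with simple round numbers; the local Earth radius, platform height and slant range below are illustrative assumptions, not values read from the actual metadata:

```python
import numpy as np

Re = 6371000.0    # local Earth radius [m] (assumed)
H = 700000.0      # platform height above the surface [m] (assumed)
rng = 850000.0    # slant range [m] (assumed)

# triangle formed by the Earth's center, the platform and the imaged point:
# law of cosines gives the Earth-center angle beta subtended by the slant range
cos_beta = (Re**2 + (Re + H)**2 - rng**2) / (2 * Re * (Re + H))
sin_beta = np.sqrt(1 - cos_beta**2)
# law of sines then gives the sine of the incidence angle
sin_theta = sin_beta * (Re + H) / rng

incidence_deg = np.degrees(np.arcsin(sin_theta))
print(incidence_deg)   # roughly 37 degrees for these assumed numbers
```

Dividing the slant-range pixel size by this $\sin\theta$ projects it onto the ground, exactly as the last two lines of Example 2 do.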
\n
\nNote : \nOne can easily get the incidence angle from the los.rdr file inside geometry folder. Even without opening the file, here is a way to get the statistics and the average value of the incidence angle: gdalinfo geometry/los.rdr -stats\n
\n\n\n```python\n\n```\n\n\n```python\nimport numpy as np\n%load_ext autoreload\n```\n\nNotebook by **Maxime Dion**
\nFor the QSciTech-QuantumBC virtual workshop on gate-based quantum computing\n\n## Tutorial for Activity 3.2\n\nFor this activity, make sure you can easily import your versions of `hamiltonian.py`, `pauli_string.py` and `mapping.py` that you have completed in the Activity 3.1 tutorial. You will also need your versions of `evaluator.py` and `solver.py`. Placing this notebook in the same `path` as these files is the easiest way to achieve this. At the end of this notebook, you should be in a good position to complete these 2 additional files.\n\nThe solution we suggest here is NOT mandatory. If you find ways to make it better and more efficient, go on and impress us! On the other hand, by completing all sections of this notebook you'll be able to:\n- Prepare a Quantum State based on a variational form (circuit);\n- Measure qubits in the X, Y and Z basis;\n- Estimate the expectation value of a Pauli String on a quantum state;\n- Evaluate the expectation value of a Hamiltonian in the form of a Linear Combination of Pauli Strings;\n- Run a minimization algorithm on the energy expectation function to find the ground state of a Hamiltonian;\n- Dance to express your overwhelming sense of accomplishment.\n\n**Important**\n\nWhen you modify and save a `*.py` file you need to re-import it so that your modifications can be taken into account when you re-execute a cell. By adding the magic command `%autoreload` at the beginning of a cell, you make sure that the modifications you did to the `*.py` files are taken into account when you re-run a cell and that you can see the effect.\n\nIf you encounter unusual results, restart the kernel and try again.\n\n**Note on numbering**\n\nWhen you ask a question in the Slack channel you can refer to the section name or the section number.\n\nTo enable the section numbering, please make sure you install [nbextensions](https://jupyter-contrib-nbextensions.readthedocs.io/en/latest/install.html). It is available in the conda distribution. 
After installing it, you need to enable the option 'Table of contents (2)'.\n\n# Variational Quantum States\n\nEvery quantum circuit starts with all qubits in the state $|0\rangle$. In order to prepare a quantum state $|\psi\rangle$ we need to prepare a `QuantumCircuit` that will modify the states of the qubits in order to get this specific state. The action of a circuit can always be represented as a unitary operator.\n\n\begin{align}\n |\psi\rangle &= \hat{U} |0 \ldots 0\rangle\n\end{align}\n\nFor a parametric state, the `QuantumCircuit` and therefore the unitary $U$ will depend on some parameters that we write as $\boldsymbol{\theta}$.\n\n\begin{align}\n |\psi(\boldsymbol{\theta})\rangle &= \hat{U}(\boldsymbol{\theta}) |0 \ldots 0\rangle\n\end{align}\n\nWe will see 2 ways to define Parametrized Quantum Circuits that represent Variational Quantum States. For the first method we only need the `QuantumCircuit` class from `qiskit.circuit`.\n\n\n```python\nfrom qiskit.circuit import QuantumCircuit\n```\n\n## Generating function\nThe easiest way to generate a parametrized `QuantumCircuit` is to implement a function that takes parameters as arguments and returns a `QuantumCircuit`. Here is such a function that generates a 2-qubit `QuantumCircuit`.\n\n\n```python\ndef example_2qubits_2params_quantum_circuit(theta,phi):\n    qc = QuantumCircuit(2)\n    qc.ry(theta,0)\n    qc.rz(phi,0)\n    qc.cx(0,1)\n    return qc\n```\n\nTo visualize this circuit we first need to call the generating function with dummy argument values for it to return a circuit. We can then draw the circuit. The `'mpl'` option draws the circuit in a fancy way using `matplotlib`. 
If you are experiencing problems, you can remove this option.\n\n\n```python\nvarform_qc = example_2qubits_2params_quantum_circuit\nqc = varform_qc(1, 2)\nqc.draw('mpl')\n```\n\n## Using qiskit `Parameter`\n\nThe other way to generate a parametrized `QuantumCircuit` is to use the `Parameter` class in `qiskit`.\n\n\n```python\nfrom qiskit.circuit import Parameter\n```\n\nHere is the same circuit as before done with this method.\n\n\n```python\na = Parameter('a')\nb = Parameter('b')\nvarform_qc = QuantumCircuit(2)\nvarform_qc.ry(a,0)\nvarform_qc.rz(b,0)\nvarform_qc.cx(0,1)\n```\n\nDone this way, the parametrized circuit can be drawn right away.\n\n\n```python\nvarform_qc.draw('mpl')\n```\n\nTo see the parameters of a parametrized `QuantumCircuit` you can use\n\n\n```python\nvarform_qc.parameters\n```\n\n\n\n\n    {Parameter(a), Parameter(b)}\n\n\n\n**Important** Beware that sometimes the parameters will not appear in the same order as you declared them!\n\nTo assign values to the different parameters we need to use the `QuantumCircuit.assign_parameters()` method. This method takes a `dict` as an argument, mapping each `Parameter` to its value.\n\n\n```python\nparam_dict = {a : 1, b : 2}\nqc = varform_qc.assign_parameters(param_dict)\nqc.draw('mpl')\n```\n\n\n```python\nparam_dict = {a : 3, b : 4}\nqc = varform_qc.assign_parameters(param_dict)\nqc.draw('mpl')\n```\n\nIf you want to provide the parameter values as a `list` or a `np.array` you can build the `dict` directly. Just make sure that the order you use in `param_values` corresponds to the order of `varform_qc.parameters`.\n\n\n```python\nparam_values = [1, 2]\nparam_dict = dict(zip(varform_qc.parameters, param_values))\nprint(param_dict)\n```\n\n    {Parameter(a): 1, Parameter(b): 2}\n\n\n## Varform circuits for H2\nUsing the method of your choice, prepare two different 4-qubit `QuantumCircuit`s. 
\n- The first should take 1 parameter to cover the real-coefficient state subspace spanned by $|0101\rangle$ and $|1010\rangle$.\n- The second should take 3 parameters to cover the real-coefficient state subspace spanned by $|0101\rangle$, $|0110\rangle$, $|1001\rangle$ and $|1010\rangle$.\n\nRevisit the presentation to find such circuits.\n\n\n```python\nvarform_4qubits_1param = QuantumCircuit(4)\na = Parameter('a')\n\"\"\"\nYour code here\n\"\"\"\nvarform_4qubits_1param.ry(a,1)\nvarform_4qubits_1param.x(0)\nvarform_4qubits_1param.cx(1,0)\nvarform_4qubits_1param.cx(0,2)\nvarform_4qubits_1param.cx(1,3)\n\nvarform_4qubits_1param.draw('mpl')\n```\n\n\n```python\nvarform_4qubits_3params = QuantumCircuit(4)\na = Parameter('a')\nb = Parameter('b')\nc = Parameter('c')\n\"\"\"\nYour code here\n\"\"\"\nvarform_4qubits_3params.x(0)\nvarform_4qubits_3params.x(2)\nvarform_4qubits_3params.barrier(range(4))\nvarform_4qubits_3params.ry(a,1)\nvarform_4qubits_3params.cx(1,3)\nvarform_4qubits_3params.ry(b,1)\nvarform_4qubits_3params.ry(c,3)\nvarform_4qubits_3params.cx(3,2)\nvarform_4qubits_3params.cx(1,0)\n\nvarform_4qubits_3params.draw('mpl')\n```\n\n# Evaluator\nThe `Evaluator` is an object that will help us evaluate the expectation value of a quantum operator (`LCPS`) on a specific variational form and backend. 
To initialize an `Evaluator` you should provide:\n\n**Mandatory**\n- A **variational form** that can create a `QuantumCircuit` given a set of `params`;\n- A **backend** `qiskit.Backend` (a simulator or an actual device handle) on which to run the `QuantumCircuit`.\n\n**Optional**\n- `execute_opts` is a `dict` containing the optional arguments to pass to the `qiskit.execute` method (e.g. `{'shots' : 1024}`);\n- `measure_filter` is a `qiskit.ignis...MeasurementFilter` that can be applied to the result of a circuit execution to mitigate readout errors.\n\nThe creation/usage of an `Evaluator` such as `BasicEvaluator` goes like this:\n\n\nevaluator = BasicEvaluator(varform_qc,backend)
evaluator.set_linear_combinaison_pauli_string(operator_lcps)
expected_value = evaluator.eval(params)\n
\n\nFirst you initialize the evaluator.\n\nNext, you provide the operator you want to evaluate using the `set_linear_combinaison_pauli_string(LCPS)` method.\n\nFinally, you call the `eval(params)` method, which returns the estimation of the operator's expected value. Mathematically, the use of this method corresponds to\n\n\begin{align}\nE(\boldsymbol{\theta}).\n\end{align}\n\nWe will now go through the different pieces necessary to complete the `Evaluator` class.\n\n## Static methods\nBeing static, these methods do not need an instance of the class to be used. They can be called directly from the class.\n\nThese methods are called before the first call to `eval(params)`. Most of these methods are implemented inside the abstract class `Evaluator` (except for `prepare_measurement_circuits_and_interpreters(LCPS)`).\n\n### Pauli Based Measurements\nWe have seen that even if a quantum computer can only measure qubits in the Z basis, the X and Y bases are accessible if we *rotate* the quantum state before measuring.\n\nImplement the `@staticmethod` `pauli_string_based_measurement(PauliString)` in the `Evaluator` class in the file `evaluator.py`. It should return a `QuantumCircuit` that measures each qubit in the basis given by the `PauliString`.\n\nFirst we import the abstract class `Evaluator` and the `PauliString` class.\n\n\n```python\nfrom evaluator import Evaluator\nfrom pauli_string import PauliString\n```\n\nTest your code with the next cell.\n\n\n```python\n%autoreload\npauli_string = PauliString.from_str('ZIXY')\nmeasure_qc = Evaluator.pauli_string_based_measurement(pauli_string)\nmeasure_qc.draw('mpl')\n```\n\n### Measurable eigenvalues\n\nImplement the `@staticmethod` `measurable_eigenvalues(PauliString)` in the `Evaluator` class in the file `evaluator.py`. It should return a `np.array` that contains the eigenvalues of the measurable `PauliString` for each basis state. 
We noted this vector\n\n\begin{align}\n    \Lambda_q^{(\hat{\mathcal{P}})}.\n\end{align}\n\nBe mindful of the order of the basis states.\n\n\begin{align}\n    0000, 0001, 0010, \ldots, 1110, 1111\n\end{align}\n\nYou can test your implementation on the `ZIXY` Pauli string.\n\n\n```python\n%autoreload\npauli_string = PauliString.from_str('ZIXY')\nmeasurable_eigenvalues = Evaluator.measurable_eigenvalues(pauli_string)\nprint(measurable_eigenvalues)\n```\n\n    [ 1 -1 -1 1 1 -1 -1 1 -1 1 1 -1 -1 1 1 -1]\n\n\nFor the `PauliString` `'ZIXY'` (measurable `'ZIZZ'`) you should get the following eigenvalues:\n\n\n[ 1 -1 -1 1 1 -1 -1 1 -1 1 1 -1 -1 1 1 -1]\n\n\n### Measurement Circuits and Interpreters\nThe `prepare_measurement_circuits_and_interpreters(LCPS)` method is specific to the sub-type of `Evaluator`. The two different types of `Evaluator`s considered in this workshop are:\n- The `BasicEvaluator` will run a single `QuantumCircuit` for each `PauliString` present in the provided `LCPS`.\n- The `BitwiseCommutingCliqueEvaluator` will exploit bitwise commuting cliques to combine the evaluation of commuting `PauliString`s and reduce the number of different `QuantumCircuit`s run for each evaluation.\n\nImplement the `prepare_measurement_circuits_and_interpreters(LCPS)` method in the `BasicEvaluator` class in the file `evaluator.py`. This method should return two `list`s. The first should contain one measurement `QuantumCircuit` for each `PauliString` in the `LCPS`. 
The second list should contain one `np.array` of the eigenvalues of the measurable `PauliString` for each basis state.\n\n**Note** You can try to implement similar methods for the `BitwiseCommutingCliqueEvaluator`.\n\nYou can test your method on `2 ZIXY + 1 IXYZ`.\n\n\n```python\nfrom evaluator import BasicEvaluator\n```\n\n\n```python\n%autoreload\nlcps = 2*PauliString.from_str('ZIXY') + 1*PauliString.from_str('IXYZ')\nmeasurement_circuits, interpreters = BasicEvaluator.prepare_measurement_circuits_and_interpreters(lcps)\n```\n\nYou can visualize the interpreter and the measurement circuit for each term in the `LCPS` by using `i = 0` and `i = 1`.\n\n\n```python\ni = 1\nprint(interpreters[i])\nmeasurement_circuits[i].draw('mpl')\n```\n\nThe interpreters should be, respectively:\n\n\n[ 2 -2 -2 2 2 -2 -2 2 -2 2 2 -2 -2 2 2 -2]
[ 1 -1 -1 1 -1 1 1 -1 1 -1 -1 1 -1 1 1 -1]\n
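As a cross-check for the eigenvalues and interpreters shown above, the eigenvalue vector of a measurable Pauli string can be built in plain numpy with a Kronecker product: every non-identity operator contributes `(1, -1)` and every `I` contributes `(1, 1)`, with the leftmost character as the most significant bit of the basis-state index. This is only a verification sketch under that ordering assumption, not the required implementation, and the function name is ours.

```python
import numpy as np

def eigenvalues_sketch(pauli_str, coef=1):
    """Eigenvalue of the measurable Pauli string for each basis state,
    ordered 0000, 0001, ..., 1111 (leftmost character = most significant bit).
    With coef != 1 this is the interpreter vector for that LCPS term."""
    vec = np.array([1])
    for c in pauli_str:
        # I leaves both bit values unaffected; Z (or X/Y after rotation) flips the sign for bit 1
        factor = np.array([1, 1]) if c == 'I' else np.array([1, -1])
        vec = np.kron(vec, factor)
    return coef * vec

print(eigenvalues_sketch('ZIXY'))     # eigenvalues of measurable 'ZIZZ'
print(eigenvalues_sketch('ZIXY', 2))  # interpreter for the term 2*ZIXY
print(eigenvalues_sketch('IXYZ'))     # interpreter for the term 1*IXYZ
```

The second and third printouts reproduce the two interpreter vectors listed above.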
\n\n### Set the LCPS\nThe method `set_linear_combinaison_pauli_string(LCPS)` is already implemented inside the abstract class `Evaluator`. Take a look at it and notice that it makes an immediate call to the `prepare_measurement_circuits_and_interpreters(LCPS)` method you have just implemented. The `measurement_circuits` and `interpreters` are also stored in attributes of the same name.\n\n## Methods called inside `eval(params)`\nNow that we are getting into the action of the `eval(params)` method, we need to instantiate an `Evaluator`. This requires a `backend`. We will use a local `qasm_simulator` for now, which is part of the `Aer` module. In the future, you can use a different `backend`. We will also soon need the `execute` method.\n\n\n```python\nfrom qiskit import Aer, execute\nqasm_simulator = Aer.get_backend('qasm_simulator')\n```\n\n### Circuit preparation\nThe `prepare_eval_circuits(params)` method combines the variational form with the measurement `QuantumCircuit`s to form the complete circuits to be run. This method has two tasks:\n- Assign the `params` to the variational form to get a `QuantumCircuit` that prepares the quantum state;\n- Combine this circuit with each of the measurement circuits and return the resulting `QuantumCircuit`s inside a `list`.\n\nImplement this method inside the `Evaluator` class and test it here.\n\n\n```python\n%autoreload\nlcps = 2*PauliString.from_str('ZXZX') + 1*PauliString.from_str('IIZZ')\nvarform = varform_4qubits_1param\nbackend = qasm_simulator\nevaluator = BasicEvaluator(varform,backend)\nevaluator.set_linear_combinaison_pauli_string(lcps)\nparams = [0,]\neval_circuits = evaluator.prepare_eval_circuits(params)\n```\n\nYou can take a look at the `QuantumCircuit` for the first (`i=0`) and second (`i=1`) PauliString. 
What you should get is a circuit that begins with the state preparation circuit, with the `params` applied to it, followed by the measurement circuit.\n\n\n```python\ni = 0\neval_circuits[i].draw('mpl')\n```\n\n### Execution\nThe ultimate goal of executing a circuit is to get the number of times each basis state is measured. Let's execute our `eval_circuits`. We can run many `QuantumCircuit`s at the same time by placing them into a `list`, which they already are!\n\n\n```python\nexecute_opts = {'shots' : 1024}\njob = execute(eval_circuits, backend=qasm_simulator, **execute_opts)\nresult = job.result()\n```\n\nWe can get the number of counts of each state for the execution of a given circuit with the following lines. The counts are returned as a `dict`.\n\n\n```python\ni = 0\n#i = 1\ncounts = result.get_counts(eval_circuits[i])\nprint(counts)\n```\n\n    {'0000': 266, '0001': 251, '0100': 262, '0101': 245}\n\n\nIf your `eval_circuits` are correct, you should get, for `i = 0` and `i = 1` respectively, something like this (exact values may vary since there is some randomness in the execution of a quantum circuit):\n\n\n{'0000': 266, '0001': 262, '0100': 240, '0101': 256}
{'0101': 1024} \n
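The two subsections that follow turn such a counts `dict` into an expectation value. The underlying arithmetic is just a normalized inner product between the count vector and the eigenvalue vector; here is that arithmetic in plain numpy as a reference point (the helper names are ours, not part of the required API).

```python
import numpy as np

def counts_to_array_sketch(counts, num_qubits=4):
    """Shot counts dict -> N_q vector, basis states ordered 0000 ... 1111."""
    array = np.zeros(2**num_qubits, dtype=int)
    for state, n in counts.items():
        array[int(state, 2)] = n  # bitstring -> basis-state index
    return array

def expectation_sketch(counts, eigenvalues, coef=1):
    """coef * <P> ~ (coef / N_tot) * sum_q N_q * Lambda_q."""
    n_q = counts_to_array_sketch(counts, int(np.log2(len(eigenvalues))))
    return coef * np.dot(n_q, eigenvalues) / n_q.sum()

# the i = 1 counts above, with the eigenvalues of 'IIZZ' for 0000 ... 1111
counts = {'0101': 1024}
eig_iizz = np.array([1, -1, -1, 1] * 4)
print(expectation_sketch(counts, eig_iizz))  # -1.0
```

The result matches the `(-1+0j)` interpretation of the `i = 1` circuit computed further below.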
\n\n### counts2array\n\nWe will transform this `dict` into an array with the `counts2array` method. Implement this method so that it returns the vector $N_q$. Be mindful of the order of the basis states.\n\n\begin{align}\n    0000, 0001, 0010, \ldots, 1110, 1111\n\end{align}\n\n**optional remark** While doing this allows us to interpret the counts with a simple inner product, it implies creating an array of size $2^n$ where $n$ is the number of qubits. This might not be such a good idea for larger systems, and the use of a `dict` might be more appropriate. Can you interpret the counts efficiently while keeping them in a `dict`?\n\n\n```python\n%autoreload\ni = 0\ncounts = result.get_counts(eval_circuits[i])\nevaluator.counts2array(counts)\n```\n\n\n\n\n    array([266, 251, 0, 0, 262, 245, 0, 0, 0, 0, 0, 0, 0,\n           0, 0, 0])\n\n\n\nFor `i=0` in particular you should get something similar to:\n\n\narray([228., 276., 0., 0., 269., 251., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])\n\n\n### Interpret counts\nInterpreting the counts amounts to estimating the expectation value of a `PauliString` and multiplying it by the coefficient associated with this `PauliString` in the LCPS. Implement the `interpret_count_array` method, which should return the value of the expression\n\n\begin{align}\n    h_i \langle \hat{\mathcal{P}}_i \rangle = \frac{h_i}{N_\text{tot}}\sum_{q} N_q \Lambda_q^{(\hat{\mathcal{P}}_i)}\n\end{align}\n\n\n```python\n%autoreload\ni = 1\ncounts_array = evaluator.counts2array(result.get_counts(eval_circuits[i]))\ninterpreter = evaluator.interpreters[i]\nexpected_value = evaluator.interpret_count_array(interpreter,counts_array)\nprint(expected_value)\n```\n\n    (-1+0j)\n\n\nYou should get something close to `0` for the first one and `-1` for the second.\n\n### Evaluation\nYou now have all the pieces to complete the `eval(params)` method. 
This method should use all the methods you have implemented since the section *Methods called inside `eval(params)`* and then sum all the interpreted values. Mathematically, it should return the value of the expression\n\n\begin{align}\n    E(\boldsymbol{\theta}) = \sum_i h_i \langle\psi(\boldsymbol{\theta}) | \hat{\mathcal{P}}_i | \psi(\boldsymbol{\theta}) \rangle.\n\end{align}\n\n\n```python\n%autoreload\nlcps = 2*PauliString.from_str('ZXZX') + 1*PauliString.from_str('IIZZ')\nvarform = varform_4qubits_1param\nbackend = qasm_simulator\nexecute_opts = {'shots' : 1024}\nevaluator = BasicEvaluator(varform, backend, execute_opts=execute_opts)\nevaluator.set_linear_combinaison_pauli_string(lcps)\nparams = [0,]\nexpected_value = evaluator.eval(params)\nprint(expected_value)\n```\n\n    -1.0703125\n\n\nYes, that's right: your code now returns an estimate of the expression\n\n\begin{align}\n    E(\theta) = \langle \psi(\theta) | \hat{\mathcal{H}} | \psi(\theta) \rangle\n\end{align}\n\nfor\n\n\begin{align}\n    \hat{\mathcal{H}} = 2\times \hat{Z}\hat{X}\hat{Z}\hat{X} + 1\times \hat{I}\hat{I}\hat{Z}\hat{Z}\n\end{align}\n\nand the varform `varform_4qubits_1param` for $\theta = 0$. The `evaluator.eval(params)` method can now be called like a function and it will return the energy $E(\theta)$.\n\nNow comes the time to test this on the $\text{H}_2$ molecule Hamiltonian!\n\n## The Hamiltonian evaluation test\n\nWe will now import the classes from the previous activity.\n\n\n```python\nfrom hamiltonian import MolecularFermionicHamiltonian\nfrom mapping import JordanWigner\n```\n\nFor ease of use we will import the integral values instead of using `pyscf`. We also import the Coulomb repulsion energy for later use. 
By now we are experts in building the Hamiltonian.\n\n\n```python\nwith open('Integrals_sto-3g_H2_d_0.7350_no_spin.npz','rb') as f:\n    out = np.load(f)\n    h1_load_no_spin = out['h1']\n    h2_load_no_spin = out['h2']\n    energy_nuc = out['energy_nuc']\n\nmolecular_hamiltonian = MolecularFermionicHamiltonian.from_integrals(h1_load_no_spin,h2_load_no_spin).include_spin()\n```\n\nWe use the Jordan-Wigner mapping to get the `LCPS` for the H2 molecule with `d=0.735`.\n\n\n```python\n%autoreload\nmapping = JordanWigner()\n\nlcps_h2 = mapping.fermionic_hamiltonian_to_linear_combinaison_pauli_string(molecular_hamiltonian).combine().apply_threshold().sort()\nprint(lcps_h2)\n```\n\n    15 pauli strings for 4 qubits (Real, Imaginary)\n    IIII (-0.81055,+0.00000)\n    IIIZ (+0.17218,+0.00000)\n    IIZI (-0.22575,+0.00000)\n    IIZZ (+0.12091,+0.00000)\n    IZII (+0.17218,+0.00000)\n    IZIZ (+0.16893,+0.00000)\n    IZZI (+0.16615,+0.00000)\n    ZIII (-0.22575,+0.00000)\n    ZIIZ (+0.16615,+0.00000)\n    ZIZI (+0.17464,+0.00000)\n    ZZII (+0.12091,+0.00000)\n    XXXX (+0.04523,+0.00000)\n    XXYY (+0.04523,+0.00000)\n    YYXX (+0.04523,+0.00000)\n    YYYY (+0.04523,+0.00000)\n\n\nWe build an evaluator and feed it the `LCPS` of H2. Then we evaluate the energy. Choose `params` so that your `varform` prepares the state $|0101\rangle$.\n\n\n```python\n%autoreload\nvarform = varform_4qubits_1param\nbackend = qasm_simulator\nexecute_opts = {'shots' : 1024}\nevaluator = BasicEvaluator(varform,backend,execute_opts = execute_opts)\nevaluator.set_linear_combinaison_pauli_string(lcps_h2)\nparams = [0,]\nexpected_value = evaluator.eval(params)\nprint(expected_value)\n```\n\n    -1.8370563365153778\n\n\nIf your `varform` prepares the state $|0101\rangle$, you should get something around `-1.83`. This energy is already close to the ground state energy because the ground state is close to $|0101\rangle$, but it is still not the ground state. 
We need to find the `params` that will minimize the energy.\n\n\begin{align}\n    E_0 = \min_{\boldsymbol{\theta}} E(\boldsymbol{\theta})\n\end{align}\n\n# Solver\n\nIn a final step we need to implement a solver that will try to find the minimal energy. We will implement two solvers; the second is optional.\n- First, one using the VQE algorithm in conjunction with a minimizer to try to minimize `evaluator.eval(params)`.\n- Next, we will make use of the `to_matrix()` method you implemented in the previous activity to find the exact value/solution.\n\n## VQE Solver\n\nLike any minimization process, this solver needs a couple of ingredients:\n- A function to minimize; we provide this with the evaluator.\n- A minimizer: an algorithm that generally takes in a function and a set of starting parameters and returns the best guess for the optimal parameters that correspond to the minimal value of the function.\n- A set of starting parameters.\n\n### Minimizer\n\nA minimizer that works well for the VQE algorithm is the Sequential Least SQuares Programming (SLSQP) algorithm. It's available in the `minimize` sub-module of [scipy](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-slsqp.html).\n\n\n```python\nfrom scipy.optimize import minimize\n```\n\nWe wrap the minimizer in a lambda function so we can set all sorts of options before feeding it to the solver.\n\n\n```python\nminimizer = lambda fct, start_param_values : minimize(\n    fct,\n    start_param_values,\n    method = 'SLSQP', \n    options = {'maxiter' : 5,'eps' : 1e-1, 'ftol' : 1e-4, 'disp' : True, 'iprint' : 2})\n```\n\nThe `minimizer` now takes only two arguments: the function and the starting parameter values. We also specify some options:\n- A small value for the maximum number of iterations. You will find that running the VQE algorithm is expensive because of the `evaluator.eval(params)` method. 
Either it is slow to simulate on the `qasm_simulator`, or it is running on an actual quantum computer.\n- An `eps` of `0.1`. This is the step size by which the algorithm changes the parameter values to estimate the slope of the function. Many minimization algorithms use the slope of the function to know in which direction the minimum lies. Since our parameters are all angles in radians, a value of 0.1 seems reasonable. Play with this value if you like.\n- An `ftol` value of `1e-4`. This is the target precision for the minimum value. Chemical accuracy is around 1 milli-Hartree.\n- We set `iprint` to `2` to see what is going on. For your final implementation you can set this to `0`.\n\nBefore implementing the `VQESolver`, let's try this minimizer! The function is `evaluator.eval` and we start with a parameter of `0`. This will take a while.\n\n\n```python\nminimization_result = minimizer(evaluator.eval,[0,])\n```\n\n    NIT FC OBJFUN GNORM\n    1 3 -1.834406E+00 2.276211E-01\n    2 6 -1.857755E+00 4.369813E-02\n    3 11 -1.860118E+00 1.790857E-02\n    Optimization terminated successfully. (Exit mode 0)\n    Current function value: -1.8601177724457674\n    Iterations: 3\n    Function evaluations: 11\n    Gradient evaluations: 3\n\n\nIn the end you should get a minimal energy around `-1.86` Hartree, which is a bit lower than what we had before minimizing. You can explore the `minimization_result` to retrieve this value and also the set of optimal parameters.\n\n\n```python\nopt_params = minimization_result.x\nopt_value = minimization_result.fun\nprint(opt_params)\nprint(opt_value)\n```\n\n    [-0.22882531]\n    -1.8601177724457674\n\n\n### VQE Solver\n\nNow you should be in a good position to implement the `lowest_eig_value(lcps)` method of the `VQESolver` class inside the `solver.py` file. 
Test your method here.\n\n\n```python\nfrom solver import VQESolver\n```\n\n\n```python\n%autoreload\nvqe_solver = VQESolver(evaluator, minimizer, [0,], name = 'vqe_solver')\nopt_value, opt_params = vqe_solver.lowest_eig_value(lcps_h2)\n```\n\n    NIT FC OBJFUN GNORM\n    1 3 -1.839000E+00 2.553151E-01\n    2 6 -1.865535E+00 6.395731E-02\n    Optimization terminated successfully. (Exit mode 0)\n    Current function value: -1.865591974724343\n    Iterations: 2\n    Function evaluations: 10\n    Gradient evaluations: 2\n\n\nThere is only one thing missing to get the complete molecular energy: the Coulomb repulsion energy of the nuclei. This value was loaded when we imported the integrals. Let's add it to the electronic energy.\n\n\n```python\nprint('Ground state position estimate (vqe) : ', opt_params)\nprint('Ground state energy estimate (electronic, vqe) : ', opt_value)\nprint('Ground state energy estimate (molecular, vqe) : ', opt_value + energy_nuc)\n```\n\n    Ground state position estimate (vqe) :  [-0.25526396]\n    Ground state energy estimate (electronic, vqe) :  -1.865591974724343\n    Ground state energy estimate (molecular, vqe) :  -1.1456229802753635\n\n\n### The Eigenstate\n\nWhat is the eigenstate? We can partially find out by using the `varform` with the parameters we have found and measuring everything in the Z basis.\n\n\n```python\neigenstate_qc = varform.copy()\neigenstate_qc.measure_all()\n\nparam_dict = dict(zip(eigenstate_qc.parameters,opt_params))\neigenstate_qc = eigenstate_qc.assign_parameters(param_dict)\n\neigenstate_qc.draw('mpl')\n```\n\nWe now execute this circuit.\n\n\n```python\nexecute_opts = {'shots' : 1024}\njob = execute(eigenstate_qc,backend=qasm_simulator,**execute_opts)\nresult = job.result()\ncounts = result.get_counts(eigenstate_qc)\n```\n\nWe will use the `plot_histogram` method from `qiskit.visualization`, which takes the counts `dict` as an input. 
\n\n\n```python\nfrom qiskit.visualization import plot_histogram\n```\n\n\n```python\nplot_histogram(counts)\n```\n\n\n```python\nprint(f\"|a_0101| ~ {np.sqrt(counts['0101']/1024)}\")\nprint(f\"|a_1010| ~ {np.sqrt(counts['1010']/1024)}\")\n```\n\n |a_0101| ~ 0.9936320684740404\n |a_1010| ~ 0.11267347735824966\n\n\nWe see that the found solution is mostly the state $|0101\\rangle$ which is the Hartree-Fock solution when the 2-body Hamiltonian is not present. Adding this 2-body physics, shifts the energy down a bit by introducing a small contribution of $|1010\\rangle$. The actual statevector has a `-` sign between these two states.\n\n\\begin{align}\n\\alpha_{0101}|0101\\rangle - \\alpha_{1010}|1010\\rangle\n\\end{align}\n\nBut this is not something we can know from this. Fortunatly, H2 is a small system which can be solved exactly and we can find out this phase.\n\n## Exact Solver (optional)\n\nIf you want to compare the value you get with the VQE algorithm it would be nice to have the exact value. If you were able to implement the `to_matrix()` method for `PauliString` and `LinearCombinaisonPauliString` then you can find the exact value of the ground state. All you need is to diagonalise the matrix reprensenting the whole Hamiltonian and find the lowest eigenvalue! Obviously this will not be possible to do for very large systems.\n\n\n```python\nhamiltonian_matrix_h2 = lcps_h2.to_matrix()\neig_values, eig_vectors = np.linalg.eigh(hamiltonian_matrix_h2)\neig_order = np.argsort(eig_values)\neig_values = eig_values[eig_order]\neig_vectors = eig_vectors[:,eig_order]\nground_state_value, ground_state_vector = eig_values[0], eig_vectors[:,0]\nprint('Ground state vector (exact) : \\n', ground_state_vector)\nprint('Ground state energy (electronic, exact) : ', ground_state_value)\nprint('Ground state energy (molecular, exact) : ', ground_state_value + energy_nuc)\n```\n\n Ground state vector (exact) : \n [-0. -0.j -0. -0.j -0. -0.j -0. -0.j\n -0. -0.j -0.9937604 -0.j 0. 
+0.j 0. +0.j\n 0. +0.j 0. +0.j 0.11153594+0.j 0. +0.j\n 0. +0.j 0. +0.j 0. +0.j 0. +0.j]\n    Ground state energy (electronic, exact) :  -1.8572750302023788\n    Ground state energy (molecular, exact) :  -1.137306035753399\n\n\nNow you can complete the `ExactSolver` in the `solver.py` file.\n\n\n```python\nfrom solver import ExactSolver\n```\n\n\n```python\n%autoreload\nexact_solver = ExactSolver()\nground_state_value, ground_state_vector = exact_solver.lowest_eig_value(lcps_h2)\nprint('Ground state vector (exact) : ', ground_state_vector)\nprint('Ground state energy (electronic, exact) : ', ground_state_value)\nprint('Ground state energy (molecular, exact) : ', ground_state_value + energy_nuc)\n```\n\n    Ground state vector (exact) :  [-0. -0.j -0. -0.j -0. -0.j -0. -0.j\n -0. -0.j -0.9937604 -0.j 0. +0.j 0. +0.j\n 0. +0.j 0. +0.j 0.11153594+0.j 0. +0.j\n 0. +0.j 0. +0.j 0. +0.j 0. +0.j]\n    Ground state energy (electronic, exact) :  -1.8572750302023788\n    Ground state energy (molecular, exact) :  -1.137306035753399\n\n\nWhat are the two basis states involved in the ground state? Let's plot the state vector using `matplotlib`.\n\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n\n```python\nfig,ax = plt.subplots(1,1)\ni_max = np.argmax(np.abs(ground_state_vector))\nstate = ground_state_vector * np.sign(ground_state_vector[i_max])\nax.bar(range(len(state)), np.abs(state), color=(np.real(state) > 0).choose(['r','b']))\nplt.xticks(range(len(state)),[f\"{i:04b}\" for i in range(len(state))], size='small',rotation=60);\n```\n\n# What's next?\n\nNow that you can find the ground state for a specific H2 molecule configuration (`d = 0.735`), you should be able to do that for many configurations, say `d = 0.2` to `2.5`. Doing that will enable you to plot the so-called dissociation curve: energy vs. distance. Do not forget to include the Coulomb repulsion energy of the nuclei!\n\nYou could also run your algorithm on a noisy backend, either a noisy simulator or a real quantum computer. 
You've already seen on day 1 how to set/get a noisy backend. You'll see that noise messes things up pretty badly.\n\nRunning on a real machine introduces the problem of qubit layout. You might want to change the `initial_layout` in the `execute_opts` so that your `varform` does not apply CNOT gates between qubits that are not connected. You know that doing so requires inserting SWAP gates, which introduces more noise. Also covered on day 1.\n\nTo limit the effect of readout noise, you could add a `measure_filter` to your `evaluator`, so that each time you execute the `eval_circuits` you apply the filter to the results. Also covered on day 1.\n\nImplement the simultaneous evaluation for bitwise commuting cliques or even for general commuting cliques.\n\nNotebook by **Maxime Dion**
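As a starting point for the bitwise-commuting-clique extension suggested in the list above: two Pauli strings commute bitwise when, at every position, the operators are equal or at least one is the identity, so they share a single measurement basis. Here is one possible greedy grouping in plain Python; the function names are ours, and this sketch makes no claim about the grouping your `BitwiseCommutingCliqueEvaluator` must use.

```python
def bitwise_commute(ps1, ps2):
    """True if two Pauli strings commute bitwise:
    at every position the operators are equal or one is I."""
    return all(a == b or a == 'I' or b == 'I' for a, b in zip(ps1, ps2))

def greedy_cliques(pauli_strings):
    """Greedily place each string into the first clique it commutes with."""
    cliques = []
    for ps in pauli_strings:
        for clique in cliques:
            if all(bitwise_commute(ps, other) for other in clique):
                clique.append(ps)
                break
        else:  # no compatible clique found: start a new one
            cliques.append([ps])
    return cliques

print(greedy_cliques(['ZIII', 'IZII', 'ZZII', 'XXII']))
# [['ZIII', 'IZII', 'ZZII'], ['XXII']]
```

The first three strings can all be estimated from one Z-basis circuit, so a clique-aware evaluator would run two circuits instead of four.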
\nFor the QSciTech-QuantumBC virtual workshop on gate-based quantum computing\n\n### Plot the H2 Dissociation Curve\n\n\n```python\nfrom pyscf import gto\n```\n\n\n```python\n%autoreload\nn = 50\ndistances = np.linspace(0.3, 2.5, n)\ngs_energies_exact = np.zeros(n)\ngs_energies_vqe = np.zeros(n)\nenergy_nuc = np.zeros(n)\n\n# define mapping\nmapping = JordanWigner()\n\n# define minimizer\nminimizer = lambda fct, start_param_values : minimize(\n    fct,\n    start_param_values,\n    method = 'SLSQP', \n    options = {'maxiter' : 5,'eps' : 1e-1, 'ftol' : 1e-4})\n\n# instantiate an exact solver for comparison\nexact_solver = ExactSolver()\n\n# VQE setup\nvqe_evaluator = BasicEvaluator(varform_4qubits_1param, qasm_simulator, execute_opts={'shots' : 1024})\nvqe_solver = VQESolver(vqe_evaluator, minimizer, [0,], name = 'vqe_solver')\n\n# try a range of internuclear distances\nfor i, distance in enumerate(distances): # units in AA\n    print('Trying Distance '+str(i+1), end=\"\\r\")\n\n    # build the molecule and basis functions\n    mol = gto.M(\n        atom = [['H', (0,0,-distance/2)], ['H', (0,0,distance/2)]], \n        basis = 'sto-3g'\n    )\n\n    # build the molecular Hamiltonian\n    molecular_hamiltonian = MolecularFermionicHamiltonian.from_pyscf_mol(mol).include_spin()\n\n    # map the Hamiltonian to a LCPS\n    lcps_h2 = mapping.fermionic_hamiltonian_to_linear_combinaison_pauli_string(molecular_hamiltonian).combine().apply_threshold().sort()\n\n    # store the nuclear energy\n    energy_nuc[i] = mol.energy_nuc()\n\n    # diagonalize the Hamiltonian to get energies\n    Eh2_exact, _ = exact_solver.lowest_eig_value(lcps_h2)\n    gs_energies_exact[i] = Eh2_exact + energy_nuc[i]\n\n    # get the vqe energy\n    Eh2_vqe, _ = vqe_solver.lowest_eig_value(lcps_h2)\n    gs_energies_vqe[i] = Eh2_vqe + energy_nuc[i]\n\nprint(\"Done! \", end=\"\\r\")\n```\n\n    Done! 
\r\n\n\n```python\n# plot dissociation curve of H2\nfig, ax = plt.subplots(1, 1, figsize=(10,8))\nax.plot(distances, gs_energies_exact, c='tab:red', label='Exact', linewidth=5)\nax.plot(distances, gs_energies_vqe, '.', c='tab:blue', label='VQE', ms=20)\nax.set_xlabel(r'Internuclear Distance / $\\AA$', fontsize=20)\nax.set_ylabel('Energy / $E_h$', fontsize=20)\nax.set_title('Dissociation Curve of H2', fontsize=28)\nax.legend()\nfig.savefig('H2_dissociation.png')\nplt.show()\n```\n\n\n```python\n# save these results\nwith open('h2_dissociation.npz','wb') as f:\n np.savez(f, atom='H2', basis=mol.basis, distances=distances, energy_nuc=energy_nuc, gs_exact=gs_energies_exact, \n gs_vqe=gs_energies_vqe, varform='varform_4qubits_1param', backend='qasm_simulator', execute_opts=execute_opts,\n mapping='Jordan Wigner', initial_params=[0,], minimizer='SLSQP', minimizer_options={'maxiter' : 5,'eps' : 1e-1, 'ftol' : 1e-4})\n```\n\n\n```python\n# to reload any data...\nwith open('h2_dissociation.npz','rb') as f:\n out = np.load(f, allow_pickle=True)\n varform_load = out['varform']\n basis_load = out['basis']\n energy_nuc_load = out['energy_nuc']\n gs_exact_load = out['gs_exact']\n gs_vqe_load = out['gs_vqe']\n execute_opts_load = out['execute_opts']\n```\n\n# Now Let's Add A Realistic Noise Model to our Simulator\n\n\n```python\nfrom qiskit import IBMQ\nfrom qiskit.providers.aer.noise import NoiseModel\n\n# IBMQ.save_account(TOKEN)\nIBMQ.load_account()\nIBMQ.providers()\n\n#provider = IBMQ.get_provider(hub='ibm-q-education')\n\nprovider = IBMQ.get_provider(hub='ibm-q-education', group='qscitech-quantum', project='qc-bc-workshop')\nprovider2 = IBMQ.get_provider(hub='ibm-q', group='open', project='main')\n```\n\n /Users/bhenders/opt/miniconda3/envs/qiskit/lib/python3.8/site-packages/qiskit/providers/ibmq/ibmqfactory.py:192: UserWarning: Timestamps in IBMQ backend properties, jobs, and job results are all now in local time instead of UTC.\n warnings.warn('Timestamps in IBMQ backend 
properties, jobs, and job results '\n    ibmqfactory.load_account:WARNING:2021-02-01 14:15:45,480: Credentials are already in use. The existing account in the session will be replaced.\n\n\n## Let's Play Around With a Few Backends with Different Topologies\n\n\n```python\nbogota = provider.get_backend('ibmq_bogota')\n# santiago = provider.get_backend('ibmq_santiago')\ncasablanca = provider.get_backend('ibmq_casablanca')\nrome = provider.get_backend('ibmq_rome')\nqasm_simulator = Aer.get_backend('qasm_simulator')\nvalencia = provider2.get_backend('ibmq_valencia')\nmelbourne = provider2.get_backend('ibmq_16_melbourne')\n```\n\n\n```python\n# Bogota\nbogota_prop = bogota.properties()\nbogota_conf = bogota.configuration()\nbogota_nm = NoiseModel.from_backend(bogota_prop)\n\n# Casablanca\ncasablanca_conf = casablanca.configuration()\ncasablanca_prop = casablanca.properties()\ncasablanca_nm = NoiseModel.from_backend(casablanca_prop)\n\n# Valencia\nvalencia_conf = valencia.configuration()\nvalencia_prop = valencia.properties()\nvalencia_nm = NoiseModel.from_backend(valencia_prop)\n\n# Melbourne\nmelbourne_conf = melbourne.configuration()\nmelbourne_prop = melbourne.properties()\nmelbourne_nm = NoiseModel.from_backend(melbourne_prop)\n```\n\n\n```python\nexecute_opts = {'shots' : 1024, \n                'noise_model': bogota_nm, \n                'coupling_map': bogota_conf.coupling_map,\n                'basis_gates': bogota_conf.basis_gates}\nevaluator = BasicEvaluator(varform, backend, execute_opts = execute_opts)\nvqe_solver = VQESolver(evaluator, minimizer, [0,], name = 'vqe_solver')\nopt_energy, opt_params = vqe_solver.lowest_eig_value(lcps_h2)\n```\n\n    NIT FC OBJFUN GNORM\n    1 3 -1.644299E+00 4.420509E-02\n    Optimization terminated successfully. 
(Exit mode 0)\n Current function value: -1.6534026525155174\n Iterations: 1\n Function evaluations: 9\n Gradient evaluations: 1\n\n\n\n```python\nexecute_opts = {'shots' : 1024}\nevaluator = BasicEvaluator(varform, bogota, execute_opts=execute_opts)\nvqe_solver = VQESolver(evaluator, minimizer, [0,], name = 'vqe_solver_bogota')\nopt_energy, opt_params = vqe_solver.lowest_eig_value(lcps_h2)\n```\n\n NIT FC OBJFUN GNORM\n 1 3 -1.615201E+00 3.814424E-01\n 2 10 -1.619136E+00 8.165022E-02\n Optimization terminated successfully. (Exit mode 0)\n Current function value: -1.6191355224178043\n Iterations: 2\n Function evaluations: 10\n Gradient evaluations: 2\n\n\nNotice that the ground state energy of an $H_2$ molecule is found to be -1.653 $E_h$ when our circuit runs on a noisy backend, with the same qubit coupling map as the Bogota device. When we run on the actual Bogota backend, we obtain an even worse result of -1.619 $E_h$. We compare this to -1.866 $E_h$ when running on an ideal simulator with only statistical noise contributing to potential error. We can try to mitigate some of this discrepancy by applying a `MeasurementFilter` to our circuit measurements.\n\n**Plan of Attack:**\n1. Decide on an optimal qubit layout (or 2 good ones) to minimize extra CX gates and error-prone U2 gates.\n2. Generate **measurement calibration circuits** using these layouts.\n3. Add a measurement filter to the VQE evaluator\n4. Compare to un-filtered results\n\nNote that our circuit uses 4 qubits. Perhaps one of the biggest optimizations we could do would be using parity mapping to reduce qubit requirements.\n\n### Qubit Mapping\n\nKeep in mind our variational circuit:\n\n\n```python\nqc = varform_4qubits_1param.assign_parameters({a: 1})\nqc.draw()\n```\n\nTake a look at the coupling on Bogota, the actual machine we hope to use. 
Then examine the error rates for the CX and single qubit unitaries on the Bogota machine to get a sense of how we might better map our problem to this device.\n\n\n```python\nbogota_conf.coupling_map\n```\n\n\n\n\n [[0, 1], [1, 0], [1, 2], [2, 1], [2, 3], [3, 2], [3, 4], [4, 3]]\n\n\n\n\n```python\n# Print CNOT error from Bogota calibration data\ncx_errors = list(map(lambda cm: bogota_prop.gate_error(\"cx\", cm), bogota_conf.coupling_map))\nfor i in range(len(bogota_conf.coupling_map)):\n print(f' -> qubits {bogota_conf.coupling_map[i]} CNOT error: {cx_errors[i]}')\n```\n\n -> qubits [0, 1] CNOT error: 0.02502260182900265\n -> qubits [1, 0] CNOT error: 0.02502260182900265\n -> qubits [1, 2] CNOT error: 0.010158268780223023\n -> qubits [2, 1] CNOT error: 0.010158268780223023\n -> qubits [2, 3] CNOT error: 0.014415524414420677\n -> qubits [3, 2] CNOT error: 0.014415524414420677\n -> qubits [3, 4] CNOT error: 0.010503141811223582\n -> qubits [4, 3] CNOT error: 0.010503141811223582\n\n\n* CNOT gates between [0,1] and between [2,3] seem to have the largest error. Can we avoid these?\n\n\n```python\n# Print U2 error from Bogota calibration data\nu2_errors = list(map(lambda q: bogota_prop.gate_error(\"sx\", q), range(bogota_conf.num_qubits)))\nfor i in range(bogota_conf.num_qubits):\n print(f' -> qubits {i} U2 error: {u2_errors[i]}')\n```\n\n -> qubits 0 U2 error: 0.00031209152498965555\n -> qubits 1 U2 error: 0.00029958716199301446\n -> qubits 2 U2 error: 0.00017693377775244873\n -> qubits 3 U2 error: 0.0004023787341145875\n -> qubits 4 U2 error: 0.00016725621608793646\n\n\n* Qubit 3 seems to have the largest error. 
Can we avoid using it?\n\nLet's experiment with several different qubit layouts to see if we can reduce the number of CX gates on problematic pairs and U2 gates on problematic qubits.\n\n\n```python\n# all of these have optimal number of CNOTS (3)\nlayout = [2,3,1,4] # 1\nlayout = [1,2,0,3] # 2\nlayout = [3,2,4,1] # 3 Looks the most promising\nlayout = [2,1,3,0] # 4\n\n# These are the equivalent topologies on Valencia\nlayout_valencia = [1,3,2,4]\nlayout_valencia = [1,3,4,2]\nlayout_valencia = [3,1,4,2]\nlayout_valencia = [3,1,2,4]\nlayout_valencia = [3,1,4,0]\nlayout_valencia = [1,3,4,0]\n\n\nqc_l1 = transpile(qc,\n coupling_map=bogota_conf.coupling_map,\n basis_gates=bogota_conf.basis_gates,\n initial_layout=layout,\n optimization_level=1)\nqc_l1.draw()\n```\n\n\n```python\nprint(f'Original circuit depth: {qc.depth()} - Transpiled circuit depth: {qc_l1.depth()}')\n```\n\n Original circuit depth: 3 - Transpiled circuit depth: 6\n\n\n**Summary of Findings**:\n\n* **Layout 1**: depth 6 with opt level 1 (no improvement for higher)\n - downside: uses cx between 0, 1, which has highest error rate, plus 4 U2s on q3, which has highest error rate. 
one cx on 2,3\n* **Layout 2**: depth 6 with opt level 1 (no improvement for higher)\n - downside: uses cx between 0, 1, which has highest error rate, one cx on 2,3\n* **Layout 3**: depth 6 with opt level 1 (no improvement for higher)\n - downside: one cx between 2,3\n* **Layout 4**: depth 6 with opt level 1 (no improvement for higher)\n - downside: uses cx between 0, 1, which has highest error rate, plus one on 2,3.\n\n\n**Conclusion**: Layout 3 is probably optimal.\n\n## Now Create a Measurement Filter\n\n\n```python\nfrom qiskit.circuit import QuantumRegister\nfrom qiskit.ignis.mitigation.measurement import complete_meas_cal\n\n# Generate the calibration circuits for the 4 qubits we measure\nqr = QuantumRegister(4)\n\n# we need our measurement filter to handle 4 qubits\nqubit_list = [0,1,2,3]\n\n# meas_calibs is a list containing 2^n circuits, one for each state.\nmeas_calibs, state_labels = complete_meas_cal(qubit_list=qubit_list, qr=qr, circlabel='mcal')\n\nprint(f'Number of circuits: {len(meas_calibs)}')\nmeas_calibs[1].draw()\n```\n\n\n```python\n# We need the filter to correspond to the layout we are using\ncalibration_layout = [3,2,4,1]\nresult = execute(meas_calibs,\n qasm_simulator,\n shots=8192,\n noise_model=bogota_nm,\n coupling_map=bogota_conf.coupling_map,\n basis_gates=bogota_conf.basis_gates,\n initial_layout=calibration_layout).result()\n```\n\n\n```python\nfrom qiskit.visualization import plot_histogram\n\n# For example, plot histogram for circuit corresponding to state '0101' (index 5)\nplot_histogram(result.get_counts(meas_calibs[5]))\n```\n\n\n```python\nfrom qiskit.ignis.mitigation.measurement import CompleteMeasFitter\n\n# Initialize the measurement correction fitter for a full calibration\nmeas_fitter = CompleteMeasFitter(result, state_labels)\n\n# Get the filter object\nmeas_filter = meas_fitter.filter\n\nfig, ax = plt.subplots(1,1, 
figsize=(10,8))\nmeas_fitter.plot_calibration(ax=ax)\nfig.savefig('images/4_qubit_measurement_filter.svg')\n```\n\n### Bogota Noise Model with the Default Qubit Layout and No Measurement Filter\n\n\n```python\n%autoreload\nexecute_opts = {'shots' : 1024, \n 'noise_model': bogota_nm, \n 'coupling_map':bogota_conf.coupling_map,\n 'basis_gates':bogota_conf.basis_gates,\n }\nevaluator = BasicEvaluator(varform_4qubits_1param, qasm_simulator,execute_opts=execute_opts, measure_filter=None)\nvqe_solver = VQESolver(evaluator, minimizer, [0,], name='vqe_solver')\nenergy_default, opt_params_default = vqe_solver.lowest_eig_value(lcps_h2)\n```\n\n NIT FC OBJFUN GNORM\n 1 3 -1.660741E+00 1.105373E-01\n Optimization terminated successfully. (Exit mode 0)\n Current function value: -1.6647674403625325\n Iterations: 1\n Function evaluations: 10\n Gradient evaluations: 1\n\n\n### Bogota Noise Model with an Improved Qubit Layout but No Measurement Filter\n\n\n```python\n%autoreload\nexecute_opts = {'shots' : 1024, \n 'noise_model': bogota_nm, \n 'coupling_map':bogota_conf.coupling_map,\n 'basis_gates':bogota_conf.basis_gates,\n 'initial_layout': [3,2,4,1]\n }\nevaluator = BasicEvaluator(varform_4qubits_1param, qasm_simulator,execute_opts=execute_opts, measure_filter=None)\nvqe_solver = VQESolver(evaluator, minimizer, [0,], name='vqe_solver')\nenergy_layout, opt_params_layout = vqe_solver.lowest_eig_value(lcps_h2)\n```\n\n NIT FC OBJFUN GNORM\n 1 3 -1.684631E+00 2.120353E-01\n 2 8 -1.689694E+00 2.218513E-01\n 3 14 -1.697248E+00 3.986125E-02\n 4 18 -1.704745E+00 1.969790E-01\n Optimization terminated successfully. 
(Exit mode 0)\n Current function value: -1.6806716594964146\n Iterations: 4\n Function evaluations: 29\n Gradient evaluations: 4\n\n\n### Bogota Noise Model with an Improved Qubit Layout and Measurement Filter\n\n\n```python\n%autoreload\nexecute_opts = {'shots' : 1024, \n 'noise_model': bogota_nm, \n 'coupling_map':bogota_conf.coupling_map,\n 'basis_gates':bogota_conf.basis_gates,\n 'initial_layout': [3,2,4,1]\n }\nevaluator = BasicEvaluator(varform_4qubits_1param, qasm_simulator,execute_opts=execute_opts, measure_filter=meas_filter)\nvqe_solver = VQESolver(evaluator, minimizer, [0,], name='vqe_solver')\nenergy_layout_meas, opt_params_layout_meas = vqe_solver.lowest_eig_value(lcps_h2)\n```\n\n NIT FC OBJFUN GNORM\n 1 3 -1.789338E+00 7.476305E-02\n 2 6 -1.821437E+00 1.549079E-01\n 3 10 -1.825480E+00 8.058220E-02\n Optimization terminated successfully. (Exit mode 0)\n Current function value: -1.8349246159398094\n Iterations: 3\n Function evaluations: 15\n Gradient evaluations: 3\n\n\n### Analysis\n\n**Running with a Simulated Noise Model and 1024 Shots**\n\nCalculated electronic energies at $0.735\\;\\AA$:\n\n| Method | Shots | Backend | Noise Model | Measurement Filter | Energy ($E_h$) | % Error |\n|------|-------|------|-----|-----|-----|-------|\n| Exact | N/A | N/A | N/A | N/A | -1.857275 | 0 |\n| VQE | 1024 | Simulator | None | No | -1.86559 | -0.45 |\n| VQE | 1024 | Simulator | Bogota | No | -1.66074 | 10.58 |\n| VQE (layout optimized) | 1024 | Simulator | Bogota | No | -1.68067 | 9.51 |\n| VQE (layout optimized) | 1024 | Simulator | Bogota | Yes | -1.83492 | 1.20 |\n| VQE | 1024 | Bogota | N/A | No | -1.61914 | 12.82 |\n\nWith no optimizations, the calculated ground state energy is off by approximately 10.6% relative to the \"exact\" solution. Adding an improved qubit layout, we reduce this error to between 9 and 10%. Applying a measurement filter on top of this, we obtain a final error of about 1.2%. 
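The percent errors in the table can be reproduced directly from the listed energies. A minimal check (the signed error is normalized by the magnitude of the exact energy; all numbers are the ones tabulated above):

```python
# Signed percent error of each energy relative to the exact ground-state energy
e_exact = -1.857275  # exact diagonalization result from the table, in Hartree

def percent_error(e, e_ref=e_exact):
    """Signed relative error in percent, normalized by |e_ref|."""
    return 100 * (e - e_ref) / abs(e_ref)

for label, e in [('ideal simulator', -1.86559),
                 ('Bogota noise model', -1.66074),
                 ('+ layout', -1.68067),
                 ('+ layout + filter', -1.83492),
                 ('real Bogota', -1.61914)]:
    print(f'{label:20s} {percent_error(e):6.2f} %')
```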
\n\nMeasurement errors therefore seem to be the most significant source of error for such a short circuit, especially once the number of required CX gates is reduced to 3. This circuit is short enough that decoherence does not play a very large role.\n\n**Running on Bogota and 1024 Shots**\n\n???\n\n**Next Steps:**\nWe should examine how this error contributes to the dissociation curve, and whether it significantly alters the predicted equilibrium distance for $H_2$\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "284bcede78aa3d179144512c8584b6b00f2ee5bc", "size": 302320, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "activity3-2.ipynb", "max_stars_repo_name": "ibeneklins/qiskiteers_h2", "max_stars_repo_head_hexsha": "ed989ae074f19f8ff5153d92bc2eed3453c80fad", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "activity3-2.ipynb", "max_issues_repo_name": "ibeneklins/qiskiteers_h2", "max_issues_repo_head_hexsha": "ed989ae074f19f8ff5153d92bc2eed3453c80fad", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "activity3-2.ipynb", "max_forks_repo_name": "ibeneklins/qiskiteers_h2", "max_forks_repo_head_hexsha": "ed989ae074f19f8ff5153d92bc2eed3453c80fad", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 129.0862510675, "max_line_length": 38008, "alphanum_fraction": 0.8697836729, "converted": true, "num_tokens": 13652, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.49609382947091957, "lm_q2_score": 0.22815650740914753, "lm_q1q2_score": 0.11318703547931423}} {"text": "\n\n\n```python\nfrom google.colab import drive\ndrive.mount('/content/drive')\n```\n\n Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n \n Enter your authorization code:\n \u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\n Mounted at /content/drive\n\n\n# Neuromatch Academy: Week 2, Day 5, Tutorial 1\n# Learning to Predict\n\n__Content creators:__ Marcelo Mattar and Eric DeWitt with help from Matt Krause\n\n__Content reviewers:__ Byron Galbraith and Michael Waskom\n\n\n---\n\n# Tutorial objectives\n \nIn this tutorial, we will learn how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning and examine TD-errors at the presentation of the conditioned and unconditioned stimulus (CS and US) under different CS-US contingencies. These exercises will provide you with an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect to see if Dopamine represents a \"canonical\" model-free RPE. 
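Before diving in, here is the core update the whole tutorial revolves around, sketched in a few lines; the state indices, reward size, and parameter values below are made up purely for illustration:

```python
import numpy as np

# One temporal-difference update: delta = r + gamma * V(s') - V(s)
gamma, alpha = 0.98, 0.1         # discount factor and learning rate
V = np.zeros(5)                  # value estimates for a toy 5-state trial
s, s_next, r = 3, 4, 10.0        # a reward of 10 arrives on leaving state 3

delta = r + gamma * V[s_next] - V[s]  # TD error (the reward prediction error)
V[s] += alpha * delta                 # nudge V(s) toward the bootstrapped target

print(delta, V[s])  # prints: 10.0 1.0
```

Repeating this update over many trials is what propagates value (and the TD error) backwards from the reward to the conditioned stimulus.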
\n\nAt the end of this tutorial: \n* You will learn to use the standard tapped delay line conditioning model\n* You will understand how RPEs move to the CS\n* You will understand how variability in reward size affects RPEs\n* You will understand how differences in US-CS timing affect RPEs\n\n\n```python\n# Imports\nimport numpy as np \nimport matplotlib.pyplot as plt\n```\n\n\n```python\n#@title Figure settings\nimport ipywidgets as widgets # interactive display\n%config InlineBackend.figure_format = 'retina'\nplt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle\")\n```\n\n\n```python\n# @title Helper functions\nfrom matplotlib import ticker\n\ndef plot_value_function(V, ax=None, show=True):\n \"\"\"Plot V(s), the value function\"\"\"\n if not ax:\n fig, ax = plt.subplots()\n\n ax.stem(V, use_line_collection=True)\n ax.set_ylabel('Value')\n ax.set_xlabel('State')\n ax.set_title(\"Value function: $V(s)$\")\n \n if show:\n plt.show()\n\ndef plot_tde_trace(TDE, ax=None, show=True, skip=400):\n \"\"\"Plot the TD Error across trials\"\"\"\n if not ax:\n fig, ax = plt.subplots()\n\n indx = np.arange(0, TDE.shape[1], skip)\n im = ax.imshow(TDE[:,indx])\n positions = ax.get_xticks()\n # Avoid warning when setting string tick labels\n ax.xaxis.set_major_locator(ticker.FixedLocator(positions))\n ax.set_xticklabels([f\"{int(skip * x)}\" for x in positions])\n ax.set_title('TD-error over learning')\n ax.set_ylabel('State')\n ax.set_xlabel('Iterations')\n ax.figure.colorbar(im)\n if show:\n plt.show()\n\ndef learning_summary_plot(V, TDE):\n \"\"\"Summary plot for Ex1\"\"\"\n fig, (ax1, ax2) = plt.subplots(nrows = 2, gridspec_kw={'height_ratios': [1, 2]})\n \n plot_value_function(V, ax=ax1, show=False)\n plot_tde_trace(TDE, ax=ax2, show=False)\n plt.tight_layout()\n\ndef reward_guesser_title_hint(r1, r2):\n \"\"\"Provide a mildly obfuscated hint for a demo.\"\"\"\n if (r1==14 and r2==6) or (r1==6 and r2==14):\n return \"Technically 
correct...(the best kind of correct)\"\n \n if ~(~(r1+r2) ^ 11) - 1 == (6 | 24): # Don't spoil the fun :-)\n return \"Congratulations! You solved it!\"\n\n return \"Keep trying....\"\n\n#@title Default title text\nclass ClassicalConditioning:\n \n def __init__(self, n_steps, reward_magnitude, reward_time):\n \n # Task variables\n self.n_steps = n_steps \n self.n_actions = 0\n self.cs_time = int(n_steps/4) - 1\n\n # Reward variables\n self.reward_state = [0,0]\n self.reward_magnitude = None\n self.reward_probability = None\n self.reward_time = None\n \n self.set_reward(reward_magnitude, reward_time)\n \n # Time step at which the conditioned stimulus is presented\n\n # Create a state dictionary\n self._create_state_dictionary()\n \n def set_reward(self, reward_magnitude, reward_time):\n \n \"\"\"\n Determine reward state and magnitude of reward\n \"\"\"\n if reward_time >= self.n_steps - self.cs_time:\n self.reward_magnitude = 0\n \n else:\n self.reward_magnitude = reward_magnitude\n self.reward_state = [1, reward_time]\n \n def get_outcome(self, current_state):\n \n \"\"\"\n Determine next state and reward\n \"\"\"\n # Update state\n if current_state < self.n_steps - 1: \n next_state = current_state + 1\n else:\n next_state = 0\n \n # Check for reward\n if self.reward_state == self.state_dict[current_state]:\n reward = self.reward_magnitude\n else:\n reward = 0\n \n return next_state, reward\n \n def _create_state_dictionary(self):\n \n \"\"\"\n This dictionary maps number of time steps/ state identities\n in each episode to some useful state attributes:\n \n state - 0 1 2 3 4 5 (cs) 6 7 8 9 10 11 12 ...\n is_delay - 0 0 0 0 0 0 (cs) 1 1 1 1 1 1 1 ...\n t_in_delay - 0 0 0 0 0 0 (cs) 1 2 3 4 5 6 7 ...\n \"\"\"\n d = 0\n\n self.state_dict = {}\n for s in range(self.n_steps):\n if s <= self.cs_time:\n self.state_dict[s] = [0,0]\n else: \n d += 1 # Time in delay \n self.state_dict[s] = [1,d]\n \nclass MultiRewardCC(ClassicalConditioning):\n \"\"\"Classical conditioning 
paradigm, except that one randomly selected reward\n magnitude, from a list, is delivered instead of a single fixed reward.\"\"\"\n def __init__(self, n_steps, reward_magnitudes, reward_time=None):\n \"\"\"Build a multi-reward classical conditioning environment\n Args:\n - nsteps: Maximum number of steps\n - reward_magnitudes: LIST of possible reward magnitudes.\n - reward_time: Single fixed reward time\n Uses numpy global random state.\n \"\"\"\n super().__init__(n_steps, 1, reward_time)\n self.reward_magnitudes = reward_magnitudes\n \n def get_outcome(self, current_state):\n next_state, reward = super().get_outcome(current_state)\n if reward:\n reward = np.random.choice(self.reward_magnitudes)\n return next_state, reward\n \n\nclass ProbabilisticCC(ClassicalConditioning):\n \"\"\"Classical conditioning paradigm, except that rewards are stochastically omitted.\"\"\"\n def __init__(self, n_steps, reward_magnitude, reward_time=None, p_reward=0.75):\n \"\"\"Build a probabilistic classical conditioning environment\n Args:\n - nsteps: Maximum number of steps\n - reward_magnitude: Reward magnitude.\n - reward_time: Single fixed reward time.\n - p_reward: probability that reward is actually delivered in rewarding state\n Uses numpy global random state.\n \"\"\"\n super().__init__(n_steps, reward_magnitude, reward_time)\n self.p_reward = p_reward\n \n def get_outcome(self, current_state):\n next_state, reward = super().get_outcome(current_state)\n if reward:\n reward *= int(np.random.uniform(size=1)[0] < self.p_reward)\n return next_state, reward\n\n```\n\n---\n# Section 1: TD-learning\n\n\n```python\n#@title Video 1: Introduction\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id=\"YoNbc9M92YY\", width=854, height=480, fs=1)\nprint(\"Video available at https://youtu.be/\" + video.id)\nvideo\n```\n\n Video available at https://youtu.be/YoNbc9M92YY\n\n\n\n\n\n\n\n\n\n\n\n__Environment:__\n\n- The agent experiences the environment in episodes or trials. 
\n- Episodes terminate by transitioning to the inter-trial-interval (ITI) state and they are initiated from the ITI state as well. We clamp the value of the terminal/ITI states to zero. \n- The classical conditioning environment is composed of a sequence of states that the agent deterministically transitions through. Starting at State 0, the agent moves to State 1 in the first step, from State 1 to State 2 in the second, and so on. These states represent time in the tapped delay line representation.\n- Within each episode, the agent is presented with a CS and US (reward). \n- The CS is always presented at 1/4 of the total duration of the trial. The US (reward) is then delivered after the CS. The interval between the CS and US is specified by `reward_time`.\n- The agent's goal is to learn to predict expected rewards from each state in the trial. \n\n\n**General concepts**\n\n* Return $G_{t}$: future cumulative reward, which can be written in a recursive form\n\\begin{align}\nG_{t} &= \\sum \\limits_{k = 0}^{\\infty} \\gamma^{k} r_{t+k+1} \\\\\n&= r_{t+1} + \\gamma G_{t+1}\n\\end{align}\nwhere $\\gamma$ is the discount factor that controls the importance of future rewards, and $\\gamma \\in [0, 1]$. 
$\\gamma$ may also be interpreted as the probability of continuing the trajectory.\n* Value function $V_{\\pi}(s_t=s)$: expectation of the return\n\\begin{align}\nV_{\\pi}(s_t=s) &= \\mathbb{E} [ G_{t}\\; | \\; s_t=s, a_{t:\\infty}\\sim\\pi] \\\\\n& = \\mathbb{E} [ r_{t+1} + \\gamma G_{t+1}\\; | \\; s_t=s, a_{t:\\infty}\\sim\\pi]\n\\end{align}\nWith the assumption of a **Markov process**, we thus have:\n\\begin{align}\nV_{\\pi}(s_t=s) &= \\mathbb{E} [ r_{t+1} + \\gamma V_{\\pi}(s_{t+1})\\; | \\; s_t=s, a_{t:\\infty}\\sim\\pi] \\\\\n&= \\sum_a \\pi(a|s) \\sum_{r, s'}p(s', r|s, a)(r + \\gamma V_{\\pi}(s_{t+1}=s'))\n\\end{align}\n\n**Temporal difference (TD) learning**\n\n* With a Markovian assumption, we can use $V(s_{t+1})$ as an imperfect proxy for the true value $G_{t+1}$ (Monte Carlo bootstrapping), and thus obtain the generalised equation to calculate TD-error:\n\\begin{align}\n\\delta_{t} = r_{t+1} + \\gamma V(s_{t+1}) - V(s_{t})\n\\end{align}\n\n* The value is updated using the learning rate constant $\\alpha$:\n\\begin{align}\nV(s_{t}) \\leftarrow V(s_{t}) + \\alpha \\delta_{t}\n\\end{align}\n\n (Reference: https://web.stanford.edu/group/pdplab/pdphandbook/handbookch10.html)\n\n\n\n__Definitions:__\n\n* TD-error:\n\\begin{align}\n\\delta_{t} = r_{t+1} + \\gamma V(s_{t+1}) - V(s_{t})\n\\end{align}\n\n* Value updates:\n\\begin{align}\nV(s_{t}) \\leftarrow V(s_{t}) + \\alpha \\delta_{t}\n\\end{align}\n\n\n## Exercise 1: TD-learning with guaranteed rewards\n \nImplement TD-learning to estimate the state-value function in the classical-conditioning world with guaranteed rewards, with a fixed magnitude, at a fixed delay after the conditioned stimulus, CS. Save TD-errors over learning (i.e., over trials) so we can visualize them afterwards. \n\nIn order to simulate the effect of the CS, you should only update $V(s_{t})$ during the delay period after the CS. This period is indicated by the boolean variable `is_delay`. 
This can be implemented by multiplying the expression for updating the value function by `is_delay`.\n\nUse the provided code to estimate the value function.\n\n\n```python\ndef td_learner(env, n_trials, gamma=0.98, alpha=0.001):\n \"\"\" Temporal Difference learning\n\n Args:\n env (object): the environment to be learned\n n_trials (int): the number of trials to run\n gamma (float): temporal discount factor\n alpha (float): learning rate\n \n Returns:\n ndarray, ndarray: the value function and temporal difference error arrays\n \"\"\"\n V = np.zeros(env.n_steps) # Array to store values over states (time)\n TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors\n\n for n in range(n_trials):\n state = 0 # Initial state\n for t in range(env.n_steps):\n # Get next state and next reward\n next_state, reward = env.get_outcome(state)\n # Is the current state in the delay period (after CS)?\n is_delay = env.state_dict[state][0]\n \n ########################################################################\n ## TODO for students: implement TD error and value function update \n # Fill out function and remove\n #raise NotImplementedError(\"Student excercise: implement TD error and value function update\")\n #################################################################################\n # Write an expression to compute the TD-error\n TDE[state, n] = (reward + gamma * V[next_state] - V[state])\n\n # Write an expression to update the value function\n V[state] += alpha * TDE[state, n] * is_delay\n\n # Update state\n state = next_state\n\n return V, TDE\n\n\n# Uncomment once the td_learner function is complete\nenv = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)\nV, TDE = td_learner(env, n_trials=20000)\nlearning_summary_plot(V, TDE)\n```\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D5_ReinforcementLearning/solutions/W2D5_Tutorial1_Solution_0d9c50de.py)\n\n*Example 
output:*\n\n\n\n\n\n## Interactive Demo 1: US to CS Transfer \n\nDuring classical conditioning, the subject's behavioral response (e.g., salivating) transfers from the unconditioned stimulus (US; like the smell of tasty food) to the conditioned stimulus (CS; like Pavlov ringing his bell) that predicts it. Reward prediction errors play an important role in this process by adjusting the value of states according to their expected, discounted return.\n\nUse the widget below to examine how reward prediction errors change over time. Before training (orange line), only the reward state has high reward prediction error. As training progresses (blue line, slider), the reward prediction errors shift to the conditioned stimulus, where they end up when the trial is complete (green line). \n\nDopamine neurons, which are thought to carry reward prediction errors _in vivo_, show exactly the same behavior!\n\n\n\n```python\n#@title \n\n#@markdown Make sure you execute this cell to enable the widget!\n\nn_trials = 20000\n\n@widgets.interact\ndef plot_tde_by_trial(trial = widgets.IntSlider(value=5000, min=0, max=n_trials-1 , step=1, description=\"Trial #\")):\n if 'TDE' not in globals():\n print(\"Complete Exercise 1 to enable this interactive demo!\")\n else:\n\n fig, ax = plt.subplots()\n ax.axhline(0, color='k') # Use this + basefmt=' ' to keep the legend clean.\n ax.stem(TDE[:, 0], linefmt='C1-', markerfmt='C1d', basefmt=' ', \n label=\"Before Learning (Trial 0)\",\n use_line_collection=True)\n ax.stem(TDE[:, -1], linefmt='C2-', markerfmt='C2s', basefmt=' ', \n label=\"After Learning (Trial $\\infty$)\",\n use_line_collection=True)\n ax.stem(TDE[:, trial], linefmt='C0-', markerfmt='C0o', basefmt=' ', \n label=f\"Trial {trial}\",\n use_line_collection=True)\n \n ax.set_xlabel(\"State in trial\")\n ax.set_ylabel(\"TD Error\")\n ax.set_title(\"Temporal Difference Error by Trial\")\n ax.legend()\n```\n\n\n interactive(children=(IntSlider(value=5000, description='Trial #', 
max=19999), Output()), _dom_classes=('widge\u2026\n\n\n## Interactive Demo 2: Learning Rates and Discount Factors\n\nOur TD-learning agent has two parameters that control how it learns: $\\alpha$, the learning rate, and $\\gamma$, the discount factor. In Exercise 1, we set these parameters to $\\alpha=0.001$ and $\\gamma=0.98$ for you. Here, you'll investigate how changing these parameters alters the model that TD-learning learns.\n\nBefore enabling the interactive demo below, take a moment to think about the functions of these two parameters. $\\alpha$ controls the size of the Value function updates produced by each TD-error. In our simple, deterministic world, will this affect the final model we learn? Is a larger $\\alpha$ necessarily better in more complex, realistic environments?\n\nThe discount rate $\\gamma$ applies an exponentially decaying weight to returns occurring in the future, rather than the present timestep. How does this affect the model we learn? What happens when $\\gamma=0$ or $\\gamma \\geq 1$?\n\nUse the widget to test your hypotheses.\n\n\n\n\n\n```python\n#@title \n\n#@markdown Make sure you execute this cell to enable the widget!\n\n@widgets.interact\ndef plot_summary_alpha_gamma(alpha = widgets.FloatSlider(value=0.001, min=0.001, max=0.1, step=0.0001, description=\"alpha\"),\n gamma = widgets.FloatSlider(value=0.980, min=0, max=1.1, step=0.010, description=\"gamma\")):\n env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10) \n try:\n V_params, TDE_params = td_learner(env, n_trials=20000, gamma=gamma, alpha=alpha)\n except NotImplementedError:\n print(\"Finish Exercise 1 to enable this interactive demo\")\n \n learning_summary_plot(V_params,TDE_params)\n\n\n```\n\n\n interactive(children=(FloatSlider(value=0.001, description='alpha', max=0.1, min=0.001, step=0.0001), FloatSli\u2026\n\n\n[*Click for 
solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D5_ReinforcementLearning/solutions/W2D5_Tutorial1_Solution_12ce49be.py)\n\n\n\n---\n# Section 2: TD-learning with varying reward magnitudes\n\nIn the previous exercise, the environment was as simple as possible. On every trial, the CS predicted the same reward, at the same time, with 100% certainty. In the next few exercises, we will make the environment progressively more complicated and examine the TD-learner's behavior. \n\n\n## Interactive Demo 3: Match the Value Functions\n\nFirst, we will replace the environment with one that dispenses one of several rewards, chosen at random. Shown below is the final value function $V$ for a TD learner that was trained in an environment where the CS predicted a reward of 6 or 14 units; both rewards were equally likely. \n\nCan you find another pair of rewards that causes the agent to learn the same value function? Assume each reward will be dispensed 50% of the time. \n\nHints:\n* Carefully consider the definition of the value function $V$. This can be solved analytically.\n* There is no need to change $\\alpha$ or $\\gamma$. 
\n* Due to the randomness, there may be a small amount of variation.\n\nThe average reward is 10.\n\n\n```python\n#@title \n\n#@markdown Make sure you execute this cell to enable the widget!\n\nn_trials = 20000\nnp.random.seed(2020)\nrng_state = np.random.get_state()\nenv = MultiRewardCC(40, [6, 14], reward_time=10)\nV_multi, TDE_multi = td_learner(env, n_trials, gamma=0.98, alpha=0.001)\n\n@widgets.interact\ndef reward_guesser_interaction(r1 = widgets.IntText(value=0, min=0, max=50, description=\"Reward 1\"),\n r2 = widgets.IntText(value=0, min=0, max=50, description=\"Reward 2\")): \n try:\n env2 = MultiRewardCC(40, [r1, r2], reward_time=10)\n V_guess, _ = td_learner(env2, n_trials, gamma=0.98, alpha=0.001)\n fig, ax = plt.subplots()\n m, l, _ = ax.stem(V_multi, linefmt='y-', markerfmt='yo', basefmt=' ', label=\"Target\", \n use_line_collection=True)\n m.set_markersize(15)\n m.set_markerfacecolor('none')\n l.set_linewidth(4)\n m, _, _ = ax.stem(V_guess, linefmt='r', markerfmt='rx', basefmt=' ', label=\"Guess\",\n use_line_collection=True)\n m.set_markersize(15)\n\n ax.set_xlabel(\"State\")\n ax.set_ylabel(\"Value\")\n ax.set_title(\"Guess V(s)\\n\" + reward_guesser_title_hint(r1, r2))\n ax.legend()\n except NotImplementedError:\n print(\"Please finish Exercise 1 first!\")\n```\n\n\n interactive(children=(IntText(value=0, description='Reward 1'), IntText(value=0, description='Reward 2'), Outp\u2026\n\n\n## Section 2.1: Examining the TD Error\n\nRun the cell below to plot the TD errors from our multi-reward environment. A new feature appears in this plot. What is it? Why does it happen?\n\n\n```python\nplot_tde_trace(TDE_multi)\n```\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D5_ReinforcementLearning/solutions/W2D5_Tutorial1_Solution_dea47c05.py)\n\n\n\n---\n# Section 3: TD-learning with probabilistic rewards\n\nIn this environment, we'll return to delivering a single reward of ten units. 
However, it will be delivered intermittently: on 20 percent of trials, the CS will be shown but the agent will not receive the usual reward; the remaining 80% will proceed as usual.\n\n Run the cell below to simulate. How does this compare with the previous experiment?\n\nEarlier in the notebook, we saw that changing $\\alpha$ had little effect on learning in a deterministic environment. What happens if you set it to a large value, like 1, in this noisier scenario? Does it seem like it will _ever_ converge?\n\n\n```python\nnp.random.set_state(rng_state) # Resynchronize everyone's notebooks\nn_trials = 20000\ntry:\n env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10, \n p_reward=0.8)\n V_stochastic, TDE_stochastic = td_learner(env, n_trials*2, alpha=.001)\n learning_summary_plot(V_stochastic, TDE_stochastic)\nexcept NotImplementedError: \n print(\"Please finish Exercise 1 first\")\n```\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D5_ReinforcementLearning/solutions/W2D5_Tutorial1_Solution_cbbb9c00.py)\n\n\n\n---\n# Summary\n\nIn this notebook, we have developed a simple TD Learner and examined how its state representations and reward prediction errors evolve during training. By manipulating its environment and parameters ($\\alpha$, $\\gamma$), you developed an intuition for how it behaves. \n\nThis simple model closely resembles the behavior of subjects undergoing classical conditioning tasks and the dopamine neurons that may underlie that behavior. You may have implemented TD-reset or used the model to recreate a common experimental error. The update rule used here has been extensively studied for [more than 70 years](https://www.pnas.org/content/108/Supplement_3/15647) as a possible explanation for artificial and biological learning. \n\nHowever, you may have noticed that something is missing from this notebook. 
We carefully calculated the value of each state, but did not use it to actually do anything. Using values to plan _**Actions**_ is coming up next!\n\n# Bonus\n\n## Exercise 2: Removing the CS\n\nIn Exercise 1, you (should have) included a term that depends on the conditioned stimulus. Remove it and see what happens. Do you understand why?\nThis phenomena often fools people attempting to train animals--beware!\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D5_ReinforcementLearning/solutions/W2D5_Tutorial1_Solution_a35b23f3.py)\n\n\n", "meta": {"hexsha": "204a6723e3f5f960647c5b3da4250d1bbd6e7800", "size": 410386, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tutorials/W2D5_ReinforcementLearning/student/W2D5_Tutorial1.ipynb", "max_stars_repo_name": "hnoamany/course-content", "max_stars_repo_head_hexsha": "d89047537e57854c62cb9536a9c768b235fe4bf8", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tutorials/W2D5_ReinforcementLearning/student/W2D5_Tutorial1.ipynb", "max_issues_repo_name": "hnoamany/course-content", "max_issues_repo_head_hexsha": "d89047537e57854c62cb9536a9c768b235fe4bf8", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tutorials/W2D5_ReinforcementLearning/student/W2D5_Tutorial1.ipynb", "max_forks_repo_name": "hnoamany/course-content", "max_forks_repo_head_hexsha": "d89047537e57854c62cb9536a9c768b235fe4bf8", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 410386.0, "max_line_length": 410386, "alphanum_fraction": 0.9345518609, "converted": true, "num_tokens": 
5783, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.49609382947091946, "lm_q2_score": 0.22815649691270323, "lm_q1q2_score": 0.11318703027209295}} {"text": "```python\n# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)\n\n# Toggle cell visibility\n\nfrom IPython.display import HTML\ntag = HTML('''\nToggle cell visibility here.''')\ndisplay(tag)\n\n# Hide the code completely\n\n# from IPython.display import HTML\n# tag = HTML('''''')\n# display(tag)\n```\n\n\n\nToggle cell visibility here.\n\n\n## Regulator Design: observer and state feedback\n\nThis example demonstrates the design of a regulator: a type of controller composed by an observer and a full-state feedback controller. It can be proven that, thanks to the separation principle, the controller and the observer can be designed independently - closed-loop eigenvalues and observer dynamics can be set separately without affecting each other.\n\nNonetheless, closed-loop transient performance depends on how fast the observer is with respect to the desired closed-loop dynamics. \n\nThis example shows the design of a regulator for the controllable and observable system:\n\n\\begin{cases}\n\\dot{\\textbf{x}}=\\begin{bmatrix}1&0&3\\\\0&-4&-1\\\\0&1&-4\\end{bmatrix}\\textbf{x}+\\begin{bmatrix}0\\\\0\\\\1\\end{bmatrix}\\textbf{u} \\\\ \\\\\n\\textbf{y}=\\begin{bmatrix}1&0&0\\end{bmatrix}\\textbf{x}\n\\end{cases}\n\nthat has the transfer function:\n\n$$\nG(s) = C(sI-A)^{-1}B.\n$$\n\n### Development of the state feedback\nThe goal is to place 3 eigenvalues in $-1$ rad/s or faster in order to have a good transient response. A possible solution is: $K = \\begin{bmatrix}\\frac{8}{15}&-4.4&-4\\end{bmatrix}$.\n\n\n### Development of the observer\nGiven the eigenvalues of the controlled system, a better (faster and stable) choice for the observer is $\\lambda_{1,2,3} = -10$ rad/s. 
This can be achieved with $L=\\begin{bmatrix}23&66&\\frac{107}{3}\\end{bmatrix}^T$.\n\n\n### Composition of the regulator\n\nThe regulator can be implemented in two ways, as a transfer function:\n\n\n\nwhere:\n\n$$\nK(s) = -K(sI-A+LC+BK)^{-1}L\\,.\n$$\n\nor as an observer with static feedback:\n\n\n\nRecall that although the obtained closed dynamics is the same, the transfer function implementation may lead to an unstable controller - the eigenvalues of matrix (matrix $A-BK-LC$) may be unstable even if closed-loop dynamics is stable. \n\n### How to use this notebook?\n- Try to change the initial conditions of the estimator (default is $\\begin{bmatrix}0.2&0.2&0.2\\end{bmatrix}^T$) and the observer's eigenvalues and see how the controlled system behaviour changes.\n- Try to change the values in order to achieve settling time for 5% tolerance band of less than 2 s.\n\n**Note:** \n\n- The ideal values refer to the case in which all the states of the system can be measured.\n- The **Inverse reference gain** slider denotes the value for which the reference is divided (as you change the static gain of the closed-loop transfer function); in order to reach zero error for the step response its value is equal to the static gain.\n\n\n```python\n%matplotlib inline\nimport control as control\nimport numpy\nimport sympy as sym\nfrom IPython.display import display, Markdown\nimport ipywidgets as widgets\nimport matplotlib.pyplot as plt\n\n\n#print a matrix latex-like\ndef bmatrix(a):\n \"\"\"Returns a LaTeX bmatrix - by Damir Arbula (ICCT project)\n\n :a: numpy array\n :returns: LaTeX bmatrix as a string\n \"\"\"\n if len(a.shape) > 2:\n raise ValueError('bmatrix can at most display two dimensions')\n lines = str(a).replace('[', '').replace(']', '').splitlines()\n rv = [r'\\begin{bmatrix}']\n rv += [' ' + ' & '.join(l.split()) + r'\\\\' for l in lines]\n rv += [r'\\end{bmatrix}']\n return '\\n'.join(rv)\n\n\n# Display formatted matrix: \ndef vmatrix(a):\n if len(a.shape) > 2:\n 
raise ValueError('bmatrix can at most display two dimensions')\n lines = str(a).replace('[', '').replace(']', '').splitlines()\n rv = [r'\\begin{vmatrix}']\n rv += [' ' + ' & '.join(l.split()) + r'\\\\' for l in lines]\n rv += [r'\\end{vmatrix}']\n return '\\n'.join(rv)\n\n\n#matrixWidget is a matrix looking widget built with a VBox of HBox(es) that returns a numPy array as value !\nclass matrixWidget(widgets.VBox):\n def updateM(self,change):\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.M_[irow,icol] = self.children[irow].children[icol].value\n #print(self.M_[irow,icol])\n self.value = self.M_\n\n def dummychangecallback(self,change):\n pass\n \n \n def __init__(self,n,m):\n self.n = n\n self.m = m\n self.M_ = numpy.matrix(numpy.zeros((self.n,self.m)))\n self.value = self.M_\n widgets.VBox.__init__(self,\n children = [\n widgets.HBox(children = \n [widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)]\n ) \n for j in range(n)\n ])\n \n #fill in widgets and tell interact to call updateM each time a children changes value\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.children[irow].children[icol].value = self.M_[irow,icol]\n self.children[irow].children[icol].observe(self.updateM, names='value')\n #value = Unicode('example@example.com', help=\"The email value.\").tag(sync=True)\n self.observe(self.updateM, names='value', type= 'All')\n \n def setM(self, newM):\n #disable callbacks, change values, and reenable\n self.unobserve(self.updateM, names='value', type= 'All')\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.children[irow].children[icol].unobserve(self.updateM, names='value')\n self.M_ = newM\n self.value = self.M_\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.children[irow].children[icol].value = self.M_[irow,icol]\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.children[irow].children[icol].observe(self.updateM, 
names='value')\n self.observe(self.updateM, names='value', type= 'All') \n\n #self.children[irow].children[icol].observe(self.updateM, names='value')\n\n \n#overlaod class for state space systems that DO NOT remove \"useless\" states (what \"professor\" of automatic control would do this?)\nclass sss(control.StateSpace):\n def __init__(self,*args):\n #call base class init constructor\n control.StateSpace.__init__(self,*args)\n #disable function below in base class\n def _remove_useless_states(self):\n pass\n```\n\n\n```python\n# Preparatory cell\n\nA = numpy.matrix('1 0 3; 0 -4 -1; 0 1 -4')\nB = numpy.matrix('0; 0; 1')\nC = numpy.matrix('1 0 0')\nX0 = numpy.matrix('0.2; 0.2; 0.2')\nK = numpy.matrix([8/15,-4.4,-4])\nL = numpy.matrix([[23],[66],[107/3]])\n\nAw = matrixWidget(3,3)\nAw.setM(A)\nBw = matrixWidget(3,1)\nBw.setM(B)\nCw = matrixWidget(1,3)\nCw.setM(C)\nX0w = matrixWidget(3,1)\nX0w.setM(X0)\nKw = matrixWidget(1,3)\nKw.setM(K)\nLw = matrixWidget(3,1)\nLw.setM(L)\n\n\neig1c = matrixWidget(1,1)\neig2c = matrixWidget(2,1)\neig3c = matrixWidget(1,1)\neig1c.setM(numpy.matrix([-1.])) \neig2c.setM(numpy.matrix([[-1.],[0.]]))\neig3c.setM(numpy.matrix([-1.]))\n\neig1o = matrixWidget(1,1)\neig2o = matrixWidget(2,1)\neig3o = matrixWidget(1,1)\neig1o.setM(numpy.matrix([-10.])) \neig2o.setM(numpy.matrix([[-10.],[0.]]))\neig3o.setM(numpy.matrix([-10.]))\n```\n\n\n```python\n# Misc\n\n#create dummy widget \nDW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))\n\n#create button widget\nSTART = widgets.Button(\n description='Test',\n disabled=False,\n button_style='', # 'success', 'info', 'warning', 'danger' or ''\n tooltip='Test',\n icon='check'\n)\n \ndef on_start_button_clicked(b):\n #This is a workaround to have intreactive_output call the callback:\n # force the value of the dummy widget to change\n if DW.value> 0 :\n DW.value = -1\n else: \n DW.value = 1\n pass\nSTART.on_click(on_start_button_clicked)\n\n# Define type of method \nselm = 
widgets.Dropdown(\n options= ['Set K and L', 'Set the eigenvalues'],\n value= 'Set the eigenvalues',\n description='',\n disabled=False\n)\n\n# Define the number of complex eigenvalues\nsele = widgets.Dropdown(\n options= ['0 complex eigenvalues', '2 complex eigenvalues'],\n value= '0 complex eigenvalues',\n description='Complex eigenvalues:',\n style = {'description_width': 'initial'},\n disabled=False\n)\n\n#define type of ipout \nselu = widgets.Dropdown(\n options=['impulse', 'step', 'sinusoid', 'square wave'],\n value='step',\n description='Type of reference:',\n style = {'description_width': 'initial'},\n disabled=False\n)\n# Define the values of the input\nu = widgets.FloatSlider(\n value=1,\n min=0,\n max=20.0,\n step=0.1,\n description='Reference:',\n disabled=False,\n continuous_update=False,\n orientation='horizontal',\n readout=True,\n readout_format='.1f',\n)\nperiod = widgets.FloatSlider(\n value=0.5,\n min=0.01,\n max=4,\n step=0.01,\n description='Period: ',\n disabled=False,\n continuous_update=False,\n orientation='horizontal',\n readout=True,\n readout_format='.2f',\n)\n\ngain_w = widgets.FloatText(\n value=1.,\n description='',\n disabled=True\n)\n\ngain_id_w = widgets.FloatText(\n value=1.,\n description='',\n disabled=True\n)\n\ngain_w2 = widgets.FloatText(\n value=1.,\n description='',\n disabled=True\n)\n\ngain_id_w2 = widgets.FloatText(\n value=1.,\n description='',\n disabled=True\n)\n```\n\n\n```python\n# Support functions\n\ndef eigen_choice(sele):\n if sele == '0 complex eigenvalues':\n eig1c.children[0].children[0].disabled = False\n eig2c.children[1].children[0].disabled = True\n eig1o.children[0].children[0].disabled = False\n eig2o.children[1].children[0].disabled = True\n eig = 0\n if sele == '2 complex eigenvalues':\n eig1c.children[0].children[0].disabled = True\n eig2c.children[1].children[0].disabled = False\n eig1o.children[0].children[0].disabled = True\n eig2o.children[1].children[0].disabled = False\n eig = 2\n return 
eig\n\ndef method_choice(selm):\n if selm == 'Set K and L':\n method = 1\n sele.disabled = True\n if selm == 'Set the eigenvalues':\n method = 2\n sele.disabled = False\n return method\n```\n\n## Implementation as transfer function controller\n\n\n```python\nsols = numpy.linalg.eig(A)\n\ndef main_callback(Aw, Bw, X0w, K, L, eig1c, eig2c, eig3c, eig1o, eig2o, eig3o, u, period, selm, sele, selu, DW):\n eige = eigen_choice(sele)\n method = method_choice(selm)\n \n if method == 1:\n solc = numpy.linalg.eig(A-B*K)\n solo = numpy.linalg.eig(A-L*C)\n if method == 2:\n if eige == 0:\n K = control.acker(A, B, [eig1c[0,0], eig2c[0,0], eig3c[0,0]])\n Kw.setM(K)\n L = control.acker(A.T, C.T, [eig1o[0,0], eig2o[0,0], eig3o[0,0]]).T\n Lw.setM(L)\n if eige == 2:\n K = control.acker(A, B, [eig3c[0,0], \n numpy.complex(eig2c[0,0],eig2c[1,0]), \n numpy.complex(eig2c[0,0],-eig2c[1,0])])\n Kw.setM(K)\n L = control.acker(A.T, C.T, [eig3o[0,0], \n numpy.complex(eig2o[0,0],eig2o[1,0]), \n numpy.complex(eig2o[0,0],-eig2o[1,0])]).T\n Lw.setM(L)\n \n \n Gs = sss(A,B,C,0)\n Ks = sss(A-B*K-L*C,L,-K,0)\n Fs = control.series(-Ks,Gs)\n sys = control.feedback(Fs)\n \n Gs_id = sss(A,B,sym.eye(3),sym.zeros(3,1))\n Fs_id = control.series(K,Gs_id)\n A1 = numpy.matrix(Fs_id.A-Fs_id.B*Fs_id.C)\n B1 = numpy.matrix(Fs_id.B*sym.Matrix([[1],[0],[0]]))\n C1 = numpy.matrix(sym.Matrix([1,0,0]).T*Fs_id.C)\n sys_id = sss(A1,B1,C1,0)\n \n sys_o = sss(A-L*C,numpy.hstack((L,B)),sym.eye(3),sym.zeros(3,2))\n \n dcgain = control.dcgain(sys)\n t = numpy.linspace(0, 1000, 2)\n t, y = control.step_response(sys_id,t)\n dcgain_id = y[-1]\n gain_w.value = dcgain\n gain_id_w.value = dcgain_id\n if dcgain != 0 and dcgain_id != 0:\n u1 = u/gain_w.value\n u2 = u/gain_id_w.value\n else:\n print('The inverse gain set is 0 and it is changed to 1')\n u1 = u/1\n u2 = u/1\n \n solc = numpy.linalg.eig(sys.A)\n solo = numpy.linalg.eig(A-L*C-B*K)\n print('The system\\'s eigenvalues are:', round(sols[0][0],2),',', 
round(sols[0][1],2),'and', round(sols[0][2],2))\n print('The controlled closed loop system\\'s eigenvalues are:', \n round(solc[0][0],2),',', \n round(solc[0][1],2),',', \n round(solc[0][2],2),',',\n round(solc[0][3],2),',',\n round(solc[0][4],2),'and',\n round(solc[0][5],2))\n print('The controller\\'s eigenvalues are:', round(solo[0][0],2),',', round(solo[0][1],2),'and', round(solo[0][2],2))\n print('')\n print('The static gain of the closed loop system (from the reference to the output) is: %.5f' %dcgain)\n print('The static gain of the closed loop ideal system (from the reference to the output) is: %.5f' %dcgain_id)\n \n X0w1 = numpy.matrix([[X0w[0,0]],[X0w[1,0]],[X0w[2,0]],[0],[0],[0]])\n T = numpy.linspace(0, 12, 1000)\n \n if selu == 'impulse': #selu\n U = [0 for t in range(0,len(T))]\n U[0] = u\n U1 = [0 for t in range(0,len(T))]\n U1[0] = u1\n U2 = [0 for t in range(0,len(T))]\n U2[0] = u2\n T, yout, xout = control.forced_response(sys,T,U1,X0w1)\n T, yout_id, xout_id = control.forced_response(sys_id,T,U2,[0, 0, 0])\n T, yout_k, xout_k = control.forced_response(Ks,T,yout-U1,X0w)\n T, yout_o, xout_o = control.forced_response(sys_o,T,[yout,yout_k],X0w)\n if selu == 'step':\n U = [u for t in range(0,len(T))]\n U1 = [u1 for t in range(0,len(T))]\n U2 = [u2 for t in range(0,len(T))]\n T, yout, xout = control.forced_response(sys,T,U1,X0w1)\n T, yout_id, xout_id = control.forced_response(sys_id,T,U2,[0, 0, 0])\n T, yout_k, xout_k = control.forced_response(Ks,T,yout-U1,X0w)\n T, yout_o, xout_o = control.forced_response(sys_o,T,[yout,yout_k],X0w)\n if selu == 'sinusoid':\n U = u*numpy.sin(2*numpy.pi/period*T)\n U1 = u1*numpy.sin(2*numpy.pi/period*T)\n U2 = u2*numpy.sin(2*numpy.pi/period*T)\n T, yout, xout = control.forced_response(sys,T,U1,X0w1)\n T, yout_id, xout_id = control.forced_response(sys_id,T,U2,[0, 0, 0])\n T, yout_k, xout_k = control.forced_response(Ks,T,yout-U1,X0w)\n T, yout_o, xout_o = control.forced_response(sys_o,T,[yout,yout_k],X0w)\n if selu == 
'square wave':\n U = u*numpy.sign(numpy.sin(2*numpy.pi/period*T))\n U1 = u1*numpy.sign(numpy.sin(2*numpy.pi/period*T))\n U2 = u2*numpy.sign(numpy.sin(2*numpy.pi/period*T))\n T, yout, xout = control.forced_response(sys,T,U1,X0w1)\n T, yout_id, xout_id = control.forced_response(sys_id,T,U2,[0, 0, 0])\n T, yout_k, xout_k = control.forced_response(Ks,T,yout-U1,X0w)\n T, yout_o, xout_o = control.forced_response(sys_o,T,[yout,yout_k],X0w)\n # N.B. i primi 3 stati di xout sono quelli dello stimatore, mentre gli ultimi 3 sono quelli del sistema \"reale\"\n \n fig = plt.figure(num='Simulation1', figsize=(16,17))\n mag, phase, omega = control.bode_plot(sys,Plot = False)\n mag = control.mag2db(mag)\n phase = phase*180/numpy.pi\n fig.add_subplot(321)\n plt.title('Bode plot: magnitude')\n plt.semilogx(omega,mag)\n plt.xlabel('$\\omega$ [rad/s]')\n plt.ylabel('Mag. [dB]')\n plt.grid(True,which=\"both\")\n \n fig.add_subplot(323)\n plt.title('Bode plot: phase')\n plt.semilogx(omega,phase)\n plt.xlabel('$\\omega$ [rad/s]')\n plt.ylabel('Phase [deg]')\n plt.grid(True,which=\"both\")\n \n fig.add_subplot(325)\n plt.title('Output response')\n plt.ylabel('Output')\n plt.plot(T,yout,T,yout_id,'g',T,U,'r--')\n plt.xlabel('$t$ [s]')\n plt.legend(['$y$','$y_{ideal}$','Reference'])\n plt.axvline(x=0,color='black',linewidth=0.8)\n plt.axhline(y=0,color='black',linewidth=0.8)\n plt.grid()\n \n fig.add_subplot(322)\n plt.title('First state response')\n plt.ylabel('$x_1$')\n plt.plot(T,xout_o[0],T,xout[3],T,xout_id[0],'g')\n plt.xlabel('$t$ [s]')\n plt.legend(['$x_{1est}$','$x_{1real}$','$x_{1ideal}$'])\n plt.axvline(x=0,color='black',linewidth=0.8)\n plt.axhline(y=0,color='black',linewidth=0.8)\n plt.grid()\n \n fig.add_subplot(324)\n plt.title('Second state response')\n plt.ylabel('$x_2$')\n plt.plot(T,xout_o[1],T,xout[4],T,xout_id[1],'g')\n plt.xlabel('$t$ [s]')\n plt.legend(['$x_{2est}$','$x_{2real}$','$x_{2ideal}$'])\n plt.axvline(x=0,color='black',linewidth=0.8)\n 
plt.axhline(y=0,color='black',linewidth=0.8)\n plt.grid()\n \n fig.add_subplot(326)\n plt.title('Third state response')\n plt.ylabel('$x_3$')\n plt.plot(T,xout_o[2],T,xout[5],T,xout_id[2],'g')\n plt.xlabel('$t$ [s]')\n plt.legend(['$x_{3est}$','$x_{3real}$','$x_{3ideal}$'])\n plt.axvline(x=0,color='black',linewidth=0.8)\n plt.axhline(y=0,color='black',linewidth=0.8)\n plt.grid()\n\n \nalltogether = widgets.VBox([widgets.HBox([selm, \n sele,\n selu]),\n widgets.Label(' ',border=3),\n widgets.HBox([widgets.Label('K:',border=3), Kw, \n widgets.Label(' ',border=3),\n widgets.Label(' ',border=3),\n widgets.Label('Eigenvalues:',border=3), \n eig1c, \n eig2c, \n eig3c,\n widgets.Label(' ',border=3),\n widgets.Label(' ',border=3),\n widgets.Label('X0 est.:',border=3), X0w]),\n widgets.Label(' ',border=3),\n widgets.HBox([widgets.Label('L:',border=3), Lw, \n widgets.Label(' ',border=3),\n widgets.Label(' ',border=3),\n widgets.Label('Eigenvalues:',border=3), \n eig1o, \n eig2o, \n eig3o,\n widgets.Label(' ',border=3),\n widgets.VBox([widgets.Label('Inverse reference gain:',border=3),\n widgets.Label('Inverse ideal reference gain:',border=3)]),\n widgets.VBox([gain_w,gain_id_w])]),\n widgets.Label(' ',border=3),\n widgets.HBox([u, \n period, \n START])])\nout = widgets.interactive_output(main_callback, {'Aw':Aw, 'Bw':Bw, 'X0w':X0w, 'K':Kw, 'L':Lw,\n 'eig1c':eig1c, 'eig2c':eig2c, 'eig3c':eig3c, 'eig1o':eig1o, 'eig2o':eig2o, 'eig3o':eig3o, \n 'u':u, 'period':period, 'selm':selm, 'sele':sele, 'selu':selu, 'DW':DW})\nout.layout.height = '1120px'\ndisplay(out, alltogether)\n```\n\n\n Output(layout=Layout(height='1120px'))\n\n\n\n VBox(children=(HBox(children=(Dropdown(index=1, options=('Set K and L', 'Set the eigenvalues'), value='Set the\u2026\n\n\n## Implementation as observer\n\n\n```python\nsols = numpy.linalg.eig(A)\n\ndef main_callback2(Aw, Bw, X0w, K, L, eig1c, eig2c, eig3c, eig1o, eig2o, eig3o, u, period, selm, sele, selu, DW):\n eige = eigen_choice(sele)\n method = 
method_choice(selm)\n \n if method == 1:\n solc = numpy.linalg.eig(A-B*K)\n solo = numpy.linalg.eig(A-L*C)\n if method == 2:\n if eige == 0:\n K = control.acker(A, B, [eig1c[0,0], eig2c[0,0], eig3c[0,0]])\n Kw.setM(K)\n L = control.acker(A.T, C.T, [eig1o[0,0], eig2o[0,0], eig3o[0,0]]).T\n Lw.setM(L)\n if eige == 2:\n K = control.acker(A, B, [eig3c[0,0], \n numpy.complex(eig2c[0,0],eig2c[1,0]), \n numpy.complex(eig2c[0,0],-eig2c[1,0])])\n Kw.setM(K)\n L = control.acker(A.T, C.T, [eig3o[0,0], \n numpy.complex(eig2o[0,0],eig2o[1,0]), \n numpy.complex(eig2o[0,0],-eig2o[1,0])]).T\n Lw.setM(L)\n \n \n Gs = sss(A,B,numpy.vstack((numpy.eye(3),[0,0,0])),[[0],[0],[0],[1]])\n Os = sss(A-L*C,numpy.hstack((L,B)),numpy.vstack((-K,numpy.eye(3))),[[0,0],[0,0],[0,0],[0,0]])\n Gas = control.append(Gs,Os)\n sys = control.connect(Gas,[[2,1],[3,4],[1,5]],[1],[1,2,3,6,7,8])\n \n Gs_id = sss(A,B,sym.eye(3),sym.zeros(3,1))\n Fs_id = control.series(K,Gs_id)\n A1 = numpy.matrix(Fs_id.A-Fs_id.B*Fs_id.C)\n B1 = numpy.matrix(Fs_id.B*sym.Matrix([[1],[0],[0]]))\n C1 = numpy.matrix(sym.Matrix([1,0,0]).T*Fs_id.C)\n sys_id = sss(A1,B1,C1,0)\n\n \n dcgain = control.dcgain(sys[0,0])\n t = numpy.linspace(0, 1000, 2)\n t, y = control.step_response(sys_id,t)\n dcgain_id = y[-1]\n gain_w2.value = dcgain\n gain_id_w2.value = dcgain_id\n if dcgain != 0 and dcgain_id != 0:\n u1 = u/gain_w2.value\n u2 = u/gain_id_w2.value\n else:\n print('The inverse gain setted is 0 and it is changed to 1')\n u1 = u/1\n u2 = u/1\n \n solc = numpy.linalg.eig(sys.A)\n print('The system\\'s eigenvalues are:', round(sols[0][0],2),',', round(sols[0][1],2),'and', round(sols[0][2],2))\n print('The controlled closed loop system\\'s eigenvalues are:', \n round(solc[0][0],2),',', \n round(solc[0][1],2),',', \n round(solc[0][2],2),',',\n round(solc[0][3],2),',',\n round(solc[0][4],2),'and',\n round(solc[0][5],2))\n print('')\n print('The static gain of the closed loop system (from the reference to the output) is: %.5f' %dcgain)\n 
print('The static gain of the closed loop ideal system (from the reference to the output) is: %.5f' %dcgain_id)\n \n X0w1 = numpy.matrix([[0],[0],[0],[X0w[0,0]],[X0w[1,0]],[X0w[2,0]]])\n T = numpy.linspace(0, 12, 1000)\n \n if selu == 'impulse': #selu\n U = [0 for t in range(0,len(T))]\n U[0] = u\n U1 = [0 for t in range(0,len(T))]\n U1[0] = u1\n U2 = [0 for t in range(0,len(T))]\n U2[0] = u2\n T, yout, xout = control.forced_response(sys,T,U1,X0w1)\n T, yout_id, xout_id = control.forced_response(sys_id,T,U2,[0, 0, 0])\n if selu == 'step':\n U = [u for t in range(0,len(T))]\n U1 = [u1 for t in range(0,len(T))]\n U2 = [u2 for t in range(0,len(T))]\n T, yout, xout = control.forced_response(sys,T,U1,X0w1)\n T, yout_id, xout_id = control.forced_response(sys_id,T,U2,[0, 0, 0])\n if selu == 'sinusoid':\n U = u*numpy.sin(2*numpy.pi/period*T)\n U1 = u1*numpy.sin(2*numpy.pi/period*T)\n U2 = u2*numpy.sin(2*numpy.pi/period*T)\n T, yout, xout = control.forced_response(sys,T,U1,X0w1)\n T, yout_id, xout_id = control.forced_response(sys_id,T,U2,[0, 0, 0])\n if selu == 'square wave':\n U = u*numpy.sign(numpy.sin(2*numpy.pi/period*T))\n U1 = u1*numpy.sign(numpy.sin(2*numpy.pi/period*T))\n U2 = u2*numpy.sign(numpy.sin(2*numpy.pi/period*T))\n T, yout, xout = control.forced_response(sys,T,U1,X0w1)\n T, yout_id, xout_id = control.forced_response(sys_id,T,U2,[0, 0, 0])\n # N.B. i primi 3 stati di xout sono quelli del sistema, mentre gli ultimi 3 sono quelli dell'osservatore\n \n fig = plt.figure(num='Simulation1', figsize=(16,17))\n mag, phase, omega = control.bode_plot(sys[0,0],Plot = False)\n mag = control.mag2db(mag)\n phase = phase*180/numpy.pi\n fig.add_subplot(321)\n plt.title('Bode plot: magnitude')\n plt.semilogx(omega,mag)\n plt.xlabel('$\\omega$ [rad/s]')\n plt.ylabel('Mag. 
[dB]')\n plt.grid(True,which=\"both\")\n \n fig.add_subplot(323)\n plt.semilogx(omega,phase)\n plt.title('Bode plot: phase')\n plt.xlabel('$\\omega$ [rad/s]')\n plt.ylabel('Phase [deg]')\n plt.grid(True,which=\"both\")\n \n fig.add_subplot(325)\n plt.title('Output response')\n plt.ylabel('Output')\n plt.plot(T,yout[0],T,yout_id,'g',T,U,'r--')\n plt.xlabel('$t$ [s]')\n plt.legend(['$y$','$y_{ideal}$','Reference'])\n plt.axvline(x=0,color='black',linewidth=0.8)\n plt.axhline(y=0,color='black',linewidth=0.8)\n plt.grid()\n \n fig.add_subplot(322)\n plt.title('First state response')\n plt.ylabel('$x_1$')\n plt.plot(T,yout[3],T,yout[0],T,xout_id[0],'g')\n plt.xlabel('$t$ [s]')\n plt.legend(['$x_{1est}$','$x_{1real}$','$x_{1ideal}$'])\n plt.axvline(x=0,color='black',linewidth=0.8)\n plt.axhline(y=0,color='black',linewidth=0.8)\n plt.grid()\n \n fig.add_subplot(324)\n plt.title('Second state response')\n plt.ylabel('$x_2$')\n plt.plot(T,yout[4],T,yout[1],T,xout_id[1],'g')\n plt.xlabel('$t$ [s]')\n plt.legend(['$x_{2est}$','$x_{2real}$','$x_{2ideal}$'])\n plt.axvline(x=0,color='black',linewidth=0.8)\n plt.axhline(y=0,color='black',linewidth=0.8)\n plt.grid()\n \n fig.add_subplot(326)\n plt.title('Third state response')\n plt.ylabel('$x_3$')\n plt.plot(T,yout[5],T,yout[2],T,xout_id[2],'g')\n plt.xlabel('$t$ [s]')\n plt.legend(['$x_{3est}$','$x_{3real}$','$x_{3ideal}$'])\n plt.axvline(x=0,color='black',linewidth=0.8)\n plt.axhline(y=0,color='black',linewidth=0.8)\n plt.grid()\n \nalltogether2 = widgets.VBox([widgets.HBox([selm, \n sele,\n selu]),\n widgets.Label(' ',border=3),\n widgets.HBox([widgets.Label('K:',border=3), Kw, \n widgets.Label(' ',border=3),\n widgets.Label(' ',border=3),\n widgets.Label('Eigenvalues:',border=3), \n eig1c, \n eig2c, \n eig3c,\n widgets.Label(' ',border=3),\n widgets.Label(' ',border=3),\n widgets.Label('X0 est.:',border=3), X0w]),\n widgets.Label(' ',border=3),\n widgets.HBox([widgets.Label('L:',border=3), Lw, \n widgets.Label(' 
',border=3),\n widgets.Label(' ',border=3),\n widgets.Label('Eigenvalues:',border=3), \n eig1o, \n eig2o, \n eig3o,\n widgets.Label(' ',border=3),\n widgets.VBox([widgets.Label('Inverse reference gain:',border=3),\n widgets.Label('Inverse ideal reference gain:',border=3)]),\n widgets.VBox([gain_w2,gain_id_w2])]),\n widgets.Label(' ',border=3),\n widgets.HBox([u, \n period, \n START])])\nout2 = widgets.interactive_output(main_callback2, {'Aw':Aw, 'Bw':Bw, 'X0w':X0w, 'K':Kw, 'L':Lw,\n 'eig1c':eig1c, 'eig2c':eig2c, 'eig3c':eig3c, 'eig1o':eig1o, 'eig2o':eig2o, 'eig3o':eig3o, \n 'u':u, 'period':period, 'selm':selm, 'sele':sele, 'selu':selu, 'DW':DW})\nout2.layout.height = '1120px'\ndisplay(out2, alltogether2)\n```\n\n\n Output(layout=Layout(height='1120px'))\n\n\n\n VBox(children=(HBox(children=(Dropdown(index=1, options=('Set K and L', 'Set the eigenvalues'), value='Set the\u2026\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "660f2ec5bcc01a1ed569cccf09b32cf5f24dfd42", "size": 38948, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ICCT_en/examples/04/SS-34-Regulator_design.ipynb", "max_stars_repo_name": "ICCTerasmus/ICCT", "max_stars_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-05-22T18:42:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-03T14:10:22.000Z", "max_issues_repo_path": "ICCT/ENG/examples/04/SS-34-Regulator_design.ipynb", "max_issues_repo_name": "tuxsaurus/ICCT", "max_issues_repo_head_hexsha": "30d1aea4fb056c9736c9b4c5a0f50fff14fa6382", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICCT/ENG/examples/04/SS-34-Regulator_design.ipynb", "max_forks_repo_name": "tuxsaurus/ICCT", "max_forks_repo_head_hexsha": "30d1aea4fb056c9736c9b4c5a0f50fff14fa6382", 
"max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-24T11:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-29T16:36:18.000Z", "avg_line_length": 42.060475162, "max_line_length": 365, "alphanum_fraction": 0.4689329362, "converted": true, "num_tokens": 8436, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.47268347662043286, "lm_q2_score": 0.23934933647101644, "lm_q1q2_score": 0.11313647648991382}} {"text": "```python\nfrom IPython.core.display import HTML, Image\ncss_file = 'style.css'\nHTML(open(css_file, 'r').read())\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n### Library import\n\n\n```python\nfrom sympy import init_printing, Matrix, symbols, eye, Rational\ninit_printing()\n```\n\n# Elimination\n\n## A system of linear equations\n\nIn the previous lesson, we had a brief glimpse at linear systems. The _linear_ in _linear systems_ refers to the fact that each variable appears on its own (i.e. to the power $1$ and not in the form $x \\times y$ or the like) and it is not transcendental. If a solution exists, it then satisfies all of the equations at once. We will consider the linear system in (1).\n\n$$ \\begin{align} 1x+2y+1z &= 2 \\\\ 3x + 8y + 1z &= 12 \\\\ 0x + 4y + 1z &= 2 \\end{align} \\tag{1} $$\n\nA possible solution for $x,y$, and $z$ is given in (2), where $x=2$, $y=1$, and $z = -2$.\n\n$$ \\begin{align} 1\\left(2\\right)+2\\left(1\\right)+1\\left(-2\\right) &= 2 \\\\ 3\\left(2\\right)+8\\left(1\\right)+1\\left(-2\\right) &= 12 \\\\ 0\\left(2\\right)+4\\left(1\\right)+1\\left(-2\\right) &= 2 \\end{align} \\tag{2} $$\n\nSince (1) is a set (three) equations that have a solution ( or possibly solutions) for their variables in common, all left- and all right hand sides can be manipulated in certain ways.\n\nWe could simply exchange the order of the equations. 
In (3) the second and third equations have been exchanged, called _row exchange_.\n\n$$ \\begin{align}1x+2y+1z &= 2 \\\\ 0x + 4y + 1z &= 2 \\\\ 3x + 8y + 1z &= 12 \\end{align} \\tag{3} $$\n\nWe could multiply both the left- and right-hand side of one of the equations by a scalar. In (4) we multiply the first equation by $2$.\n\n$$ \\begin{align} 2x+4y+2z &= 4 \\\\ 3x + 8y + 1z &= 12 \\\\ 0x + 4y + 1z &= 2 \\end{align} \\tag{4}$$\n\nLastly, we can subtract a constant multiple of one equation from another.\n\nThese three _manipulations_ serve an excellent purpose, as they allow us to eliminate one (or more) of the variables (that is, give it a coefficient of $0$). Remember that we are trying to satisfy all three equations and have three unknowns. We could certainly struggle through this problem algebraically by substitution, but linear algebra makes it much easier.\n\nIn (5) we have multiplied the first equation by $3$ (both sides, so that we maintain the integrity of the equation) and subtracted the left-hand side of this new equation from the left-hand side of the second equation and the new right-hand side of the first equation from the right-hand side of the second equation. This is quite legitimate, as the left- and right-hand sides are equal (it is an equation after all) and so, when subtracting from the second equation, we are still doing the same thing to the left-hand side as the right-hand side.\n\n$$ \\begin{align} 1x+2y+1z &= 2 \\\\ 0x + 2y - 2z &= 6 \\\\ 0x + 4y + 1z &= 2 \\end{align} \\tag{5} $$\n\nThis has introduced a nice $0$ in the second equation. 
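This elimination step can also be mirrored numerically. Below is a minimal sketch using `sympy`'s `Matrix` (imported at the top of this notebook); each row holds the coefficients and right-hand side of one of the equations in (1).

```python
from sympy import Matrix

# Coefficients and right-hand sides of the system in (1), one equation per row.
M = Matrix([[1, 2, 1, 2],
            [3, 8, 1, 12],
            [0, 4, 1, 2]])

# Row 2 <- row 2 - 3 * row 1 (sympy matrices are mutable by default).
M[1, :] = M[1, :] - 3 * M[0, :]
M  # the second row is now [0, 2, -2, 6], matching (5)
```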
Let's go further and multiply the second equation by $2$ and subtract that from the third equation as seen in (6) below.\n\n$$ \\begin{align} 1x+2y+1z &= 2 \\\\ 0x + 2y - 2z &= 6 \\\\ 0x + 0y + 5z &= -10 \\end{align} \\tag{6} $$\n\nNow the last equation is easy to solve for $z$.\n\n$$ z=-2 \\tag{7}$$\n\nKnowing this, we can go back up to the second equation and solve for $y$.\n\n$$ \\begin{align} 2y-2(-2) &= 6 \\\\ y &= 1 \\end{align} \\tag{8} $$\n\nFinally, up to the first equation.\n\n$$ \\begin{align} x+2(1)+1(-2) &= 2 \\\\ x &= 2 \\end{align} \\tag{9} $$\n\nWe have solved the linear system by substitution. We need not have gone straight for substitution, though. Indeed, we could have tried to get zeros above all our leading (non-zero) coefficients. Let's just clean up the third equation by multiplying throughout by $\\frac{1}{5}$ as in (10) below.\n\n$$ \\begin{align} 1x+2y+1z &= 2 \\\\ 0x + 2y - 2z &= 6 \\\\ 0x + 0y + 1z &= -2 \\end{align} \\tag{10} $$\n\nNow we have to get rid of the $-2z$ in the second equation, which we can do by multiplying the third equation by $-2$ and subtracting from the second equation.\n\n$$ \\begin{align} 1x+2y+1z &= 2 \\\\ 0x + 2y - 0z &= 2 \\\\ 0x + 0y + 1z &= -2 \\end{align} \\tag{11}$$\n\nMultiplying the second equation by $\\frac{1}{2}$ yields (12).\n\n$$ \\begin{align}1x+2y+1z &= 2 \\\\ 0x + 1y + 0z &= 1 \\\\ 0x + 0y + 1z &= -2 \\end{align} \\tag{12} $$\n\nNow we can do the same to get rid of the $1z$ in the first equation (multiply the third equation by $1$ and subtract it from the first equation).\n\n$$ \\begin{align} 1x+2y+0z &= 4 \\\\ 0x + 1y + 0z &= 1 \\\\ 0x + 0y + 1z &= -2 \\end{align} \\tag{12}$$\n\nNow to get rid of the $2y$ in the first equation, which is above our leading $1y$ in the second equation. 
Simple enough, we multiply the second equation by $2$ and subtract that from the first equation.\n\n$$ \\begin{align} 1x+0y+0z &= 2 \\\\ 0x + 1y + 0z &= 1 \\\\ 0x + 0y + 1z &= -2 \\end{align} \\tag{13} $$\n\nThe solution is now clear for $x,y$, and $z$.\n\nWe need not rewrite all of the variables all the time. We can simply write the coefficients. The augmented matrix of coefficients is in (14).\n\n$$ \\begin{bmatrix} 1&2&1&2\\\\3&8&1&12\\\\0&4&1&2 \\end{bmatrix} \\tag{14} $$\n\nA matrix has rows and columns (corresponding, in position, to our algebraic equations above). We simply omit the variables. The upper-left entry is called the pivot. Our aim is to get everything below it to be a zero (as we did with the algebra). We do exactly the same as we did above, which is multiply row 1 by 3 and subtract these new values from row 2.\n\n$$ \\begin{bmatrix} 1&2&1&2\\\\0&2&-2&6\\\\0&4&1&2 \\end{bmatrix} \\tag{15} $$\n\nNow $2$ times row 2 subtracted from row 3.\n\n$$ \\begin{bmatrix} 1&2&1&2\\\\0&2&-2&6\\\\0&0&5&-10 \\end{bmatrix} \\tag{16} $$\n\nMultiply the last row by $\\frac{1}{5}$.\n\n$$ \\begin{bmatrix} 1&2&1&2\\\\0&2&-2&6\\\\0&0&1&-2 \\end{bmatrix} \\tag{17} $$\n\nThis shows $z = -2$ in the last row of (17). \n\nWith this small matrix, it's easy to do back substitution as we did algebraically. The first non-zero number in each row is the pivot (just like the upper-left entry). The steps we have taken up to this point are called _Gauss elimination_ and the form we end up with is _row-echelon form_. We could carry on and do the same sort of thing to get rid of all the non-zero entries above each pivot. This is called _Gauss-Jordan elimination_ and the result is _reduced row-echelon form_ (see the computer code below).\n\nAll of these steps are called _elementary row operations_. The only one we didn't do is _row exchange_. 
We reserve this action so as not to have leading (in the pivot position) zeros.\n\nLet's create some code to showcase elementary row operations.\n\n\n```python\nfrom sympy import Matrix, symbols, eye, Rational # imports used by the code cells below\n\nA_augmented = Matrix([[1, 2, 1, 2], [3, 8, 1, 12], [0, 4, 1, 2]])\nA_augmented\n```\n\nWe can ask `sympy` to simply get the augmented matrix in reduced row-echelon form and read off the solutions. This is done with the `.rref()` method.\n\n\n```python\nA_augmented.rref() # The rref() method returns the reduced row-echelon form\n```\n\n## Elimination matrices\n\nTwo matrices can only be multiplied if the number of columns of the first equals the number of rows of the second. Rows are usually called $m$ and columns $n$ when considering their dimensions. So, our augmented matrix above will be $m \\times n = 3 \\times 4$.\n\nLet's look at how matrices are multiplied by looking at two small matrices in (18).\n\n$$ \\begin{bmatrix} {a}_{11}&{a}_{12} \\\\ {a}_{21}&{a}_{22} \\end{bmatrix} \\\\ \\\\ \\begin{bmatrix} {b}_{11}&{b}_{12}\\\\{b}_{21}&{b}_{22} \\end{bmatrix} \\tag{18} $$\n\nThe subscripts refer to row and column position, i.e. $21$ means row $2$ column $1$.\n\nWe see that we have two $ 2 \\times 2 $ matrices. The *inner* two values are the same ($2$ and $2$), so this multiplication is allowed. The resultant matrix will have the size equal to the *outer* two values (the number of rows of the first and the number of columns of the second); here also a $2 \\times 2$ matrix.\n\n\n\nSo let's look at position $11$ (row $1$ and column $1$). To get this we take the entries in row $1$ of the first matrix and multiply them by the entries in the first column of the second matrix. We do this element by element and add the multiplication of each set of separate elements to each other. 
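As a concrete numeric check of this row-times-column rule, here is a small sketch using numpy (an assumption — the notebook itself works symbolically with sympy):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Entry (1,1) by hand: row 1 of A times column 1 of B, element by element
c11 = A[0, 0] * B[0, 0] + A[0, 1] * B[1, 0]  # 1*5 + 2*7 = 19

print(c11)
print(A @ B)  # full product via numpy's matmul operator
```

The hand-computed `c11` matches the `(1,1)` entry of `A @ B`; the other three entries follow the same row-times-column pattern.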
The python code below shows you exactly how this is done.\n\n\n```python\na11, a12, a21, a22, b11, b12, b21, b22 = symbols('a11 a12 a21 a22 b11 b12 b21 b22')\n```\n\n\n```python\nA = Matrix([[a11, a12], [a21, a22]])\nB = Matrix([[b11, b12], [b21, b22]])\nA, B\n```\n\n\n```python\nA * B\n```\n\nLet's constrain ourselves to the matrix of coefficients (this discards the right-hand side from the augmented matrix above).\n\n\n```python\nA = Matrix([[1, 2, 1], [3, 8, 1], [0, 4, 1]]) # I reuse the same computer variable as above, which\n# will change its value in the computer memory\nA # A 3 by 3 matrix, which we call square\n```\n\nThe _identity matrix_ is akin to the number $1$, i.e. multiplying by it leaves everything unchanged. It has ones along what is called the main diagonal and zeros everywhere else.\n\n\n```python\nI = eye(3) # Identity matrices are always square and the argument\n# here is 3, so it is a 3 by 3 matrix\nI # Note what the main diagonal is\n```\n\nLet's multiply $I$ by $A$.\n\n\n```python\nI * A # Nothing will change\n```\n\nTo get rid of the leading $3$ in the second row (because we want a $0$ under the first pivot in the first row), we multiply the first row by $3$ and subtract that from the second row. Interestingly enough, we can do the same to the identity matrix.\n\n\n```python\nE21 = Matrix([[1, 0, 0], [-3, 1, 0], [0, 0, 1]])\nE21 # 21 because we are working on row 2, column 1\n```\n\nThat gives us the required $3$ times the first row, and the negative sign shows that we subtract (add the negative). It's a thing of beauty!\n\n\n```python\nE21 * A\n```\n\nJust what we wanted. $E_{21}$ is called the first elimination matrix.\n\nLet's do something to the identity matrix to get rid of the $4$ in the third row (the second column). It would require $2$ times the second row subtracted from the third row. Look carefully at the positions.\n\n\n```python\nE32 = Matrix([[1, 0, 0], [0, 1, 0], [0, -2, 1]])\nE32\n```\n\n\n```python\nE32 * (E21 * A)\n```\n\nSpot on! 
We now have nice pivots (leading non-zeros), with nothing under them (along the columns). As a tip, try not to get fractions involved. As far as the other two row operations are concerned, we can either exchange rows in the identity matrix or multiply the required row by a scalar constant.\n\nLook at what happens when we multiply ${E}_{32}$ and ${E}_{21}$.\n\n\n```python\nL_inv = E32 * E21\nL_inv\n```\n\nLater we'll call this matrix the inverse of $L$. It is in triangular form, in this case lower triangular (note all the zeros above the main diagonal).\n\n\n```python\nL_inv * A # Later we'll call this result the matrix U\n```\n\nWe now have the following, shown in (19).\n\n$$ {L}^{-1}{A}={U} \\tag{19} $$\n\nLeft-multiplying by $L$ leaves (20).\n\n$$ {L}{L}^{-1}{A}={L}{U} \\tag{20} $$\n\nThe inverse of a square matrix multiplied by itself gives the identity matrix.\n\n$$ {I}{A}={L}{U} \\\\ {A}={L}{U} \\tag{20} $$\n\nWe can construct $L$ from ${E}_{32}$ and ${E}_{21}$ above.\n\n$$ {E}_{21}^{-1}{E}_{32}^{-1}{E}_{32}{E}_{21}{A}={E}_{21}^{-1}{E}_{32}^{-1}{U} \\\\ \\therefore {E}_{21}^{-1}{E}_{32}^{-1}={L} \\tag{21} $$\n\n\n```python\nE21.inv() # The inverse is easy to understand in words\n# We just want to add 3 instead of subtracting 3\n```\n\n\n```python\nE32.inv()\n```\n\n\n```python\nE21.inv() * E32.inv()\n```\n\nThis is exactly the inverse of our inverse of $L$ above.\n\n\n```python\nL_inv.inv()\n```\n\nThis is called _LU-decomposition_ of $A$. 
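As a self-contained check of the decomposition (the matrices are rebuilt here so the cell runs on its own):

```python
from sympy import Matrix

A = Matrix([[1, 2, 1], [3, 8, 1], [0, 4, 1]])
E21 = Matrix([[1, 0, 0], [-3, 1, 0], [0, 0, 1]])
E32 = Matrix([[1, 0, 0], [0, 1, 0], [0, -2, 1]])

U = E32 * E21 * A      # upper triangular
L = (E32 * E21).inv()  # lower triangular, equals E21.inv() * E32.inv()

print(L * U == A)  # True
```

Multiplying $L$ and $U$ recovers $A$ exactly, which is the whole point of the factorization.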
More about this in two chapters from now (I_05_LU_decomposition).\n\nAs an aside we can also do elementary column operations, but then we have to multiply on the right of $A$ and not on the left as above.\n\n## Example problems\n\n### Example problem 1\n\nSolve the linear system in (22).\n\n$$ \\begin{align} x-y-z+u &= 0 \\\\ 2x+2z &= 8 \\\\ -y-2z &= -8 \\\\ 3x-3y-2z+4u &= 7 \\end{align} \\tag{22} $$\n\n#### Solution\n\n\n```python\nA_augm = Matrix([[1, -1, -1, 1, 0], [2, 0, 2, 0, 8], [0, -1, -2, 0, -8], [3, -3, -2, 4, 7]])\nA_augm\n```\n\n\n```python\nA_augm.rref()\n```\n\nWhoa! That was easy! Let's take it a notch down and create some elementary matrices. First off, we want the matrix of coefficients.\n\n\n```python\nA = Matrix([[1, -1, -1, 1], [2, 0, 2, 0], [0, -1, -2, 0], [3, -3, -2, 4]])\nA\n```\n\nNow we need to get rid of the $2$ in position row `2`, column `1`. We start by numbering the elementary matrix by this position and modifying the identity matrix.\n\n\n```python\nE21 = Matrix([[1, 0, 0, 0], [-2, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])\nE21 * A\n```\n\nNow for position row `3`, column `2`. We have to use row `2` to do this. If we used row `1`, we would introduce a non-zero into position row `3`, column `1`.\n\n\n```python\nE32 = Matrix([[1, 0, 0, 0], [0, 1, 0, 0], [0, Rational(1, 2), 1, 0], [0, 0, 0, 1]])\nE32 * (E21 * A)\n```\n\n\nNow for the $3$ in position row `4`, column `1`.\n\n\n```python\nE41 = Matrix([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [-3, 0, 0, 1]])\nE41 * (E32 * E21 * A)\n```\n\nLet's exchange rows `3` and `4`.\n\n\n```python\nEe34 = Matrix([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])\nEe34 * E41 * E32 * E21 * A\n```\n\nLet's see where that leaves $\\underline{b}$. 
After all, what we do to the left, we must do to the right.\n\n$$ {Ee}_{34}\\times{E}_{41}\\times{E}_{32}\\times{E}_{21}{A}{x}={Ee}_{34}\\times{E}_{41}\\times{E}_{32}\\times{E}_{21}{b} \\tag{23}$$\n\n\n```python\nb_vect = Matrix([[0], [8], [-8], [7]])\nb_vect\n```\n\n\n```python\nEe34 * E41 * E32 * E21 * b_vect\n```\n\nLet's print them next to each other on the screen.\n\n\n```python\nEe34 * E41 * E32 * E21 * A, Ee34 * E41 * E32 * E21 * b_vect\n```\n\nSo we can simply do back substitution. We note that $-1u = -4$ and thus $u = 4$. From here, we work our way back up.\n\n$$ \\begin{align} -1(u) = -4 \\quad &\\therefore \\quad u=4 \\\\ 1(z)+1(4) = 7 \\quad &\\therefore \\quad z=3 \\\\ 2(y) + 4(3) - 2(4) = 8 \\quad &\\therefore \\quad y=2 \\\\ 1(x)-1(2)-1(3)+1(4)=0 \\quad &\\therefore \\quad x=1 \\end{align} \\tag{24}$$\n\n\n```python\n\n```\n
"f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T15:41:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:47:40.000Z", "avg_line_length": 77.8610062893, "max_line_length": 6928, "alphanum_fraction": 0.8015573631, "converted": true, "num_tokens": 5271, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3380771374883919, "lm_q2_score": 0.33458944788835565, "lm_q1q2_score": 0.11311704277591675}} {"text": "### Checklist for submission\n\nIt is extremely important to make sure that:\n\n1. Everything runs as expected (no bugs when running cells);\n2. The output from each cell corresponds to its code (don't change any cell's contents without rerunning it afterwards);\n3. All outputs are present (don't delete any of the outputs);\n4. Fill in all the places that say `# YOUR CODE HERE`, or \"**Your answer:** (fill in here)\".\n5. Never copy/paste any notebook cells. Inserting new cells is allowed, but it should not be necessary.\n6. The notebook contains some hidden metadata which is important during our grading process. **Make sure not to corrupt any of this metadata!** The metadata may for example be corrupted if you copy/paste any notebook cells, or if you perform an unsuccessful git merge / git pull. It may also be pruned completely if using Google Colab, so watch out for this. Searching for \"nbgrader\" when opening the notebook in a text editor should take you to the important metadata entries.\n7. Although we will try our very best to avoid this, it may happen that bugs are found after an assignment is released, and that we will push an updated version of the assignment to GitHub. If this happens, it is important that you update to the new version, while making sure the notebook metadata is properly updated as well. 
The safest way to make sure nothing gets messed up is to start from scratch on a clean updated version of the notebook, copy/pasting your code from the cells of the previous version into the cells of the new version.\n8. If you need to have multiple parallel versions of this notebook, make sure not to move them to another directory.\n9. Although not forced to work exclusively in the course Docker environment, you need to make sure that the notebook will run in that environment, i.e. that you have not added any additional dependencies.\n\nFailing to meet any of these requirements might lead to either a subtraction of POEs (at best) or a request for resubmission (at worst).\n\nWe advise you to take the following steps before submission to ensure that requirements 1, 2, and 3 are always met: **Restart the kernel** (in the menubar, select Kernel$\\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\\rightarrow$Run All). This might require a bit of time, so plan ahead for this (and possibly use Google Cloud's GPU in HA1 and HA2 for this step). Finally press the \"Save and Checkout\" button before handing in, to make sure that all your changes are saved to this .ipynb file.\n\n### Check Python version\n\n\n```python\nfrom platform import python_version_tuple\nassert python_version_tuple()[:2] == ('3','7'), \"You are not running Python 3.7. Make sure to run Python through the course Docker environment, or alternatively in the provided Conda environment.\"\n```\n\n### Check that notebook server has access to all required resources, and that notebook has not moved\n\n\n```python\nimport os\nnb_dirname = os.path.abspath('')\nassert nb_dirname != '/notebooks', \\\n '[ERROR] The notebook server appears to have been started at the same directory as the assignment. 
Make sure to start it at least one level above.'\nassignment_name = os.path.basename(nb_dirname)\nassert assignment_name in ['IHA1', 'IHA2', 'HA1', 'HA2', 'HA3'], \\\n '[ERROR] The notebook appears to have been moved from its original directory'\n```\n\n### Run the following cells to verify that your notebook is up-to-date and not corrupted in any way\n\n\n```javascript\n%%javascript\nIPython.notebook.kernel.execute(`nb_fname = '${IPython.notebook.notebook_name}'`);\n```\n\n\n```python\nimport sys\nsys.path.append('..')\nfrom ha_utils import check_notebook_uptodate_and_not_corrupted\ncheck_notebook_uptodate_and_not_corrupted(nb_dirname, nb_fname)\n```\n\n### Fill in group number and member names:\n\n\n```python\nGROUP = \"\"\nNAME1 = \"\"\nNAME2 = \"\"\n```\n\n# IHA1 - Assignment\n\nWelcome to the first individual home assignment! \n\nThis assignment consists of two parts:\n * Python and NumPy exercises\n * Build a deep neural network for forward propagation\n \nThe focus of this assignment is for you to gain practical knowledge with implementing forward propagation of deep neural networks without using any deep learning framework. You will also gain practical knowledge in two of Python's scientific libraries [NumPy](https://docs.scipy.org/doc/numpy-1.13.0/index.html) and [Matplotlib](https://matplotlib.org/devdocs/index.html). \n\nSkeleton code is provided for most tasks and every part you are expected to implement is marked with **TODO**\n\nWe expect you to search and learn by yourself any commands you think are useful for these tasks. Don't limit yourself to only what was taught in CL1. Use the help function, [stackoverflow](https://stackoverflow.com/), google, the [python documentation](https://docs.python.org/3.5/library/index.html) and the [NumPy](https://docs.scipy.org/doc/numpy-1.13.0/index.html) documentation to your advantage. 
\n\n**IMPORTANT NOTE**: The tests available are not exhaustive, meaning that if you pass a test you have avoided the most common mistakes, but it is still not guaranteed that your solution is 100% correct. \n\nLet's start by importing the necessary libraries below\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom utils.tests.iha1Tests import *\n```\n\n## 1. Lists and arrays introduction\nFirst, we will warm up with a Python exercise and a few NumPy exercises\n\n### 1.1 List comprehensions\nExamine the code snippet provided below\n\n\n```python\nmyList = []\nfor i in range(25):\n if i % 2 == 0:\n myList.append(i**2)\n \nprint(myList)\n```\n\nThis is not a very \"[pythonic](http://docs.python-guide.org/en/latest/writing/style/)\" way of writing. Let's re-write the code above using a [list comprehension](https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions). The result will be less code, more readable and elegant. Your solution should be able to fit into one line of code.\n\n\n```python\nmyList = None # TODO\n# YOUR CODE HERE\nprint(myList)\n```\n\n\n```python\n# sample output from cell above for reference\n# [0, 4, 16, 36, 64, 100, 144, 196, 256, 324, 400, 484, 576]\n```\n\n### 1.2 Numpy array vs numpy vectors\nRun the cell below to create a numpy array. \n\n\n```python\nmyArr = np.array([1, 9, 25, 49, 81, 121, 169, 225, 289, 361, 441, 529])\nprint(myArr)\nprint(myArr.shape)\n```\n\nOne of the core features of numpy is to efficiently perform linear algebra operations.\nThere are two types of one-dimensional representations in numpy: arrays of shape (x,) and vectors of shape (x,1)\n\nThe above result indicates that **myArr** is an array of 12 elements with shape (12,). \n\nNumpy's arrays and vectors both have the type of `numpy.ndarray` but have in some cases different characteristics and it is important to separate the two types because it will save a lot of debugging time later on. 
Read more about numpy shapes [here](https://stackoverflow.com/a/22074424) \n\nRun the code below to see how the transpose operation behaves differently between an array and a vector\n\n\n```python\n# print the shape of an array and the shape of a transposed array\nprint('myArr is an array of shape:')\nprint(myArr.shape)\nprint('The transpose of myArr has the shape:')\nprint(myArr.T.shape)\n\n# print the shape of a vector and the transpose of a vector\nmyVec = myArr.reshape(12,1)\nprint('myVec is a vector of shape:')\nprint(myVec.shape)\nprint('The transpose of myVec has the shape:')\nprint(myVec.T.shape)\n```\n\n### 1.3 Numpy exercises\nNow run the cell below to create the numpy array `numbers` and then complete the exercises sequentially\n\n\n```python\nnumbers = np.arange(24)\nprint(numbers)\n```\n\n\n```python\n# TODO: reshape numbers into a 6x4 matrix\n\n# YOUR CODE HERE\nprint(numbers)\n\n```\n\n\n```python\n# sample output from cell above for reference\n# [[ 0 1 2 3]\n# [ 4 5 6 7]\n# [ 8 9 10 11]\n# [12 13 14 15]\n# [16 17 18 19]\n# [20 21 22 23]]\n```\n\n\n```python\n# test case\ntest_numpy_reshape(numbers)\n```\n\n\n```python\n# TODO: set the element in the last row and last column to zero\n# Hint: Try what happens when indices are negative\n\n# YOUR CODE HERE\nprint(numbers)\n```\n\n\n```python\n# sample output from cell above for reference\n# [[ 0 1 2 3]\n# [ 4 5 6 7]\n# [ 8 9 10 11]\n# [12 13 14 15]\n# [16 17 18 19]\n# [20 21 22 0]]\n```\n\n\n```python\n# test case\ntest_numpy_neg_ix(numbers)\n```\n\n\n```python\n# TODO: set every element of the 0th row to 0\n\n# YOUR CODE HERE\nprint(numbers)\n```\n\n\n```python\n# sample output from cell above for reference\n# [[ 0 0 0 0]\n# [ 4 5 6 7]\n# [ 8 9 10 11]\n# [12 13 14 15]\n# [16 17 18 19]\n# [20 21 22 0]]\n```\n\n\n```python\n# test case\ntest_numpy_row_ix(numbers)\n```\n\n\n```python\n# TODO: append a 1x4 row vector of zeros to `numbers`, \n# resulting in a 7x4 matrix where the new row of zeros is the 
last row\n# Hint: A new matrix must be created in the procedure. Numpy arrays are not dynamic.\n\n# YOUR CODE HERE\nprint(numbers)\nprint(numbers.shape)\n```\n\n\n```python\n# sample output from cell above for reference\n# [[ 0 0 0 0]\n# [ 4 5 6 7]\n# [ 8 9 10 11]\n# [12 13 14 15]\n# [16 17 18 19]\n# [20 21 22 0]\n# [ 0 0 0 0]]\n# (7, 4)\n```\n\n\n```python\n# test case\ntest_numpy_append_row(numbers)\n```\n\n\n```python\n# TODO: set all elements above 10 to the value 1\n\n# YOUR CODE HERE\nprint(numbers)\n```\n\n\n```python\n# sample output from cell above for reference\n# [[ 0 0 0 0]\n# [ 4 5 6 7]\n# [ 8 9 10 1]\n# [ 1 1 1 1]\n# [ 1 1 1 1]\n# [ 1 1 1 0]\n# [ 0 0 0 0]]\n```\n\n\n```python\n# test case\ntest_numpy_bool_matrix(numbers)\n```\n\n\n```python\n# TODO: compute the sum of every row and replace `numbers` with the answer\n# `numbers` will be a (7,) array as a result\n\n# YOUR CODE HERE\nprint(numbers.shape)\nprint(numbers)\n```\n\n\n```python\n# sample output from cell above for reference\n# (7,)\n# [ 0 22 28 4 4 3 0]\n```\n\n\n```python\n# test case\ntest_numpy_sum(numbers)\n```\n\n## 2 Building your deep neural network\nIt is time to start implementing your first feed-forward neural network. In this lab you will only focus on implementing the forward propagation procedure. \n\nWhen using a neural network, you typically cannot forward propagate the entire dataset at once. Therefore, you divide the dataset into a number of sets/parts called batches. A batch makes up the first dimension of every input to a layer, and the notation `(BATCH_SIZE, NUM_FEATURES)` simply means the dimension of a batch of samples where every sample has `NUM_FEATURES` features\n\n### 2.1 Activation functions\nYou will start by defining a few activation functions that are later needed by the neural network.\n\n#### 2.1.1 ReLU\nThe neural network will use the ReLU activation function in every layer except for the last. ReLU does element-wise comparison of the input matrix. 
For example, if the input is `X`, and `X[i,j] == 2` and `X[k,l] == -1`, then after applying ReLU, `X[i,j] == 2` and `X[k,l] == 0` should be true. \n\nThe formula for implementing ReLU for a single neuron $i$ is:\n\\begin{equation}\nrelu(z_i) = \n \\begin{cases}\n 0, & \\text{if}\\ z_i \\leq 0 \\\\\n z_i, & \\text{otherwise}\n \\end{cases}\n\\end{equation}\n\nNow implement `relu` in vectorized form\n\n\n```python\ndef relu(z):\n \"\"\" Implement the ReLU activation function\n \n Arguments:\n z - the input of the activation function. Has a type of `numpy.ndarray`\n \n Returns:\n a - the output of the activation function. Has a type of numpy.ndarray and the same shape as `z`\n \"\"\"\n \n a = None # TODO\n # YOUR CODE HERE\n \n return a\n```\n\n\n```python\n# test case\ntest_relu(relu)\n```\n\n#### 2.1.2 Sigmoid\nThe sigmoid activation function is common for binary classification. This is because it squashes its input to the range [0,1]. \nImplement the activation function `sigmoid` using the formula: \n\\begin{equation}\n \\sigma(z) = \\frac{1}{1 + e^{-z}}\n\\end{equation}\n\n\n```python\ndef sigmoid(z):\n \"\"\" Implement the sigmoid activation function\n \n Arguments:\n z - the input of the activation function. Has a type of `numpy.ndarray`\n \n Returns:\n a - the output of the activation function. Has a type of `numpy.ndarray` and the same shape as `z`\n \"\"\"\n \n a = None # TODO\n # YOUR CODE HERE\n \n return a\n```\n\n\n```python\n# test case\ntest_sigmoid(sigmoid)\n```\n\n#### 2.1.3 Visualization\nMake a plot using matplotlib to visualize the activation functions between the input interval [-3,3]. 
The plot should have the following properties\n * one plot should contain a visualization of both `ReLU` and `sigmoid`\n * x-axis: range of values between [-3,3], **hint**: np.linspace\n * y-axis: the value of the activation functions at a given input `x`\n * a legend explaining which line represents which activation function\n\n\n```python\n# TODO: make a plot of ReLU and sigmoid values in the interval [-3,3]\n\n# YOUR CODE HERE\n```\n\n#### 2.1.4 Softmax\nYou will use the softmax activation function / classifier as the final layer of your neural network later in the assignment. Implement `softmax` according to the formula below. The subtraction of the maximum value is there solely to avoid overflows in a practical implementation.\n\\begin{equation}\nsoftmax(z_i) = \\frac{e^{z_i - max(\\mathbf{z})}}{ \\sum_j e^{z_j - max(\\mathbf{z})}}\n\\end{equation}\n\n\n\n```python\ndef softmax(z):\n \"\"\" Implement the softmax activation function\n \n Arguments:\n z - the input of the activation function, shape (BATCH_SIZE, FEATURES) and type `numpy.ndarray`\n \n Returns:\n a - the output of the activation function, shape (BATCH_SIZE, FEATURES) and type `numpy.ndarray`\n \"\"\"\n \n a = None # TODO\n # YOUR CODE HERE\n \n return a\n```\n\n\n```python\n# test case\ntest_softmax(softmax)\n```\n\n### 2.2 Initialize weights\nYou will implement a helper function that takes the shape of a layer as input, and returns an initialized weight matrix $\\mathbf{W}$ and bias vector $\\mathbf{b}$ as output. 
$\\mathbf{W}$ should be sampled from a normal distribution of mean 0 and standard deviation 2, and $\\mathbf{b}$ should be initialized to all zeros.\n\n\n```python\ndef initialize_weights(layer_shape):\n \"\"\" Implement initialization of the weight matrix and biases\n \n Arguments:\n layer_shape - a tuple of length 2, type (int, int), that determines the dimensions of the weight matrix: (input_dim, output_dim)\n \n Returns:\n w - a weight matrix with dimensions of `layer_shape`, (input_dim, output_dim), that is normally distributed with\n properties mu = 0, stddev = 2. Has a type of `numpy.ndarray`\n b - a vector of initialized biases with shape (1,output_dim), all of value zero. Has a type of `numpy.ndarray`\n \"\"\"\n w = None # TODO\n b = None # TODO\n # YOUR CODE HERE\n \n return w, b\n```\n\n\n```python\n# test case\ntest_initialize_weights(initialize_weights)\n```\n\n### 2.3 Feed-forward neural network layer module\nTo build a feed-forward neural network of arbitrary depth you are going to define a neural network layer as a module; such modules can then be stacked upon each other. \n\nYour task is to complete the `Layer` class by following the descriptions in the comments. \n\nRecall the formula for forward propagation of an arbitrary layer $l$:\n\n\\begin{equation}\n\\mathbf{a}^{[l]} = g(\\mathbf{z}^{[l]}) = g(\\mathbf{a}^{[l-1]}\\mathbf{w}^{[l]} +\\mathbf{b}^{[l]})\n\\end{equation}\n\n$g$ is the activation function given by `activation_fn`, which can be relu, sigmoid or softmax. \n\n\n```python\nclass Layer:\n \"\"\" \n TODO: Build a class called Layer that satisfies the descriptions of the methods\n Make sure to utilize the helper functions you implemented before\n \"\"\"\n \n def __init__(self, input_dim, output_dim, activation_fn=relu):\n \"\"\"\n Arguments:\n input_dim - the number of inputs of the layer. type int\n output_dim - the number of outputs of the layer. type int\n activation_fn - a reference to the activation function to use. 
Should be `relu` as a default\n possible values are the `relu`, `sigmoid` and `softmax` functions you implemented earlier.\n Has the type `function`\n \n Attributes:\n w - the weight matrix of the layer, should be initialized with `initialize_weights`\n and has the shape (INPUT_FEATURES, OUTPUT_FEATURES) and type `numpy.ndarray`\n b - the bias vector of the layer, should be initialized with `initialize_weights`\n and has the shape (1, OUTPUT_FEATURES) and type `numpy.ndarray`\n activation_fn - a reference to the activation function to use.\n Has the type `function`\n \"\"\"\n self.w, self.b = None, None # TODO\n self.activation_fn = None # TODO\n # YOUR CODE HERE\n \n \n def forward_prop(self, a_prev):\n \"\"\" Implement the forward propagation module of the neural network layer\n Should use whatever activation function that `activation_fn` references to\n \n Arguments:\n a_prev - the input to the layer, which may be the data `X`, or the output from the previous layer.\n a_prev has the shape of (BATCH_SIZE, INPUT_FEATURES) and the type `numpy.ndarray`\n \n Returns:\n a - the output of the layer when performing forward propagation. Has the type `numpy.ndarray`\n \"\"\"\n \n a = None # TODO\n # YOUR CODE HERE\n \n return a\n```\n\n\n```python\n# test case, be sure that you pass the previous activation function tests before running this test\ntest_layer(Layer, relu, sigmoid, softmax)\n```\n\n### 2.4 Logistic regression \nBinary logistic regression is a classifier where classification is performed by applying the sigmoid activation function to a linear combination of input values. You will now try out your neural network layer by utilizing it as a linear combination of input values and apply the sigmoid activation function to classify a simple problem. \n\nThe cell below defines a dataset of 5 points of either class `0` or class `1`. Your assignment is to: \n1. Create an instance of a `Layer` with sigmoid activation function \n2. 
Manually tune the weights `w` and `b` of your layer\n\nYou can use `test_logistic` to visually inspect how your classifier is performing.\n\n\n```python\n# Run this cell to create the dataset\nX_s = np.array([[1, 2],\n [5, 3],\n [8, 8],\n [7, 5],\n [3, 6]])\nY_s = np.array([0,0,1,0,1])\n\ntest_logistic(X_s, Y_s)\n```\n\n\n```python\n# create an instance of layer\nl = Layer(2,1,sigmoid)\n\n# TODO: manually tune weights\nl.w = None\nl.b = None\n# YOUR CODE HERE\n\n# testing your choice of weights with this function\ntest_logistic(X_s,Y_s,l,sigmoid)\n```\n\n### 2.5 Feed-forward neural network\nNow define the actual neural network class. It is an L-layer neural network, meaning that the number of layers and neurons in each layer is specified as input by the user. Once again, you will only focus on implementing the forward propagation part.\n\nRead the descriptions in the comments and complete the todos \n\n\n```python\nclass NeuralNetwork:\n \"\"\" \n TODO: Implement an L-layer neural network class by utilizing the Layer module defined above \n Each layer should use `relu` activation function, except for the output layer, which should use `softmax`\n \"\"\"\n \n def __init__(self, input_n, layer_dims):\n \"\"\"\n Arguments:\n input_n - the number of inputs to the network. Should be the same as the length of a data sample\n Has type int\n layer_dims - a python list or tuple of the number of neurons in each layer. Layer `l` should have a weight matrix \n with the shape (`layer_dims[l-1]`, `layer_dims[l]`). \n `layer_dims[-1]` is the dimension of the output layer.\n Layer 1 should have the dimensions (`input_n`, `layer_dims[0]`).\n len(layer_dims) is the depth of the neural network\n Attributes:\n input_n - the number of inputs to the network. Has type int\n layers - a python list of each layer in the network. Each layer should use the `relu` activation function,\n except for the last layer, which should use `softmax`. 
\n Has type `list` containing layers of type `Layer`\n \"\"\"\n \n self.input_n = None # TODO\n self.layers = None # TODO\n # YOUR CODE HERE\n \n def forward_prop(self, x):\n \"\"\" \n Implement the forward propagation procedure through the entire network, from input to output.\n You will now connect each layer's forward propagation function into a chain of layer-wise forward propagations.\n \n Arguments:\n x - the input data, which has the shape (BATCH_SIZE, NUM_FEATURES) and type `numpy.ndarray`\n \n Returns:\n a - the output of the last layer after forward propagating through every layer in `layers`.\n Should have the dimension (BATCH_SIZE, layers[-1].w.shape[1]) and type `numpy.ndarray`\n \"\"\"\n a = None # TODO\n # YOUR CODE HERE\n \n return a\n```\n\n\n```python\n# test case\ntest_neuralnetwork(NeuralNetwork)\n```\n\n## 3 Making predictions with a neural network\nIn practice, it's common to load weights that have already been trained into your neural network. \nIn this section, you will create an instance of your neural network, load trained weights from disk, and perform predictions.\n\n### 3.1 Load weights from disk\nCreate an instance of `NeuralNetwork` with input size $28 \\times 28 = 784$, two hidden layers of size 100 and an output layer of size 10. Thereafter, load the weights contained in `./utils/ann_weights.npz` into your network.\n\n\n```python\nann = None # TODO: create instance of ann\n# YOUR CODE HERE\n\n# load weights\nweights = np.load('./utils/ann_weights.npz')\nfor l in range(len(ann.layers)):\n ann.layers[l].w = weights['w' + str(l)]\n ann.layers[l].b = weights['b' + str(l)]\n```\n\n### 3.2 Prediction\nNow, implement the function `predict_and_correct` which does the following:\n1. Load `./utils/test_data.npz` from disk\n2. Extract test data `X` and `Y` from file\n3. Perform for every pair of data: \n a. plot the image `x` \n b. make a prediction using your neural network by forward propagating and picking the most probable class \n c. 
check whether the prediction is correct (compare with the ground truth number `y`) \n d. print the predicted label and whether it was correct or not \n\n\n```python\ndef predict_and_correct(ann):\n \"\"\" Load test data from file and predict using your neural network. \n Make a prediction for every data sample and print it along with whether it was a correct prediction or not\n \n Arguments:\n ann - the neural network to use for prediction. Has type `NeuralNetwork`\n \n Returns: # for test case purposes\n A `numpy.ndarray` of predicted classes (integers [0-9]) with shape (11,)\n \"\"\"\n data = np.load('./utils/test_data.npz')\n X, cls = data['X'], data['Y']\n \n cls_preds = None # TODO: make a predicted number for every image in X\n # YOUR CODE HERE\n \n for i in range(len(X)):\n plt.imshow(X[i].reshape(28,28), cmap='gray')\n plt.show()\n correct = cls_preds[i] == cls[i]\n print('The prediction was {0}, it was {1}!'.format(cls_preds[i], 'correct' if correct else 'incorrect'))\n \n return cls_preds\n \ncls_pred = predict_and_correct(ann)\n```\n\n\n```python\n# final test case\ntest_predict_and_correct_answer(cls_pred)\n```\n\n## Congratulations!\nYou have successfully implemented a neural network from scratch using only NumPy! 
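The forward pass described in Section 2.5 — relu hidden layers feeding a softmax output — can be sketched in plain NumPy. This is an illustrative sketch only: the `relu`, `softmax`, and `forward` names and the random parameters below are assumptions for the demo, not the assignment's graded `Layer`/`NeuralNetwork` API.

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def softmax(z):
    # Subtract the row-wise max for numerical stability before exponentiating
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward(x, params):
    """Chain a = activation(a @ w + b) through every layer.

    `params` is a list of (w, b) tuples; relu is used for the hidden
    layers and softmax for the last one, mirroring the spec above.
    """
    a = x
    for i, (w, b) in enumerate(params):
        z = a @ w + b
        a = softmax(z) if i == len(params) - 1 else relu(z)
    return a

rng = np.random.default_rng(0)
dims = [784, 100, 100, 10]   # same shape as the network loaded in Section 3.1
params = [(rng.normal(size=(m, n)) * 0.01, np.zeros(n))
          for m, n in zip(dims[:-1], dims[1:])]
out = forward(rng.normal(size=(5, 784)), params)
print(out.shape)        # (5, 10)
print(out.sum(axis=1))  # each row sums to 1
```

Each row of the softmax output is a probability distribution over the 10 digit classes, which is what `predict_and_correct` reduces with an argmax.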
\n", "meta": {"hexsha": "ab2325b1d08ddef5f490c789a8bec2753a3abf60", "size": 54741, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "home-assignments/IHA1/IHA1.ipynb", "max_stars_repo_name": "johroge/deep-machine-learning", "max_stars_repo_head_hexsha": "e93e6156a1980f21e6ff25cde0031a4c4fcae5e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-07-17T11:19:28.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-17T11:19:28.000Z", "max_issues_repo_path": "home-assignments/IHA1/IHA1.ipynb", "max_issues_repo_name": "sondrec/deep-machine-learning", "max_issues_repo_head_hexsha": "5f96c500119bd5ee66c641822cd5bf0d7240d88a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "home-assignments/IHA1/IHA1.ipynb", "max_forks_repo_name": "sondrec/deep-machine-learning", "max_forks_repo_head_hexsha": "5f96c500119bd5ee66c641822cd5bf0d7240d88a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-07-17T11:09:32.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-17T11:09:32.000Z", "avg_line_length": 29.1640916356, "max_line_length": 553, "alphanum_fraction": 0.5687510276, "converted": true, "num_tokens": 5913, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3242353859211693, "lm_q2_score": 0.3486451488696663, "lm_q1q2_score": 0.11304309439329978}} {"text": "```python\nimport sys\nsys.path.append('..')\n```\n\n# Milestone 2\n\n## Introduction\n\nAutomatic differentiation is a tool for calculating derivatives using machine accuracy. It has several advantages over traditional methods of derivative calculations such as symbolic and finite differentiation. Automatic differentiation is useful for calculating complex derivatives where errors are more likely with classical methods. 
For instance, with finite differentiation, step sizes h that are too small lead to accuracy loss through floating-point roundoff error, while step sizes that are too large produce vastly inaccurate approximations. \n\nAutomatic differentiation is practical in real-world applications that involve thousands of parameters in a complicated function, where calculating each derivative individually would require long runtimes and carry a strong possibility of error. \n\nOur package allows users to calculate derivatives of complex functions, some with many parameters, to machine precision.\n\n## Background\n\nEssentially, automatic differentiation works by breaking a complicated function down into a sequence of elementary arithmetic operations such as addition, subtraction, multiplication, and division, together with elementary functions like exp, log, and sin. The chain rule is then applied repeatedly to this sequence and the derivatives are accumulated. There are two ways automatic differentiation can be implemented: forward mode and reverse mode. \n\n\n### 2.1 The Chain Rule\n\nThe chain rule is a fundamental component of automatic differentiation. 
The basic idea is: \nFor univariate function, $$ F(x) = f(g(x))$$\n\n $$F^{\\prime} = (f(g))^{\\prime} = f^{\\prime}(g(x))g^{\\prime}(x)$$\n \nFor multivariate function, $$F(x) = f(g(x),h(x))$$\n\n$$ \\frac{\\partial F}{\\partial x}=\\frac{\\partial f}{\\partial g} \\frac{\\partial g}{\\partial x}+\\frac{\\partial f}{\\partial h} \\frac{\\partial h}{\\partial x}$$\n\nFor generalized cases, if F is a combination of more sub-functions, $$F(x) = f(g_{1}(x), g_{2}(x), \u2026, g_{m}(x))$$\n\n$$\\frac{\\partial F}{\\partial x}=\\sum_{i=1}^{m}\\frac{\\partial F}{\\partial g_{i}} \\frac{\\partial g_{i}}{\\partial x}$$\n\nFor F is a function f(g): f: $R^n$ -> $R^m$ and g: $R^m$ -> $R^k$,\n\n$$\\mathbf{J}_{\\mathrm{gof}}(\\mathbf{x})=\\mathbf{J}_{\\mathrm{g}}(\\mathbf{f}(\\mathbf{x})) \\mathbf{J}_{\\mathrm{f}}(\\mathbf{x})$$\n\nwhere $J(f) =\\left[\\begin{array}{ccc}{\\frac{\\partial \\mathbf{f}}{\\partial x_{1}}} & {\\cdots} & {\\frac{\\partial \\mathbf{f}}{\\partial x_{n}}}\\end{array}\\right]=\\left[\\begin{array}{ccc}{\\frac{\\partial f_{1}}{\\partial x_{1}}} & {\\cdots} & {\\frac{\\partial f_{1}}{\\partial x_{n}}} \\\\ {\\vdots} & {\\ddots} & {\\vdots} \\\\ {\\frac{\\partial f_{m}}{\\partial x_{1}}} & {\\cdots} & {\\frac{\\partial f_{m}}{\\partial x_{n}}}\\end{array}\\right]$ is the Jacobian Matrix.\n\n\n\n### 2.2 Auto Differentiation: Forward Mode\n\nThe forward mode automatic differentiation is accomplished by firstly splitting the function process into one-by-one steps, each including only one basic operation. It focuses on calculating two things in each step, the value of scalar or vector x in $R^n$, and the 'seed' vector for the derivatives or Jacobian Matrix. From the first node, the value and derivatives will be calculated based on the values and derivatives of forward nodes. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) 
and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.\n\nAutomatic differentiation is superior to analytic or symbolic differentiation because it can be computed efficiently on modern machines. It is also superior to numerical differentiation because numerical methods are approximate, while AD computes derivatives to machine precision.\n\nAn example of the evaluation trace for forward-mode AD is shown as follows:\n\n\\begin{align}\n f\\left(x,y\\right) =\\sin\\left(xy\\right)\n\\end{align}\nWe will be evaluating the function at $f(1, 0)$\n\nEvaluation trace:\n\n| Trace | Elementary Function | Current Value | Elementary Function Derivative | $\\nabla_{x}$ Value | $\\nabla_{y}$ Value |\n| :---: | :-----------------: | :-----------: | :----------------------------: | :-----------------: | :-----------------: |\n| $x_{1}$ | $x_{1}$ | $1$ | $\\dot{x}_{1}$ | $1$ | $0$ |\n| $x_{2}$ | $x_{2}$ | $0$ | $\\dot{x}_{2}$ | $0$ | $1$ |\n| $x_{3}$ | $x_{1}x_{2}$ | $0$ | $\\dot{x}_{1}x_{2} + x_{1}\\dot{x}_{2}$ | $0$ | $1$ |\n| $x_{4}$ | $\\sin(x_{3})$ | $0$ | $\\cos(x_{3})\\dot{x}_{3}$ | $0$ | $1$ |\n\n\n\n\n\n### 2.3 Reverse Mode\n\nReverse-mode automatic differentiation shares the forward evaluation pass with forward mode, but adds a second, reverse pass. During the forward pass, only the local partial derivatives at each node are stored; the chain rule is then applied in reverse, starting with the differentiation of the last node and propagating the stored partials back through the graph step by step. \n\n\n### 2.4 Forward Mode vs.
Reverse Mode\n\nTwo main aspects can be considered when choosing between forward- and reverse-mode auto differentiation.\n* Memory Storage & Time of Computation\n\nThe forward mode needs memory storage for the values and derivatives of each node, while the reverse mode only needs to store the local partial derivatives at each node. The forward mode does its computation at the same time as the variable evaluation, while the reverse mode does its calculation in the backward pass.\n* Input & Output Dimensionality\n\nIf the input dimension is much larger than the output dimension, then reverse mode is more attractive. If the output dimension is much larger than the input dimension, the forward mode is much cheaper computationally.\n\n## Installation\n\nOur package can be installed from our GitHub repository at: https://github.com/VoraciousFour/cs207-FinalProject.\n\nAfter the package is installed, it needs to be imported into the user's workspace. Doing so will automatically download any dependencies required by our package, such as math or numpy. Then, the user can create and activate a virtual environment to use the package in.\n\nThe user can set up and use our package from the terminal as follows.\n\n1. Clone the VorDiff package from our GitHub repository into your directory\n git clone https://github.com/VoraciousFour/cs207-FinalProject.git\n2. Create and activate a virtual environment\n '''Installing virtualenv'''\n sudo easy_install virtualenv\n '''Creating the Virtual Environment'''\n virtualenv env\n '''Activating the Virtual Environment'''\n source env/bin/activate\n3. Install required dependencies\n pip install -r requirements.txt\n4. Importing VorDiff package for use\n import VorDiff\n\n## How to use VorDiff\n\nOur Automatic Differentiation package is called VorDiff. The two main objects you will interact with are `AutoDiff` and `Operator`. 
In short, the user will first instantiate a scalar variable as an `AutoDiff` object, and then feed those variables to operators specified in the `Operator` object. The `Operator` object allows users to build their own functions for auto-differentiation. Simple operations (e.g. addition, multiplication, power) may be used normally. More complex functions (e.g. log, sin, cos) must use the operations defined in the `Operator` class. Lastly, the user may retrieve the values and first derivatives from the objects defined above by using the `get()` method.\n\nA short example is provided below:\n\n\n```python\nfrom VorDiff.autodiff import AutoDiff as ad\nfrom VorDiff.operator import Operator as op\n\n# Define variables\nx = ad.scalar(3.14159)\ny = ad.scalar(0)\n\n# Build functions\nfx = op.sin(x) + 3\nfy = op.exp(y) + op.log(y+1)\n\n# Get values and derivates\nprint(fx.get())\nprint(fy.get())\n```\n\n (3.0000026535897932, -0.9999999999964793)\n (1.0, 2.0)\n\n\n## Software Organization\n\n### Directory Structure\nThe package's directory will be structured as follows:\n```\nVorDiff/\n\t__init__.py\n nodes/\n __init__.py\n\t scalar.py\n reverse_scalar.py\n reverse_vector.py\n\t vector.py\n\ttests/\n __init__.py\n test_autodiff.py\n test_node.py\n test_operator.py\n test_reverse_autodiff.py\n test_reverse_operator.py\n test_reverse_scalar.py\n test_reverse_vector.py\n test_scaler.py\n test_vector.py\n autodiff.py\n operator.py\n reverse_autodiff.py\n reverse_operator.py\n README.md\n ...\ndemo/\n demo_reverse.py\n demo_scalar.py\n demo_vector.py\ndocs/\n ...\n```\n### Modules\n- VorDiff: The VorDiff module contains the operator class to be directly used by users to evaluate functions and calculate their derivatives, and an autodiff class that acts as the central interface for automatic differentiation.\n\n- Nodes: The Nodes module contains the the scalar and vector classes, which define the basic operations that can be performed on scalar and vector variables for the autodiff 
class.\n \n- Test_Vordiff: The Test_Vordiff module contains the test suite for this project. TravisCI and CodeCov are used to test our operator classes, node classes, and auto-differentiator.\n \n- Demo: The Demo module contains Python files demonstrating how to perform automatic differentiation with the implemented functions.\n \n### Testing\nIn this project we use TravisCI to perform continuous integration testing and CodeCov to check the code coverage of our test suite. The status of TravisCI and CodeCov can be found in README.md, in the top level of our package. Since the test suite is included in the project distribution, users can also install the project package and use pytest and pytest-cov to check the test results locally.\n\n### Distribution:\nOur open-source VorDiff package will be uploaded to PyPI using twine, because twine uses a verified HTTPS connection for secure authentication to PyPI. Users will be able to install our project package with the conventional `pip install VorDiff`.\n\n\n\n## Implementation\n\n### Scalar\nThe `Scalar` class represents a single scalar node in the computational graph of a function. It implements the interface for user-defined scalar variables. The object contains two hidden attributes, `._val` and `._der`, which can be retrieved with the `get()` method.\n\n### Vector\nThe `Vector` class represents a single vector variable. Vectors are comprised of `Element` objects, which implement much of the computation necessary for vector automatic differentiation. 
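Before the implementation details, the core forward-mode idea — carrying a (value, derivative) pair through overloaded operators — can be shown with a toy dual-number class. This `Dual` class is a minimal sketch for illustration only; it is not part of VorDiff's API.

```python
class Dual:
    """Toy forward-mode (value, derivative) pair -- illustration only."""

    def __init__(self, val, der=1.0):
        self.val, self.der = val, der

    def __add__(self, other):
        # Sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        # Product rule: (u v)' = u' v + u v'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)


x = Dual(3.0)        # seed: dx/dx = 1
c = Dual(2.0, 0.0)   # constants carry derivative 0
f = x * x + c * x    # f(x) = x^2 + 2x, evaluated at x = 3
print(f.val, f.der)  # 15.0 8.0, since f(3) = 15 and f'(3) = 2*3 + 2 = 8
```

VorDiff's `Scalar` below follows the same pattern, with `_val`/`_der` attributes plus numeric fallbacks in each overload so that expressions can freely mix variables and constants.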
\n\n\n```python\nimport numpy as np\n\n# Documentation Hidden\nclass Scalar():\n\n def __init__(self, value, *kwargs):\n self._val = value\n if len(kwargs) == 0:\n self._der = 1\n else:\n self._der = kwargs[0]\n \n def get(self):\n return self._val, self._der\n\n def __add__(self, other):\n try:\n return Scalar(self._val+other._val, self._der+other._der)\n except AttributeError:\n return self.__radd__(other)\n\n def __radd__(self, other):\n return Scalar(self._val+other, self._der)\n \n def __mul__(self, other):\n try:\n return Scalar(self._val*other._val, self._der*other._val+self._val*other._der)\n except AttributeError:\n return self.__rmul__(other)\n \n def __rmul__(self, other):\n return Scalar(self._val*other, self._der*other)\n \n def __sub__(self, other):\n return self + (-other)\n \n def __rsub__(self, other):\n return -self + other\n \n def __truediv__(self, other):\n try:\n return Scalar(self._val/other._val, (self._der*other._val-self._val*other._der)/(other._val**2))\n except AttributeError:\n return Scalar(self._val/other, self._der/other)\n \n def __rtruediv__(self, other):\n return Scalar(other/self._val, other*(-self._der)/(self._val)**2)\n\n def __pow__(self, other):\n try:\n return Scalar(self._val**other._val, (other._val*self._der/self._val+np.log(self._val)*other._der)*(self._val**other._val))\n except AttributeError:\n return Scalar(self._val**other, other*(self._val**(other-1))*self._der)\n \n def __rpow__(self, other):\n return Scalar(other**self._val, (other**self._val)*np.log(other)*self._der)\n \n def __neg__(self):\n return Scalar((-1)*self._val, (-1)*self._der)\n```\n\n### Operator\nThe operator class contains all mathematical operations that users can call to build their functions. Each function returns a `Scalar` object or a numeric constant, depending on the input type. Each function raises an erro if its input falls outside its domain. 
All functions in the class are static.\n\nIn this implementation, we include the following elementary functions. Derivatives are calculated with the chain rule.\n\n\n```python\nimport numpy as np\nfrom VorDiff.nodes.scalar import Scalar\n\n# Documentation Hidden\nclass Operator():\n \n @staticmethod\n def sin(x):\n try: # If scalar variable\n return Scalar(np.sin(x._val), x._der*np.cos(x._val))\n \n except AttributeError: # If constant\n return np.sin(x)\n \n @staticmethod\n def cos(x):\n try: # If scalar variable\n return Scalar(np.cos(x._val), -np.sin(x._val)*x._der)\n \n except AttributeError: # If constant\n return np.cos(x)\n \n @staticmethod\n def tan(x):\n try: # If scalar variable\n return Scalar(np.tan(x._val), x._der/np.cos(x._val)**2)\n \n except AttributeError: # If constant\n return np.tan(x)\n \n @staticmethod\n def arcsin(x):\n try: # If scalar variable\n if x._val<-1 or x._val>1:\n raise ValueError('out of domain')\n else:\n return Scalar(np.arcsin(x._val), x._der/(1-x._val**2)**.5)\n \n except AttributeError: # If constant\n if x<-1 or x>1:\n raise ValueError('out of domain')\n else:\n return np.arcsin(x)\n \n @staticmethod\n def arccos(x):\n try: # If scalar variable\n if x._val<-1 or x._val>1:\n raise ValueError('out of domain')\n else:\n return Scalar(np.arccos(x._val), -x._der/(1-x._val**2)**.5)\n \n except AttributeError: # If constant\n if x<-1 or x>1:\n raise ValueError('out of domain')\n else:\n return np.arccos(x)\n \n @staticmethod\n def arctan(x):\n try: # If scalar variable\n return Scalar(np.arctan(x._val), x._der/(1+x._val**2))\n \n except AttributeError: # If constant\n return np.arctan(x)\n \n @staticmethod\n def log(x):\n try: # If scalar variable\n return Scalar(np.log(x._val), x._der/x._val)\n \n except AttributeError: # If constant\n return np.log(x)\n \n @staticmethod\n def exp(x):\n try: # If scalar variable\n return Scalar(np.exp(x._val), x._der*np.exp(x._val))\n \n except AttributeError: # If constant\n return np.exp(x)\n \n @staticmethod\n 
def sinh(x):\n try: # if scalar variable\n return Scalar(np.sinh(x._val), x._der*(np.cosh(x._val)))\n \n except AttributeError: #if constant\n return np.sinh(x) \n \n @staticmethod\n def cosh(x):\n try: # if scalar variable\n return Scalar(np.cosh(x._val), x._der*(np.sinh(x._val)))\n \n except AttributeError: #if constant\n return np.cosh(x)\n\n @staticmethod\n def tanh(x):\n try: # if scalar variable\n return Scalar(np.tanh(x._val), x._der*(1-np.tanh(x._val)**2))\n \n except AttributeError: #if constant\n return np.tanh(x)\n\n @staticmethod\n def arcsinh(x):\n try: # if scalar variable\n return Scalar(np.arcsinh(x._val), x._der/np.sqrt(x._val**2+1))\n \n except AttributeError: #if constant\n return np.arcsinh(x)\n \n @staticmethod\n def arccosh(x):\n try: # if scalar variable\n return Scalar(np.arccosh(x._val), x._der/np.sqrt(x._val**2-1))\n \n except AttributeError: #if constant\n return np.arccosh(x)\n \n @staticmethod\n def arctanh(x):\n try: # if scalar variable\n return Scalar(np.arctanh(x._val), x._der*(1-np.arctanh(x._val)**2))\n \n except AttributeError: #if constant\n return np.arctanh(x)\n```\n\n### AutoDiff\nThe `AutoDiff` class will allow the user to easily create variables and build auto-differentiable functions, without having to interface with the `Node` class. It makes using the auto-differentiator much more intuitive for the user.\n\n\n```python\nfrom VorDiff.nodes.scalar import Scalar\n\n# Documentation Hidden\nclass AutoDiff():\n\n @staticmethod\n def scalar(val):\n \n return Scalar(val, 1)\n```\n\n## Future Features\n\n### 1. Option for higher-order derivatives\n\nThere are plenty of ways we could improve our package. The first is to grant users the option to compute higher-order derivatives like Hessians. 
We can apply AD recursively: first apply it to the target function to produce the first-order derivative, then move the operations of that first-order derivative into a new computational graph and apply AD again. In short, higher-order derivatives would be obtained by repeatedly applying automatic differentiation to a function and its derivatives.\n\n### 2. Application using AD library to find the roots of functions\n\nA second way we could extend our work is by writing a separate library to find the roots of given functions. For example, this could include an implementation of Newton's Method that searches for root approximations by calculating the exact Hessian matrix of a function, using AD to obtain the second-order partial derivatives.\n\n### 3. Backpropagation in neural networks\n\nWe can also extend our implementation of automatic differentiation to neural networks. Neural networks gradually increase their accuracy with every training session through the process of gradient descent. In gradient descent, we aim to minimize the loss (i.e. how inaccurate the model is) by tweaking the weights and biases.\n\nBy finding the partial derivatives of the loss function, we know how much (and in what direction) we must adjust our weights and biases to decrease the loss. For example, we could calculate the derivative of the mean-squared-error loss function of a single-neuron neural network.\n\nFor computers to calculate the partial derivatives of an expression in neural networks, we can implement automatic differentiation for both the forward pass and backpropagation. 
Then we can calculate the partial derivatives in both scalar and vector modes.\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "6f35500103b774f84cb53a6b13aa0cee16fc422d", "size": 24862, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/milestone2.ipynb", "max_stars_repo_name": "VoraciousFour/VorDiff", "max_stars_repo_head_hexsha": "9676462b028e532b10ebf38989b5014763b7260e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/milestone2.ipynb", "max_issues_repo_name": "VoraciousFour/VorDiff", "max_issues_repo_head_hexsha": "9676462b028e532b10ebf38989b5014763b7260e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/milestone2.ipynb", "max_forks_repo_name": "VoraciousFour/VorDiff", "max_forks_repo_head_hexsha": "9676462b028e532b10ebf38989b5014763b7260e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.4465648855, "max_line_length": 940, "alphanum_fraction": 0.5850293621, "converted": true, "num_tokens": 4481, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.5, "lm_q2_score": 0.22541661583507672, "lm_q1q2_score": 0.11270830791753836}} {"text": "\n\n**Set up your Google Colab Environment**\n\nMount your Google Drive\n\n\n```python\nfrom google.colab import drive\ndrive.mount('/content/drive')\n```\n\n Mounted at /content/drive\n\n\nRun this import command to be able to work with the eecs598 folder.\n\n\n```python\nimport sys\nsys.path.append('/content/drive/My Drive/QCourse511/ResearchProject/working')\n```\n\nRun these commands if you have problems. 
When you first start Google Colab notebook,a newer version of tensorflow will be installed, but you need tensorflow 2.5.1 (or 2.4.1 depending on the documentation you find).\n\n\n\n```python\n!pip uninstall tensorflow\n```\n\n\n```python\n!pip uninstall tensorflow-quantum\n```\n\n\n```python\n!pip install tensorflow==2.5.1\n```\n\n\n```python\n!pip install tf-nightly\n```\n\n\n```python\n!pip install tensorflow-quantum\n```\n\n\n```python\n!pip install tfq-nightly\n```\n\n\n```python\n!pip install tensorflow-estimator==2.5.*\n```\n\n\n```python\n!pip install keras==2.6.0\n```\n\n\u0130mport some useful packages.\n\n\n```python\nimport eecs598\nimport torch\nimport torchvision\nimport matplotlib.pyplot as plt\nimport statistics\nimport numpy as np\n\n```\n\n\n```python\nimport tensorflow as tf\nimport tensorflow_quantum as tfq\n\nimport cirq\nimport sympy\nimport numpy as np\nimport seaborn as sns\nimport collections\n\n# visualization tools\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom cirq.contrib.svg import SVGCircuit\n```\n\n# Download, visualize and prepare the CIFAR10 dataset\nThe CIFAR10 dataset contains 60,000 color images in 10 classes, with 6,000 images in each class. The dataset is divided into 50,000 training images and 10,000 testing images. 
The classes are mutually exclusive and there is no overlap between them.\n\nThe following code calls functions in the eecs598 library that use PyTorch to download the CIFAR dataset, split it into train and test sets and then convert it to torch.Tensors which are a multi-dimensional matrix containing elements of a single data type.\n\nThe code in the eecs598 library also normalizes pixel values to be between 0 and 1 by dividing by 255.\n\n\n```python\nx_train, y_train, x_test, y_test = eecs598.data.cifar10()\n\nprint('Training set:', )\nprint(' data shape:', x_train.shape)\nprint(' labels shape: ', y_train.shape)\nprint('Test set:')\nprint(' data shape: ', x_test.shape)\nprint(' labels shape', y_test.shape)\n```\n\n Training set:\n data shape: torch.Size([50000, 3, 32, 32])\n labels shape: torch.Size([50000])\n Test set:\n data shape: torch.Size([10000, 3, 32, 32])\n labels shape torch.Size([10000])\n\n\nIncrease the default figure size.\n\n\n```python\n# Control qrid size for visualization\nplt.rcParams['figure.figsize'] = (10.0, 8.0)\nplt.rcParams['font.size'] = 16\n```\n\nVisualize the dataset\n\n\n```python\nimport random\nfrom torchvision.utils import make_grid\n\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nsamples_per_class = 12\nsamples = []\nfor y, cls in enumerate(classes):\n plt.text(-4, 34 * y + 18, cls, ha='right')\n idxs, = (y_train == y).nonzero(as_tuple=True)\n for i in range(samples_per_class):\n idx = idxs[random.randrange(idxs.shape[0])].item()\n samples.append(x_train[idx])\nimg = torchvision.utils.make_grid(samples, nrow=samples_per_class)\nplt.imshow(eecs598.tensor_to_image(img))\nplt.axis('off')\nplt.show()\n```\n\n# Label data\nThe label data is a list of numbers ranging from 0 to 9, which corresponds to each of the 10 classes in CIFAR-10.\n\n0 - airplane, \n1 - automobile, \n2 - bird, \n3 - cat, \n4 - deer, \n5 - dog, \n6 - frog, \n7 - horse, \n8 - ship, \n9 - 
truck\n\n\n```python\nprint(y_train)\n```\n\n tensor([6, 9, 9, ..., 9, 1, 1])\n\n\n# 1. Data preparation\nYou will begin by preparing the CIFAR-10 dataset for running on a quantum computer.\n\n# 1.1 Download the CIFAR-10 dataset\nThe first step is to get the traditional the CIFAR-10 dataset. This can be done using the `tf.keras.datasets` module.\n\n\n```python\nfrom tensorflow.keras.datasets import cifar10\n(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()\n\n# Rescale the images from [0,255] to the [0.0,1.0] range.\ntrain_images, test_images = train_images/255.0, test_images/255.0\n```\n\n\n```python\ny_train=train_labels.flatten()\ny_test=test_labels.flatten()\n# We have reduced the dimension of the labels\n```\n\nIt really comes down to math and getting a value between 0-1. Since 255 is the maximum value, dividing by 255 expresses a 0-1 representation. Each channel (Red, Green, and Blue are each channels) is 8 bits, so they are each limited to 256, in this case 255 since 0 is included.\n\n\n```python\nx_train=tf.image.rgb_to_grayscale(train_images)\nx_test=tf.image.rgb_to_grayscale(test_images)\n#to convert images to grayscale\n```\n\nRestrict our dataset to those of only two ground truth labels: cat and frog. Filter the dataset to keep just the cat and frog, remove the other classes. 
At the same time convert the label, y, to boolean: True for 3 (cat) and False for 6 (frog).\n\n\n```python\ndef filter_36(x, y):\n keep = (y == 3) | (y == 6)\n x, y = x[keep], y[keep]\n y = y == 3\n return x,y\n```\n\n\n```python\nx_train, y_train = filter_36(x_train, y_train)\nx_test, y_test = filter_36(x_test, y_test)\n\nprint(\"Number of filtered training examples:\", len(x_train))\nprint(\"Number of filtered test examples:\", len(x_test))\n\n# The lable data is now a mix of True and False values, True for a cat and False for a frog.\nprint(y_train)\n```\n\n Number of filtered training examples: 10000\n Number of filtered test examples: 2000\n [False True True ... True True False]\n\n\n\n```python\n# Let's reduce the size of the data set to 1000 training data points and 200 testing data points.\nN_TRAIN = 1000\nN_TEST = 200\nx_train, x_test = x_train[:N_TRAIN], x_test[:N_TEST]\ny_train, y_test = y_train[:N_TRAIN], y_test[:N_TEST]\n\nprint(\"New number of training examples:\", len(x_train))\nprint(\"New number of test examples:\", len(x_test))\n```\n\n New number of training examples: 1000\n New number of test examples: 200\n\n\n\n```python\n# Review the first four of the images in 32x32 mode\nf, axarr = plt.subplots(2,2)\naxarr[0,0].imshow(x_train[0, :, :, 0])\naxarr[0,1].imshow(x_train[1, :, :, 0])\naxarr[1,0].imshow(x_train[2, :, :, 0])\naxarr[1,1].imshow(x_train[3, :, :, 0])\n```\n\n\n```python\nplt.imshow(x_train[0, :, :, 0])\nplt.colorbar()\n```\n\nResize the image 32x32 to down to 2x2 as that is what our FRQI example can work with. 
This can be done using bilinear interpolation as another option.\n\n\n```python\ndef truncate_x(x_train, x_test, n_components=10):\n \"\"\"Perform PCA on image dataset keeping the top `n_components` components.\"\"\"\n n_points_train = tf.gather(tf.shape(x_train), 0)\n n_points_test = tf.gather(tf.shape(x_test), 0)\n\n # Flatten to 1D\n x_train = tf.reshape(x_train, [n_points_train, -1])\n x_test = tf.reshape(x_test, [n_points_test, -1])\n\n # Normalize.\n feature_mean = tf.reduce_mean(x_train, axis=0)\n x_train_normalized = x_train - feature_mean\n x_test_normalized = x_test - feature_mean\n\n # Truncate.\n e_values, e_vectors = tf.linalg.eigh(\n tf.einsum('ji,jk->ik', x_train_normalized, x_train_normalized))\n return tf.einsum('ij,jk->ik', x_train_normalized, e_vectors[:,-n_components:]), \\\n tf.einsum('ij,jk->ik', x_test_normalized, e_vectors[:, -n_components:])\n\n#DATASET_DIM = 10\n#x_train_s, x_test_s = truncate_x(x_train, x_test, n_components=DATASET_DIM)\n#print(f'New datapoint dimension:', len(x_train_s[0]))\n\nimage_dimension = 2\nx_train_s = tf.image.resize(x_train, (image_dimension,image_dimension)).numpy()\nx_test_s = tf.image.resize(x_test, (image_dimension,image_dimension)).numpy()\n```\n\n\n```python\n# Review the first four of the images in 32x32 mode\nf, axarr = plt.subplots(2,2)\naxarr[0,0].imshow(x_train_s[0, :, :, 0])\naxarr[0,1].imshow(x_train_s[1, :, :, 0])\naxarr[1,0].imshow(x_train_s[2, :, :, 0])\naxarr[1,1].imshow(x_train_s[3, :, :, 0])\n```\n\n\n```python\nplt.imshow(x_train_s[0, :, :, 0])\n# The color bar is getting adjusted for some reason. 
May be a bug with tf.image.resize() which has some underlying problems.\n# https://hackernoon.com/how-tensorflows-tf-image-resize-stole-60-days-of-my-life-aba5eb093f35\nplt.colorbar()\n```\n\n## Encode the data as quantum circuits\n+ Set the base state based on the pixel\n+ run through the algorithm\n\nTransform the images to black and white by thresholding the pixel color.\n\n\n\n\n```python\nprint(x_train_s[0:4,0:4])\n\nTHRESHOLD = 0.5\n\nx_train_bin = np.array(x_train_s > THRESHOLD, dtype=np.float32)\nx_test_bin = np.array(x_test_s > THRESHOLD, dtype=np.float32)\n\nprint(x_train_bin[0:4,0:4])\n```\n\n [[[[0.36764815]\n [0.28288433]]\n \n [[0.49960423]\n [0.25495717]]]\n \n \n [[[0.35545862]\n [0.6398164 ]]\n \n [[0.2035408 ]\n [0.08488853]]]\n \n \n [[[0.5119202 ]\n [0.21370961]]\n \n [[0.44399855]\n [0.6437518 ]]]\n \n \n [[[0.18349177]\n [0.434492 ]]\n \n [[0.36693138]\n [0.2093106 ]]]]\n [[[[0.]\n [0.]]\n \n [[0.]\n [0.]]]\n \n \n [[[0.]\n [1.]]\n \n [[0.]\n [0.]]]\n \n \n [[[1.]\n [0.]]\n \n [[0.]\n [1.]]]\n \n \n [[[0.]\n [0.]]\n \n [[0.]\n [0.]]]]\n\n\nThe qubits at pixel indices with values that exceed a threshold, are rotated through a gate. 
This is the part we should replace.\n\n```python\ndef FRQI(theta):\n \"\"\"Build the 2x2 FRQI encoding circuit entirely in cirq.\n\n An earlier draft mixed cirq with qiskit-style `qc` calls; here each\n doubly-controlled Ry is decomposed into controlled-Ry and CNOT gates,\n and the barriers are dropped (cirq has no barrier operation).\n \"\"\"\n # This will create LineQubit(0), LineQubit(1), LineQubit(2)\n q0, q1, q2 = cirq.LineQubit.range(3)\n circuit = cirq.Circuit()\n\n # Uniform superposition over the two position qubits\n circuit.append(cirq.H(q) for q in (q0, q1))\n\n def encode_pixel(angle):\n # cry(angle,0,2); cx(0,1); cry(-angle,1,2); cx(0,1); cry(angle,1,2)\n circuit.append(cirq.ry(angle)(q2).controlled_by(q0))\n circuit.append(cirq.CNOT(q0, q1))\n circuit.append(cirq.ry(-angle)(q2).controlled_by(q1))\n circuit.append(cirq.CNOT(q0, q1))\n circuit.append(cirq.ry(angle)(q2).controlled_by(q1))\n\n encode_pixel(theta) # pixel 1\n circuit.append(cirq.X(q1))\n encode_pixel(theta) # pixel 2\n circuit.append(cirq.X(q1))\n circuit.append(cirq.X(q0))\n encode_pixel(theta) # pixel 3\n circuit.append(cirq.X(q1))\n encode_pixel(theta) # pixel 4\n\n circuit.append(cirq.measure(q0, q1, q2))\n return circuit\n\n\nprint(FRQI(0))\n```\n\n\n```python\ndef convert_to_circuit(image):\n \"\"\"Encode truncated classical image into quantum datapoint.\"\"\"\n values = np.ndarray.flatten(image)\n qubits = cirq.GridQubit.rect(6, 6)\n circuit = cirq.Circuit()\n for i, value in enumerate(values):\n if value:\n circuit.append(cirq.H(qubits[i]))\n\n return circuit\n\n\nx_train_circ = [convert_to_circuit(x) for x in x_train_bin]\nx_test_circ = [convert_to_circuit(x) for x in x_test_bin]\n```\n\n\n```python\nSVGCircuit(x_train_circ[0])\n\n```\n\nConvert Cirq circuits to tensors for tfq\n\n\n```python\nx_train_tfcirc = tfq.convert_to_tensor(x_train_circ)\nx_test_tfcirc = tfq.convert_to_tensor(x_test_circ)\n```\n\n## Quantum neural network\n\nAdd a layer of these gates to a circuit\n\n\n```python\nclass CircuitLayerBuilder():\n def __init__(self, data_qubits, readout):\n self.data_qubits = data_qubits\n self.readout = readout\n\n def add_layer(self, circuit, gate, prefix):\n for i, qubit in enumerate(self.data_qubits):\n symbol = sympy.Symbol(prefix + '-' + str(i))\n circuit.append(gate(qubit, self.readout)**symbol)\n\n```\n\n\n```python\ndemo_builder = 
CircuitLayerBuilder(data_qubits = cirq.GridQubit.rect(4,1),\n readout=cirq.GridQubit(-1,-1))\n\ncircuit = cirq.Circuit()\ndemo_builder.add_layer(circuit, gate = cirq.XX, prefix='xx')\nSVGCircuit(circuit)\n\n```\n\n\n```python\ndef create_quantum_model():\n \"\"\"Create a QNN model circuit and readout operation to go along with it.\"\"\"\n data_qubits = cirq.GridQubit.rect(4, 4) # a 4x4 grid.\n readout = cirq.GridQubit(-1, -1) # a single qubit at [-1,-1]\n circuit = cirq.Circuit()\n\n # Prepare the readout qubit.\n circuit.append(cirq.X(readout))\n circuit.append(cirq.H(readout))\n\n builder = CircuitLayerBuilder(\n data_qubits = data_qubits,\n readout=readout)\n\n # Then add layers (experiment by adding more).\n builder.add_layer(circuit, cirq.XX, \"xx1\")\n builder.add_layer(circuit, cirq.ZZ, \"zz1\")\n\n # Finally, prepare the readout qubit.\n circuit.append(cirq.H(readout))\n\n return circuit, cirq.Z(readout)\n\n```\n\n\n```python\nmodel_circuit, model_readout = create_quantum_model()\n\n```\n\n Build the Keras model.\n\n\n\n```python\nmodel = tf.keras.Sequential([\n # The input is the data-circuit, encoded as a tf.string\n tf.keras.layers.Input(shape=(), dtype=tf.string),\n # The PQC layer returns the expected value of the readout gate, range [-1,1].\n tfq.layers.PQC(model_circuit, model_readout),\n])\n```\n\n\n```python\ny_train_hinge = 2.0*y_train-1.0\ny_test_hinge = 2.0*y_test-1.0\n\n```\n\n\n```python\ndef hinge_accuracy(y_true, y_pred):\n y_true = tf.squeeze(y_true) > 0.0\n y_pred = tf.squeeze(y_pred) > 0.0\n result = tf.cast(y_true == y_pred, tf.float32)\n\n return tf.reduce_mean(result)\n```\n\n\n```python\nmodel.compile(\n loss=tf.keras.losses.Hinge(),\n optimizer=tf.keras.optimizers.Adam(),\n metrics=[hinge_accuracy])\n\n\n```\n\n\n```python\nprint(model.summary())\n```\n\n\n```python\nEPOCHS = 3\nBATCH_SIZE = 128\n\nNUM_EXAMPLES = len(x_train_tfcirc)\n\n\n```\n\n\n```python\nx_train_tfcirc_sub = x_train_tfcirc[:NUM_EXAMPLES]\ny_train_hinge_sub = 
y_train_hinge[:NUM_EXAMPLES]\n\n\n```\n\n\n```python\nimport time\nstart_time = time.time()\n```\n\n\n```python\nqnn_history = model.fit(\n x_train_tfcirc_sub, y_train_hinge_sub,\n batch_size=BATCH_SIZE,\n epochs=EPOCHS,\n verbose=1,\n validation_data=(x_test_tfcirc, y_test_hinge))\n\nqnn_results = model.evaluate(x_test_tfcirc, y_test_hinge)\n\n\n```\n\n\n```python\n# predict_classes was removed from tf.keras; threshold the PQC output instead\n(model.predict(x_train_tfcirc[0:7]) > 0).astype(int)\n```\n\n\n```python\nqnn_accuracy = qnn_results[1]\nqnn_accuracy\n\n```\n\n\n```python\n\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib.gridspec import GridSpec\n\ndef format_axes(fig):\n for i, ax in enumerate(fig.axes):\n ax.tick_params(labelbottom=False, labelleft=False)\n\nfig = plt.figure(figsize=(10, 10))\n\ngs = GridSpec(3, 3, figure=fig)\n\nfor i in range(3):\n for j in range(3):\n ax = fig.add_subplot(gs[i, j])\n \n ax.imshow(x_train[i*3+j, :, :, 0])\n\nfig.suptitle(\"GridSpec\")\nformat_axes(fig)\n\nplt.show()\n\n\n```\n\n\n```python\nplt.plot(qnn_history.history['val_hinge_accuracy'], label='QCNN')\n#plt.plot(hybrid_history.history['val_custom_accuracy'], label='Hybrid CNN')\n#plt.title('Quantum vs Hybrid CNN performance')\nplt.xlabel('Epochs')\nplt.legend()\nplt.ylabel('Validation Accuracy')\nplt.show()\n```\n", "meta": {"hexsha": "75e62a17956e76bcb2dcb7af8e6433269bb2fbb5", "size": 729048, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "FRQI.ipynb", "max_stars_repo_name": "WebheadTech/QCourse511-1", "max_stars_repo_head_hexsha": "e8396eb5b292203669eda5d04541d31c3d947803", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "FRQI.ipynb", "max_issues_repo_name": "WebheadTech/QCourse511-1", "max_issues_repo_head_hexsha": "e8396eb5b292203669eda5d04541d31c3d947803", "max_issues_repo_licenses": 
["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "FRQI.ipynb", "max_forks_repo_name": "WebheadTech/QCourse511-1", "max_forks_repo_head_hexsha": "e8396eb5b292203669eda5d04541d31c3d947803", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 529.8313953488, "max_line_length": 600130, "alphanum_fraction": 0.938559601, "converted": true, "num_tokens": 4309, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.46490157137338844, "lm_q2_score": 0.24220562872535942, "lm_q1q2_score": 0.1126017773898991}} {"text": "##### Copyright 2020 The TensorFlow Authors.\n\n\n```\n#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n```\n\n# Calculate gradients\n\n\n \n \n \n \n
\n View on TensorFlow.org\n \n Run in Google Colab\n \n View source on GitHub\n \n Download notebook\n
\n\nThis tutorial explores gradient calculation algorithms for the expectation values of quantum circuits.\n\nCalculating the gradient of the expectation value of a certain observable in a quantum circuit is an involved process. Unlike traditional machine learning transformations such as matrix multiplication or vector addition, expectation values of observables do not have analytic gradient formulas that are always easy to write down. As a result, there are different quantum gradient calculation methods that come in handy for different scenarios. This tutorial compares and contrasts two different differentiation schemes.\n\n## Setup\n\n\n```\n!pip install tensorflow==2.3.1\n```\n\nInstall TensorFlow Quantum:\n\n\n```\n!pip install tensorflow-quantum\n```\n\nNow import TensorFlow and the module dependencies:\n\n\n```\nimport tensorflow as tf\nimport tensorflow_quantum as tfq\n\nimport cirq\nimport sympy\nimport numpy as np\n\n# visualization tools\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom cirq.contrib.svg import SVGCircuit\n```\n\n## 1. Preliminary\n\nLet's make the notion of gradient calculation for quantum circuits a little more concrete. 
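Before the quantum circuit, consider the classical contrast mentioned above: for a transformation like a dot product, the analytic gradient really is trivial to write down. A small NumPy sketch (not part of the tutorial; values are made up) that confirms the analytic gradient by finite differences:

```python
import numpy as np

def f(w, x):
    # A classical transformation: the dot product w . x
    return np.dot(w, x)

x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -1.0, 2.0])

# The analytic gradient of w . x with respect to w is simply x
analytic = x

# Finite-difference check, one component at a time
eps = 1e-6
numeric = np.array([(f(w + eps * np.eye(3)[i], x) - f(w, x)) / eps
                    for i in range(3)])

print(np.allclose(numeric, analytic, atol=1e-4))  # True
```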
Suppose you have a parameterized circuit like this one:\n\n\n```\nqubit = cirq.GridQubit(0, 0)\nmy_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))\nSVGCircuit(my_circuit)\n```\n\nAlong with an observable:\n\n\n```\npauli_x = cirq.X(qubit)\npauli_x\n```\n\nLooking at this operator you know that $\u27e8Y(\\alpha)| X | Y(\\alpha)\u27e9 = \\sin(\\pi \\alpha)$\n\n\n```\ndef my_expectation(op, alpha):\n \"\"\"Compute \u27e8Y(alpha)| `op` | Y(alpha)\u27e9\"\"\"\n params = {'alpha': alpha}\n sim = cirq.Simulator()\n final_state_vector = sim.simulate(my_circuit, params).final_state_vector\n return op.expectation_from_state_vector(final_state_vector, {qubit: 0}).real\n\n\nmy_alpha = 0.3\nprint(\"Expectation=\", my_expectation(pauli_x, my_alpha))\nprint(\"Sin Formula=\", np.sin(np.pi * my_alpha))\n```\n\n and if you define $f_{1}(\\alpha) = \u27e8Y(\\alpha)| X | Y(\\alpha)\u27e9$ then $f_{1}^{'}(\\alpha) = \\pi \\cos(\\pi \\alpha)$. Let's check this:\n\n\n```\ndef my_grad(obs, alpha, eps=0.01):\n grad = 0\n f_x = my_expectation(obs, alpha)\n f_x_prime = my_expectation(obs, alpha + eps)\n return ((f_x_prime - f_x) / eps).real\n\n\nprint('Finite difference:', my_grad(pauli_x, my_alpha))\nprint('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))\n```\n\n## 2. The need for a differentiator\n\nWith larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the `tfq.differentiators.Differentiator` class allows you to define algorithms for computing the gradients of your circuits. 
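The `my_grad` helper above uses a one-sided (forward) difference. A central difference, evaluated at alpha ± eps, is usually more accurate for the same step size. A NumPy-only comparison, using the analytic form sin(pi * alpha) as a stand-in for the expectation value:

```python
import numpy as np

def f(alpha):
    # Analytic expectation <Y(alpha)| X |Y(alpha)> = sin(pi * alpha)
    return np.sin(np.pi * alpha)

alpha, eps = 0.3, 0.01
exact = np.pi * np.cos(np.pi * alpha)

forward = (f(alpha + eps) - f(alpha)) / eps              # O(eps) error
central = (f(alpha + eps) - f(alpha - eps)) / (2 * eps)  # O(eps**2) error

print(abs(forward - exact) > abs(central - exact))  # True
```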
For instance you can recreate the above example in TensorFlow Quantum (TFQ) with:\n\n\n```\nexpectation_calculation = tfq.layers.Expectation(\n differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))\n\nexpectation_calculation(my_circuit,\n operators=pauli_x,\n symbol_names=['alpha'],\n symbol_values=[[my_alpha]])\n```\n\nHowever, if you switch to estimating expectation based on sampling (what would happen on a true device) the values can change a little bit. This means you now have an imperfect estimate:\n\n\n```\nsampled_expectation_calculation = tfq.layers.SampledExpectation(\n differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))\n\nsampled_expectation_calculation(my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=[[my_alpha]])\n```\n\nThis can quickly compound into a serious accuracy problem when it comes to gradients:\n\n\n```\n# Make input_points = [batch_size, 1] array.\ninput_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)\nexact_outputs = expectation_calculation(my_circuit,\n operators=pauli_x,\n symbol_names=['alpha'],\n symbol_values=input_points)\nimperfect_outputs = sampled_expectation_calculation(my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=input_points)\nplt.title('Forward Pass Values')\nplt.xlabel('$x$')\nplt.ylabel('$f(x)$')\nplt.plot(input_points, exact_outputs, label='Analytic')\nplt.plot(input_points, imperfect_outputs, label='Sampled')\nplt.legend()\n```\n\n\n```\n# Gradients are a much different story.\nvalues_tensor = tf.convert_to_tensor(input_points)\n\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n exact_outputs = expectation_calculation(my_circuit,\n operators=pauli_x,\n symbol_names=['alpha'],\n symbol_values=values_tensor)\nanalytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)\n\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n imperfect_outputs = 
sampled_expectation_calculation(\n my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=values_tensor)\nsampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)\n\nplt.title('Gradient Values')\nplt.xlabel('$x$')\nplt.ylabel('$f^{\\'}(x)$')\nplt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')\nplt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')\nplt.legend()\n```\n\nHere you can see that although the finite difference formula is fast to compute the gradients themselves in the analytical case, when it came to the sampling based methods it was far too noisy. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that wouldn't be as well suited for analytical expectation gradient calculations, but does perform much better in the real-world sample based case:\n\n\n```\n# A smarter differentiation scheme.\ngradient_safe_sampled_expectation = tfq.layers.SampledExpectation(\n differentiator=tfq.differentiators.ParameterShift())\n\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n imperfect_outputs = gradient_safe_sampled_expectation(\n my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=values_tensor)\n\nsampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)\n\nplt.title('Gradient Values')\nplt.xlabel('$x$')\nplt.ylabel('$f^{\\'}(x)$')\nplt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')\nplt.plot(input_points, sampled_param_shift_gradients, label='Sampled')\nplt.legend()\n```\n\nFrom the above you can see that certain differentiators are best used for particular research scenarios. In general, the slower sample-based methods that are robust to device noise, etc., are great differentiators when testing or implementing algorithms in a more \"real world\" setting. 
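The parameter-shift rule evaluates the circuit at finitely shifted parameter values instead of taking tiny finite-difference steps, and for this particular circuit it happens to be exact. A NumPy-only check, with the analytic expectation sin(pi * alpha) standing in for circuit execution:

```python
import numpy as np

def f(alpha):
    # Analytic expectation <Y(alpha)| X |Y(alpha)> = sin(pi * alpha)
    return np.sin(np.pi * alpha)

alphas = np.linspace(0, 2, 9)

# Parameter-shift estimate: (pi/2) * [f(a + 1/2) - f(a - 1/2)]
shift_grad = (np.pi / 2) * (f(alphas + 0.5) - f(alphas - 0.5))

# Exact derivative: pi * cos(pi * a)
exact_grad = np.pi * np.cos(np.pi * alphas)

print(np.allclose(shift_grad, exact_grad))  # True
```

The agreement is exact up to floating point, which is why this scheme tolerates sampling noise so much better than dividing a noisy difference by a tiny step.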
Faster methods like finite difference are great for analytical calculations and you want higher throughput, but aren't yet concerned with the device viability of your algorithm.\n\n## 3. Multiple observables\n\nLet's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.\n\n\n```\npauli_z = cirq.Z(qubit)\npauli_z\n```\n\nIf this observable is used with the same circuit as before, then you have $f_{2}(\\alpha) = \u27e8Y(\\alpha)| Z | Y(\\alpha)\u27e9 = \\cos(\\pi \\alpha)$ and $f_{2}^{'}(\\alpha) = -\\pi \\sin(\\pi \\alpha)$. Perform a quick check:\n\n\n```\ntest_value = 0.\n\nprint('Finite difference:', my_grad(pauli_z, test_value))\nprint('Sin formula: ', -np.pi * np.sin(np.pi * test_value))\n```\n\nIt's a match (close enough).\n\nNow if you define $g(\\alpha) = f_{1}(\\alpha) + f_{2}(\\alpha)$ then $g'(\\alpha) = f_{1}^{'}(\\alpha) + f^{'}_{2}(\\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding on more terms to $g$.\n\nThis means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with regards to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).\n\n\n```\nsum_of_outputs = tfq.layers.Expectation(\n differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))\n\nsum_of_outputs(my_circuit,\n operators=[pauli_x, pauli_z],\n symbol_names=['alpha'],\n symbol_values=[[test_value]])\n```\n\nHere you see the first entry is the expectation w.r.t Pauli X, and the second is the expectation w.r.t Pauli Z. 
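The additivity of gradients over observables can also be checked classically, using the analytic forms f1(alpha) = sin(pi * alpha) and f2(alpha) = cos(pi * alpha) from above. A small NumPy sketch:

```python
import numpy as np

def f1(alpha):
    return np.sin(np.pi * alpha)  # expectation of Pauli X

def f2(alpha):
    return np.cos(np.pi * alpha)  # expectation of Pauli Z

def g(alpha):
    return f1(alpha) + f2(alpha)

alpha, eps = 0.0, 1e-6

def d(h):
    # Central-difference derivative of h at alpha
    return (h(alpha + eps) - h(alpha - eps)) / (2 * eps)

# The gradient of the sum equals the sum of the gradients
print(np.isclose(d(g), d(f1) + d(f2)))  # True
```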
Now when you take the gradient:\n\n\n```\ntest_value_tensor = tf.convert_to_tensor([[test_value]])\n\nwith tf.GradientTape() as g:\n g.watch(test_value_tensor)\n outputs = sum_of_outputs(my_circuit,\n operators=[pauli_x, pauli_z],\n symbol_names=['alpha'],\n symbol_values=test_value_tensor)\n\nsum_of_gradients = g.gradient(outputs, test_value_tensor)\n\nprint(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))\nprint(sum_of_gradients.numpy())\n```\n\nHere you have verified that the sum of the gradients for each observable is indeed the gradient of $\\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in the compatibility with the rest of TensorFlow.\n\n## 4. Advanced usage\nAll differentiators that exist inside of TensorFlow Quantum subclass `tfq.differentiators.Differentiator`. To implement a differentiator, a user must implement one of two interfaces. The standard is to implement `get_gradient_circuits`, which tells the base class which circuits to measure to obtain an estimate of the gradient. Alternatively, you can overload `differentiate_analytic` and `differentiate_sampled`; the class `tfq.differentiators.Adjoint` takes this route.\n\nThe following uses TensorFlow Quantum to implement the gradient of a circuit. You will use a small example of parameter shifting.\n\nRecall the circuit you defined above, $|\\alpha\u27e9 = Y^{\\alpha}|0\u27e9$. As before, you can define a function as the expectation value of this circuit against the $X$ observable, $f(\\alpha) = \u27e8\\alpha|X|\\alpha\u27e9$. 
Using [parameter shift rules](https://pennylane.ai/qml/glossary/parameter_shift.html), for this circuit, you can find that the derivative is\n$$\\frac{\\partial}{\\partial \\alpha} f(\\alpha) = \\frac{\\pi}{2} f\\left(\\alpha + \\frac{1}{2}\\right) - \\frac{ \\pi}{2} f\\left(\\alpha - \\frac{1}{2}\\right)$$\nThe `get_gradient_circuits` function returns the components of this derivative.\n\n\n```\nclass MyDifferentiator(tfq.differentiators.Differentiator):\n \"\"\"A Toy differentiator for .\"\"\"\n\n def __init__(self):\n pass\n\n def get_gradient_circuits(self, programs, symbol_names, symbol_values):\n \"\"\"Return circuits to compute gradients for given forward pass circuits.\n \n Every gradient on a quantum computer can be computed via measurements\n of transformed quantum circuits. Here, you implement a custom gradient\n for a specific circuit. For a real differentiator, you will need to\n implement this function in a more general way. See the differentiator\n implementations in the TFQ library for examples.\n \"\"\"\n\n # The two terms in the derivative are the same circuit...\n batch_programs = tf.stack([programs, programs], axis=1)\n\n # ... with shifted parameter values.\n shift = tf.constant(1/2)\n forward = symbol_values + shift\n backward = symbol_values - shift\n batch_symbol_values = tf.stack([forward, backward], axis=1)\n \n # Weights are the coefficients of the terms in the derivative.\n num_program_copies = tf.shape(batch_programs)[0]\n batch_weights = tf.tile(tf.constant([[[np.pi/2, -np.pi/2]]]),\n [num_program_copies, 1, 1])\n\n # The index map simply says which weights go with which circuits.\n batch_mapper = tf.tile(\n tf.constant([[[0, 1]]]), [num_program_copies, 1, 1])\n\n return (batch_programs, symbol_names, batch_symbol_values,\n batch_weights, batch_mapper)\n```\n\nThe `Differentiator` base class uses the components returned from `get_gradient_circuits` to calculate the derivative, as in the parameter shift formula you saw above. 
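The way the base class combines these components can be mimicked in plain NumPy: evaluate each shifted circuit, then take the weighted sum prescribed by `batch_weights`. A sketch (not using TFQ itself) with sin(pi * alpha) standing in for circuit execution:

```python
import numpy as np

def f(alpha):
    # Stand-in for running the circuit and measuring <X>
    return np.sin(np.pi * alpha)

alpha = 0.3

# Components analogous to get_gradient_circuits for one program/symbol:
# two shifted copies of the parameter and the matching term weights
batch_symbol_values = np.array([alpha + 0.5, alpha - 0.5])
batch_weights = np.array([np.pi / 2, -np.pi / 2])

# "Run" each shifted circuit, then take the weighted sum of expectations
batch_expectations = f(batch_symbol_values)
gradient = np.dot(batch_weights, batch_expectations)

print(np.isclose(gradient, np.pi * np.cos(np.pi * alpha)))  # True
```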
This new differentiator can now be used with existing `tfq.layer` objects:\n\n\n```\ncustom_dif = MyDifferentiator()\ncustom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)\n\n# Now let's get the gradients with finite diff.\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n exact_outputs = expectation_calculation(my_circuit,\n operators=[pauli_x],\n symbol_names=['alpha'],\n symbol_values=values_tensor)\n\nanalytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)\n\n# Now let's get the gradients with custom diff.\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n my_outputs = custom_grad_expectation(my_circuit,\n operators=[pauli_x],\n symbol_names=['alpha'],\n symbol_values=values_tensor)\n\nmy_gradients = g.gradient(my_outputs, values_tensor)\n\nplt.subplot(1, 2, 1)\nplt.title('Exact Gradient')\nplt.plot(input_points, analytic_finite_diff_gradients.numpy())\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.subplot(1, 2, 2)\nplt.title('My Gradient')\nplt.plot(input_points, my_gradients.numpy())\nplt.xlabel('x')\n```\n\nThis new differentiator can now be used to generate differentiable ops.\n\nKey Point: A differentiator that has been previously attached to an op must be refreshed before attaching to a new op, because a differentiator may only be attached to one op at a time.\n\n\n```\n# Create a noisy sample based expectation op.\nexpectation_sampled = tfq.get_sampled_expectation_op(\n cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))\n\n# Make it differentiable with your differentiator:\n# Remember to refresh the differentiator before attaching the new op\ncustom_dif.refresh()\ndifferentiable_op = custom_dif.generate_differentiable_op(\n sampled_op=expectation_sampled)\n\n# Prep op inputs.\ncircuit_tensor = tfq.convert_to_tensor([my_circuit])\nop_tensor = tfq.convert_to_tensor([[pauli_x]])\nsingle_value = tf.convert_to_tensor([[my_alpha]])\nnum_samples_tensor = tf.convert_to_tensor([[5000]])\n\nwith 
tf.GradientTape() as g:\n g.watch(single_value)\n forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,\n op_tensor, num_samples_tensor)\n\nmy_gradients = g.gradient(forward_output, single_value)\n\nprint('---TFQ---')\nprint('Forward: ', forward_output.numpy())\nprint('Gradient:', my_gradients.numpy())\nprint('---Original---')\nprint('Forward: ', my_expectation(pauli_x, my_alpha))\nprint('Gradient:', my_grad(pauli_x, my_alpha))\n```\n\nSuccess: Now you can use all the differentiators that TensorFlow Quantum has to offer\u2014and define your own.\n", "meta": {"hexsha": "0a8a9077155cf91f8278b40dc2246f0be1fc6528", "size": 28174, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "site/en-snapshot/quantum/tutorials/gradients.ipynb", "max_stars_repo_name": "wanggdnju/docs-l10n", "max_stars_repo_head_hexsha": "4775692c820ce24babcaf2f29f6130195f7ff509", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-12-14T09:14:16.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-14T09:14:16.000Z", "max_issues_repo_path": "site/en-snapshot/quantum/tutorials/gradients.ipynb", "max_issues_repo_name": "wanggdnju/docs-l10n", "max_issues_repo_head_hexsha": "4775692c820ce24babcaf2f29f6130195f7ff509", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "site/en-snapshot/quantum/tutorials/gradients.ipynb", "max_forks_repo_name": "wanggdnju/docs-l10n", "max_forks_repo_head_hexsha": "4775692c820ce24babcaf2f29f6130195f7ff509", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.2279511533, "max_line_length": 615, "alphanum_fraction": 0.5337190317, "converted": true, "num_tokens": 3792, "lm_name": "Qwen/Qwen-72B", 
"lm_label": "1. NO\n2. NO", "lm_q1_score": 0.48047867804790706, "lm_q2_score": 0.23370635157681108, "lm_q1q2_score": 0.11229091885702559}} {"text": "```python\n# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)\n\n# Toggle cell visibility\n\nfrom IPython.display import HTML\ndisplay(HTML(''))\n```\n\n\n\nToggle cell visibility here.\n\n\n\n```python\n%matplotlib notebook\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport sympy as sym\nimport scipy.signal as signal\nfrom ipywidgets import widgets, interact\nimport control as cn\n```\n\n## Root locus\n\nRoot locus is a plot of the location of closed-loop system poles in relation with a certain parameter (typically amplification). It can be shown that the curves start in the open-loop poles and end up in the open-loop zeros (or infinity). The location of closed-loop system poles not only gives an indication of system stability, but other closed-loop system response properties such as overshoot, rise time and settling time can also be inferred from pole location.\n\n---\n\n### How to use this notebook?\n1. Click on *P0*, *P1*, *I0* or *I1* to toggle between the following objects: proportional of the zeroth, first or second order, or an integral one of zeroth or first order. The transfer function of P0 object is $k_p$ (in this example $k_p=2$), of PI object $\\frac{k_p}{\\tau s+1}$ (in this example $k_p=1$ and $\\tau=2$), of IO object $\\frac{k_i}{s}$ (in this example $k_i=\\frac{1}{10}$) and of I1 object $\\frac{k_i}{s(\\tau s +1)}$ (in this example $k_i=1$ and $\\tau=10$).\n2. Click on the *P*, *PI*, *PD* or *PID* button to toogle between proportional, proportional-integral, proportional-derivative or proportional\u2013integral\u2013derivative control algorithm types.\n3. Move the sliders to change the values of proportional ($K_p$), integral ($T_i$) and derivative ($T_d$) PID tunning coefficients.\n4. 
Move the slider $t_{max}$ to change the maximum value of the time on x axis.\n\n\n```python\nA = 10\na=0.1\ns, P, I, D = sym.symbols('s, P, I, D')\n\nobj = 1/(A*s)\nPID = P + P/(I*s) + P*D*s#/(a*D*s+1)\nsystem = obj*PID/(1+obj*PID)\nnum = [sym.fraction(system.factor())[0].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system.factor())[0], gen=s)))]\nden = [sym.fraction(system.factor())[1].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system.factor())[1], gen=s)))]\nsystem_func_open = obj*PID\nnum_open = [sym.fraction(system_func_open.factor())[0].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func_open.factor())[0], gen=s)))]\nden_open = [sym.fraction(system_func_open.factor())[1].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func_open.factor())[1], gen=s)))]\n \n# make figure\nfig = plt.figure(figsize=(9.8, 4),num='Root locus')\nplt.subplots_adjust(wspace=0.3)\n\n# add axes\nax = fig.add_subplot(121)\nax.grid(which='both', axis='both', color='lightgray')\nax.set_title('Time response')\nax.set_xlabel('t [s]')\nax.set_ylabel('input, output')\n\nrlocus = fig.add_subplot(122)\n\n\n# plot step function and responses (initalisation)\ninput_plot, = ax.plot([],[],'C0', lw=1, label='input')\nresponse_plot, = ax.plot([],[], 'C1', lw=2, label='output')\nax.legend()\n\nrlocus_plot, = rlocus.plot([], [], 'r')\n\nplt.show()\n\nsystem_open = None\nsystem_close = None\ndef update_plot(KP, TI, TD, Time_span):\n global num, den, num_open, den_open\n global system_open, system_close\n num_temp = [float(i.subs(P,KP).subs(I,TI).subs(D,TD)) for i in num]\n den_temp = [float(i.subs(P,KP).subs(I,TI).subs(D,TD)) for i in den]\n system = signal.TransferFunction(num_temp, den_temp)\n system_close = system\n num_temp_open = [float(i.subs(P,KP).subs(I,TI).subs(D,TD)) for i in num_open]\n den_temp_open = [float(i.subs(P,KP).subs(I,TI).subs(D,TD)) for i in den_open]\n system_open = 
signal.TransferFunction(num_temp_open, den_temp_open)\n \n rlocus.clear()\n r, k, xlim, ylim = cn.root_locus_modified(system_open, Plot=False)\n# r, k = cn.root_locus(system_open, Plot=False)\n #rlocus.scatter(r)\n #plot closed loop poles and zeros\n poles = np.roots(system.den)\n rlocus.plot(np.real(poles), np.imag(poles), 'kx')\n zeros = np.roots(system.num)\n if zeros.size > 0:\n rlocus.plot(np.real(zeros), np.imag(zeros), 'ko', alpha=0.5)\n # plot open loop poles and zeros\n poles = np.roots(system_open.den)\n rlocus.plot(np.real(poles), np.imag(poles), 'x', alpha=0.5)\n zeros = np.roots(system_open.num)\n if zeros.size > 0:\n rlocus.plot(np.real(zeros), np.imag(zeros), 'o')\n #plot root locus\n for index, col in enumerate(r.T):\n rlocus.plot(np.real(col), np.imag(col), 'b', alpha=0.5)\n \n rlocus.set_title('Root locus')\n rlocus.set_xlabel('Re')\n rlocus.set_ylabel('Im')\n rlocus.grid(which='both', axis='both', color='lightgray')\n \n rlocus.axhline(linewidth=.3, color='g')\n rlocus.axvline(linewidth=.3, color='g')\n rlocus.set_ylim(ylim)\n rlocus.set_xlim(xlim)\n \n time = np.linspace(0, Time_span, 300)\n u = np.ones_like(time)\n u[0] = 0\n time, response = signal.step(system, T=time)\n \n response_plot.set_data(time, response)\n input_plot.set_data(time, u)\n \n ax.set_ylim([min([np.min(u), min(response),-.1]),min(100,max([max(response)*1.05, 1, 1.05*np.max(u)]))])\n ax.set_xlim([-0.1,max(time)])\n\n plt.show()\n\ncontroller_ = PID\nobject_ = obj\n\ndef calc_tf():\n global num, den, controller_, object_, num_open, den_open\n system_func = object_*controller_/(1+object_*controller_)\n \n num = [sym.fraction(system_func.factor())[0].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func.factor())[0], gen=s)))]\n den = [sym.fraction(system_func.factor())[1].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func.factor())[1], gen=s)))]\n \n system_func_open = object_*controller_\n num_open = 
[sym.fraction(system_func_open.factor())[0].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func_open.factor())[0], gen=s)))]\n den_open = [sym.fraction(system_func_open.factor())[1].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func_open.factor())[1], gen=s)))]\n \n update_plot(Kp_widget.value, Ti_widget.value, Td_widget.value, time_span_widget.value)\n\ndef transfer_func(controller_type):\n global controller_\n proportional = P\n integral = P/(I*s)\n differential = P*D*s/(a*D*s+1)\n if controller_type =='P':\n controller_func = proportional\n Kp_widget.disabled=False\n Ti_widget.disabled=True\n Td_widget.disabled=True\n elif controller_type =='PI':\n controller_func = proportional+integral\n Kp_widget.disabled=False\n Ti_widget.disabled=False\n Td_widget.disabled=True\n elif controller_type == 'PD':\n controller_func = proportional+differential\n Kp_widget.disabled=False\n Ti_widget.disabled=True\n Td_widget.disabled=False\n else:\n controller_func = proportional+integral+differential\n Kp_widget.disabled=False\n Ti_widget.disabled=False\n Td_widget.disabled=False\n \n controller_ = controller_func\n calc_tf()\n \ndef transfer_func_obj(object_type):\n global object_\n if object_type == 'P0':\n object_ = 2\n elif object_type == 'P1':\n object_ = 1/(2*s+1) \n elif object_type == 'I0':\n object_ = 1/(10*s)\n elif object_type == 'I1':\n object_ = 1/(s*(10*s+1))\n calc_tf()\n\nstyle = {'description_width': 'initial'}\n\ndef buttons_controller_clicked(event):\n controller = buttons_controller.options[buttons_controller.index]\n transfer_func(controller)\nbuttons_controller = widgets.ToggleButtons(\n options=['P', 'PI', 'PD', 'PID'],\n description='Select control algorithm type:',\n disabled=False,\n style=style)\nbuttons_controller.observe(buttons_controller_clicked)\n\ndef buttons_object_clicked(event):\n object_ = buttons_object.options[buttons_object.index]\n transfer_func_obj(object_)\nbuttons_object = 
widgets.ToggleButtons(\n options=['P0', 'P1', 'I0', 'I1'],\n description='Select object:',\n disabled=False,\n style=style)\nbuttons_object.observe(buttons_object_clicked)\n\n \nKp_widget = widgets.FloatLogSlider(value=.5,min=-3,max=2.1,step=.001,description=r'\\(K_p\\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.3f')\nTi_widget = widgets.FloatLogSlider(value=1.,min=-3,max=1.8,step=.001,description=r'\\(T_{i} \\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.3f')\nTd_widget = widgets.FloatLogSlider(value=1.,min=-3,max=1.8,step=.001,description=r'\\(T_{d} \\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.3f')\n\ntime_span_widget = widgets.FloatSlider(value=10.,min=.5,max=50.,step=0.1,description=r'\\(t_{max} \\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.1f')\n\ntransfer_func(buttons_controller.options[buttons_controller.index])\ntransfer_func_obj(buttons_object.options[buttons_object.index])\n\ndisplay(buttons_object)\ndisplay(buttons_controller)\n\ninteract(update_plot, KP=Kp_widget, TI=Ti_widget, TD=Td_widget, Time_span=time_span_widget);\n```\n\n\n \n\n\n\n\n\n\n\n ToggleButtons(description='Select object:', options=('P0', 'P1', 'I0', 'I1'), style=ToggleButtonsStyle(descrip\u2026\n\n\n\n ToggleButtons(description='Select control algorithm type:', options=('P', 'PI', 'PD', 'PID'), style=ToggleButt\u2026\n\n\n\n interactive(children=(FloatLogSlider(value=0.5, description='\\\\(K_p\\\\)', max=2.1, min=-3.0, readout_format='.3\u2026\n\n", "meta": {"hexsha": "72bfa549fcb0e1fe0db0e4dde60b5bdb8b346bbf", "size": 127629, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ICCT_en/examples/02/.ipynb_checkpoints/TD-18-Root-locus-checkpoint.ipynb", "max_stars_repo_name": "ICCTerasmus/ICCT", "max_stars_repo_head_hexsha": 
"fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-05-22T18:42:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-03T14:10:22.000Z", "max_issues_repo_path": "ICCT/ENG/examples/02/TD-18-Root-locus.ipynb", "max_issues_repo_name": "tuxsaurus/ICCT", "max_issues_repo_head_hexsha": "30d1aea4fb056c9736c9b4c5a0f50fff14fa6382", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICCT/ENG/examples/02/TD-18-Root-locus.ipynb", "max_forks_repo_name": "tuxsaurus/ICCT", "max_forks_repo_head_hexsha": "30d1aea4fb056c9736c9b4c5a0f50fff14fa6382", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-24T11:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-29T16:36:18.000Z", "avg_line_length": 113.1462765957, "max_line_length": 78867, "alphanum_fraction": 0.7907998966, "converted": true, "num_tokens": 2641, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
# PHY321: Forces, Newton's Laws and Motion Example

**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/)**, Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University, USA and Department of Physics, University of Oslo, Norway

**[Scott Pratt](https://pa.msu.edu/profile/pratts/)**, Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University, USA

**[Carl Schmidt](https://pa.msu.edu/profile/schmidt/)**, Department of Physics and Astronomy, Michigan State University, USA

Date: **Dec 16, 2020**

Copyright 1999-2020, [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/). Released under CC Attribution-NonCommercial 4.0 license

# Basic Steps of Scientific Investigations

An overarching aim in this course is to give you a deeper understanding of the scientific method. The problems we study will all involve cases where we can apply classical mechanics. In our previous material we already assumed that we had a model for the motion of an object. Alternatively, we could have data from an experiment (like Usain Bolt's 100m world record run in 2008), or we could have performed an experiment ourselves and want to understand which forces are at play and whether these forces can be understood in terms of fundamental forces.

Our first step consists in identifying the problem. What we sketch here may include a mix of experiment and theoretical simulations, or just experiment, or only theory.

## Identifying our System

Here we can ask questions like
1. What kind of object is moving?

2. What kind of data do we have?

3. How do we measure position, velocity, acceleration etc.?

4. Which initial conditions influence our system?

5. 
Other aspects which allow us to identify the system

## Defining a Model

With our eventual data and observations we would now like to develop a model for the system. In the end we obviously want to be able to understand which forces are at play and how they influence our specific system. That is, can we extract some deeper insights about the system?

We need then to
1. Find the forces that act on our system

2. Introduce models for the forces

3. Identify the equations which can govern the system (Newton's second law, for example)

4. More elements we deem important for defining our model

## Solving the Equations

With the model at hand, we can then solve the equations. In classical mechanics we normally end up with solving sets of coupled ordinary differential equations or partial differential equations.
1. Using Newton's second law we have equations of the type $\boldsymbol{F}=m\boldsymbol{a}=md\boldsymbol{v}/dt$

2. We need to define the initial conditions (typically the initial position and velocity) and/or boundary conditions

3. The solution of the equations then gives us the position, the velocity and other time-dependent quantities which may specify the motion of a given object.

We are not yet done. With our lovely solvers, we need to start thinking.

Now it is time to ask the big questions. What do our results mean? Can we give a simple interpretation in terms of fundamental laws? Are they correct? Thus, typical questions we may ask are
1. Are our results for say $\boldsymbol{r}(t)$ valid? Do we trust what we did? Can you validate and verify the correctness of your results?

2. Evaluate the answers and their implications

3. Compare with experimental data if possible. Does our model make sense?

4. and obviously many other questions.

The analysis stage feeds back to the first stage.
It may happen that the data we had were not good enough, or there could be large statistical uncertainties. We may need to collect more data, or perhaps we did a sloppy job in identifying the degrees of freedom.

All these steps are essential elements in a scientific enquiry. Hopefully, through a mix of numerical simulations, analytical calculations and experiments, we may gain a deeper insight into the physics of a specific system.

Let us now remind ourselves of Newton's laws, since these are the laws of motion we will study in this course.

## Newton's Laws

When analyzing a physical system we normally start with distinguishing between the object we are studying (we will label this in more general terms as our **system**) and how this system interacts with the environment (which often means everything else!).

In our investigations we will thus analyze a specific physics problem in terms of the system and the environment. In doing so we need to identify the forces that act on the system and assume that the forces acting on the system must have a source, an identifiable cause in the environment.

A force acting on, for example, a falling object must be related to an interaction with something in the environment. This also means that we do not consider internal forces, that is, forces between one part of the object and another part. In this course we will mainly focus on external forces.

Forces are either contact forces or long-range forces.

Contact forces, as evident from the name, are forces that occur at the contact between the system and the environment. Well-known long-range forces are the gravitational force and the electromagnetic force.

## Setting up a model for forces acting on an object

In order to set up the forces which act on an object, the following steps may be useful
1. Divide the problem into system and environment.

2. Draw a figure of the object and everything in contact with the object.

3. 
Draw a closed curve around the system.

4. Find contact points; these are the points where contact forces may act.

5. Give names and symbols to all the contact forces.

6. Identify the long-range forces.

7. Make a drawing of the object. Draw the forces as arrows, vectors, starting from where the force is acting. The direction of the vector(s) indicates the (positive) direction of the force. Try to make the length of the arrow indicate the relative magnitude of the forces.

8. Draw in the axes of the coordinate system. It is often convenient to make one axis parallel to the direction of motion. When you choose the direction of the axis you also choose the positive direction for the axis.

## Newton's Laws, the Second one first

Newton's second law of motion: the force $\boldsymbol{F}$ on an object of inertial mass $m$ is related to the acceleration $\boldsymbol{a}$ of the object through

$$
\boldsymbol{F} = m\boldsymbol{a}.
$$

Newton's laws of motion are laws of nature that have been found by experimental investigations and have been shown to hold up to continued experimental investigations. Newton's laws are valid over a wide range of length and time scales. We use Newton's laws of motion to describe everything from the motion of atoms to the motion of galaxies.

The second law is a vector equation with the acceleration having the same direction as the force. The acceleration is proportional to the force via the mass $m$ of the system under study.

Newton's second law introduces a new property of an object, the so-called inertial mass $m$. We determine the inertial mass of an object by measuring the acceleration for a given applied force.

## Then the First Law

What happens if the net external force on a body is zero? 
Applying Newton's second law, we find:

$$
\boldsymbol{F} = 0 = m\boldsymbol{a},
$$

which gives, using the definition of the acceleration,

$$
\boldsymbol{a} = \frac{d\boldsymbol{v}}{dt}=0.
$$

The acceleration is zero, which means that the velocity of the object is constant. This is often referred to as Newton's first law. An object in a state of uniform motion tends to remain in that state unless an external force changes its state of motion. Why do we need a separate law for this? Is it not simply a special case of Newton's second law? Yes, Newton's first law can be deduced from the second law as we have illustrated. However, the first law is often used for a different purpose: Newton's First Law tells us about the limit of applicability of Newton's Second Law. Newton's Second Law can only be used in reference systems where the First Law is obeyed. But is not the First Law always valid? No! The First Law is only valid in reference systems that are not accelerated. If you observe the motion of a ball from an accelerating car, the ball will appear to accelerate even if there are no forces acting on it. We call systems that are not accelerating inertial systems, and Newton's first law is often called the law of inertia. Newton's first and second laws of motion are only valid in inertial systems.

A system is an inertial system if it is not accelerated. It means that the reference system must not be accelerating linearly or rotating. Unfortunately, this means that most systems we know are not really inertial systems. For example, the surface of the Earth is clearly not an inertial system, because the Earth is rotating. The Earth is also not an inertial system, because it is moving in a curved path around the Sun. 
However, even if the surface of the Earth is not strictly an inertial system, it may be considered to be approximately an inertial system for many laboratory-size experiments.

## And finally the Third Law

If there is a force from object A on object B, there is also a force from object B on object A. This fundamental principle of interactions is called Newton's third law. We do not know of any force that does not obey this law: all forces appear in pairs. Newton's third law is usually formulated as: for every action there is an equal and opposite reaction.

## Motion of a Single Object

Here we consider the motion of a single particle moving under the influence of some set of forces. We will consider some problems where the force does not depend on the position. In that case Newton's law $m\dot{\boldsymbol{v}}=\boldsymbol{F}(\boldsymbol{v})$ is a first-order differential equation and one solves for $\boldsymbol{v}(t)$, then integrates $\boldsymbol{v}$ to get the position. In essentially all of these cases we can find an analytical solution.

## Air Resistance in One Dimension

Air resistance tends to scale as the square of the velocity. This is in contrast to many problems chosen for textbooks, where it is linear in the velocity. The choice of a linear dependence is motivated by mathematical simplicity (it keeps the differential equation linear) rather than by physics. One can see that the force should be quadratic in velocity by considering the momentum imparted on the air molecules. If an object sweeps through a volume $dV$ of air in time $dt$, the momentum imparted on the air is
\n\n$$\n\\begin{equation}\ndP=\\rho_m dV v,\n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\nwhere $v$ is the velocity of the object and $\\rho_m$ is the mass\ndensity of the air. If the molecules bounce back as opposed to stop\nyou would double the size of the term. The opposite value of the\nmomentum is imparted onto the object itself. Geometrically, the\ndifferential volume is\n\n\n
\n\n$$\n\\begin{equation}\ndV=Avdt,\n\\label{_auto2} \\tag{2}\n\\end{equation}\n$$\n\nwhere $A$ is the cross-sectional area and $vdt$ is the distance the\nobject moved in time $dt$.\n\n\n## Resulting Acceleration\nPlugging this into the expression above,\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{dP}{dt}=-\\rho_m A v^2.\n\\label{_auto3} \\tag{3}\n\\end{equation}\n$$\n\nThis is the force felt by the particle, and is opposite to its\ndirection of motion. Now, because air doesn't stop when it hits an\nobject, but flows around the best it can, the actual force is reduced\nby a dimensionless factor $c_W$, called the drag coefficient.\n\n\n
\n\n$$\n\\begin{equation}\nF_{\\rm drag}=-c_W\\rho_m Av^2,\n\\label{_auto4} \\tag{4}\n\\end{equation}\n$$\n\nand the acceleration is\n\n$$\n\\begin{eqnarray}\n\\frac{dv}{dt}=-\\frac{c_W\\rho_mA}{m}v^2.\n\\end{eqnarray}\n$$\n\nFor a particle with initial velocity $v_0$, one can separate the $dt$\nto one side of the equation, and move everything with $v$s to the\nother side. We did this in our discussion of simple motion and will not repeat it here.\n\nOn more general terms,\nfor many systems, e.g. an automobile, there are multiple sources of\nresistance. In addition to wind resistance, where the force is\nproportional to $v^2$, there are dissipative effects of the tires on\nthe pavement, and in the axel and drive train. These other forces can\nhave components that scale proportional to $v$, and components that\nare independent of $v$. Those independent of $v$, e.g. the usual\n$f=\\mu_K N$ frictional force you consider in your first Physics courses, only set in\nonce the object is actually moving. As speeds become higher, the $v^2$\ncomponents begin to dominate relative to the others. For automobiles\nat freeway speeds, the $v^2$ terms are largely responsible for the\nloss of efficiency. To travel a distance $L$ at fixed speed $v$, the\nenergy/work required to overcome the dissipative forces are $fL$,\nwhich for a force of the form $f=\\alpha v^n$ becomes\n\n$$\n\\begin{eqnarray}\nW=\\int dx~f=\\alpha v^n L.\n\\end{eqnarray}\n$$\n\nFor $n=0$ the work is\nindependent of speed, but for the wind resistance, where $n=2$,\nslowing down is essential if one wishes to reduce fuel consumption. It\nis also important to consider that engines are designed to be most\nefficient at a chosen range of power output. 
Thus, some cars will get better mileage at higher speeds (they perform better at 50 mph than at 5 mph) despite the considerations mentioned above.

## Going Ballistic, Projectile Motion or a Softer Approach, Falling Raindrops

As an example of Newton's Laws we consider projectile motion (or a falling raindrop or a ball we throw up in the air) with a drag force. Even though air resistance is largely proportional to the square of the velocity, we will consider the drag force to be linear in the velocity, $\boldsymbol{F}=-m\gamma\boldsymbol{v}$, for the purposes of this exercise. The acceleration for a projectile moving upwards, $\boldsymbol{a}=\boldsymbol{F}/m$, becomes

$$
\begin{eqnarray}
\frac{dv_x}{dt}=-\gamma v_x,\\
\nonumber
\frac{dv_y}{dt}=-\gamma v_y-g,
\end{eqnarray}
$$

and $\gamma$ has dimensions of inverse time.

If you on the other hand have a falling raindrop, how do these equations change? See for example Figure 2.1 in Taylor. Let us stay with a ball which is thrown up in the air at $t=0$.

## Ways of solving these equations

We will go over two different ways to solve this equation. The first is by direct integration, and the second is as a differential equation. To do this by direct integration, one simply multiplies both sides of the equations above by $dt$, then divides by the appropriate factors so that the $v$s are all on one side of the equation and the $dt$ is on the other. For the $x$ motion one finds an easily integrable equation,

$$
\begin{eqnarray}
\frac{dv_x}{v_x}&=&-\gamma dt,\\
\nonumber
\int_{v_{0x}}^{v_{x}}\frac{dv_x}{v_x}&=&-\gamma\int_0^{t}dt,\\
\nonumber
\ln\left(\frac{v_{x}}{v_{0x}}\right)&=&-\gamma t,\\
\nonumber
v_{x}(t)&=&v_{0x}e^{-\gamma t}.
\end{eqnarray}
$$

This is very much the result you would have written down by inspection. 
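As a quick sanity check, we can integrate $dv_x/dt=-\gamma v_x$ numerically with a simple forward Euler step and compare against this analytical solution. The parameter values below ($\gamma$, $v_{0x}$, the step size) are arbitrary illustrative choices, not values from the text.

```python
import numpy as np

# Illustrative parameters (arbitrary choices)
gamma = 0.5    # drag parameter, units of 1/s
v0x = 10.0     # initial x-velocity, m/s
dt = 1.0e-4    # time step, s
t_max = 5.0    # total integration time, s

# Forward Euler integration of dv_x/dt = -gamma*v_x
n_steps = int(t_max / dt)
vx = v0x
for _ in range(n_steps):
    vx += -gamma * vx * dt

# Compare with the analytical solution v_x(t) = v0x*exp(-gamma*t)
vx_exact = v0x * np.exp(-gamma * t_max)
print(vx, vx_exact)  # the two agree to several decimal places for small dt
```

Shrinking `dt` further brings the numerical value ever closer to the exponential, which is the kind of convergence test we will use repeatedly in this course.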
For the $y$-component of the velocity,

$$
\begin{eqnarray}
\frac{dv_y}{v_y+g/\gamma}&=&-\gamma dt,\\
\nonumber
\ln\left(\frac{v_{y}+g/\gamma}{v_{0y}+g/\gamma}\right)&=&-\gamma t,\\
\nonumber
v_{y}(t)&=&-\frac{g}{\gamma}+\left(v_{0y}+\frac{g}{\gamma}\right)e^{-\gamma t}.
\end{eqnarray}
$$

Whereas $v_x$ starts at some value and decays exponentially to zero, $v_y$ decays exponentially to the terminal velocity, $v_t=-g/\gamma$.

## Solving as differential equations

Although this direct integration is simpler than the method we invoke below, the method below will come in useful for some slightly more difficult differential equations in the future. The differential equation for $v_x$ is straightforward to solve. Because it is first order there is one arbitrary constant, $A$, and by inspection the solution is
\n\n$$\n\\begin{equation}\nv_x=Ae^{-\\gamma t}.\n\\label{_auto5} \\tag{5}\n\\end{equation}\n$$\n\nThe arbitrary constants for equations of motion are usually determined\nby the initial conditions, or more generally boundary conditions. By\ninspection $A=v_{0x}$, the initial $x$ component of the velocity.\n\n\n\n## Differential Equations, contn\n\nThe differential equation for $v_y$ is a bit more complicated due to\nthe presence of $g$. Differential equations where all the terms are\nlinearly proportional to a function, in this case $v_y$, or to\nderivatives of the function, e.g., $v_y$, $dv_y/dt$,\n$d^2v_y/dt^2\\cdots$, are called linear differential equations. If\nthere are terms proportional to $v^2$, as would happen if the drag\nforce were proportional to the square of the velocity, the\ndifferential equation is not longer linear. Because this expression\nhas only one derivative in $v$ it is a first-order linear differential\nequation. If a term were added proportional to $d^2v/dt^2$ it would be\na second-order differential equation. In this case we have a term\ncompletely independent of $v$, the gravitational acceleration $g$, and\nthe usual strategy is to first rewrite the equation with all the\nlinear terms on one side of the equal sign,\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{dv_y}{dt}+\\gamma v_y=-g.\n\\label{_auto6} \\tag{6}\n\\end{equation}\n$$\n\n## Splitting into two parts\n\nNow, the solution to the equation can be broken into two\nparts. Because this is a first-order differential equation we know\nthat there will be one arbitrary constant. Physically, the arbitrary\nconstant will be determined by setting the initial velocity, though it\ncould be determined by setting the velocity at any given time. Like\nmost differential equations, solutions are not \"solved\". Instead,\none guesses at a form, then shows the guess is correct. For these\ntypes of equations, one first tries to find a single solution,\ni.e. one with no arbitrary constants. This is called the {\\it\nparticular} solution, $y_p(t)$, though it should really be called\n\"a\" particular solution because there are an infinite number of such\nsolutions. One then finds a solution to the {\\it homogenous} equation,\nwhich is the equation with zero on the right-hand side,\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{dv_{y,h}}{dt}+\\gamma v_{y,h}=0.\n\\label{_auto7} \\tag{7}\n\\end{equation}\n$$\n\nHomogenous solutions will have arbitrary constants. \n\nThe particular solution will solve the same equation as the original\ngeneral equation\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{dv_{y,p}}{dt}+\\gamma v_{y,p}=-g.\n\\label{_auto8} \\tag{8}\n\\end{equation}\n$$\n\nHowever, we don't need find one with arbitrary constants. Hence, it is\ncalled a **particular** solution.\n\nThe sum of the two,\n\n\n
\n\n$$\n\\begin{equation}\nv_y=v_{y,p}+v_{y,h},\n\\label{_auto9} \\tag{9}\n\\end{equation}\n$$\n\nis a solution of the total equation because of the linear nature of\nthe differential equation. One has now found a *general* solution\nencompassing all solutions, because it both satisfies the general\nequation (like the particular solution), and has an arbitrary constant\nthat can be adjusted to fit any initial condition (like the homogneous\nsolution). If the equation were not linear, e.g if there were a term\nsuch as $v_y^2$ or $v_y\\dot{v}_y$, this technique would not work.\n\n\n## More details\n\nReturning to the example above, the homogenous solution is the same as\nthat for $v_x$, because there was no gravitational acceleration in\nthat case,\n\n\n
\n\n$$\n\\begin{equation}\nv_{y,h}=Be^{-\\gamma t}.\n\\label{_auto10} \\tag{10}\n\\end{equation}\n$$\n\nIn this case a particular solution is one with constant velocity,\n\n\n
\n\n$$\n\\begin{equation}\nv_{y,p}=-g/\\gamma.\n\\label{_auto11} \\tag{11}\n\\end{equation}\n$$\n\nNote that this is the terminal velocity of a particle falling from a\ngreat height. The general solution is thus,\n\n\n
\n\n$$\n\\begin{equation}\nv_y=Be^{-\\gamma t}-g/\\gamma,\n\\label{_auto12} \\tag{12}\n\\end{equation}\n$$\n\nand one can find $B$ from the initial velocity,\n\n\n
\n\n$$\n\\begin{equation}\nv_{0y}=B-g/\\gamma,~~~B=v_{0y}+g/\\gamma.\n\\label{_auto13} \\tag{13}\n\\end{equation}\n$$\n\nPlugging in the expression for $B$ gives the $y$ motion given the initial velocity,\n\n\n
\n\n$$\n\\begin{equation}\nv_y=(v_{0y}+g/\\gamma)e^{-\\gamma t}-g/\\gamma.\n\\label{_auto14} \\tag{14}\n\\end{equation}\n$$\n\nIt is easy to see that this solution has $v_y=v_{0y}$ when $t=0$ and\n$v_y=-g/\\gamma$ when $t\\rightarrow\\infty$.\n\nOne can also integrate the two equations to find the coordinates $x$\nand $y$ as functions of $t$,\n\n$$\n\\begin{eqnarray}\nx&=&\\int_0^t dt'~v_{0x}(t')=\\frac{v_{0x}}{\\gamma}\\left(1-e^{-\\gamma t}\\right),\\\\\n\\nonumber\ny&=&\\int_0^t dt'~v_{0y}(t')=-\\frac{gt}{\\gamma}+\\frac{v_{0y}+g/\\gamma}{\\gamma}\\left(1-e^{-\\gamma t}\\right).\n\\end{eqnarray}\n$$\n\nIf the question was to find the position at a time $t$, we would be\nfinished. However, the more common goal in a projectile equation\nproblem is to find the range, i.e. the distance $x$ at which $y$\nreturns to zero. For the case without a drag force this was much\nsimpler. The solution for the $y$ coordinate would have been\n$y=v_{0y}t-gt^2/2$. One would solve for $t$ to make $y=0$, which would\nbe $t=2v_{0y}/g$, then plug that value for $t$ into $x=v_{0x}t$ to\nfind $x=2v_{0x}v_{0y}/g=v_0\\sin(2\\theta_0)/g$. One follows the same\nsteps here, except that the expression for $y(t)$ is more\ncomplicated. Searching for the time where $y=0$, and we get\n\n\n
\n\n$$\n\\begin{equation}\n0=-\\frac{gt}{\\gamma}+\\frac{v_{0y}+g/\\gamma}{\\gamma}\\left(1-e^{-\\gamma t}\\right).\n\\label{_auto15} \\tag{15}\n\\end{equation}\n$$\n\nThis cannot be inverted into a simple expression $t=\\cdots$. Such\nexpressions are known as \"transcendental equations\", and are not the\nrare instance, but are the norm. In the days before computers, one\nmight plot the right-hand side of the above graphically as\na function of time, then find the point where it crosses zero.\n\nNow, the most common way to solve for an equation of the above type\nwould be to apply Newton's method numerically. This involves the\nfollowing algorithm for finding solutions of some equation $F(t)=0$.\n\n1. First guess a value for the time, $t_{\\rm guess}$.\n\n2. Calculate $F$ and its derivative, $F(t_{\\rm guess})$ and $F'(t_{\\rm guess})$. \n\n3. Unless you guessed perfectly, $F\\ne 0$, and assuming that $\\Delta F\\approx F'\\Delta t$, one would choose \n\n4. $\\Delta t=-F(t_{\\rm guess})/F'(t_{\\rm guess})$.\n\n5. Now repeat step 1, but with $t_{\\rm guess}\\rightarrow t_{\\rm guess}+\\Delta t$.\n\nIf the $F(t)$ were perfectly linear in $t$, one would find $t$ in one\nstep. Instead, one typically finds a value of $t$ that is closer to\nthe final answer than $t_{\\rm guess}$. One breaks the loop once one\nfinds $F$ within some acceptable tolerance of zero. 
## Motion in a Magnetic Field

Another example of a velocity-dependent force is magnetism,

$$
\begin{eqnarray}
\boldsymbol{F}&=&q\boldsymbol{v}\times\boldsymbol{B},\\
\nonumber
F_i&=&q\sum_{jk}\epsilon_{ijk}v_jB_k.
\end{eqnarray}
$$

For a uniform field in the $z$ direction $\boldsymbol{B}=B\hat{z}$, the force can only have $x$ and $y$ components,

$$
\begin{eqnarray}
F_x&=&qBv_y,\\
\nonumber
F_y&=&-qBv_x.
\end{eqnarray}
$$

The differential equations are

$$
\begin{eqnarray}
\dot{v}_x&=&\omega_c v_y,\qquad\omega_c= qB/m,\\
\nonumber
\dot{v}_y&=&-\omega_c v_x.
\end{eqnarray}
$$

One can solve the equations by taking time derivatives of either equation, then substituting into the other equation,

$$
\begin{eqnarray}
\ddot{v}_x&=&\omega_c\dot{v}_y=-\omega_c^2v_x,\\
\nonumber
\ddot{v}_y&=&-\omega_c\dot{v}_x=-\omega_c^2v_y.
\end{eqnarray}
$$

The solution to these equations can be seen by inspection,

$$
\begin{eqnarray}
v_x&=&A\sin(\omega_ct+\phi),\\
\nonumber
v_y&=&A\cos(\omega_ct+\phi).
\end{eqnarray}
$$

One can integrate the equations to find the positions as a function of time,

$$
\begin{eqnarray}
x-x_0&=&\int_0^t dt'~v_x(t')=\frac{-A}{\omega_c}\cos(\omega_ct+\phi),\\
\nonumber
y-y_0&=&\frac{A}{\omega_c}\sin(\omega_ct+\phi).
\end{eqnarray}
$$

The trajectory is a circle centered at $(x_0,y_0)$ with radius $A/\omega_c$, traversed in the clockwise direction.

The equations of motion for the $z$ motion are
\n\n$$\n\\begin{equation}\n\\dot{v_z}=0,\n\\label{_auto16} \\tag{16}\n\\end{equation}\n$$\n\nwhich leads to\n\n\n
\n\n$$\n\\begin{equation}\nz-z_0=V_zt.\n\\label{_auto17} \\tag{17}\n\\end{equation}\n$$\n\nAdded onto the circle, the motion is helical.\n\nNote that the kinetic energy,\n\n\n
\n\n$$\n\\begin{equation}\nT=\\frac{1}{2}m(v_x^2+v_y^2+v_z^2)=\\frac{1}{2}m(\\omega_c^2A^2+V_z^2),\n\\label{_auto18} \\tag{18}\n\\end{equation}\n$$\n\nis constant. This is because the force is perpendicular to the\nvelocity, so that in any differential time element $dt$ the work done\non the particle $\\boldsymbol{F}\\cdot{dr}=dt\\boldsymbol{F}\\cdot{v}=0$.\n\nOne should think about the implications of a velocity dependent\nforce. Suppose one had a constant magnetic field in deep space. If a\nparticle came through with velocity $v_0$, it would undergo cyclotron\nmotion with radius $R=v_0/\\omega_c$. However, if it were still its\nmotion would remain fixed. Now, suppose an observer looked at the\nparticle in one reference frame where the particle was moving, then\nchanged their velocity so that the particle's velocity appeared to be\nzero. The motion would change from circular to fixed. Is this\npossible?\n\nThe solution to the puzzle above relies on understanding\nrelativity. Imagine that the first observer believes $\\boldsymbol{B}\\ne 0$ and\nthat the electric field $\\boldsymbol{E}=0$. If the observer then changes\nreference frames by accelerating to a velocity $\\boldsymbol{v}$, in the new\nframe $\\boldsymbol{B}$ and $\\boldsymbol{E}$ both change. If the observer moved to the\nframe where the charge, originally moving with a small velocity $v$,\nis now at rest, the new electric field is indeed $\\boldsymbol{v}\\times\\boldsymbol{B}$,\nwhich then leads to the same acceleration as one had before. If the\nvelocity is not small compared to the speed of light, additional\n$\\gamma$ factors come into play,\n$\\gamma=1/\\sqrt{1-(v/c)^2}$. Relativistic motion will not be\nconsidered in this course.\n\n\n\n\n## Sliding Block tied to a Wall\n\nAnother classical case is that of simple harmonic oscillations, here represented by a block sliding on a horizontal frictionless surface. The block is tied to a wall with a spring. 
If the spring is not compressed or stretched too far, the force on the block at a given position $x$ is\n\n$$\nF=-kx.\n$$\n\nThe negative sign means that the force acts to restore the object to an equilibrium position. Newton's equation of motion for this idealized system is then\n\n$$\nm\\frac{d^2x}{dt^2}=-kx,\n$$\n\nor we could rephrase it as\n\n\n
\n\n$$\n\\frac{d^2x}{dt^2}=-\\frac{k}{m}x=-\\omega_0^2x,\n\\label{eq:newton1} \\tag{19}\n$$\n\nwith the angular frequency $\\omega_0^2=k/m$. \n\nThe above differential equation has the advantage that it can be solved analytically with solutions on the form\n\n$$\nx(t)=Acos(\\omega_0t+\\nu),\n$$\n\nwhere $A$ is the amplitude and $\\nu$ the phase constant. This provides in turn an important test for the numerical\nsolution and the development of a program for more complicated cases which cannot be solved analytically. \n\n\nWith the position $x(t)$ and the velocity $v(t)=dx/dt$ we can reformulate Newton's equation in the following way\n\n$$\n\\frac{dx(t)}{dt}=v(t),\n$$\n\nand\n\n$$\n\\frac{dv(t)}{dt}=-\\omega_0^2x(t).\n$$\n\nWe are now going to solve these equations using first the standard forward Euler method. Later we will try to improve upon this.\n\n\nBefore proceeding however, it is important to note that in addition to the exact solution, we have at least two further tests which can be used to check our solution. \n\nSince functions like $cos$ are periodic with a period $2\\pi$, then the solution $x(t)$ has also to be periodic. This means that\n\n$$\nx(t+T)=x(t),\n$$\n\nwith $T$ the period defined as\n\n$$\nT=\\frac{2\\pi}{\\omega_0}=\\frac{2\\pi}{\\sqrt{k/m}}.\n$$\n\nObserve that $T$ depends only on $k/m$ and not on the amplitude of the solution. \n\n\nIn addition to the periodicity test, the total energy has also to be conserved. 


Suppose we choose the initial conditions

$$
x(t=0)=1\hspace{0.1cm} \mathrm{m}\hspace{1cm} v(t=0)=0\hspace{0.1cm}\mathrm{m/s},
$$

meaning that the block is at rest at $t=0$ but with a potential energy

$$
E_0=\frac{1}{2}kx(t=0)^2=\frac{1}{2}k.
$$

The total energy at any time $t$ has however to be conserved, meaning that our solution has to fulfil the condition

$$
E_0=\frac{1}{2}kx(t)^2+\frac{1}{2}mv(t)^2.
$$

We will derive this equation in our discussion on [energy conservation](https://mhjensen.github.io/Physics321/doc/pub/energyconserv/html/energyconserv.html).

An algorithm which implements these equations is included below.
 * Choose the initial position and speed, with the most common choice $v(t=0)=0$ and some fixed value for the position.

 * Choose the method you wish to employ in solving the problem.

 * Subdivide the time interval $[t_i,t_f]$ into a grid with step size

$$
h=\frac{t_f-t_i}{N},
$$

where $N$ is the number of mesh points.
 * Calculate now the total energy given by

$$
E_0=\frac{1}{2}kx(t=0)^2=\frac{1}{2}k.
$$

 * Choose an ODE solver to obtain $x_{i+1}$ and $v_{i+1}$ starting from the previous values $x_i$ and $v_i$.

 * When we have computed $x_{i+1}$ and $v_{i+1}$ we update $t_{i+1}=t_i+h$.

 * This iterative process continues until we reach the maximum time $t_f$.

 * The results are checked against the exact solution. Furthermore, one has to check the stability of the numerical solution against the chosen number of mesh points $N$.

The following Python program implements these steps using the forward Euler method.

```
#
# This program solves Newton's equation for a block sliding on
# a horizontal frictionless surface. The block is tied to the wall
# with a spring, so Newton's equation takes the form:
#
#   m d^2x/dt^2 = -kx
#
# In order to make the solution dimensionless, we set k/m = 1.
# This results in two coupled differential equations that may be
# written as:
#
#   dx/dt = v
#   dv/dt = -x
#
# The user has to specify the initial position and velocity and
# the number of steps. The time interval is fixed to
# t \in [0, 4\pi) (two periods).
#
import numpy as np

x0, v0 = 1.0, 0.0                 # initial position and velocity
N = 10000                         # number of mesh points
t = np.linspace(0.0, 4*np.pi, N)
h = t[1] - t[0]                   # step size

x = np.zeros(N)
v = np.zeros(N)
x[0], v[0] = x0, v0
for i in range(N-1):              # forward Euler steps
    x[i+1] = x[i] + h*v[i]
    v[i+1] = v[i] - h*x[i]

# compare with the exact solution x(t) = x0*cos(t)
print(np.max(np.abs(x - x0*np.cos(t))))
```

## The classical pendulum and scaling the equations

The angular equation of motion of the pendulum is given by Newton's equation and with no external force it reads
\n\n$$\n\\begin{equation}\n ml\\frac{d^2\\theta}{dt^2}+mgsin(\\theta)=0,\n\\label{_auto19} \\tag{20}\n\\end{equation}\n$$\n\nwith an angular velocity and acceleration given by\n\n\n
\n\n$$\n\\begin{equation}\n v=l\\frac{d\\theta}{dt},\n\\label{_auto20} \\tag{21}\n\\end{equation}\n$$\n\nand\n\n\n
\n\n$$\n\begin{equation}\n a=l\frac{d^2\theta}{dt^2}.\n\label{_auto21} \tag{22}\n\end{equation}\n$$\n\n## More on the Pendulum\n\nWe do however expect that the motion will gradually come to an end due to a viscous drag torque acting on the pendulum. \nIn the presence of the drag, the above equation becomes\n\n\n
\n\n$$\n\begin{equation}\n ml\frac{d^2\theta}{dt^2}+\nu\frac{d\theta}{dt} +mg\sin(\theta)=0, \label{eq:pend1} \tag{23}\n\end{equation}\n$$\n\nwhere $\nu$ is now a positive constant parameterizing the viscosity\nof the medium in question. In order to maintain the motion against\nviscosity, it is necessary to add some external driving force. \nWe choose here a periodic driving force. The equation then becomes\n\n\n
\n\n$$\n\begin{equation}\n ml\frac{d^2\theta}{dt^2}+\nu\frac{d\theta}{dt} +mg\sin(\theta)=A\sin(\omega t), \label{eq:pend2} \tag{24}\n\end{equation}\n$$\n\nwith $A$ and $\omega$ two constants representing the amplitude and \nthe angular frequency respectively. The latter is called the driving frequency.\n\n\n\n\n## More on the Pendulum\n\nWe define\n\n$$\n\omega_0=\sqrt{g/l},\n$$\n\nthe so-called natural frequency and the new dimensionless quantities\n\n$$\n\hat{t}=\omega_0t,\n$$\n\nwith the dimensionless driving frequency\n\n$$\n\hat{\omega}=\frac{\omega}{\omega_0},\n$$\n\nand introducing the quantity $Q$, called the *quality factor*,\n\n$$\nQ=\frac{mg}{\omega_0\nu},\n$$\n\nand the dimensionless amplitude\n\n$$\n\hat{A}=\frac{A}{mg}.\n$$\n\nWe have\n\n$$\n\frac{d^2\theta}{d\hat{t}^2}+\frac{1}{Q}\frac{d\theta}{d\hat{t}} \n +\sin(\theta)=\hat{A}\cos(\hat{\omega}\hat{t}).\n$$\n\nThis equation can in turn be recast in terms of two coupled first-order differential equations as follows\n\n$$\n\frac{d\theta}{d\hat{t}}=\hat{v},\n$$\n\nand\n\n$$\n\frac{d\hat{v}}{d\hat{t}}=-\frac{\hat{v}}{Q}-\sin(\theta)+\hat{A}\cos(\hat{\omega}\hat{t}).\n$$\n\nThese are the equations to be solved. The factor $Q$ represents the number of oscillations of the undriven system that must occur before its energy is significantly reduced due to the viscous drag. 
The amplitude $\hat{A}$ is measured in units of the maximum possible gravitational torque while $\hat{\omega}$ is the angular frequency of the external torque measured in units of the pendulum's natural frequency.\n\n*This notebook contains material from [CBE40455-2020](https://jckantor.github.io/CBE40455-2020);\ncontent is available [on Github](https://github.com/jckantor/CBE40455-2020.git).*\n\n\n\n< [3.9 Refinements of a Grocery Store Checkout Operation](https://jckantor.github.io/CBE40455-2020/03.09-Refinements-to-the-Grocery-Store-Checkout-Operation.html) | [Contents](toc.html) | [3.11 Batch Chemical Process](https://jckantor.github.io/CBE40455-2020/03.11-Project-Batch-Chemical-Process.html) >

\n\n# 3.10 Object-Oriented Simulation\n\nUp to this point we have been using Python generators and shared resources as the building blocks for simulations of complex systems. This can be effective, particularly if the individual agents do not require access to the internal state of other agents. But there are situations where the action of an agent depends on the state or properties of another agent in the simulation. For example, consider this discussion question from the Grocery store checkout example:\n\n>Suppose we were to change one or more of the lanes to express lanes which handle only a small number of items, say five or fewer. How would you expect this to change average waiting time? This is a form of prioritization ... are there other prioritizations that you might consider?\n\nThe customer action depends on the item limit parameter associated with a checkout lane. This is a case where the action of one agent depends on a property of another. The shared resources built in to the SimPy library provide some functionality in this regard, but how do we add this to the simulations we write?\n\nThe good news is that Python offers a rich array of object-oriented programming features well suited to this purpose. The SimPy documentation provides excellent examples of how to create Python objects for use in SimPy. The bad news is that object-oriented programming in Python -- while straightforward compared to many other programming languages -- constitutes a steep learning curve for students unfamiliar with the core concepts.\n\nFortunately, since the introduction of Python 3.7 in 2018, the standard libraries for Python have included a simplified method for creating and using Python classes. Using [dataclass](https://realpython.com/python-data-classes/), it is easy to create objects for SimPy simulations that retain the benefits of object-oriented programming without all of the coding overhead. 
\n\nThe purpose of this notebook is to introduce the use of `dataclass` in creating SimPy simulations. To the best of the author's knowledge, this is a novel use of `dataclass` and the only example of which the author is aware.\n\n## 3.10.1 Installations and imports\n\n\n```python\n!pip install simpy\n```\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport random\nimport simpy\nimport pandas as pd\nfrom dataclasses import dataclass\n```\n\n\n```python\nimport sys\nprint(sys.version)\n```\n\n    3.7.4 (default, Aug 13 2019, 15:17:50) \n    [Clang 4.0.1 (tags/RELEASE_401/final)]\n\n\nAdditional imports are from the `dataclasses` library that has been part of the standard Python distribution since version 3.7. Here we import `dataclass` and `field`.\n\n\n```python\nfrom dataclasses import dataclass, field\n```\n\n## 3.10.2 Introduction to `dataclass`\n\nTutorials and additional documentation:\n\n* [The Ultimate Guide to Data Classes in Python 3.7](https://realpython.com/python-data-classes/): Tutorial article from RealPython.com\n* [dataclasses \u2014 Data Classes](https://docs.python.org/3/library/dataclasses.html): Official Python documentation.\n* [Data Classes in Python](https://towardsdatascience.com/data-classes-in-python-8d1a09c1294b): Tutorial from TowardsDataScience.com\n\n### 3.10.2.1 Creating a `dataclass`\n\nA `dataclass` defines a new class of Python objects. A `dataclass` object takes care of several routine things that you would otherwise have to code, such as creating instances of an object, testing for equality, and other aspects. \n\nAs an example, the following cell shows how to define a dataclass corresponding to a hypothetical Student object. 
The Student object maintains data associated with instances of a student. The dataclass also defines a function associated with the object.\n\n\n```python\nfrom dataclasses import dataclass\n\n@dataclass\nclass Student():\n    name: str\n    graduation_class: int\n    dorm: str\n    \n    def print_name(self):\n        print(f\"{self.name} (Class of {self.graduation_class})\")\n```\n\nLet's create an instance of the Student object.\n\n\n```python\nsam = Student(\"Sam Jones\", 2024, \"Alumni\")\n```\n\nLet's see how the `print_name()` function works.\n\n\n```python\nsam.print_name()\n```\n\n    Sam Jones (Class of 2024)\n\n\nThe next cell shows how to create a list of students, and how to iterate over a list of students.\n\n\n```python\n# create a list of students\nstudents = [\n    Student(\"Sam Jones\", 2024, \"Alumni\"),\n    Student(\"Becky Smith\", 2023, \"Howard\"),\n]\n\n# iterate over the list of students to print all of their names\nfor student in students:\n    student.print_name()\n    print(student.dorm)\n```\n\n    Sam Jones (Class of 2024)\n    Alumni\n    Becky Smith (Class of 2023)\n    Howard\n\n\nHere are a few details you need to use `dataclass` effectively:\n\n* The `class` statement is the standard statement for creating a new class of Python objects. The preceding `@dataclass` is a Python 'decorator'. Decorators are Python functions that modify the behavior of subsequent statements. In this case, the `@dataclass` decorator modifies `class` to provide a streamlined syntax for implementing classes.\n* Python class names begin with a capital letter. In this case `Student` is the class name.\n* The lines following the class statement declare parameters that will be used by the new class. The parameters can be specified when you create an instance of the dataclass. \n* Each parameter is followed by a type 'hint'. Commonly used type hints are `int`, `float`, `bool`, and `str`. Use the keyword `any` if you don't know or can't specify a particular type. 
Type hints are actually used by type-checking tools and ignored by the Python interpreter.\n* Following the parameters, write any functions or generators that you may wish to define for the new class. To access variables unique to an instance of the class, precede the parameter name with `self`.\n\n### 3.10.2.2 Specifying parameter values\n\nThere are different ways of specifying the parameter values assigned to an instance of a dataclass. Here are three particular methods:\n\n* Specify the parameter value when creating a new instance. This is what was done in the Student example above.\n* Provide default values determined when the dataclass is defined.\n* Provide a default_factory method to create a parameter value when an instance of the dataclass is created.\n\n#### 3.10.2.2.1 Specifying a parameter value when creating a new instance\n\nParameter values can be specified when creating an instance of a dataclass. The parameter values can be specified by position or by name as shown below.\n\n\n```python\nfrom dataclasses import dataclass\n\n@dataclass\nclass Student():\n    name: str\n    graduation_year: int\n    dorm: str\n    \n    def print_name(self):\n        print(f\"{self.name} (Class of {self.graduation_year})\")\n    \nsam = Student(\"Sam Jones\", 2031, \"Alumni\")\nsam.print_name()\n\ngilda = Student(name=\"Gilda Radner\", graduation_year=2030, dorm=\"Howard\")\ngilda.print_name()\n```\n\n    Sam Jones (Class of 2031)\n    Gilda Radner (Class of 2030)\n\n\n#### 3.10.2.2.2 Setting default parameter values\n\nSetting a default value for a parameter can save extra typing or coding. More importantly, setting default values makes it easier to maintain and adapt code for other applications, and is a convenient way to handle missing data. \n\nThere are two ways to set default parameter values. 
For str, int, float, bool, tuple (the immutable types in Python), a default value can be set using `=` as shown in the next cell.\n\n\n```python\nfrom dataclasses import dataclass\n\n@dataclass\nclass Student():\n    name: str = None\n    graduation_year: int = None\n    dorm: str = None\n    \n    def print_name(self):\n        print(f\"{self.name} (Class of {self.graduation_year})\")\n    \njdoe = Student(name=\"John Doe\", dorm=\"Alumni\")\njdoe.print_name()\n```\n\n    John Doe (Class of None)\n\n\nDefault parameter values are restricted to 'immutable' types. This technical restriction eliminates the error-prone practice of using mutable objects, such as lists, as defaults. The difficulty with setting defaults for mutable objects is that all instances of the dataclass share the same value. If one instance of the object changes that value, then all other instances are affected. This leads to unpredictable behavior, and is a particularly nasty bug to uncover and fix.\n\nThere are two ways to provide defaults for mutable parameters such as lists, sets, dictionaries, or arbitrary Python objects. \n\nThe more direct way is to specify a function for constructing the default parameter value using the `field` statement with the `default_factory` option. The default_factory is called when a new instance of the dataclass is created. The function must take no arguments and must return a value that will be assigned to the designated parameter. Here's an example.\n\n\n```python\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass Student():\n    name: str = None\n    graduation_year: int = None\n    dorm: str = None\n    majors: list = field(default_factory=list)\n    \n    def print_name(self):\n        print(f\"{self.name} (Class of {self.graduation_year})\")\n    \n    def print_majors(self):\n        for n, major in enumerate(self.majors):\n            print(f\" {n+1}. 
{major}\")\n    \njdoe = Student(name=\"John Doe\", dorm=\"Alumni\", majors=[\"Math\", \"Chemical Engineering\"])\njdoe.print_name()\njdoe.print_majors()\n\nStudent().print_majors()\n```\n\n    John Doe (Class of None)\n     1. Math\n     2. Chemical Engineering\n\n\n#### 3.10.2.2.3 Initializing a dataclass with __post_init__(self)\n\nFrequently there are additional steps to complete when creating a new instance of a dataclass. For that purpose, a dataclass may contain an\noptional function with the special name `__post_init__(self)`. If present, that function is run automatically following the creation of a new instance. This feature will be demonstrated in the following reimplementation of the grocery store checkout operation.\n\n## 3.10.3 Using `dataclass` with SimPy\n\n### 3.10.3.1 Step 0. A simple model\n\nTo demonstrate the use of classes in SimPy simulations, let's begin with a simple model of a clock using generators.\n\n\n```python\nimport simpy\n\ndef clock(id=\"\", t_step=1.0):\n    while True:\n        print(id, env.now)\n        yield env.timeout(t_step)\n    \nenv = simpy.Environment()\nenv.process(clock(\"A\"))\nenv.process(clock(\"B\", 1.5))\nenv.run(until=5.0)\n```\n\n    A 0\n    B 0\n    A 1.0\n    B 1.5\n    A 2.0\n    B 3.0\n    A 3.0\n    A 4.0\n    B 4.5\n\n\n### 3.10.3.2 Step 1. Embed the generator inside of a class\n\nAs a first step, we rewrite the generator as a Python dataclass named `Clock`. The parameters are given default values, and the generator is incorporated within the Clock object. 
Note the use of `self` to refer to parameters specific to an instance of the class.\n\n\n```python\nimport simpy\nfrom dataclasses import dataclass\n\n@dataclass\nclass Clock():\n id: str = \"\"\n t_step: float = 1.0\n \n def process(self):\n while True:\n print(self.id, env.now)\n yield env.timeout(self.t_step)\n\nenv = simpy.Environment()\nenv.process(Clock(\"A\").process())\nenv.process(Clock(\"B\", 1.5).process())\nenv.run(until=5)\n```\n\n A 0\n B 0\n A 1.0\n B 1.5\n A 2.0\n B 3.0\n A 3.0\n A 4.0\n B 4.5\n\n\n### 3.10.3.3 Step 2. Eliminate (if possible) global variables\n\nOur definition of clock requires the simulation environment to have a specific name `env`, and assumes env is a global variable. That's generally not a good coding practice because it imposes an assumption on any user of the class, and exposes the internal coding of the class. A much better practice is to use class parameters to pass this data through a well defined interface to the class.\n\n\n```python\nimport simpy\nfrom dataclasses import dataclass\n\n@dataclass\nclass Clock():\n env: simpy.Environment\n id: str = \"\"\n t_step: float = 1.0\n \n def process(self):\n while True:\n print(self.id, self.env.now)\n yield self.env.timeout(self.t_step)\n\nenv = simpy.Environment()\nenv.process(Clock(env, \"A\").process())\nenv.process(Clock(env, \"B\", 1.5).process())\nenv.run(until=10)\n```\n\n A 0\n B 0\n A 1.0\n B 1.5\n A 2.0\n B 3.0\n A 3.0\n A 4.0\n B 4.5\n A 5.0\n B 6.0\n A 6.0\n A 7.0\n B 7.5\n A 8.0\n B 9.0\n A 9.0\n\n\n### 3.10.3.4 Step 3. 
Encapsulate initializations inside __post_init__\n\n\n```python\nimport simpy\nfrom dataclasses import dataclass\n\n@dataclass\nclass Clock():\n    env: simpy.Environment\n    id: str = \"\"\n    t_step: float = 1.0\n    \n    def __post_init__(self):\n        self.env.process(self.process())\n    \n    def process(self):\n        while True:\n            print(self.id, self.env.now)\n            yield self.env.timeout(self.t_step)\n\nenv = simpy.Environment()\nClock(env, \"A\")\nClock(env, \"B\", 1.5)\nenv.run(until=5)\n```\n\n    A 0\n    B 0\n    A 1.0\n    B 1.5\n    A 2.0\n    B 3.0\n    A 3.0\n    A 4.0\n    B 4.5\n\n\n## 3.10.4 Grocery Store Model\n\nLet's review our model for the grocery store checkout operations. There are multiple checkout lanes, each with potentially different characteristics. With generators we were able to implement differences in the time required to scan items. But another parameter, a limit on the number of items that could be checked out in a lane, required a new global list. The reason was the need to access that parameter, something that a generator doesn't allow. This is where classes become important building blocks in creating more complex simulations.\n\nOur new strategy will be to encapsulate the generator inside of a dataclass object. Here's what we'll ask each class definition to do:\n\n* Create a parameter corresponding to the simulation environment. 
This makes our classes reusable in other simulations by eliminating a reference to a global variable.\n* Create parameters with reasonable default values.\n* Initialize any objects used within the class.\n* Register the class generator with the simulation environment.\n\n\n\n```python\nfrom dataclasses import dataclass\n\n# create simulation models\n@dataclass\nclass Checkout():\n    env: simpy.Environment\n    lane: simpy.Store = None\n    t_item: float = 1/10\n    item_limit: int = 25\n    t_payment: float = 2.0\n    \n    def __post_init__(self):\n        self.lane = simpy.Store(self.env)\n        self.env.process(self.process())\n    \n    def process(self):\n        while True:\n            customer_id, cart, enter_time = yield self.lane.get()\n            wait_time = env.now - enter_time\n            yield env.timeout(self.t_payment + cart*self.t_item)\n            customer_log.append([customer_id, cart, enter_time, wait_time, env.now]) \n    \n@dataclass\nclass CustomerGenerator():\n    env: simpy.Environment\n    rate: float = 1.0\n    customer_id: int = 1\n    \n    def __post_init__(self):\n        self.env.process(self.process())\n    \n    def process(self):\n        while True:\n            yield env.timeout(random.expovariate(self.rate))\n            cart = random.randint(1, 25)\n            available_checkouts = [checkout for checkout in checkouts if cart <= checkout.item_limit]\n            checkout = min(available_checkouts, key=lambda checkout: len(checkout.lane.items))\n            yield checkout.lane.put([self.customer_id, cart, env.now])\n            self.customer_id += 1\n\ndef lane_logger(t_sample=0.1):\n    while True:\n        lane_log.append([env.now] + [len(checkout.lane.items) for checkout in checkouts])\n        yield env.timeout(t_sample)\n    \n# create simulation environment\nenv = simpy.Environment()\n\n# create simulation objects (agents)\nCustomerGenerator(env)\ncheckouts = [\n    Checkout(env, t_item=1/5, item_limit=25),\n    Checkout(env, t_item=1/5, item_limit=25),\n    Checkout(env, item_limit=5),\n    Checkout(env),\n    Checkout(env),\n]\nenv.process(lane_logger())\n\n# run process\ncustomer_log = []\nlane_log = 
[]\nenv.run(until=600)\n```\n\n\n```python\ndef visualize():\n\n    # extract lane data\n    lane_df = pd.DataFrame(lane_log, columns = [\"time\"] + [f\"lane {n}\" for n in range(0, len(checkouts))])\n    lane_df = lane_df.set_index(\"time\")\n\n    customer_df = pd.DataFrame(customer_log, columns = [\"customer id\", \"cart items\", \"enter\", \"wait\", \"leave\"])\n    customer_df[\"elapsed\"] = customer_df[\"leave\"] - customer_df[\"enter\"]\n\n    # compute kpi's\n    print(f\"Average waiting time = {customer_df['wait'].mean():5.2f} minutes\")\n    print(f\"\\nAverage lane queue \\n{lane_df.mean()}\")\n    print(f\"\\nOverall average lane queue \\n{lane_df.mean().mean():5.4f}\")\n\n    # plot results\n    fig, ax = plt.subplots(3, 1, figsize=(12, 7))\n    ax[0].plot(lane_df)\n    ax[0].set_xlabel(\"time / min\")\n    ax[0].set_title(\"length of checkout lanes\")\n    ax[0].legend(lane_df.columns)\n\n    ax[1].bar(customer_df[\"customer id\"], customer_df[\"wait\"])\n    ax[1].set_xlabel(\"customer id\")\n    ax[1].set_ylabel(\"minutes\")\n    ax[1].set_title(\"customer waiting time\")\n\n    ax[2].bar(customer_df[\"customer id\"], customer_df[\"elapsed\"])\n    ax[2].set_xlabel(\"customer id\")\n    ax[2].set_ylabel(\"minutes\")\n    ax[2].set_title(\"total elapsed time\")\n    plt.tight_layout()\n    \nvisualize()\n```\n\n## 3.10.5 Customers as agents\n\n\n```python\nfrom dataclasses import dataclass\n\n# create simulation models\n@dataclass\nclass Checkout():\n    env: simpy.Environment\n    lane: simpy.Store = None\n    t_item: float = 1/10\n    item_limit: int = 25\n    t_payment: float = 2.0\n    \n    def __post_init__(self):\n        self.lane = simpy.Store(self.env)\n        self.env.process(self.process())\n    \n    def process(self):\n        while True:\n            customer_id, cart, enter_time = yield self.lane.get()\n            wait_time = env.now - enter_time\n            yield env.timeout(self.t_payment + cart*self.t_item)\n            customer_log.append([customer_id, cart, enter_time, wait_time, env.now]) \n    \n@dataclass\nclass CustomerGenerator():\n    env: simpy.Environment\n    rate: float = 1.0\n    
customer_id: int = 1\n    \n    def __post_init__(self):\n        self.env.process(self.process())\n    \n    def process(self):\n        while True:\n            yield env.timeout(random.expovariate(self.rate))\n            Customer(self.env, self.customer_id)\n            self.customer_id += 1\n    \n@dataclass\nclass Customer():\n    env: simpy.Environment\n    id: int = 0\n    \n    def __post_init__(self):\n        self.cart = random.randint(1, 25)\n        self.env.process(self.process())\n    \n    def process(self):\n        available_checkouts = [checkout for checkout in checkouts if self.cart <= checkout.item_limit]\n        checkout = min(available_checkouts, key=lambda checkout: len(checkout.lane.items))\n        yield checkout.lane.put([self.id, self.cart, env.now])\n    \n\ndef lane_logger(t_sample=0.1):\n    while True:\n        lane_log.append([env.now] + [len(checkout.lane.items) for checkout in checkouts])\n        yield env.timeout(t_sample)\n    \n# create simulation environment\nenv = simpy.Environment()\n\n# create simulation objects (agents)\nCustomerGenerator(env)\ncheckouts = [\n    Checkout(env, t_item=1/5, item_limit=25),\n    Checkout(env, t_item=1/5, item_limit=25),\n    Checkout(env, item_limit=5),\n    Checkout(env),\n    Checkout(env),\n]\nenv.process(lane_logger())\n\n# run process\ncustomer_log = []\nlane_log = []\nenv.run(until=600)\n\nvisualize()\n```\n\n## 3.10.6 Creating Smart Objects\n\n\n```python\nfrom dataclasses import dataclass, field\nimport pandas as pd\n\n# create simulation models\n@dataclass\nclass Checkout():\n    lane: simpy.Store\n    t_item: float = 1/10\n    item_limit: int = 25\n    t_payment: float = 2.0\n    \n    def process(self):\n        while True:\n            customer_id, cart, enter_time = yield self.lane.get()\n            wait_time = env.now - enter_time\n            yield env.timeout(self.t_payment + cart*self.t_item)\n            customer_log.append([customer_id, cart, enter_time, wait_time, env.now]) \n    \n@dataclass\nclass CustomerGenerator():\n    rate: float = 1.0\n    customer_id: int = 1\n    \n    def process(self):\n        while True:\n            yield env.timeout(random.expovariate(self.rate))\n            cart = random.randint(1, 25)\n            available_checkouts = [checkout for 
checkout in checkouts if cart <= checkout.item_limit]\n            checkout = min(available_checkouts, key=lambda checkout: len(checkout.lane.items))\n            yield checkout.lane.put([self.customer_id, cart, env.now])\n            self.customer_id += 1\n\n@dataclass\nclass LaneLogger():\n    lane_log: list = field(default_factory=list)  # this creates a variable that can be modified\n    t_sample: float = 0.1\n    lane_df: pd.DataFrame = field(default_factory=pd.DataFrame)\n    \n    def process(self):\n        while True:\n            self.lane_log.append([env.now] + [len(checkout.lane.items) for checkout in checkouts])\n            yield env.timeout(self.t_sample)\n    \n    def report(self):\n        self.lane_df = pd.DataFrame(self.lane_log, columns = [\"time\"] + [f\"lane {n}\" for n in range(0, len(checkouts))])\n        self.lane_df = self.lane_df.set_index(\"time\")\n        print(f\"\\nAverage lane queue \\n{self.lane_df.mean()}\")\n        print(f\"\\nOverall average lane queue \\n{self.lane_df.mean().mean():5.4f}\")\n    \n    def plot(self):\n        self.lane_df = pd.DataFrame(self.lane_log, columns = [\"time\"] + [f\"lane {n}\" for n in range(0, len(checkouts))])\n        self.lane_df = self.lane_df.set_index(\"time\") \n        fig, ax = plt.subplots(1, 1, figsize=(12, 3))\n        ax.plot(self.lane_df)\n        ax.set_xlabel(\"time / min\")\n        ax.set_title(\"length of checkout lanes\")\n        ax.legend(self.lane_df.columns) \n    \n# create simulation environment\nenv = simpy.Environment()\n\n# create simulation objects (agents)\ncustomer_generator = CustomerGenerator()\ncheckouts = [\n    Checkout(simpy.Store(env), t_item=1/5),\n    Checkout(simpy.Store(env), t_item=1/5),\n    Checkout(simpy.Store(env), item_limit=5),\n    Checkout(simpy.Store(env)),\n    Checkout(simpy.Store(env)),\n]\nlane_logger = LaneLogger()\n\n# register agents\nenv.process(customer_generator.process())\nfor checkout in checkouts:\n    env.process(checkout.process()) \nenv.process(lane_logger.process())\n\n# run process\nenv.run(until=600)\n\n# plot results\nlane_logger.report()\nlane_logger.plot()\n```\n\n\n```python\n\n```\n\n\n< [3.9 Refinements of a Grocery Store 
Checkout Operation](https://jckantor.github.io/CBE40455-2020/03.09-Refinements-to-the-Grocery-Store-Checkout-Operation.html) | [Contents](toc.html) | [3.11 Batch Chemical Process](https://jckantor.github.io/CBE40455-2020/03.11-Project-Batch-Chemical-Process.html) >

\n\n# Introduction to Python\n\n## introduction\n\n[Python](https://www.python.org/) is a simple and expressive general-purpose language. It gives convenient access to a broad collection of libraries useful in every field of computing. 
Its use in science and technology keeps growing.\n\nIt is an interpreted language, with dynamic typing and automatic memory management, that can be used both to write programs in the traditional way and to experiment in an interactive environment. It includes the most important constructs of functional programming and supports object-oriented programming.\n\nThe syntax is simple and intuitive, but a few characteristics should be kept in mind:\n\n\n- Blocks of statements in conditionals, loops and functions are delimited by the \"indentation of the code\": neither \"end\" nor `{` `}` is used.\n\n- Indices for accessing arrays or lists start at 0 and end at size-1. The sequences (*range*) used in loops or *list comprehensions* do not include the upper limit.\n\n- Some functions have the traditional syntax `f(x)`, `g(x,a)`, while others are written as `x.f()`, `x.g(a)`, etc., indicating that the \"object\" `x` is modified in some way.\n\n- Arrays and lists are \"mutable\": assigning them to another variable does **not** create a copy of the original object but a \"reference\" through which the original structure can be modified.\n\n- Functions can read the value of global variables directly, but to modify them they must be declared as `global`. Assigning variables inside a function creates local variables.\n\n- The only way to create a variable scope is to define a function. 
Loop indices remain visible after the loop ends.\n\n### installation\n\n[Anaconda](https://www.continuum.io/DOWNLOADS)\n\nIf we start from the minimal installation [miniconda](https://conda.io/miniconda.html) we need the following packages:\n\n    > conda install jupyter numpy scipy sympy matplotlib\n\nWe start the notebook \"server\" with\n\n    > jupyter notebook\n\nCode cells are evaluated by pressing Shift-Enter.\n\n## simple types\n\nCharacter strings:\n\n\n```python\ns = 'Hola' \n```\n\n\n```python\ns\n```\n\n\n\n\n    'Hola'\n\n\n\n\n```python\nprint(s)\n```\n\n    Hola\n\n\n\n```python\ntype(s)\n```\n\n\n\n\n    str\n\n\n\nDifferent kinds of delimiters and multiline strings are allowed.\n\n\n```python\n\"Hola\" + ''' amigos!'''\n```\n\n\n\n\n    'Hola amigos!'\n\n\n\nBoolean variables:\n\n\n```python\nc = 3 < 4\n```\n\n\n```python\ntype(c)\n```\n\n\n\n\n    bool\n\n\n\n\n```python\nc and (2==1+1) or not (3 != 5)\n```\n\n\n\n\n    True\n\n\n\nReal numbers approximated with double-precision floating point:\n\n\n```python\nx = 3.5\n```\n\n\n```python\ntype(x)\n```\n\n\n\n\n    float\n\n\n\nIntegers have unlimited size:\n\n\n```python\nx = 20\n```\n\n\n```python\ntype(x)\n```\n\n\n\n\n    int\n\n\n\n\n```python\nx**x\n```\n\n\n\n\n    104857600000000000000000000\n\n\n\nComplex numbers:\n\n\n```python\n(1+1j)*(1-1j)\n```\n\n\n\n\n    (2+0j)\n\n\n\n\n```python\nimport cmath\n\ncmath.sqrt(-1)\n```\n\n\n\n\n    1j\n\n\n\n## control\n\nConditionals:\n\n\n```python\nk = 7\n\nif k%2 == 0:\n    print(k,\" es par\")\nelse:\n    print(k,\" es impar\")\n    print(\"me gustan los impares\")\n```\n\n    7  es impar\n    me gustan los impares\n\n\nLoops:\n\n\n```python\nfor k in [1,2,3]:\n    print(k)\n```\n\n    1\n    2\n    3\n\n\n\n```python\nfor k in range(5):\n    print(k)\n```\n\n    0\n    1\n    2\n    3\n    4\n\n\n\n```python\nk = 1\np = 1\nwhile k < 5:\n    p = p*k\n    k = k+1\np\n```\n\n\n\n\n    24\n\n\n\n## containers\n\n### tuples\n\n\n```python\nt = 
(2,'rojo')\n```\n\n\n```python\nt\n```\n\n\n\n\n    (2, 'rojo')\n\n\n\n\n```python\nt[0]\n```\n\n\n\n\n    2\n\n\n\nTuples are immutable.\n\n### lists\n\n\n```python\nl = [1,-2,67,0,8,1,3]\n```\n\n\n```python\ntype(l)\n```\n\n\n\n\n    list\n\n\n\nA list also admits elements of different types, including other lists, tuples, or any other kind of data, although it is usual to work with homogeneous lists (with elements of the same type) whose elements can all be processed in the same way using a loop.\n\nExtracting elements (\"indexing\"), the length of the list and the sum of its elements are obtained exactly as with tuples:\n\n\n```python\nl[2], len(l), sum(l)\n```\n\n\n\n\n    (67, 7, 78)\n\n\n\nLists, however, differ in one fundamental characteristic. They are **mutable**: we can add or remove elements from them.\n\n\n```python\nl.append(28)\n\nl\n```\n\n\n\n\n    [1, -2, 67, 0, 8, 1, 3, 28]\n\n\n\n\n```python\nl += [-2,4]\n\nl\n```\n\n\n\n\n    [1, -2, 67, 0, 8, 1, 3, 28, -2, 4]\n\n\n\n\n```python\nl.remove(0)\n\nl\n```\n\n\n\n\n    [1, -2, 67, 8, 1, 3, 28, -2, 4]\n\n\n\n\n```python\nl[2] = 7\n\nl\n```\n\n\n\n\n    [1, -2, 7, 8, 1, 3, 28, -2, 4]\n\n\n\n\n```python\nl.pop()\n```\n\n\n\n\n    4\n\n\n\n\n```python\nl\n```\n\n\n\n\n    [1, -2, 7, 8, 1, 3, 28, -2]\n\n\n\n\n```python\ndel l[2]\n```\n\n\n```python\nl\n```\n\n\n\n\n    [1, -2, 8, 1, 3, 28, -2]\n\n\n\n\n```python\nl.insert(3,100)\n\nl\n```\n\n\n\n\n    [1, -2, 8, 100, 1, 3, 28, -2]\n\n\n\n### sets\n\nThe `set` type tries to reproduce the mathematical concept of a set. It is built with curly braces and duplicate elements are removed automatically.\n\n\n```python\nC = {1,2,7,1,8,2,1}\nC\n```\n\n\n\n\n    {1, 2, 7, 8}\n\n\n\nSet operations are available as symbols or as \"methods\" (functions in suffix form). 
Los detalles pueden encontrarse en la [documentaci\u00f3n](https://docs.python.org/3.6/library/stdtypes.html?highlight=set#set).\n\n\n```python\nC.union({0,8})\n```\n\n\n\n\n {0, 1, 2, 7, 8}\n\n\n\n\n```python\nC | {0,8}\n```\n\n\n\n\n {0, 1, 2, 7, 8}\n\n\n\n\n```python\nC & {5,2}\n```\n\n\n\n\n {2}\n\n\n\n\n```python\nC - {2,8,0,5}\n```\n\n\n\n\n {1, 7}\n\n\n\n\n```python\n5 in C\n```\n\n\n\n\n False\n\n\n\n\n```python\n{1,2} < {5,2,1}\n```\n\n\n\n\n True\n\n\n\n### diccionarios\n\nEs un array asociativo (el \u00edndice puede ser cualquier tipo (inmutable)). Es una estructura muy utilizada en Python.\n\n\n```python\nd = {'lunes': 8, 'martes' : [1,2,3], 3: 5}\n```\n\n\n```python\nd['martes']\n```\n\n\n\n\n [1, 2, 3]\n\n\n\n\n```python\nd.keys()\n```\n\n\n\n\n dict_keys(['lunes', 'martes', 3])\n\n\n\n\n```python\nd.values()\n```\n\n\n\n\n dict_values([8, [1, 2, 3], 5])\n\n\n\n### iteraci\u00f3n en contenedores\n\nSi queremos procesar todos los elementos de un contenedor podemos hacer un bucle y acceder a cada uno de ellos con la operaci\u00f3n de indexado.\n\n\n```python\nlista = [1,2,3,4,5]\n\nfor k in range(len(lista)):\n print(lista[k])\n```\n\n 1\n 2\n 3\n 4\n 5\n\n\nEsta construcci\u00f3n es tan com\u00fan que en Python podemos escribirla de forma mucho m\u00e1s natural:\n\n\n```python\nfor x in lista:\n print(x)\n```\n\n 1\n 2\n 3\n 4\n 5\n\n\nEsto funciona incluso en contenedores como `set` que no admiten el indexado. 
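A modo de ilustración (esbozo añadido, no forma parte de las celdas originales), un bucle `for` recorre igual un conjunto o un diccionario; en los diccionarios, el método `items()` produce pares `(clave, valor)`:

```python
# recorrer un conjunto: el orden de visita no está garantizado
C = {1, 2, 7, 8}
for x in C:
    print(x)

# recorrer un diccionario con items(), que produce pares (clave, valor)
d = {'lunes': 8, 'martes': [1, 2, 3]}
for clave, valor in d.items():
    print(clave, valor)
```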
Los tipos contenedores se pueden \"recorrer\" directamente mediante un bucle `for`, visitando todos sus elementos.\n\n### conversi\u00f3n\n\nEl nombre de un contenedor es a la vez una funci\u00f3n para construir un contenedor de ese tipo a partir de otro contenedor cualquiera.\n\n\n```python\nl = [4,2,2,3,3,3,3,1]\n\ntuple(l)\n```\n\n\n\n\n (4, 2, 2, 3, 3, 3, 3, 1)\n\n\n\n\n```python\nset(l)\n```\n\n\n\n\n {1, 2, 3, 4}\n\n\n\n\n```python\nlist({5,4,3})\n```\n\n\n\n\n [3, 4, 5]\n\n\n\n\n```python\nlist(range(10))\n```\n\n\n\n\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n\n\nEsta caracter\u00edstica funciona con cualquier otro tipo, no solo con contenedores:\n\n\n```python\nfloat(5)\n```\n\n\n\n\n 5.0\n\n\n\n\n```python\nint('54')\n```\n\n\n\n\n 54\n\n\n\nSi la conversi\u00f3n no es posible se producir\u00e1 un error.\n\n### subsecuencias\n\n\n```python\nl = list(range(20))\n\nl\n```\n\n\n\n\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]\n\n\n\n\n```python\nl[:5]\n```\n\n\n\n\n [0, 1, 2, 3, 4]\n\n\n\n\n```python\nl[4:]\n```\n\n\n\n\n [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]\n\n\n\n\n```python\nl[-3:]\n```\n\n\n\n\n [17, 18, 19]\n\n\n\n\n```python\nl[5:10:2]\n```\n\n\n\n\n [5, 7, 9]\n\n\n\n\n```python\nl[::-1]\n```\n\n\n\n\n [19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]\n\n\n\n\n```python\nl[10:14] = [0,0,0]\n\nl\n```\n\n\n\n\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 0, 0, 14, 15, 16, 17, 18, 19]\n\n\n\n### *list comprehensions*\n\nCuando utilizamos un bucle para recorrer una lista o realizar un gran n\u00famero de c\u00e1lculos los resultados intermedios se pueden imprimir si se desea, pero en cualquier caso al final se pierden.\n\nMuchas veces surge la necesidad de construir una lista (o cualquier otro tipo de contenedor) a partir de los elementos de otra. 
Una forma de programarlo es empezar con una lista vac\u00eda e iterar mediante un bucle a\u00f1adiendo elementos.\n\nSupongamos que queremos construir una lista con los 100 primeros n\u00fameros cuadrados $1,4,9,16,\\ldots,10000$. En principio parece razonable hacer lo siguiente:\n\n\n```python\nr = []\nfor k in range(1,101):\n r.append(k**2)\n\nprint(r)\n```\n\n [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576, 625, 676, 729, 784, 841, 900, 961, 1024, 1089, 1156, 1225, 1296, 1369, 1444, 1521, 1600, 1681, 1764, 1849, 1936, 2025, 2116, 2209, 2304, 2401, 2500, 2601, 2704, 2809, 2916, 3025, 3136, 3249, 3364, 3481, 3600, 3721, 3844, 3969, 4096, 4225, 4356, 4489, 4624, 4761, 4900, 5041, 5184, 5329, 5476, 5625, 5776, 5929, 6084, 6241, 6400, 6561, 6724, 6889, 7056, 7225, 7396, 7569, 7744, 7921, 8100, 8281, 8464, 8649, 8836, 9025, 9216, 9409, 9604, 9801, 10000]\n\n\nNo est\u00e1 mal, pero los lenguajes modernos proporcionan una herramienta mucho m\u00e1s elegante para expresar este tipo de c\u00e1lculos. 
Se conoce como [list comprehension](https://en.wikipedia.org/wiki/List_comprehension) y trata de imitar la notaci\u00f3n matem\u00e1tica para definir conjuntos:\n\n$$ r = \\{ k^2 \\; : \\; \\forall k \\in \\mathbb{N}, \\;1 \\leq k \\leq 100 \\} $$\n\n\n```python\nr = [ k**2 for k in range(1,101) ]\n\nprint(r)\n```\n\n [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576, 625, 676, 729, 784, 841, 900, 961, 1024, 1089, 1156, 1225, 1296, 1369, 1444, 1521, 1600, 1681, 1764, 1849, 1936, 2025, 2116, 2209, 2304, 2401, 2500, 2601, 2704, 2809, 2916, 3025, 3136, 3249, 3364, 3481, 3600, 3721, 3844, 3969, 4096, 4225, 4356, 4489, 4624, 4761, 4900, 5041, 5184, 5329, 5476, 5625, 5776, 5929, 6084, 6241, 6400, 6561, 6724, 6889, 7056, 7225, 7396, 7569, 7744, 7921, 8100, 8281, 8464, 8649, 8836, 9025, 9216, 9409, 9604, 9801, 10000]\n\n\n\n```python\n[ k for k in range(100) if k%7 == 0 ]\n```\n\n\n\n\n [0, 7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 91, 98]\n\n\n\n\n```python\n[(a,b) for a in range(1,7) for b in range(1,7) if a + b >= 10 ]\n```\n\n\n\n\n [(4, 6), (5, 5), (5, 6), (6, 4), (6, 5), (6, 6)]\n\n\n\n\n```python\n{ a+b for a in range(1,7) for b in range(1,7) }\n```\n\n\n\n\n {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}\n\n\n\n\n```python\nsum([k**2 for k in range(100+1)])\n```\n\n\n\n\n 338350\n\n\n\n### desestructuraci\u00f3n\n\nEn Python es posible asignar nombres a los elementos de una secuencia de forma muy natural.\n\nSupongamos que tenemos una tupla como la siguiente\n\n\n```python\nt = (3,4,5)\n```\n\ny queremos operar con sus elementos. Podemos acceder con un \u00edndice:\n\n\n```python\nt[1] + t[2]\n```\n\n\n\n\n 9\n\n\n\nNo hay ning\u00fan problema pero el acceso con \u00edndice se hace pesado si los elementos aparecen varias veces en el c\u00f3digo. En estos casos es mejor ponerles nombre. 
Podemos hacer\n\n\n```python\nb = t[1]\nc = t[2]\n\nb+c\n```\n\n\n\n\n 9\n\n\n\nSin embargo Python nos permite algo m\u00e1s elegante:\n\n\n```python\n_,b,c = t\n\nb+c\n```\n\n\n\n\n 9\n\n\n\n(El nombre `_` se suele usar cuando no necesitamos ese elemento.)\n\nUsando esta caracter\u00edstica podemos escribir varias asignaciones de una vez:\n\n\n```python\nx,y = 23,45\n```\n\nUn nombre con asterisco captura dentro de una lista todos los elementos restantes:\n\n\n```python\ns = 'Alberto'\n\nx, y, *z, w = s\n```\n\n\n```python\ny\n```\n\n\n\n\n 'l'\n\n\n\n\n```python\nz\n```\n\n\n\n\n ['b', 'e', 'r', 't']\n\n\n\nLa desestructuraci\u00f3n de argumentos es muy pr\u00e1ctica en combinaci\u00f3n con las *list comprehensions*:\n\n\n```python\nl = [(k,k**2) for k in range(5)]\nl\n```\n\n\n\n\n [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16)]\n\n\n\n\n```python\n[a+b for a,b in l]\n```\n\n\n\n\n [0, 2, 6, 12, 20]\n\n\n\n## funciones\n\n\n```python\ndef sp(n):\n r = n**2+n+41\n return r\n```\n\n\n```python\nsp(5)\n```\n\n\n\n\n 71\n\n\n\nSe pueden devolver varios resultados en una tupla:\n\n\n```python\nimport math\n\ndef ecsec(a,b,c):\n d = math.sqrt(b**2- 4*a*c)\n s1 = (-b+d)/2/a\n s2 = (-b-d)/2/a\n return (s1,s2)\n```\n\n\n```python\necsec(2,-6,4)\n```\n\n\n\n\n (2.0, 1.0)\n\n\n\nLos par\u00e9ntesis de la tupla son opcionales.\n\n\n```python\na,b = ecsec(1,-3,2)\n\nb\n```\n\n\n\n\n 1.0\n\n\n\nLas variables globales son visibles dentro de las funciones y las asignaciones crean variables locales (a menos que el nombre se declare `global`).\n\n\n```python\na = 5\n\nb = 8\n\ndef f(x):\n b = a+1\n return b\n\nprint(f(3))\nprint(b)\n```\n\n 6\n 8\n\n\n\n```python\na = 5\n\nb = 8\n\ndef f(x):\n global b\n b = a+1\n return b\n\nprint(f(3))\nprint(b)\n```\n\n 6\n 6\n\n\nArgumentos por omisi\u00f3n:\n\n\n```python\ndef incre(x,y=1):\n return x + y\n\nprint(incre(5))\nprint(incre(5,3))\n```\n\n 6\n 8\n\n\nArgumentos por nombre:\n\n\n```python\nincre(y=3, x=2)\n```\n\n\n\n\n 
5\n\n\n\nDocumentaci\u00f3n:\n\n\n```python\n# ? sum\nhelp(sum)\n```\n\n Help on built-in function sum in module builtins:\n \n sum(iterable, start=0, /)\n Return the sum of a 'start' value (default: 0) plus an iterable of numbers\n \n When the iterable is empty, return the start value.\n This function is intended specifically for use with numeric values and may\n reject non-numeric types.\n \n\n\n\n```python\ndef fun(n):\n \"\"\"Una funci\u00f3n muy simple que calcula el triple de su argumento.\"\"\"\n return 3*n\n```\n\n\n```python\nhelp(fun)\n```\n\n Help on function fun in module __main__:\n \n fun(n)\n Una funci\u00f3n muy simple que calcula el triple de su argumento.\n \n\n\n### bibliotecas\n\nLas funciones definidas en un archivo se pueden utilizar directamente haciendo un `import`. Existe una convenci\u00f3n para definir una funci\u00f3n `main` que se ejecuta cuando el archivo se arranca como programa y suele usarse para ejecutar tests.\n\n### programaci\u00f3n funcional\n\nEn Python 3 las construcciones funcionales crean secuencias \"bajo demanda\".\n\n\n```python\nmap(sp,range(5))\n```\n\n\n\n\n \n\n\n\n\n```python\nfor k in map(sp,range(5)):\n print(k)\n```\n\n 41\n 43\n 47\n 53\n 61\n\n\n\n```python\nlist(map(sp,range(5)))\n```\n\n\n\n\n [41, 43, 47, 53, 61]\n\n\n\n\n```python\nlist(filter(lambda x: x%2 == 1, range(10)))\n```\n\n\n\n\n [1, 3, 5, 7, 9]\n\n\n\nEs poco frecuente usar expl\u00edcitamente map y filter, ya que su efecto se consigue de forma m\u00e1s c\u00f3moda con list comprehensions:\n\n\n```python\n[k**2 for k in range(10) if k >5 ]\n```\n\n\n\n\n [36, 49, 64, 81]\n\n\n\n\n```python\ndef divis(n):\n return [k for k in range(2,n) if n%k==0]\n```\n\n\n```python\ndivis(12)\n```\n\n\n\n\n [2, 3, 4, 6]\n\n\n\n\n```python\ndivis(1001)\n```\n\n\n\n\n [7, 11, 13, 77, 91, 143]\n\n\n\n\n```python\ndef perfect(n):\n return sum(divis(n)) + 1 == n\n```\n\n\n```python\nperfect(4)\n```\n\n\n\n\n False\n\n\n\n\n```python\nperfect(6)\n```\n\n\n\n\n 
True\n\n\n\n\n```python\ndef prime(n):\n return divis(n)==[]\n```\n\n\n```python\n[k for k in range(2,21) if prime(k)]\n```\n\n\n\n\n [2, 3, 5, 7, 11, 13, 17, 19]\n\n\n\n\n```python\nfrom functools import reduce\nimport operator\n\ndef product(l):\n return reduce(operator.mul,l,1)\n```\n\n\n```python\nproduct(range(1,10+1))\n```\n\n\n\n\n 3628800\n\n\n\nFunci\u00f3n que construye funciones:\n\n\n```python\ndef mkfun(y):\n return lambda x: x+y\n```\n\n\n```python\nf = mkfun(1)\ng = mkfun(5)\n\nprint(f(10))\nprint(g(10))\n```\n\n 11\n 15\n\n\n\n```python\nfs = list(map(mkfun,range(1,6)))\n\nprint(fs[0](10))\nprint(fs[4](10))\n```\n\n 11\n 15\n\n\n## arrays\n\nGran parte del \u00e9xito de Python se debe a [numpy](http://www.numpy.org/).\n\n\n```python\nimport numpy as np\n```\n\nConstrucci\u00f3n a partir de listas (u otros contenedores):\n\n\n```python\nm = np.array([[5,3, 2,10],\n [2,0, 7, 0],\n [1,1,-3, 6]])\n```\n\n\n```python\nm[1,2]\n```\n\n\n\n\n 7\n\n\n\nInspecci\u00f3n de su tipo y estructura:\n\n\n```python\ntype(m)\n```\n\n\n\n\n numpy.ndarray\n\n\n\n\n```python\nm.dtype\n```\n\n\n\n\n dtype('int64')\n\n\n\n\n```python\nm.shape\n```\n\n\n\n\n (3, 4)\n\n\n\n\n```python\nm.ndim\n```\n\n\n\n\n 2\n\n\n\n\n```python\nlen(m)\n```\n\n\n\n\n 3\n\n\n\n\n```python\nm.size\n```\n\n\n\n\n 12\n\n\n\nLas operaciones elemento a elemento son autom\u00e1ticas:\n\n\n```python\n5*m + 2\n```\n\n\n\n\n array([[ 27, 17, 12, 52],\n [ 12, 2, 37, 2],\n [ 7, 7, -13, 32]])\n\n\n\nConstructores especiales:\n\n\n```python\nnp.zeros([2,3])\n```\n\n\n\n\n array([[0., 0., 0.],\n [0., 0., 0.]])\n\n\n\n\n```python\nnp.ones([4])\n```\n\n\n\n\n array([1., 1., 1., 1.])\n\n\n\n\n```python\nnp.linspace(0,5,11)\n```\n\n\n\n\n array([0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. ])\n\n\n\n\n```python\nnp.arange(10)\n```\n\n\n\n\n array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\n\n\n\n```python\nnp.arange(1,10,0.5)\n```\n\n\n\n\n array([1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. , 5.5, 6. , 6.5, 7. 
,\n 7.5, 8. , 8.5, 9. , 9.5])\n\n\n\n\n```python\nnp.eye(7)\n```\n\n\n\n\n array([[1., 0., 0., 0., 0., 0., 0.],\n [0., 1., 0., 0., 0., 0., 0.],\n [0., 0., 1., 0., 0., 0., 0.],\n [0., 0., 0., 1., 0., 0., 0.],\n [0., 0., 0., 0., 1., 0., 0.],\n [0., 0., 0., 0., 0., 1., 0.],\n [0., 0., 0., 0., 0., 0., 1.]])\n\n\n\nIteraci\u00f3n, a lo largo de la primera dimensi\u00f3n:\n\n\n```python\nfor e in np.arange(4):\n print(e)\n```\n\n 0\n 1\n 2\n 3\n\n\n\n```python\nfor e in m:\n print(e)\n```\n\n [ 5 3 2 10]\n [2 0 7 0]\n [ 1 1 -3 6]\n\n\n\n```python\nsum(m)\n```\n\n\n\n\n array([ 8, 4, 6, 16])\n\n\n\n\n```python\nnp.sum(m,axis=1)\n```\n\n\n\n\n array([20, 9, 5])\n\n\n\nOperaciones matriciales:\n\n\n```python\nm.T\n```\n\n\n\n\n array([[ 5, 2, 1],\n [ 3, 0, 1],\n [ 2, 7, -3],\n [10, 0, 6]])\n\n\n\n\n```python\nv = np.array([3,2,-5,8])\n```\n\nEl producto de matrices, el producto escalar de vectores, y su generalizaci\u00f3n para arrays multidimensionales se expresa con el s\u00edmbolo `@` (que representa a la funci\u00f3n `dot`).\n\n\n```python\nm @ v\n```\n\n\n\n\n array([ 91, -29, 68])\n\n\n\n\n```python\nnp.diag([10,0,1]) @ m\n```\n\n\n\n\n array([[ 50, 30, 20, 100],\n [ 0, 0, 0, 0],\n [ 1, 1, -3, 6]])\n\n\n\nLas funciones matem\u00e1ticas est\u00e1n optimizadas para operar con arrays elemento a elemento:\n\n\n```python\nx = np.linspace(0,2*np.pi,30)\n\nx\n```\n\n\n\n\n array([0. , 0.21666156, 0.43332312, 0.64998469, 0.86664625,\n 1.08330781, 1.29996937, 1.51663094, 1.7332925 , 1.94995406,\n 2.16661562, 2.38327719, 2.59993875, 2.81660031, 3.03326187,\n 3.24992343, 3.466585 , 3.68324656, 3.89990812, 4.11656968,\n 4.33323125, 4.54989281, 4.76655437, 4.98321593, 5.1998775 ,\n 5.41653906, 5.63320062, 5.84986218, 6.06652374, 6.28318531])\n\n\n\n\n```python\ny = np.sin(x) + np.cos(2*x)\ny\n```\n\n\n\n\n array([ 1. 
, 1.12254586, 1.06727539, 0.87270255, 0.60038006,\n 0.32232498, 0.10669282, 0.00439546, 0.03917335, 0.20298123,\n 0.45755084, 0.74183837, 0.9839623 , 1.1153946 , 1.08473957,\n 0.86850154, 0.47679154, -0.04714542, -0.63356055, -1.19782715,\n -1.65497221, -1.93447969, -1.99267137, -1.82040717, -1.44469911,\n -0.92394405, -0.33764588, 0.22749718, 0.69260498, 1. ])\n\n\n\nReconfiguraci\u00f3n de los elementos:\n\n\n```python\nnp.arange(12)\n```\n\n\n\n\n array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])\n\n\n\n\n```python\nnp.arange(12).reshape(3,2,2)\n```\n\n\n\n\n array([[[ 0, 1],\n [ 2, 3]],\n \n [[ 4, 5],\n [ 6, 7]],\n \n [[ 8, 9],\n [10, 11]]])\n\n\n\n### matrices por bloques\n\n\n```python\nnp.append(m,[[100,200,300,400],\n [0, 10, 0, 1] ],axis=0)\n```\n\n\n\n\n array([[ 5, 3, 2, 10],\n [ 2, 0, 7, 0],\n [ 1, 1, -3, 6],\n [100, 200, 300, 400],\n [ 0, 10, 0, 1]])\n\n\n\n\n```python\nnp.hstack([np.zeros([3,3]),np.ones([3,2])])\n```\n\n\n\n\n array([[0., 0., 0., 1., 1.],\n [0., 0., 0., 1., 1.],\n [0., 0., 0., 1., 1.]])\n\n\n\n\n```python\nnp.vstack([np.eye(3),5*np.ones([2,3])])\n```\n\n\n\n\n array([[1., 0., 0.],\n [0., 1., 0.],\n [0., 0., 1.],\n [5., 5., 5.],\n [5., 5., 5.]])\n\n\n\nnumpy proporciona un tipo especial `matrix` para los arrays de 2 dimensiones pero [se recomienda no usarlo](https://stackoverflow.com/questions/4151128/what-are-the-differences-between-numpy-arrays-and-matrices-which-one-should-i-u) o usarlo con cuidado.\n\n### automatic broadcasting\n\nLas operaciones elemento a elemento requieren argumentos con las mismas dimensiones. 
Pero si alguna dimensi\u00f3n es igual a uno, se sobreentiende que los elementos se replican en esa dimensi\u00f3n para coincidir con el otro array.\n\n\n```python\nm = np.array([[1, 2, 3, 4]\n ,[5, 6, 7, 8]\n ,[9,10,11,12]])\n```\n\n\n```python\nm + [[10],\n [20],\n [30]]\n```\n\n\n\n\n array([[11, 12, 13, 14],\n [25, 26, 27, 28],\n [39, 40, 41, 42]])\n\n\n\n\n```python\nm + [100,200,300,400]\n```\n\n\n\n\n array([[101, 202, 303, 404],\n [105, 206, 307, 408],\n [109, 210, 311, 412]])\n\n\n\n\n```python\nnp.array([[1,2,3,4]]) + np.array([[100],\n [200],\n [300]])\n```\n\n\n\n\n array([[101, 102, 103, 104],\n [201, 202, 203, 204],\n [301, 302, 303, 304]])\n\n\n\n### slices\n\nExtracci\u00f3n de elementos y \"submatrices\" o \"subarrays\", seleccionando intervalos de filas, columnas, etc.:\n\n\n```python\nm = np.arange(42).reshape(6,7)\nm\n```\n\n\n\n\n array([[ 0, 1, 2, 3, 4, 5, 6],\n [ 7, 8, 9, 10, 11, 12, 13],\n [14, 15, 16, 17, 18, 19, 20],\n [21, 22, 23, 24, 25, 26, 27],\n [28, 29, 30, 31, 32, 33, 34],\n [35, 36, 37, 38, 39, 40, 41]])\n\n\n\n\n```python\nm[1,2]\n```\n\n\n\n\n 9\n\n\n\n\n```python\nm[2:5,1:4]\n```\n\n\n\n\n array([[15, 16, 17],\n [22, 23, 24],\n [29, 30, 31]])\n\n\n\n\n```python\nm[:3, 4:]\n```\n\n\n\n\n array([[ 4, 5, 6],\n [11, 12, 13],\n [18, 19, 20]])\n\n\n\n\n```python\nm[[1,0,0,2,1],:]\n```\n\n\n\n\n array([[ 7, 8, 9, 10, 11, 12, 13],\n [ 0, 1, 2, 3, 4, 5, 6],\n [ 0, 1, 2, 3, 4, 5, 6],\n [14, 15, 16, 17, 18, 19, 20],\n [ 7, 8, 9, 10, 11, 12, 13]])\n\n\n\nLos \u00edndices negativos indican que se empieza a contar desde el final.\n\n\n```python\n# las dos \u00faltimas columnas y todas las filas menos las tres \u00faltimas.\nm[:-3,-2:]\n```\n\n\n\n\n array([[ 5, 6],\n [12, 13],\n [19, 20]])\n\n\n\n\n```python\n# la pen\u00faltima columna\nm[:,-2]\n```\n\n\n\n\n array([ 5, 12, 19, 26, 33, 40])\n\n\n\n\n```python\n# la pen\u00faltima columna pero como array 2D (matriz), para que se vea como un vector columna\nm[:,[-2]]\n```\n\n\n\n\n array([[ 
5],\n [12],\n [19],\n [26],\n [33],\n [40]])\n\n\n\n### masks\n\nExtracci\u00f3n de elementos que cumplen una condici\u00f3n:\n\n\n```python\nn = np.arange(10)\n\nn\n```\n\n\n\n\n array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\n\n\n\n```python\nn < 5\n```\n\n\n\n\n array([ True, True, True, True, True, False, False, False, False,\n False])\n\n\n\n\n```python\nn[n<5]\n```\n\n\n\n\n array([0, 1, 2, 3, 4])\n\n\n\n\n```python\nk = np.arange(1,101)\n\n(k ** 2)[(k>10) & (k**3 < 2000)]\n```\n\n\n\n\n array([121, 144])\n\n\n\n### I/O\n\nLa funci\u00f3n `np.loadtxt` permite cargar los datos de los arrays a partir de ficheros de texto. Tambi\u00e9n es posible guardar y recuperar arrays en formato binario.\n\n## gr\u00e1ficas\n\nUno de los paquetes gr\u00e1ficos m\u00e1s conocidos es `matplotlib`, que puede utilizarse con un interfaz muy parecido al de Matlab/Octave.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# para insertar los gr\u00e1ficos en el notebook\n%matplotlib inline\n\n# para generar ventanas independientes\n# %matplotlib qt\n# %matplotib tk\n```\n\n\n```python\nx=np.linspace(0,2*np.pi,200)\n```\n\n\n```python\nplt.plot(np.sin(x))\n```\n\n\n```python\nplt.plot(np.cos(x),np.sin(x)); plt.axis('equal');\n```\n\n\n```python\nplt.plot(x,np.sin(x), x,np.cos(x));\n```\n\n\n```python\nplt.plot(x,np.sin(x),color='red')\nplt.plot(x,np.sin(2*x),color='black')\nplt.plot([1,2.5],[-0.5,0],'.',markerSize=15);\nplt.legend(['hola','fun','puntos']);\nplt.xlabel('x'); plt.ylabel('y'); plt.title('bonito plot'); plt.axis('tight');\n```\n\nEl gr\u00e1fico se puede exportar en el formato deseado:\n\n\n```python\n# plt.savefig('result.pdf') # o .svg, .png, .jpg, etc.\n```\n\n\n```python\nplt.plot(x,np.exp(x)); plt.axis([0,3,-1,5]);\n```\n\n\n```python\nfor k in [1,2,3]:\n plt.plot(x,np.sin(k*x))\nplt.grid()\n```\n\n\n```python\ndef espiral(n):\n t = np.linspace(0,n*2*np.pi,1000)\n r = 3 * t\n x = r * np.cos(t)\n y = r * np.sin(t)\n plt.plot(x,y)\n plt.axis('equal')\n 
plt.axis('off')\n\nespiral(4)\n```\n\n\n```python\nimport numpy.random as rnd\n\ndef randwalk(n,s):\n p = s*rnd.randn(n,2)\n r = np.cumsum(p,axis=0)\n x = r[:,0]\n y = r[:,1]\n plt.plot(x,y)\n plt.axis('equal');\n```\n\n\n```python\nplt.figure(figsize=(4,4))\nrandwalk(1000,1)\n```\n\n\n```python\nplt.figure(figsize=(8,8))\nx = np.linspace(0,6*np.pi,100);\n\nplt.subplot(2,2,1)\nplt.plot(x,np.sin(x),'r')\n\nplt.subplot(2,2,2)\nplt.plot(x,np.cos(x))\n\nplt.subplot(2,2,3)\nplt.plot(x,np.sin(2*x))\n\nplt.subplot(2,2,4)\nplt.plot(x,np.cos(2*x),'g');\n```\n\n\n```python\nx,y = np.mgrid[-3:3:0.2,-3:3:0.2]\n\nz = x**2-y**2-1\n```\n\n\n```python\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\n\nfig = plt.figure(figsize=(8,6))\nax = fig.add_subplot(111, projection='3d')\n\nax.plot_surface(x,y,x**2-y**2, cmap=cm.coolwarm, linewidth=0.5, rstride=2, cstride=2);\n```\n\n\n```python\nplt.figure(figsize=(6,6))\nplt.contour(x,y, z , colors=['k']);\nplt.axis('equal');\n```\n\n### animaciones\n\n\n```python\n# si se produce un error:\n# conda install -c menpo ffmpeg\n\nfrom matplotlib import animation, rc\nfrom IPython.display import HTML\nrc('animation', html='html5')\n```\n\n\n```python\nx = np.linspace(0,2,100)\n\ndef wave(lam,freq,x,t):\n return 1*np.sin(2*np.pi*(x/lam - t*freq))\n```\n\n\n```python\nfig, ax = plt.subplots()\nplt.grid()\nplt.title('onda viajera')\nplt.xlabel('x');\nplt.close();\nax.set_xlim(( 0, 2))\nax.set_ylim((-1.1, 1.1))\n\nline1, = ax.plot([], [], '-')\n#line2, = ax.plot([], [], '.', markerSize=20)\n\nlam = 0.8\nfreq = 1/4\n\ndef animate(i):\n t = i/25\n line1.set_data(x,wave(lam,freq,x,t))\n #line2.set_data(1,f(lam,freq,1,t))\n return ()\n\nanimation.FuncAnimation(fig, animate, frames=100, interval=1000/25, blit=True)\n```\n\n\n\n\n\n\n\n\n### data frames\n\nEl m\u00f3dulo `pandas` proporciona el tipo \"dataframe\", muy utilizado en an\u00e1lisis de datos. 
Permite leer conjuntos de datos almacenados en archivos que pueden estar incluso en una m\u00e1quina remota.\n\n\n```python\nimport pandas as pd\n\ndf = pd.read_table('https://robot.inf.um.es/material/data/ConstanteHubbleDatos-1.txt', sep='\\s+', comment='#')\ndf\n```\n\n\n\n\n

\n
        V(km/s)  Redshift  Magnitud
    0     18287  0.060998     17.62
    1      5691  0.018983     15.00
    2     26382  0.088000     18.59
    3      5996  0.020000     15.54
    4     19202  0.064051     15.30
    5     23684  0.079000     16.56
    6     11702  0.039034     17.14
    7     17284  0.057653     13.50
    8     13491  0.045000     17.80
    9     10566  0.035244     15.25
    10    14718  0.049094     15.60
    11    13491  0.045000     14.52
    12    16325  0.054453     15.30
    13    20686  0.069000     16.80
    14     1808  0.006031     11.16
    15     7603  0.025361     15.18
    16     1018  0.003395     12.24
    17      321  0.001071     13.00
    18     3106  0.010360     12.49
    19     9426  0.031442     14.53
    20     7464  0.024897     15.21
    21    15143  0.050512     17.40
    22      407  0.001358     10.87
    23     7257  0.024207     14.60
    24     9193  0.030664     15.10
    25    12137  0.040485     14.75
    26     4264  0.014224     14.98
    27     4381  0.014615     14.15
    28    22484  0.075000     17.43
    29    15162  0.050575     16.50
    30    30000  0.101000     18.90
    31    12981  0.043300     15.23
    32     8803  0.029364     14.90
\n
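Las columnas de un dataframe se seleccionan por nombre. Un esbozo autocontenido (añadido como ilustración, construyendo un dataframe de juguete con la misma estructura y las primeras filas de los datos anteriores, en lugar de leer el archivo remoto):

```python
import pandas as pd

# dataframe de juguete con la misma estructura que el leído arriba
df_mini = pd.DataFrame({'V(km/s)':  [18287, 5691, 26382],
                        'Redshift': [0.060998, 0.018983, 0.088000],
                        'Magnitud': [17.62, 15.00, 18.59]})

print(df_mini['Redshift'].max())               # una columna, por nombre
print(df_mini[['V(km/s)', 'Magnitud']].shape)  # varias columnas: (3, 2)
```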
\n\n\n\nPuede convertirse en un array normal:\n\n\n```python\nA = np.array(df)\nA\n```\n\n\n\n\n array([[1.8287e+04, 6.0998e-02, 1.7620e+01],\n [5.6910e+03, 1.8983e-02, 1.5000e+01],\n [2.6382e+04, 8.8000e-02, 1.8590e+01],\n [5.9960e+03, 2.0000e-02, 1.5540e+01],\n [1.9202e+04, 6.4051e-02, 1.5300e+01],\n [2.3684e+04, 7.9000e-02, 1.6560e+01],\n [1.1702e+04, 3.9034e-02, 1.7140e+01],\n [1.7284e+04, 5.7653e-02, 1.3500e+01],\n [1.3491e+04, 4.5000e-02, 1.7800e+01],\n [1.0566e+04, 3.5244e-02, 1.5250e+01],\n [1.4718e+04, 4.9094e-02, 1.5600e+01],\n [1.3491e+04, 4.5000e-02, 1.4520e+01],\n [1.6325e+04, 5.4453e-02, 1.5300e+01],\n [2.0686e+04, 6.9000e-02, 1.6800e+01],\n [1.8080e+03, 6.0310e-03, 1.1160e+01],\n [7.6030e+03, 2.5361e-02, 1.5180e+01],\n [1.0180e+03, 3.3950e-03, 1.2240e+01],\n [3.2100e+02, 1.0710e-03, 1.3000e+01],\n [3.1060e+03, 1.0360e-02, 1.2490e+01],\n [9.4260e+03, 3.1442e-02, 1.4530e+01],\n [7.4640e+03, 2.4897e-02, 1.5210e+01],\n [1.5143e+04, 5.0512e-02, 1.7400e+01],\n [4.0700e+02, 1.3580e-03, 1.0870e+01],\n [7.2570e+03, 2.4207e-02, 1.4600e+01],\n [9.1930e+03, 3.0664e-02, 1.5100e+01],\n [1.2137e+04, 4.0485e-02, 1.4750e+01],\n [4.2640e+03, 1.4224e-02, 1.4980e+01],\n [4.3810e+03, 1.4615e-02, 1.4150e+01],\n [2.2484e+04, 7.5000e-02, 1.7430e+01],\n [1.5162e+04, 5.0575e-02, 1.6500e+01],\n [3.0000e+04, 1.0100e-01, 1.8900e+01],\n [1.2981e+04, 4.3300e-02, 1.5230e+01],\n [8.8030e+03, 2.9364e-02, 1.4900e+01]])\n\n\n\n\n```python\nx = A[:,0]\ny = A[:,2]\n\n# x,_,y = A.T\n\nplt.plot(x,y,'.');\n```\n\n## c\u00e1lculo cient\u00edfico\n\n### n\u00fameros pseudoaleatorios y estad\u00edstica elemental\n\n`numpy` permite generar arrays de n\u00fameros pseudoaleatorios con diferentes tipos de distribuciones (uniforme, normal, etc.).\n\nTiene tambi\u00e9n funciones de estad\u00edstica descriptiva para calcular caracter\u00edsticas de conjuntos de datos tales como la media, mediana, desviaci\u00f3n t\u00edpica, m\u00e1ximo y m\u00ednimo, etc.\n\nComo ejemplo, podemos estudiar la 
distribuci\u00f3n de puntuaciones al lanzar 3 dados.\n\n\n```python\ndados = np.random.randint(1,6+1,(100,3))\ndados[:10]\n```\n\n\n\n\n array([[5, 3, 3],\n [3, 2, 6],\n [3, 6, 3],\n [4, 5, 6],\n [2, 1, 3],\n [5, 6, 2],\n [2, 6, 2],\n [3, 4, 1],\n [3, 3, 2],\n [5, 3, 2]])\n\n\n\n\n```python\ns = np.sum(dados,axis=1)\ns\n```\n\n\n\n\n array([11, 11, 12, 15, 6, 13, 10, 8, 8, 10, 10, 17, 7, 10, 12, 9, 11,\n 14, 10, 11, 15, 10, 6, 9, 9, 11, 9, 11, 15, 12, 5, 9, 11, 10,\n 10, 7, 8, 11, 9, 9, 8, 11, 8, 9, 12, 10, 10, 7, 14, 7, 5,\n 7, 14, 12, 7, 12, 7, 8, 13, 15, 17, 12, 10, 12, 13, 12, 9, 8,\n 13, 10, 9, 12, 16, 14, 16, 7, 8, 8, 13, 16, 12, 13, 13, 8, 9,\n 9, 10, 15, 9, 8, 15, 11, 8, 12, 10, 3, 7, 6, 10, 13])\n\n\n\n\n```python\nplt.hist(s,bins=np.arange(2,19)+0.5);\n```\n\n\n```python\ns.mean()\n```\n\n\n\n\n 10.43\n\n\n\n\n```python\ns.std()\n```\n\n\n\n\n 2.8644545728637416\n\n\n\n### implementaci\u00f3n eficiente\n\nLas operaciones de `numpy` est\u00e1n \"optimizadas\" (escritas internamente en c\u00f3digo C eficiente).\n\n\n```python\nx = np.random.rand(10**8)\n```\n\n\n```python\nx\n```\n\n\n\n\n array([0.98545712, 0.76934582, 0.89515805, ..., 0.99684131, 0.91799724,\n 0.0971656 ])\n\n\n\n\n```python\n%%time\n\nnp.mean(x)\n```\n\n CPU times: user 88 ms, sys: 0 ns, total: 88 ms\n Wall time: 88.3 ms\n\n\n\n\n\n 0.5000571511372741\n\n\n\n\n```python\n%%timeit\n\nnp.mean(x)\n```\n\n 86.8 ms \u00b1 741 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each)\n\n\n\n```python\n%%timeit\n\nx @ x\n```\n\n 61.5 ms \u00b1 202 \u00b5s per loop (mean \u00b1 std. dev. 
of 7 runs, 10 loops each)\n\n\nSi la misma operaci\u00f3n se realiza \"manualmente\" con instrucciones normales de Python requiere mucho m\u00e1s tiempo:\n\n\n```python\n%%time\n\ns = 0\nfor e in x:\n s += e\nprint(s/len(x))\n```\n\n 0.5000571511372518\n CPU times: user 29.2 s, sys: 12 ms, total: 29.2 s\n Wall time: 29.1 s\n\n\nPor tanto, si usamos los m\u00f3dulos apropiados los programas en Python no tienen por qu\u00e9 ser m\u00e1s lentos que los de otros lenguajes de programaci\u00f3n. Python es \"glue code\", un pegamento para combinar bibliotecas de funciones, escritas en cualquier lenguaje, que resuelven eficientemente problemas espec\u00edficos.\n\n### \u00e1lgebra lineal\n\nEl subm\u00f3dulo `linalg` ofrece las operaciones usuales de \u00e1lgebra lineal.\n\n\n```python\nimport scipy.linalg as la\n```\n\nPor ejemplo, podemos calcular f\u00e1cilmente el m\u00f3dulo de un vector:\n\n\n```python\nla.norm([1,2,3,4,5])\n```\n\n\n\n\n 7.416198487095663\n\n\n\no el determinante de una matriz:\n\n\n```python\nla.det([[1,2],\n [3,4]])\n```\n\n\n\n\n -2.0\n\n\n\nObserva que muchas de las funciones que trabajan con arrays admiten tambi\u00e9n otros contenedores como listas o tuplas, que son transformadas autom\u00e1ticamente en arrays.\n\nUn problema muy importante es la resoluci\u00f3n de sistemas de ecuaciones lineales. Si tenemos que resolver un sistema como\n\n$$\n\\begin{align*}\nx + 2y &= 3\\\\\n3x+4y &= 5\n\\end{align*}\n$$\n\nLo expresamos en forma matricial $AX=B$ y podemos resolverlo con la inversa de $A$, o directamente con `solve`.\n\n\n```python\nm = np.array([[1,2],\n [3,4]])\n```\n\n\n```python\nm\n```\n\n\n\n\n array([[1, 2],\n [3, 4]])\n\n\n\n\n```python\nla.inv(m)\n```\n\n\n\n\n array([[-2. , 1. 
],\n [ 1.5, -0.5]])\n\n\n\n\n```python\nla.inv(m) @ np.array([3,5])\n```\n\n\n\n\n array([-1., 2.])\n\n\n\nEs mejor (m\u00e1s eficiente y num\u00e9ricamente estable) usar la funci\u00f3n `solve`:\n\n\n```python\nla.solve(m,[3,5])\n```\n\n\n\n\n array([-1., 2.])\n\n\n\nLa soluci\u00f3n se deber\u00eda mostrar como una columna, pero en Python los arrays de una dimensi\u00f3n se imprimen como una fila porque no siempre representan vectores matem\u00e1ticos. Si lo preferimos podemos usar matrices de una sola columna.\n\n\n```python\nx = la.solve(m,[[3],\n [5]])\n\nx\n```\n\n\n\n\n array([[-1.],\n [ 2.]])\n\n\n\n\n```python\nm @ x\n```\n\n\n\n\n array([[3.],\n [5.]])\n\n\n\nSi el lado derecho de la ecuaci\u00f3n matricial $A X = B$ es una matriz, la soluci\u00f3n $X$ tambi\u00e9n lo ser\u00e1.\n\n### computaci\u00f3n matricial\n\nPython proporciona una [amplia colecci\u00f3n](https://docs.scipy.org/doc/scipy/reference/linalg.html) de funciones de \u00e1lgebra lineal num\u00e9rica.\n\n\n```python\nla.eigh([[1,2],\n [2,3]])\n```\n\n\n\n\n (array([-0.23606798, 4.23606798]), array([[-0.85065081, 0.52573111],\n [ 0.52573111, 0.85065081]]))\n\n\n\n### m\u00ednimos cuadrados\n\nComo ejemplo de uso de las herramientas de \u00e1lgebra lineal realizaremos el ajuste de un modelo polinomial a unas observaciones ficticias. Encontraremos la soluci\u00f3n de m\u00ednimo error cuadr\u00e1tico a un sistema de ecuaciones sobredeterminado.\n\nEn primer lugar generamos unos datos de prueba artificiales que simulan observaciones contaminadas con ruido de una funci\u00f3n no lineal.\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nx = np.linspace(0,2,30)\n\ny = np.sin(x) + 0.05*np.random.randn(x.size)\n\nplt.plot(x,y,'.');\n```\n\nVamos a ajustar un modelo del tipo $y = ax^2 + bx + c$. 
Los coeficientes desconocidos $a$, $b$ y $c$ se pueden obtener resolviendo un sistema de ecuaciones lineales.\n\nLa matriz de coeficientes tiene potencias de $x$ hasta el grado que nos interesa.\n\n\n```python\nA = np.vstack([x**2, x, np.ones(x.size)]).T\n\nA\n```\n\n\n\n\n array([[0. , 0. , 1. ],\n [0.00475624, 0.06896552, 1. ],\n [0.01902497, 0.13793103, 1. ],\n [0.04280618, 0.20689655, 1. ],\n [0.07609988, 0.27586207, 1. ],\n [0.11890606, 0.34482759, 1. ],\n [0.17122473, 0.4137931 , 1. ],\n [0.23305589, 0.48275862, 1. ],\n [0.30439952, 0.55172414, 1. ],\n [0.38525565, 0.62068966, 1. ],\n [0.47562426, 0.68965517, 1. ],\n [0.57550535, 0.75862069, 1. ],\n [0.68489893, 0.82758621, 1. ],\n [0.80380499, 0.89655172, 1. ],\n [0.93222354, 0.96551724, 1. ],\n [1.07015458, 1.03448276, 1. ],\n [1.2175981 , 1.10344828, 1. ],\n [1.3745541 , 1.17241379, 1. ],\n [1.54102259, 1.24137931, 1. ],\n [1.71700357, 1.31034483, 1. ],\n [1.90249703, 1.37931034, 1. ],\n [2.09750297, 1.44827586, 1. ],\n [2.3020214 , 1.51724138, 1. ],\n [2.51605232, 1.5862069 , 1. ],\n [2.73959572, 1.65517241, 1. ],\n [2.97265161, 1.72413793, 1. ],\n [3.21521998, 1.79310345, 1. ],\n [3.46730083, 1.86206897, 1. ],\n [3.72889417, 1.93103448, 1. ],\n [4. , 2. , 1. 
]])\n\n\n\nEl lado derecho del sistema es directamente el vector con los valores de $y$, la variable independiente del modelo.\n\n\n```python\nB = np.array(y)\n\nB\n```\n\n\n\n\n array([-0.03752051, 0.03269844, 0.15393717, 0.232999 , 0.24648556,\n 0.31061177, 0.39119775, 0.40540727, 0.51922944, 0.59678345,\n 0.64238537, 0.69319441, 0.67420081, 0.82066404, 0.81793398,\n 0.83714339, 0.94898355, 0.96896918, 0.97055846, 1.00053331,\n 0.97931375, 1.02085556, 0.91095985, 0.90903059, 0.99090395,\n 0.97729992, 0.87898498, 0.98701068, 0.96368972, 0.88381366])\n\n\n\nEl sistema que hay que resolver est\u00e1 sobredeterminado: tiene solo tres inc\u00f3gnitas y tantas ecuaciones como observaciones de la funci\u00f3n.\n\n$$A \\begin{bmatrix}a\\\\b\\\\c\\end{bmatrix}= B$$\n\nLa soluci\u00f3n de [m\u00ednimo error cuadr\u00e1tico](https://en.wikipedia.org/wiki/Least_squares) para los coeficientes del modelo se obtiene de manera directa:\n\n\n```python\nsol = la.lstsq(A,B)[0]\n\nsol\n```\n\n\n\n\n array([-0.41752543, 1.31882806, -0.06158726])\n\n\n\n\n```python\nye = A @ sol\n\nplt.plot(x,y,'.',x,ye,'r');\n```\n\nSe puede experimentar con polinomios de mayor o menor grado.\n\n### soluci\u00f3n num\u00e9rica de ecuaciones no lineales\n\nResuelve \n\n$$x^4=16$$\n\n\n```python\nimport scipy as sci\n\nsci.roots([1,0,0,0,-16])\n```\n\n\n\n\n array([-2.00000000e+00+0.j, 1.11022302e-16+2.j, 1.11022302e-16-2.j,\n 2.00000000e+00+0.j])\n\n\n\nResuelve\n\n$$sin(x)+cos(2x)=0$$\n\n\n```python\nimport scipy.optimize as opt\n\nopt.fsolve(lambda x: sci.sin(x) + sci.cos(2*x), 0)\n```\n\n\n\n\n array([-0.52359878])\n\n\n\nResuelve\n\n$$\n\\begin{align*}\nx^2 - 3y &= 10\\\\\nsin(x)+y &= 5\n\\end{align*}\n$$\n\n\n```python\ndef fun(z):\n x,y = z\n return [ x**2 - 3*y - 10\n , sci.sin(x) + y - 5]\n\nopt.fsolve(fun,[0.1,-0.1])\n```\n\n\n\n\n array([5.2511881 , 5.85832548])\n\n\n\n### minimizaci\u00f3n\n\nEncuentra $(x,y)$ que minimiza $(x-1)^2 + (y-2)^2-x+3y$\n\n\n```python\ndef fun(z):\n x,y = z\n 
return (x-1)**2 + (y-2)**2 - x + 3*y\n\nopt.minimize(fun,[0.1,-0.1])\n```\n\n\n\n\n fun: 2.500000000000014\n hess_inv: array([[ 0.57758622, -0.18103452],\n [-0.18103452, 0.92241375]])\n jac: array([ 0.00000000e+00, -2.38418579e-07])\n message: 'Optimization terminated successfully.'\n nfev: 12\n nit: 2\n njev: 3\n status: 0\n success: True\n x: array([1.49999999, 0.49999988])\n\n\n\n### numerical differentiation\n\nCompute a numerical approximation of $f'(2)$ for $f(x) = \\sin(2x)\\exp(\\cos(x))$\n\n\n```python\nfrom scipy.misc import derivative\n\nderivative(lambda x: sci.sin(2*x)*sci.exp(sci.cos(x)),2,1E-6)\n```\n\n\n\n\n -0.40836700757052036\n\n\n\n\n```python\n(lambda x: (-np.sin(x)*np.sin(2*x) + 2*np.cos(2*x))*np.exp(np.cos(x)))(2)\n```\n\n\n\n\n -0.40836700756782335\n\n\n\n### numerical integration\n\nCompute a numerical approximation of the definite integral\n\n$$\\int_0^1 \\frac{4}{1+x^2}dx$$\n\n\n```python\nfrom scipy.integrate import quad\n\nquad(lambda x: 4/(1+x**2),0,1)\n```\n\n\n\n\n (3.1415926535897936, 3.4878684980086326e-14)\n\n\n\n### differential equations\n\nSolve\n\n$$\\ddot{x}+0.95x+0.1\\dot{x}=0$$\n\nfor $x(0)=10$, $\\dot{x}(0)=0, t\\in[0,20]$\n\n\n```python\nfrom scipy.integrate import odeint\n\ndef xdot(z,t):\n x,v = z\n return [v,-0.95*x-0.1*v]\n\nt = np.linspace(0,20,1000)\nr = odeint(xdot,[10,0],t)\n# plt.plot(r);\nplt.plot(t,r[:,0],t,r[:,1]);\n```\n\n\n```python\nplt.plot(r[:,0],r[:,1]);\n```\n\n### symbolic computation\n\n[sympy](http://www.sympy.org/en/index.html)\n\n\n```python\nimport sympy\n\nx = sympy.Symbol('x')\n```\n\n\n```python\nsympy.diff( sympy.sin(2*x**3) , x)\n```\n\n\n\n\n 6*x**2*cos(2*x**3)\n\n\n\n\n```python\nsympy.integrate(1/(1+x))\n```\n\n\n\n\n log(x + 1)\n\n\n\n## miscellaneous\n\n### videos\n\n\n```python\nfrom IPython.display import YouTubeVideo\n```\n\n\n```python\nYouTubeVideo('p7bzE1E5PMY')\n```\n\n\n\n\n\n\n\n\n\n\n### 
xkcd\n\n\n```python\nplt.xkcd()\nplt.plot(np.sin(np.linspace(0, 10)))\nplt.title('Whoo Hoo!!!');\n```\n\n### notebook style\n\n\n```python\n# we can \"tune\" the style\n#from IPython.display import HTML\n#HTML(open('../css/nb1.css').read())\n```\n", "meta": {"hexsha": "5a7bf23819220e2f215269341839918a1972a139", "size": 836399, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/python.ipynb", "max_stars_repo_name": "josemac95/umucv", "max_stars_repo_head_hexsha": "f0f8de17141f4adcb4966281c3f83539ebda5f0b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/python.ipynb", "max_issues_repo_name": "josemac95/umucv", "max_issues_repo_head_hexsha": "f0f8de17141f4adcb4966281c3f83539ebda5f0b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/python.ipynb", "max_forks_repo_name": "josemac95/umucv", "max_forks_repo_head_hexsha": "f0f8de17141f4adcb4966281c3f83539ebda5f0b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 116.9625227241, "max_line_length": 86812, "alphanum_fraction": 0.8763090343, "converted": true, "num_tokens": 16580, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.3886180125441397, "lm_q2_score": 0.28776782186926264, "lm_q1q2_score": 0.11183175900898887}} {"text": "```python\nfrom IPython.display import IFrame\n```\n\n### Motivation\n - We revisit and summarize the computational methods for the cross-section-averaged one-dimensional shallow water equations, an extension of the one-dimensional shallow water (Saint-Venant) equations to computations on real rivers.\n\n### Difficulties peculiar to one-dimensional computation of real rivers\n\n#### Handling of the cross-sectional geometry\n See the reference.\n \n#### The water depth cannot be treated explicitly\n The cross-section-averaged one-dimensional shallow water equations are given below.\n The key point is that, because the water depth and the bed elevation are difficult to define, the third term on the left-hand side, $\\dfrac{\\partial H}{\\partial x}$, cannot be\n split into a water-depth (pressure) term and a bed-elevation (gravity) term as $\\dfrac{\\partial (h+z_b)}{\\partial x}$.\n \n$$\n\\begin{align}\n &\\frac{\\partial Q}{\\partial t} + \\frac{\\partial }{\\partial x}\\left(\\dfrac{\\beta Q^2}{A}\\right) \n + gA \\frac{\\partial H}{\\partial x} + gAi_e = 
0\n\\end{align}\n$$\n\nAs a result, the method of characteristics becomes harder to apply. On the other hand, dry-bed problems (with a sloping bed) are easier to solve in this form.\n\n#### Δx cannot be chosen arbitrarily\n In real rivers, cross sections are typically surveyed at 200 m intervals in the longitudinal direction. Because river topography is complex and geometric interpolation is difficult, the survey interval is often used as the Δx of the numerical computation.\n\n\n### Computational schemes\n\nThe following three schemes are compared.\n\n1. Steady non-uniform flow computation\n2. Unsteady flow computation: collocated grid ⇒ Wu's method is adopted (see below for details)\n3. Unsteady flow computation: staggered grid ⇒ Hosoda's method is adopted (see below for details)\n\nThe criteria for choosing an unsteady-flow scheme are 1) that it can solve even steep rivers and 2) 
that it is robust.\n\nIn a position like mine in particular, where one model is used to compute dozens of rivers, 2) robustness becomes important.\n\n\n### Test computation \n\n#### Computational conditions\n\n - The bed elevation was generated by assuming that topographic data exist every 200 m in the longitudinal direction on a mean bed slope of 1/400 and adding a random number between -1 and 1 at each point, to mimic the bed profile of a real river.\n - The channel width is constant at 50 m.\n - The longitudinal reach length is 10 km.\n - The roughness coefficient is 0.03. \n\n\n```python\nIFrame('https://computational-sediment-hyd.github.io/S1D-ComputScheme/zb.html',width=400,height=330)\n```\n\n\n\n\n\n\n\n\n\n\n#### Computational results\n\n##### Base case\n\n - The computed water levels and Froude numbers of each scheme are plotted.\n - The unsteady computations are started from suitable initial conditions and run until a steady state is reached.\n - The uniform-flow depth on the mean bed is also shown for reference.\n\n\n```python\nIFrame('https://computational-sediment-hyd.github.io/S1D-ComputScheme/fig1.html',width=630,height=700)\n```\n\n\n\n\n\n\n\n\n\n\n Looking at the enlarged view,\n \n - 
A difference of up to about 1 m arises between the schemes.\n - The steady non-uniform flow computation gives the sharpest water surface profile, followed by the unsteady computation on the collocated grid, while the unsteady computation on the staggered grid is the smoothest.\n - The response of the water surface profile to the bed geometry differs between the steady and the unsteady computations.\n\n#### Setting dx to 50 m: interpolated cross sections\n - Based on the bed geometry described above, interpolated cross sections were added so that dx changes from 200 m to 50 m.\n - A comparison of the water surface profiles is shown.\n\n\n```python\nIFrame('https://computational-sediment-hyd.github.io/S1D-ComputScheme/fig2.html',width=630,height=580)\n```\n\n\n\n\n\n\n\n\n\n\n Looking at the enlarged view,\n \n - The differences between the schemes are smaller than in the dx = 200 m case.\n - If these values are close to the true solution, the reproducibility with dx = 200 m is as follows:\n * the steady computation reproduces the shape well, but the water level is too high;\n * the unsteady computations reproduce the average water level well.\n\n### Loose discussion\n\n - 
As noted above, because dx is difficult to choose arbitrarily in real-river computations, computing with dx = 200 m is taken as the baseline.\n - We would like the unsteady computation at dx = 200 m to produce a somewhat more accurate (sharper) water surface profile.\n - For generality, we would like to use the collocated grid.\n - According to the reference below, the collocated grid is less accurate than the staggered grid.\n - However, the following paper (Morinishi et al.) shows that this difference disappears when the variable arrangement is evaluated appropriately.\n - In the present computation, there is almost no overall shift of the water surface profile, so the differences are thought to arise from the local momentum balance.\n - As a future topic, we consider developing a collocated-grid scheme that satisfies the local conservation laws. (It will probably have to be an implicit method.)\n", "meta": {"hexsha": "26942b1315bdd65c037afb4660dc71ab1db11f5e", "size": 6783, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "document.ipynb", "max_stars_repo_name": 
"computational-sediment-hyd/S1D-ComputScheme", "max_stars_repo_head_hexsha": "6b593f115beece2375b1e77fa995899838bd7ea9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "document.ipynb", "max_issues_repo_name": "computational-sediment-hyd/S1D-ComputScheme", "max_issues_repo_head_hexsha": "6b593f115beece2375b1e77fa995899838bd7ea9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "document.ipynb", "max_forks_repo_name": "computational-sediment-hyd/S1D-ComputScheme", "max_forks_repo_head_hexsha": "6b593f115beece2375b1e77fa995899838bd7ea9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.225, "max_line_length": 118, "alphanum_fraction": 0.5137844612, "converted": true, "num_tokens": 1733, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3923368443773709, "lm_q2_score": 0.28457599814899737, "lm_q1q2_score": 0.11164964909931817}} {"text": "# **Save this file as studentid1_studentid2_lab#.ipynb**\n(Your student-id is the number shown on your student card.)\n\nE.g. if you work with 3 people, the notebook should be named:\n12301230_3434343_1238938934_lab1.ipynb.\n\n**This will be parsed by a regexp, so please double check your filename.**\n\nBefore you turn this problem in, please make sure everything runs correctly. 
First, **restart the kernel** (in the menubar, select Kernel$\\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\\rightarrow$Run All).\n\n**Make sure you fill in any place that says `YOUR CODE HERE` or \"YOUR ANSWER HERE\", as well as your names and email addresses below.**\n\n\n\n\n```python\nNAME = \"Gabriele Bani\"\nNAME2 = \"Andrii Skliar\"\nEMAIL = \"bani.gabri@gmail.com\"\nEMAIL2 = \"anreyws96@gmail.com\"\n```\n\n---\n\n# Lab 2: Classification\n\n### Machine Learning 1, September 17\n\nNotes on implementation:\n\n* You should write your code and answers in this IPython Notebook: http://ipython.org/notebook.html. If you have problems, please contact your teaching assistant.\n* Please write your answers right below the questions.\n* Among the first lines of your notebook should be \"%pylab inline\". This imports all required modules, and your plots will appear inline.\n* Use the provided test cells to check if your answers are correct.\n* **Make sure your output and plots are correct before handing in your assignment with Kernel -> Restart & Run All**\n\n$\\newcommand{\\bx}{\\mathbf{x}}$\n$\\newcommand{\\bw}{\\mathbf{w}}$\n$\\newcommand{\\bt}{\\mathbf{t}}$\n$\\newcommand{\\by}{\\mathbf{y}}$\n$\\newcommand{\\bm}{\\mathbf{m}}$\n$\\newcommand{\\bb}{\\mathbf{b}}$\n$\\newcommand{\\bS}{\\mathbf{S}}$\n$\\newcommand{\\ba}{\\mathbf{a}}$\n$\\newcommand{\\bz}{\\mathbf{z}}$\n$\\newcommand{\\bv}{\\mathbf{v}}$\n$\\newcommand{\\bq}{\\mathbf{q}}$\n$\\newcommand{\\bp}{\\mathbf{p}}$\n$\\newcommand{\\bh}{\\mathbf{h}}$\n$\\newcommand{\\bI}{\\mathbf{I}}$\n$\\newcommand{\\bX}{\\mathbf{X}}$\n$\\newcommand{\\bT}{\\mathbf{T}}$\n$\\newcommand{\\bPhi}{\\mathbf{\\Phi}}$\n$\\newcommand{\\bW}{\\mathbf{W}}$\n$\\newcommand{\\bV}{\\mathbf{V}}$\n\n\n```python\n%pylab inline\nplt.rcParams[\"figure.figsize\"] = [9,5]\n```\n\n Populating the interactive namespace from numpy and matplotlib\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt \n```\n\n# Part 
1. Multiclass logistic regression\n\nScenario: you have a friend with one big problem: she's completely blind. You decided to help her: she has a special smartphone for blind people, and you are going to develop a mobile phone app that can do _machine vision_ using the mobile camera: converting a picture (from the camera) to the meaning of the image. You decide to start with an app that can read handwritten digits, i.e. convert an image of handwritten digits to text (e.g. it would enable her to read precious handwritten phone numbers).\n\nA key building block for such an app would be a function `predict_digit(x)` that returns the digit class of an image patch $\\bx$. Since hand-coding this function is highly non-trivial, you decide to solve this problem using machine learning, such that the internal parameters of this function are automatically learned using machine learning techniques.\n\nThe dataset you're going to use for this is the MNIST handwritten digits dataset (`http://yann.lecun.com/exdb/mnist/`). You can download the data with scikit learn, and load it as follows:\n\n\n```python\nfrom sklearn.datasets import fetch_mldata\n# Fetch the data\nmnist = fetch_mldata('MNIST original')\ndata, target = mnist.data, mnist.target.astype('int')\n# Shuffle\nindices = np.arange(len(data))\nnp.random.seed(123)\nnp.random.shuffle(indices)\ndata, target = data[indices].astype('float32'), target[indices]\n\n# Normalize the data between 0.0 and 1.0:\ndata /= 255. \n\n# Split\nx_train, x_valid, x_test = data[:50000], data[50000:60000], data[60000: 70000]\nt_train, t_valid, t_test = target[:50000], target[50000:60000], target[60000: 70000]\n```\n\nMNIST consists of small 28 by 28 pixel images of written digits (0-9). We split the dataset into a training, validation and testing arrays. The variables `x_train`, `x_valid` and `x_test` are $N \\times M$ matrices, where $N$ is the number of datapoints in the respective set, and $M = 28^2 = 784$ is the dimensionality of the data. 
The second set of variables `t_train`, `t_valid` and `t_test` contain the corresponding $N$-dimensional vector of integers, containing the true class labels.\n\nHere's a visualisation of the first 8 digits of the trainingset:\n\n\n```python\ndef plot_digits(data, num_cols, targets=None, shape=(28,28)):\n num_digits = data.shape[0]\n num_rows = int(num_digits/num_cols)\n for i in range(num_digits):\n plt.subplot(num_rows, num_cols, i+1)\n plt.imshow(data[i].reshape(shape), interpolation='none', cmap='Greys')\n if targets is not None:\n plt.title(int(targets[i]))\n plt.colorbar()\n plt.axis('off')\n plt.tight_layout()\n plt.show()\n \nplot_digits(x_train[0:40000:5000], num_cols=4, targets=t_train[0:40000:5000])\n```\n\nIn _multiclass_ logistic regression, the conditional probability of class label $j$ given the image $\\bx$ for some datapoint is given by:\n\n$ \\log p(t = j \\;|\\; \\bx, \\bb, \\bW) = \\log q_j - \\log Z$\n\nwhere $\\log q_j = \\bw_j^T \\bx + b_j$ (the log of the unnormalized probability of the class $j$), and $Z = \\sum_k q_k$ is the normalizing factor. $\\bw_j$ is the $j$-th column of $\\bW$ (a matrix of size $784 \\times 10$) corresponding to the class label, $b_j$ is the $j$-th element of $\\bb$.\n\nGiven an input image, the multiclass logistic regression model first computes the intermediate vector $\\log \\bq$ (of size $10 \\times 1$), using $\\log q_j = \\bw_j^T \\bx + b_j$, containing the unnormalized log-probabilities per class. \n\nThe unnormalized probabilities are then normalized by $Z$ such that $\\sum_j p_j = \\sum_j \\exp(\\log p_j) = 1$. This is done by $\\log p_j = \\log q_j - \\log Z$ where $Z = \\sum_i \\exp(\\log q_i)$. 
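The normalization step just described can be sketched in isolation. The snippet below is a small illustration (not part of the assignment code) of computing log p from log q with the max-shift ("log-sum-exp") trick; the function name `log_softmax` is our own choice:

```python
import numpy as np

def log_softmax(logq):
    # log p_j = log q_j - log Z, with Z = sum_i exp(log q_i).
    # Shifting by max(logq) before exponentiating avoids overflow;
    # the shift cancels exactly inside log Z.
    a = np.max(logq)
    logZ = a + np.log(np.sum(np.exp(logq - a)))
    return logq - logZ

# a naive exp(1000.) would overflow to inf, yet the shifted version is fine
logq = np.array([1000.0, 1001.0, 1002.0])
logp = log_softmax(logq)
print(np.exp(logp).sum())  # the normalized probabilities sum to 1
```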
This is known as the _softmax_ transformation, and it is also used as the last layer of many classification neural network models, to ensure that the output of the network is a normalized distribution, regardless of the values of the second-to-last layer ($\\log \\bq$).\n\n**Warning**: when computing $\\log Z$, you are likely to encounter numerical problems. Save yourself countless hours of debugging and learn the [log-sum-exp trick](https://hips.seas.harvard.edu/blog/13/01/09/computing-log-sum-exp/ \"Title\").\n\nThe network's output $\\log \\bp$ of size $10 \\times 1$ then contains the conditional log-probabilities $\\log p(t = j \\;|\\; \\bx, \\bb, \\bW)$ for each digit class $j$. In summary, the computations are done in this order:\n\n$\\bx \\rightarrow \\log \\bq \\rightarrow Z \\rightarrow \\log \\bp$\n\nGiven some dataset with $N$ independent, identically distributed datapoints, the log-likelihood is given by:\n\n$ \\mathcal{L}(\\bb, \\bW) = \\sum_{n=1}^N \\mathcal{L}^{(n)}$\n\nwhere we use $\\mathcal{L}^{(n)}$ to denote the partial log-likelihood evaluated over a single datapoint. It is important to see that the log-probability of the class label $t^{(n)}$ given the image is given by the $t^{(n)}$-th element of the network's output $\\log \\bp$, denoted by $\\log p_{t^{(n)}}$:
all the parameters, evaluated at a _single_ datapoint $n$.\n\nYou should start deriving the equations for $\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j}$ for each $j$. For clarity, we'll use the shorthand $\\delta^q_j = \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j}$.\n\nFor $j = t^{(n)}$:\n$\n\\delta^q_j\n= \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log p_j}\n\\frac{\\partial \\log p_j}{\\partial \\log q_j}\n+ \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log Z}\n\\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j} \n= \\frac{\\partial \\log q_i}{\\partial \\log q_j} - \\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j}\n= 1 - \\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j}\n$\n\nFor $j \\neq t^{(n)}$:\n$\n\\delta^q_j\n= \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log Z}\n\\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j} \n= - \\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j}\n$\n\nComplete the above derivations for $\\delta^q_j$ by furtherly developing $\\frac{\\partial \\log Z}{\\partial Z}$ and $\\frac{\\partial Z}{\\partial \\log q_j}$. Both are quite simple. 
For these it doesn't matter whether $j = t^{(n)}$ or not.\n\n\n\nWe have that\n\\begin{align*}\n\\frac{\\partial \\log Z}{\\partial Z} = \\frac{1}{Z}\n\\end{align*}\nand\n\\begin{align*}\n &\\frac{\\partial Z}{\\partial \\log q_j} \\\\\n &=\\frac{\\partial \\sum_k q_k}{\\partial \\log q_j} \\\\\n &=\\frac{\\partial \\sum_k \\exp ( \\log ( q_k) )}{\\partial \\log q_j} \\\\\n &= \\exp(\\log(q_j))\n\\end{align*}\n\nFor $j = t^{(n)}$:\n\\begin{align}\n\\delta^q_j\n&= 1 - \\frac{\\partial \\log Z}{\\partial Z} \\frac{\\partial Z}{\\partial \\log q_j} \\\\\n&= 1 - \\frac{1}{Z} \\exp(\\log(q_j)) \\\\\n&= 1 - \\frac{\\exp(\\log(q_j))}{\\sum_k q_k} \\\\\n&= 1 - \\frac{\\exp(\\log(q_j))}{\\exp(\\log(Z))} \n\\end{align}\nFor $j \\neq t^{(n)}$:\n\\begin{align}\n\\delta^q_j\n&= - \\frac{\\partial \\log Z}{\\partial Z} \\frac{\\partial Z}{\\partial \\log q_j} \\\\\n&= - \\frac{1}{Z} \\exp(\\log(q_j)) \\\\\n&= - \\frac{\\exp(\\log(q_j))}{\\sum_k q_k} \\\\\n&= \\frac{\\exp(\\log(q_j))}{\\exp(\\log(Z))} \n\\end{align}\n\n\n**Note**: we have left the exponents of logarithms for consistency with implementation.\n\nGiven your equations for computing the gradients $\\delta^q_j$ it should be quite straightforward to derive the equations for the gradients of the parameters of the model, $\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial W_{ij}}$ and $\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial b_j}$. The gradients for the biases $\\bb$ are given by:\n\n$\n\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial b_j}\n= \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j}\n\\frac{\\partial \\log q_j}{\\partial b_j}\n= \\delta^q_j\n\\cdot 1\n= \\delta^q_j\n$\n\nThe equation above gives the derivative of $\\mathcal{L}^{(n)}$ w.r.t. a single element of $\\bb$, so the vector $\\nabla_\\bb \\mathcal{L}^{(n)}$ with all derivatives of $\\mathcal{L}^{(n)}$ w.r.t. 
the bias parameters $\\bb$ is: \n\n$\n\\nabla_\\bb \\mathcal{L}^{(n)} = \\mathbf{\\delta}^q\n$\n\nwhere $\\mathbf{\\delta}^q$ denotes the vector of size $10 \\times 1$ with elements $\\mathbf{\\delta}_j^q$.\n\nThe (not fully developed) equation for computing the derivative of $\\mathcal{L}^{(n)}$ w.r.t. a single element $W_{ij}$ of $\\bW$ is:\n\n$\n\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial W_{ij}} =\n\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j}\n\\frac{\\partial \\log q_j}{\\partial W_{ij}}\n= \\mathbf{\\delta}_j^q\n\\frac{\\partial \\log q_j}{\\partial W_{ij}}\n$\n\nWhat is $\\frac{\\partial \\log q_j}{\\partial W_{ij}}$? Complete the equation above.\n\nIf you want, you can give the resulting equation in vector format ($\\nabla_{\\bw_j} \\mathcal{L}^{(n)} = ...$), like we did for $\\nabla_\\bb \\mathcal{L}^{(n)}$.\n\n\n\n$\\frac{\\partial \\log q_j}{\\partial W_{ij}} = \\frac{\\partial \\mathbf{w}_j^T \\mathbf{x} + b_j}{\\partial W_{ij}} = \n\\frac{\\partial \\sum_k^L w_{kj} x_{k} + b_k}{\\partial W_{ij}} = x_i$\n\nIf we want to use vector notation, we have then\n\n$\\nabla_{\\bw_j} \\mathcal{L}^{(n)} = \\delta_J^q \\mathbf{x} $\n\n\n### 1.1.2 Implement gradient computations (10 points)\n\nImplement the gradient calculations you derived in the previous question. Write a function `logreg_gradient(x, t, w, b)` that returns the gradients $\\nabla_{\\bw_j} \\mathcal{L}^{(n)}$ (for each $j$) and $\\nabla_{\\bb} \\mathcal{L}^{(n)}$, i.e. the first partial derivatives of the log-likelihood w.r.t. 
the parameters $\\bW$ and $\\bb$, evaluated at a single datapoint (`x`, `t`).\nThe computation will contain roughly the following intermediate variables:\n\n$\n\\log \\bq \\rightarrow Z \\rightarrow \\log \\bp\\,,\\, \\mathbf{\\delta}^q\n$\n\nfollowed by computation of the gradient vectors $\\nabla_{\\bw_j} \\mathcal{L}^{(n)}$ (contained in a $784 \\times 10$ matrix) and $\\nabla_{\\bb} \\mathcal{L}^{(n)}$ (a $10 \\times 1$ vector).\n\nFor maximum points, ensure the function is numerically stable.\n\n\n\n```python\n# 1.1.2 Compute gradient of log p(t|x;w,b) wrt w and b\ndef logreg_gradient(x, t, w, b):\n # YOUR CODE HERE\n logq = np.dot(x, w) + b\n \n #log exp trick for numerical stability of logZ calculation\n a = np.max(logq)\n logZ = a + np.log(np.sum(np.exp(logq - a)))\n \n logp = logq - logZ\n Z = np.exp(logZ)\n deltaq = -np.exp(logq) / Z\n # here deltaq is a matrix of dimension 1x10 \n deltaq[0, t] += 1\n dL_db = deltaq\n dL_dw = np.outer(x, deltaq)\n return logp[:,t].squeeze(), dL_dw, dL_db.squeeze()\n\n```\n\n\n```python\nnp.random.seed(123)\n# scalar, 10 X 768 matrix, 10 X 1 vector\nw = np.random.normal(size=(28*28,10), scale=0.001)\n# w = np.zeros((784,10))\nb = np.zeros((10,))\n\n# test gradients, train on 1 sample\nlogpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)\n\nprint(\"Test gradient on one point\")\nprint(\"Log Likelihood:\\t\", logpt)\nprint(\"\\nGrad_W_ij\\t\",grad_w.shape,\"matrix\")\nprint(\"Grad_W_ij[0,152:158]=\\t\", grad_w[152:158,0])\nprint(\"\\nGrad_B_i shape\\t\",grad_b.shape,\"vector\")\nprint(\"Grad_B_i=\\t\", grad_b.T)\nprint(\"i in {0,...,9}; j in M\")\n\nassert logpt.shape == (), logpt.shape\nassert grad_w.shape == (784, 10), grad_w.shape\nassert grad_b.shape == (10,), grad_b.shape\n\n\n\n```\n\n Test gradient on one point\n Log Likelihood:\t -2.2959726720744777\n \n Grad_W_ij\t (784, 10) matrix\n Grad_W_ij[0,152:158]=\t [-0.04518971 -0.06758809 -0.07819784 -0.09077237 -0.07584012 -0.06365855]\n \n Grad_B_i 
shape\t (10,) vector\n Grad_B_i=\t [-0.10020327 -0.09977827 -0.1003198 0.89933657 -0.10037941 -0.10072863\n -0.09982729 -0.09928672 -0.09949324 -0.09931994]\n i in {0,...,9}; j in M\n\n\n\n```python\n# It's always good to check your gradient implementations with finite difference checking:\n# Scipy provides the check_grad function, which requires flat input variables.\n# So we write two helper functions that provide can compute the gradient and output with 'flat' weights:\nfrom scipy.optimize import check_grad\n\nnp.random.seed(123)\n# scalar, 10 X 768 matrix, 10 X 1 vector\nw = np.random.normal(size=(28*28,10), scale=0.001)\n# w = np.zeros((784,10))\nb = np.zeros((10,))\n\ndef func(b):\n logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)\n return logpt\ndef grad(b):\n logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)\n return grad_b.flatten()\nfinite_diff_error = check_grad(func, grad, b)\nprint('Finite difference error grad_b:', finite_diff_error)\nassert finite_diff_error < 1e-3, 'Your gradient computation for b seems off'\n\n\n\ndef func(w):\n logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w.reshape(784,10), b)\n return logpt\ndef grad(w):\n logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w.reshape(784,10), b)\n return grad_w.flatten()\nfinite_diff_error = check_grad(func, grad, w.flatten())\nprint('Finite difference error grad_w:', finite_diff_error)\nassert finite_diff_error < 1e-3, 'Your gradient computation for w seems off'\n\n\n```\n\n Finite difference error grad_b: 5.23511748609e-08\n Finite difference error grad_w: 6.3612946893e-07\n\n\n\n### 1.1.3 Stochastic gradient descent (10 points)\n\nWrite a function `sgd_iter(x_train, t_train, w, b)` that performs one iteration of stochastic gradient descent (SGD), and returns the new weights. 
It should go through the trainingset once in randomized order, call `logreg_gradient(x, t, w, b)` for each datapoint to get the gradients, and update the parameters **using a small learning rate of `1E-6`**. Note that in this case we're maximizing the likelihood function, so we should actually performing gradient ___ascent___... For more information about SGD, see Bishop 5.2.4 or an online source (i.e. https://en.wikipedia.org/wiki/Stochastic_gradient_descent)\n\n\n```python\ndef sgd_iter(x_train, t_train, W, b):\n p = np.random.permutation(x_train.shape[0])\n xp, tp = x_train[p], t_train[p]\n logp_train = 0\n lr = 1E-6\n N = x_train.shape[0]\n for i in range(N):\n x, t = xp[i], tp[i]\n # convert from column vector (784,) to row vector (1, 784)\n x = x[np.newaxis]\n logp, dw, db = logreg_gradient(x, t, W, b)\n logp_train += logp\n W += lr*dw\n b += lr*db\n return logp_train / N, W , b\n```\n\n\n```python\n# Sanity check:\nnp.random.seed(1243)\nw = np.zeros((28*28, 10))\nb = np.zeros(10)\n \nlogp_train, W, b = sgd_iter(x_train[:5], t_train[:5], w, b)\n\n```\n\n## 1.2. Train\n\n### 1.2.1 Train (10 points)\nPerform 10 SGD iterations through the trainingset. 
Plot (in one graph) the conditional log-probability of the trainingset and validation set after each iteration.\n\n\n\n```python\n# Function for just calculating the log probabilities\n# Function is equivalent to SGD with learning rate 0 (so, without updating weights)\ndef calc_prob(x_train, t_train, W, b):\n logp_train = 0\n lr = 1E-6\n N = x_train.shape[0]\n for i in range(N):\n x, t = x_train[i], t_train[i]\n # convert from column vector (784,) to row vector (1, 784)\n x = x[np.newaxis] \n logp, dw, db = logreg_gradient(x, t, W, b)\n logp_train += logp\n return logp_train / N\n```\n\n\n```python\ndef test_sgd(x_train, t_train, w, b):\n #list of log probabilities\n tlist = []\n vlist = []\n for i in range(10):\n logp_train, w, b = sgd_iter(x_train, t_train, w, b)\n tlist.append(logp_train)\n logp_valid = calc_prob(x_valid, t_valid, w, b)\n vlist.append(logp_valid)\n \n return w, b, tlist, vlist\n\nnp.random.seed(1243)\nw = np.zeros((28*28, 10))\nb = np.zeros(10)\nw,b, tlist, vlist = test_sgd(x_train, t_train, w, b)\n\ntrain, = plt.plot(tlist, 'r', label='Train')\ntest, = plt.plot(vlist, 'g', label='Validation')\nplt.legend(handles=[train, test])\n```\n\n### 1.2.2 Visualize weights (10 points)\nVisualize the resulting parameters $\\bW$ after a few iterations through the training set, by treating each column of $\\bW$ as an image. If you want, you can use or edit the `plot_digits(...)` above.\n\n\n\n```python\nplot_digits(w.T, num_cols=2)\n```\n\n**Describe in less than 100 words why these weights minimize the loss**\n\nFirst of all, the weights have been optimized by SGD in order to minimize the loss. We have 10 sets of weights, one for every class. We can see that for every class, the corresponding weights are higher for the features of the pixels most used for that class. Even more, the weights resemble the training numbers.\n\n### 1.2.3. 
Visualize the 8 hardest and 8 easiest digits (10 points)\nVisualize the 8 digits in the validation set with the highest probability of the true class label under the model.\nAlso plot the 8 digits that were assigned the lowest probability.\nAsk yourself if these results make sense.\n\n\n```python\nN = x_valid.shape[0]\nlogs = np.zeros(N)\nfor i in range(N):\n x, t = x_valid[i], t_valid[i]\n x = x[np.newaxis]\n logp, _, _ = logreg_gradient(x, t, w, b)\n logs[i] = logp\n\nlogs_min = np.argsort(logs)\nlogs_max = np.argsort(-logs)\n\nplot_digits(x_valid[logs_min[:8]], num_cols=4, targets=t_valid[logs_min[:8]])\nplot_digits(x_valid[logs_max[:8]], num_cols=4, targets=t_valid[logs_max[:8]])\n```\n\n# Part 2. Multilayer perceptron\n\n\nYou discover that the predictions by the logistic regression classifier are not good enough for your application: the model is too simple. You want to increase the accuracy of your predictions by using a better model. For this purpose, you're going to use a multilayer perceptron (MLP), a simple kind of neural network. The perceptron will have a single hidden layer $\bh$ with $L$ elements. The parameters of the model are $\bV$ (connections between input $\bx$ and hidden layer $\bh$), $\ba$ (the biases/intercepts of $\bh$), $\bW$ (connections between $\bh$ and $\log q$) and $\bb$ (the biases/intercepts of $\log q$).\n\nThe conditional probability of the class label $j$ is given by:\n\n$\log p(t = j \;|\; \bx, \bb, \bW) = \log q_j - \log Z$\n\nwhere $q_j$ are again the unnormalized probabilities per class, and $Z = \sum_j q_j$ is again the probability normalizing factor. Each $q_j$ is computed using:\n\n$\log q_j = \bw_j^T \bh + b_j$\n\nwhere $\bh$ is an $L \times 1$ vector with the hidden layer activations (of a hidden layer with size $L$), and $\bw_j$ is the $j$-th column of $\bW$ (an $L \times 10$ matrix). 
Each element of the hidden layer is computed from the input vector $\bx$ using:\n\n$h_j = \sigma(\bv_j^T \bx + a_j)$\n\nwhere $\bv_j$ is the $j$-th column of $\bV$ (a $784 \times L$ matrix), $a_j$ is the $j$-th element of $\ba$, and $\sigma(.)$ is the so-called sigmoid activation function, defined by:\n\n$\sigma(x) = \frac{1}{1 + \exp(-x)}$\n\nNote that this model is almost equal to the multiclass logistic regression model, but with an extra 'hidden layer' $\bh$. The activations of this hidden layer can be viewed as features computed from the input, where the feature transformation ($\bV$ and $\ba$) is learned.\n\n## 2.1 Derive gradient equations (20 points)\n\nState (briefly) why $\nabla_{\bb} \mathcal{L}^{(n)}$ is equal to the earlier (multiclass logistic regression) case, and why $\nabla_{\bw_j} \mathcal{L}^{(n)}$ is almost equal to the earlier case.\n\nLike in multiclass logistic regression, you should use intermediate variables $\mathbf{\delta}_j^q$. In addition, you should use intermediate variables $\mathbf{\delta}_j^h = \frac{\partial \mathcal{L}^{(n)}}{\partial h_j}$.\n\nGiven an input image, roughly the following intermediate variables should be computed:\n\n$\n\log \bq \rightarrow Z \rightarrow \log \bp \rightarrow \mathbf{\delta}^q \rightarrow \mathbf{\delta}^h\n$\n\nwhere $\mathbf{\delta}_j^h = \frac{\partial \mathcal{L}^{(n)}}{\partial h_j}$.\n\nGive the equations for computing $\mathbf{\delta}^h$, and for computing the derivatives of $\mathcal{L}^{(n)}$ w.r.t. $\bW$, $\bb$, $\bV$ and $\ba$. 
\n\nYou can use the convenient fact that $\frac{\partial}{\partial x} \sigma(x) = \sigma(x) (1 - \sigma(x))$.\n\nThe values of $\delta_j^q$ are the same as for logistic regression.\n\n$ \frac{\partial }{\partial b_j} \mathcal{L}^{(n)} = \frac{\partial \mathcal{L}^{(n)}}{\partial \log q_j} \frac{\partial \log q_j}{\partial b_j } = \delta_j^q \frac{\partial (\bw_j^T \bh + b_j)}{\partial b_j} = \delta_j^q$\n\nIn vector form, we then have\n\n$\frac{\partial }{\partial \bb} \mathcal{L}^{(n)} = \boldsymbol{\delta}^q$\n\nFor $\bW$, we have\n\n$ \frac{\partial }{\partial W_{ij}} \mathcal{L}^{(n)} = \frac{\partial \mathcal{L}^{(n)}}{\partial \log q_j} \frac{\partial \log q_j}{\partial W_{ij} } = \delta_j^q \frac{\partial (\bw_j^T \bh + b_j)}{\partial W_{ij}} = \delta_j^q \frac{\partial ((\sum_k w_{kj} h_k) + b_j)}{\partial W_{ij}} = \delta_j^q h_i$\n\nwhich, in vector form, can be written as\n\n$\frac{\partial \mathcal{L}^{(n)}}{\partial \bW} = \bh (\boldsymbol{\delta}^q)^T$\n\nFor $\delta_i^h$ we have\n\n$\delta_i^h = \frac{\partial}{\partial h_i} \mathcal{L}^{(n)} = \sum_j \frac{\partial \mathcal{L}^{(n)}}{\partial \log q_j} \frac{\partial \log q_j}{\partial h_i} = \sum_j \delta_j^q \frac{\partial (\bw_j^T \bh + b_j)}{\partial h_i} = \sum_j \delta_j^q \frac{\partial ((\sum_k w_{kj} h_k ) + b_j)}{\partial h_i} = \sum_j \delta_j^q w_{ij} = \bw_{i, :} \boldsymbol{\delta}^q$\n\nwhere $\bw_{i, :}$ denotes the $i^{th}$ row of $\bW$. In vectorized form, we have\n\n$\boldsymbol{\delta}^h = \bW \boldsymbol{\delta}^q$\n\n$ \frac{\partial }{\partial a_i} \mathcal{L}^{(n)} = \frac{\partial \mathcal{L}^{(n)}}{\partial h_i} \frac{\partial h_i}{\partial a_i } = \delta_i^h \frac{\partial \sigma (\bv_i^T \bx + a_i) }{\partial a_i} = \delta_i^h \cdot \sigma(\bv_i^T \bx + a_i) \cdot (1 - \sigma(\bv_i^T \bx + a_i))$\n\nIn vector form, we then have\n\n$\frac{\partial }{\partial \ba} \mathcal{L}^{(n)} = 
\\boldsymbol{\\delta}^h * \\sigma(\\bV^T \\bx + a) * (1 - \\sigma(\\bV^T \\bx + a))$\n\nwhere we use the symbol $*$ for denoting the element wise multiplication between two vectors\n\nfor $\\bV$, we have\n\n$ \\frac{\\partial }{\\partial V_{ki}} \\mathcal{L}^{(n)} = \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial h_i} \\frac{\\partial h_i}{\\partial V_{ki} } = \\delta_i^h \\frac{\\partial \\sigma (\\bv_i^T \\bx + a_i) }{\\partial V_{ki}} = \\delta_i^h \\cdot \\sigma(\\bv_i^T \\bx + a_i) \\cdot (1 - \\sigma(\\bv_i^T \\bx + a_i)) \\frac{\\partial (\\sum_m w_{mi}x_m)) + a_i}{\\partial V_{ki}} = \\delta_i^h \\cdot \\sigma(\\bv_i^T \\bx + a_i) \\cdot (1 - \\sigma(\\bv_i^T \\bx + a_i)) x_k$\n\nwhich, in vector form, can be written as\n\n$\\frac{\\partial }{\\partial V} = \\bx \\big( \\boldsymbol{\\delta}^h * \\sigma(\\bV^T \\bx + \\ba) * (1 - \\sigma(\\bV^T \\bx + \\ba)) \\big)^T $ \n\nwhere we use the symbol $*$ for denoting the element wise multiplication between two vectors\n\n\n\n## 2.2 MAP optimization (10 points)\n\nYou derived equations for finding the _maximum likelihood_ solution of the parameters. Explain, in a few sentences, how you could extend this approach so that it optimizes towards a _maximum a posteriori_ (MAP) solution of the parameters, with a Gaussian prior on the parameters. \n\nWe know that introducing a prior $p(\\theta | \\alpha) = \\mathcal{N}(0, \\alpha^{-1} I)$ over vector of parameters $\\theta$ corresponds to adding the term $- \\frac{\\alpha}{2} \\mathcal{\\theta}^T\\mathcal{\\theta}$ to the function we are optimizing, in this case to $\\mathcal{L}^{(n)}$. 
This is because with MAP there is an additional term coming from the log of the prior.\nWe assume a different Gaussian prior for $\bW$ and $\bV$:\n\n\begin{align}\n&p(\mathbf{V} | \alpha_1) = \mathcal{N}(0, \alpha_1^{-1} I) \\\n&p(\mathbf{W} | \alpha_2) = \mathcal{N}(0, \alpha_2^{-1} I) \\\n\end{align}\n\nThis results in adding the term $ -\frac{\alpha_1}{2} \|\bV\|^2 -\frac{\alpha_2}{2} \|\bW\|^2$ to $\mathcal{L}^{(n)}$.\nNote that mathematically we should treat $\bW$ and $\bV$ as vectors here; in practice the update is applied element by element anyway.\nThe updates that we get are\n\n$\bW = \bW + \eta ( \frac{\partial }{\partial \bW} \mathcal{L}^{(n)} - \alpha_2 \bW)$\n\n$\bV = \bV + \eta ( \frac{\partial }{\partial \bV} \mathcal{L}^{(n)} - \alpha_1 \bV)$\n\nwhere $\eta$ is the learning rate. Thus, MAP estimation with SGD amounts to weight decay on the parameters.\n\n## 2.3. Implement and train a MLP (15 points)\n\nImplement a MLP model with a single hidden layer of ** neurons**. \nTrain the model for **10 epochs**.\nPlot (in one graph) the conditional log-probability of the training set and validation set after every two iterations, as well as the weights.\n\n- 10 points: Working MLP that learns with plots\n- +5 points: Fast, numerically stable, vectorized implementation\n\n\n```python\ndef sigmoid(x):\n return 1. / (1. 
+ np.exp(-x))\n\ndef forward(x, V, a, W, b):\n h = sigmoid(V.transpose().dot(x) + a)\n logq = W.transpose().dot(h) + b\n aa = np.max(logq)\n logZ = aa + np.log(np.sum(np.exp(logq - aa)))\n logp = logq - logZ\n return logp, logq, logZ, h\n\ndef backward(x, h, t, V, a, W, b, logq, logZ):\n # here we are using properties of exponentials and logarithms.\n # equivalent to -np.exp(logq) / np.exp(logZ)\n deltaq = - np.exp(logq - logZ)\n deltaq[t] += 1\n db = deltaq\n dw = np.outer(h, deltaq)\n deltah = W.dot(deltaq)\n sigm2 = h*(1-h)\n da = deltah * sigm2\n dv = np.outer(x, deltah * sigm2)\n return dv, da, dw, db\n\n\ndef sgd_iter_train(x_train, t_train, V, a, W, b, lr=1E-2):\n p = np.random.permutation(x_train.shape[0])\n xp, tp = x_train[p], t_train[p]\n logp_train = 0\n for i in range(xp.shape[0]):\n x, t = xp[i], tp[i]\n logp, logq, logZ, h = forward(x, V, a, W, b)\n dv, da, dw, db = backward(x, h, t, V, a, W, b, logq, logZ)\n logp_train += logp[t]\n V += lr*dv\n a += lr*da\n W += lr*dw\n b += lr*db\n return logp_train / x_train.shape[0], V, a, W, b\n\ndef sgd_iter_loss(x_train, t_train, V, a, W, b, lr=1E-2):\n p = np.random.permutation(x_train.shape[0])\n xp, tp = x_train[p], t_train[p]\n logp_train = 0\n for i in range(xp.shape[0]):\n x, t = xp[i], tp[i]\n logp, logq, logZ, h = forward(x, V, a, W, b)\n logp_train += logp[t]\n return logp_train / x_train.shape[0], V, a, W, b\n\ndef test_sgd(x_train, t_train, x_valid, t_valid, V, a, W, b):\n tlist = []\n vlist = []\n for i in range(10):\n logp_train, V, a, W, b = sgd_iter_train(x_train, t_train, V, a, W, b)\n tlist.append(logp_train)\n logp_valid, _, _, _, _ = sgd_iter_loss(x_valid, t_valid, V, a, W, b, lr=0)\n vlist.append(logp_valid)\n return tlist, vlist, V, a, W, b\n\n```\n\n\n```python\nL = 20\nV = np.random.normal(np.zeros((x_train.shape[1], L)), 0.1)\na = np.random.normal(np.zeros(L), 0.1)\nW = np.random.normal(np.zeros((L, 10)), 0.1)\nb = np.random.normal(np.zeros(10), 0.1)\n\ntloglike, vloglike, V, a, W, b = 
test_sgd(x_train, t_train, x_valid, t_valid, V, a, W, b)\n```\n\n\n```python\ntrain, = plt.plot([x for i,x in enumerate(tloglike) if i % 2 == 0], 'r', label='Train')\nvalid, = plt.plot([x for i,x in enumerate(vloglike) if i % 2 == 0], 'g', label='Validation')\nplt.legend(handles=[train, valid])\nplt.show()\nplot_digits(V.T, num_cols=5)\n```\n\n### 2.3.1. Explain the weights (5 points)\nIn less than 80 words, explain how and why the weights of the hidden layer of the MLP differ from the logistic regression model, and relate this to the stronger performance of the MLP.\n\nIn logistic regression the weights act directly on the input pixels, so each class has a single linear template. In the MLP, the output weights instead act on the hidden-layer activations, which are learned nonlinear functions of the input: the hidden layer is an intermediate representation that is itself optimized during training. This allows the MLP to learn which parts of the input are most informative for classification, hence the stronger performance.\n\n### 2.3.2. Less than 250 misclassifications on the test set (10 bonus points)\n\nYou receive an additional 10 bonus points if you manage to train a model with very high accuracy: at most 2.5% misclassified digits on the test set. Note that the test set contains 10000 digits, so your model should misclassify at most 250 digits. This should be achievable with a MLP model with one hidden layer. See results of various models at `http://yann.lecun.com/exdb/mnist/index.html`. To reach such a low error rate, you probably need a very high $L$ (many hidden units), probably $L > 0$, and a strong Gaussian prior on the weights. 
In this case you are allowed to use the validation set for training.\nYou are allowed to add additional layers, and use convolutional networks, although that is probably not required to reach 2.5% misclassifications.\n\n\n```python\nL = 300\nnp.random.seed(1234)\nV = np.random.normal(np.zeros((x_train.shape[1], L)), 0.001)\na = np.zeros(L)\nW = np.random.normal(np.zeros((L, 10)), 0.001)\nb = np.zeros(10)\n\ndef sigmoid(x):\n return 1. / (1. + np.exp(-x))\n\ndef forward(x, V, a, W, b):\n h = sigmoid(V.transpose().dot(x) + a)\n logq = W.transpose().dot(h) + b\n aa = np.max(logq)\n logZ = aa + np.log(np.sum(np.exp(logq - aa)))\n logp = logq - logZ\n return logp, logq, logZ, h\n\ndef backward(x, h, t, V, a, W, b, logq, logZ):\n deltaq = - np.exp(logq - logZ)\n deltaq[t] += 1\n db = deltaq\n dw = np.outer(h, deltaq)\n deltah = W.dot(deltaq)\n sigm2 = h*(1-h)\n da = deltah * sigm2\n dv = np.outer(x, deltah * sigm2)\n return dv, da, dw, db\n\ndef sgd_iter_train(x_train, t_train, V, a, W, b, lr=1E-1):\n N = x_train.shape[0]\n p = np.random.permutation(N)\n xp, tp = x_train[p], t_train[p]\n logp_train = 0\n for i in range(N):\n x, t = xp[i], tp[i]\n logp, logq, logZ, h = forward(x, V, a, W, b)\n dv, da, dw, db = backward(x, h, t, V, a, W, b, logq, logZ)\n logp_train += logp[t]\n V += lr*dv\n a += lr*da\n W += lr*dw\n b += lr*db\n return logp_train/N, V, a, W, b\n\ndef sgd_iter_loss(x_train, t_train, V, a, W, b, lr=1E-3):\n N = x_train.shape[0]\n p = np.random.permutation(N)\n xp, tp = x_train[p], t_train[p]\n logp_train = 0\n for i in range(N):\n x, t = xp[i], tp[i]\n logp, logq, logZ, h = forward(x, V, a, W, b)\n logp_train += logp[t]\n return logp_train/N, V, a, W, b\n\ndef test_sgd(x_train, t_train, x_valid, t_valid, V, a, W, b):\n tlist = []\n vlist = []\n for i in range(13):\n logp_train, V, a, W, b = sgd_iter_train(x_train, t_train, V, a, W, b)\n tlist.append(logp_train)\n logp_valid, _, _, _, _ = 
sgd_iter_loss(x_test, t_test, V, a, W, b, lr=0)\n vlist.append(logp_valid)\n return tlist, vlist, V, a, W, b\n\n\ntloglike, vloglike, V, a, W, b = test_sgd(data[:60000], target[:60000], x_test, t_test, V, a, W, b)\n```\n\n\n```python\npredict_test = np.zeros(len(t_test))\n\nfor i in range(x_test.shape[0]):\n x, t = x_test[i], t_test[i]\n logp, logq, logZ, h = forward(x, V, a, W, b)\n predict_test[i] = np.argmax(logp)\n```\n\n\n```python\nassert predict_test.shape == t_test.shape\nn_errors = np.sum(predict_test != t_test)\nprint('Test errors: %d' % n_errors)\n```\n\n Test errors: 239\n\n\n\n```python\nplot_digits(V.T[:20], num_cols=5)\n```\n
```\n# this mounts your Google Drive to the Colab VM.\nfrom google.colab import drive\ndrive.mount('/content/drive', force_remount=True)\n\n# enter the foldername in your Drive where you have saved the unzipped\n# assignment folder, e.g. 'cs231n/assignments/assignment3/'\nFOLDERNAME = 'cs231n/assignment2'\nassert FOLDERNAME is not None, \"[!] Enter the foldername.\"\n\n# now that we've mounted your Drive, this ensures that\n# the Python interpreter of the Colab VM can load\n# python files from within it.\nimport sys\nsys.path.append('/content/drive/My Drive/{}'.format(FOLDERNAME))\n\n# this downloads the CIFAR-10 dataset to your Drive\n# if it doesn't already exist.\n%cd drive/My\\ Drive/$FOLDERNAME/cs231n/datasets/\n!bash get_datasets.sh\n%cd /content\n```\n\n Mounted at /content/drive\n\n\n# Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. \nOne idea along these lines is batch normalization which was proposed by [1] in 2015.\n\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. 
However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\n\nThe authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [1] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\n\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. 
To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n\n[1] [Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.](https://arxiv.org/abs/1502.03167)\n\n\n```\n# As usual, a bit of setup\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\ndef print_mean_std(x,axis=0):\n print(' means: ', x.mean(axis=axis))\n print(' stds: ', x.std(axis=axis))\n print() \n```\n\n =========== You can safely ignore the message below if you are NOT working on ConvolutionalNetworks.ipynb ===========\n \tYou will need to compile a Cython extension for a portion of this assignment.\n \tThe instructions to do this will be given in a section of the notebook below.\n \tThere will be an option for Colab users and another for Jupyter (local) users.\n\n\n\n```\n# Load the (preprocessed) CIFAR10 data.\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)\n```\n\n X_train: (49000, 3, 32, 32)\n y_train: (49000,)\n X_val: (1000, 3, 32, 32)\n y_val: (1000,)\n X_test: (1000, 3, 32, 32)\n y_test: (1000,)\n\n\n## Batch normalization: forward\nIn the file `cs231n/layers.py`, implement the batch normalization forward pass in 
the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.\n\nReferencing the paper linked to above in [1] may be helpful!\n\n\n```\n# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization \n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before batch normalization:')\nprint_mean_std(a,axis=0)\n\ngamma = np.ones((D3,))\nbeta = np.zeros((D3,))\n# Means should be close to zero and stds close to one\nprint('After batch normalization (gamma=1, beta=0)')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\n# Now means should be close to beta and stds close to gamma\nprint('After batch normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n```\n\n Before batch normalization:\n means: [ -2.3814598 -13.18038246 1.91780462]\n stds: [27.18502186 34.21455511 37.68611762]\n \n After batch normalization (gamma=1, beta=0)\n means: [5.32907052e-17 7.04991621e-17 1.85962357e-17]\n stds: [0.99999999 1. 1. ]\n \n After batch normalization (gamma= [1. 2. 3.] , beta= [11. 12. 13.] )\n means: [11. 12. 
13.]\n stds: [0.99999999 1.99999999 2.99999999]\n \n\n\n\n```\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n\nfor t in range(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\n\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After batch normalization (test-time):')\nprint_mean_std(a_norm,axis=0)\n```\n\n After batch normalization (test-time):\n means: [-0.03927353 -0.04349151 -0.10452686]\n stds: [1.01531399 1.01238345 0.97819961]\n \n\n\n## Batch normalization: backward\nNow implement the backward pass for batch normalization in the function `batchnorm_backward`.\n\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. 
Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\n\nOnce you have finished, run the following to numerically check your backward pass.\n\n\n```\n# Gradient check batchnorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\n#You should expect to see relative errors between 1e-13 and 1e-8\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.7029278204293723e-09\n dgamma error: 7.420414216247087e-13\n dbeta error: 2.8795057655839487e-12\n\n\n## Batch normalization: alternative backward\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper.\n\nSurprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too! 
\n\nIn the forward pass, given a set of inputs $X=\begin{bmatrix}x_1\\x_2\\...\\x_N\end{bmatrix}$, \n\nwe first calculate the mean $\mu$ and variance $v$.\nWith $\mu$ and $v$ calculated, we can calculate the standard deviation $\sigma$ and normalized data $Y$.\nThe equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$).\n\n\begin{align}\n& \mu=\frac{1}{N}\sum_{k=1}^N x_k & v=\frac{1}{N}\sum_{k=1}^N (x_k-\mu)^2 \\\n& \sigma=\sqrt{v+\epsilon} & y_i=\frac{x_i-\mu}{\sigma}\n\end{align}\n\n\n\nThe meat of our problem during backpropagation is to compute $\frac{\partial L}{\partial X}$, given the upstream gradient we receive, $\frac{\partial L}{\partial Y}.$ To do this, recall the chain rule in calculus gives us $\frac{\partial L}{\partial X} = \frac{\partial L}{\partial Y} \cdot \frac{\partial Y}{\partial X}$.\n\nThe unknown/hard part is $\frac{\partial Y}{\partial X}$. We can find this by first deriving step-by-step our local gradients at \n$\frac{\partial v}{\partial X}$, $\frac{\partial \mu}{\partial X}$,\n$\frac{\partial \sigma}{\partial v}$, \n$\frac{\partial Y}{\partial \sigma}$, and $\frac{\partial Y}{\partial \mu}$,\nand then use the chain rule to compose these gradients (which appear in the form of vectors!) appropriately to compute $\frac{\partial Y}{\partial X}$.\n\nIf it's challenging to directly reason about the gradients over $X$ and $Y$ which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\frac{\partial L}{\partial x_i}$, by relying on the Chain Rule to first calculate the intermediate $\frac{\partial \mu}{\partial x_i}, \frac{\partial v}{\partial x_i}, \frac{\partial \sigma}{\partial x_i},$ then assemble these pieces to calculate $\frac{\partial y_i}{\partial x_i}$. 
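Assembling these pieces and simplifying yields a compact closed form for the input gradient, $\frac{\partial L}{\partial x} = \frac{\gamma}{N\sigma}\big(N\,\frac{\partial L}{\partial y} - \sum_k \frac{\partial L}{\partial y_k} - \hat{x}\sum_k \frac{\partial L}{\partial y_k}\hat{x}_k\big)$ per feature column. The sketch below is a minimal, self-contained finite-difference check of that formula; the helper names `bn_forward` and `bn_backward_alt` are stand-alone illustrations, not the assignment's `batchnorm_forward`/`batchnorm_backward_alt` in `cs231n/layers.py` (which also return the `gamma`/`beta` gradients and maintain running averages):

```python
import numpy as np

def bn_forward(x, gamma, beta, eps=1e-5):
    # Training-time batchnorm forward: normalize each column, then scale and shift.
    mu = x.mean(axis=0)
    var = x.var(axis=0)  # biased variance, matching v = (1/N) * sum_k (x_k - mu)^2
    xhat = (x - mu) / np.sqrt(var + eps)
    return gamma * xhat + beta, xhat, var

def bn_backward_alt(dout, xhat, var, gamma, eps=1e-5):
    # Simplified gradient: dx = gamma/(N*sigma) * (N*dout - sum(dout) - xhat*sum(dout*xhat))
    N = dout.shape[0]
    sigma = np.sqrt(var + eps)
    return (gamma / (N * sigma)) * (
        N * dout - dout.sum(axis=0) - xhat * (dout * xhat).sum(axis=0))

# Finite-difference check of dL/dx for a random upstream gradient dout.
np.random.seed(0)
x = np.random.randn(6, 4)
gamma, beta = np.random.randn(4), np.random.randn(4)
dout = np.random.randn(6, 4)

_, xhat, var = bn_forward(x, gamma, beta)
dx = bn_backward_alt(dout, xhat, var, gamma)

h = 1e-6
dx_num = np.zeros_like(x)
for i in range(x.size):
    xp, xm = x.copy(), x.copy()
    xp.flat[i] += h
    xm.flat[i] -= h
    # central difference of L = sum(out * dout) w.r.t. this input element
    dx_num.flat[i] = np.sum((bn_forward(xp, gamma, beta)[0]
                             - bn_forward(xm, gamma, beta)[0]) * dout) / (2 * h)

print('max abs difference:', np.max(np.abs(dx - dx_num)))
```

If the two gradients agree to within finite-difference noise, the simplified formula is consistent with the staged computation-graph derivation, which is exactly the property the comparison cell further below tests.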
\n\nYou should make sure each of the intermediary gradient derivations are all as simplified as possible, for ease of implementation. \n\nAfter doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\n\n\n```\nnp.random.seed(231)\nN, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint('dx difference: ', rel_error(dx1, dx2))\nprint('dgamma difference: ', rel_error(dgamma1, dgamma2))\nprint('dbeta difference: ', rel_error(dbeta1, dbeta2))\nprint('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))\n```\n\n dx difference: 0.0\n dgamma difference: 0.0\n dbeta difference: 0.0\n speedup: 1.10x\n\n\n## Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization.\n\nConcretely, when the `normalization` flag is set to `\"batchnorm\"` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.\n\nHINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. 
If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.\n\n\n```\nnp.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\n# You should expect losses between 1e-4~1e-10 for W, \n# losses between 1e-08~1e-10 for b,\n# and losses between 1e-08~1e-09 for beta and gammas.\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n normalization='batchnorm')\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n if reg == 0: print()\n```\n\n Running check with reg = 0\n Initial loss: 2.2611955101340957\n W1 relative error: 1.10e-04\n W2 relative error: 2.85e-06\n W3 relative error: 4.05e-10\n b1 relative error: 6.66e-07\n b2 relative error: 2.22e-08\n b3 relative error: 1.01e-10\n beta1 relative error: 7.33e-09\n beta2 relative error: 1.89e-09\n gamma1 relative error: 6.96e-09\n gamma2 relative error: 1.96e-09\n \n Running check with reg = 3.14\n Initial loss: 5.884829928987633\n W1 relative error: 1.98e-06\n W2 relative error: 2.29e-06\n W3 relative error: 6.29e-10\n b1 relative error: 1.78e-07\n b2 relative error: 8.22e-07\n b3 relative error: 2.10e-10\n beta1 relative error: 6.65e-09\n beta2 relative error: 4.23e-09\n gamma1 relative error: 6.27e-09\n gamma2 relative error: 5.28e-09\n\n\n# Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.\n\n\n```\nnp.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': 
data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\nprint('Solver with batch norm:')\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True,print_every=20)\nbn_solver.train()\n\nprint('\\nSolver without batch norm:')\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=20)\nsolver.train()\n```\n\n Solver with batch norm:\n (Iteration 1 / 200) loss: 2.340975\n (Epoch 0 / 10) train acc: 0.107000; val_acc: 0.115000\n (Epoch 1 / 10) train acc: 0.312000; val_acc: 0.266000\n (Iteration 21 / 200) loss: 2.039365\n (Epoch 2 / 10) train acc: 0.386000; val_acc: 0.279000\n (Iteration 41 / 200) loss: 2.041103\n (Epoch 3 / 10) train acc: 0.495000; val_acc: 0.308000\n (Iteration 61 / 200) loss: 1.753903\n (Epoch 4 / 10) train acc: 0.530000; val_acc: 0.311000\n (Iteration 81 / 200) loss: 1.246168\n (Epoch 5 / 10) train acc: 0.588000; val_acc: 0.320000\n (Iteration 101 / 200) loss: 1.320491\n (Epoch 6 / 10) train acc: 0.623000; val_acc: 0.326000\n (Iteration 121 / 200) loss: 1.198438\n (Epoch 7 / 10) train acc: 0.689000; val_acc: 0.338000\n (Iteration 141 / 200) loss: 1.072049\n (Epoch 8 / 10) train acc: 0.725000; val_acc: 0.312000\n (Iteration 161 / 200) loss: 0.760441\n (Epoch 9 / 10) train acc: 0.777000; val_acc: 0.316000\n (Iteration 181 / 200) loss: 0.825612\n (Epoch 10 / 10) train acc: 0.799000; val_acc: 0.347000\n \n Solver without batch norm:\n (Iteration 1 / 200) loss: 2.302332\n (Epoch 0 / 10) train acc: 0.129000; val_acc: 0.131000\n (Epoch 1 / 10) train acc: 0.283000; val_acc: 
0.250000\n (Iteration 21 / 200) loss: 2.041970\n (Epoch 2 / 10) train acc: 0.316000; val_acc: 0.277000\n (Iteration 41 / 200) loss: 1.900473\n (Epoch 3 / 10) train acc: 0.373000; val_acc: 0.282000\n (Iteration 61 / 200) loss: 1.713156\n (Epoch 4 / 10) train acc: 0.390000; val_acc: 0.310000\n (Iteration 81 / 200) loss: 1.662209\n (Epoch 5 / 10) train acc: 0.434000; val_acc: 0.300000\n (Iteration 101 / 200) loss: 1.696062\n (Epoch 6 / 10) train acc: 0.536000; val_acc: 0.346000\n (Iteration 121 / 200) loss: 1.550785\n (Epoch 7 / 10) train acc: 0.530000; val_acc: 0.310000\n (Iteration 141 / 200) loss: 1.436308\n (Epoch 8 / 10) train acc: 0.622000; val_acc: 0.342000\n (Iteration 161 / 200) loss: 1.000868\n (Epoch 9 / 10) train acc: 0.654000; val_acc: 0.328000\n (Iteration 181 / 200) loss: 0.925456\n (Epoch 10 / 10) train acc: 0.726000; val_acc: 0.335000\n\n\nRun the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.\n\n\n```\ndef plot_training_history(title, label, baseline, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None):\n \"\"\"utility function for plotting training history\"\"\"\n plt.title(title)\n plt.xlabel(label)\n bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers]\n bl_plot = plot_fn(baseline)\n num_bn = len(bn_plots)\n for i in range(num_bn):\n label='with_norm'\n if labels is not None:\n label += str(labels[i])\n plt.plot(bn_plots[i], bn_marker, label=label)\n label='baseline'\n if labels is not None:\n label += str(labels[0])\n plt.plot(bl_plot, bl_marker, label=label)\n plt.legend(loc='lower center', ncol=num_bn+1) \n\n \nplt.subplot(3, 1, 1)\nplot_training_history('Training loss','Iteration', solver, [bn_solver], \\\n lambda x: x.loss_history, bl_marker='o', bn_marker='o')\nplt.subplot(3, 1, 2)\nplot_training_history('Training accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.train_acc_history, bl_marker='-o', 
bn_marker='-o')\nplt.subplot(3, 1, 3)\nplot_training_history('Validation accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o')\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n# Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\n\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.\n\n\n```\nnp.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers_ws = {}\nsolvers_ws = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers_ws[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers_ws[weight_scale] = solver\n```\n\n Running weight scale 1 / 20\n Running weight scale 2 / 20\n Running weight scale 3 / 20\n Running weight scale 4 / 20\n Running weight scale 5 / 20\n Running weight scale 6 
/ 20\n Running weight scale 7 / 20\n Running weight scale 8 / 20\n Running weight scale 9 / 20\n Running weight scale 10 / 20\n Running weight scale 11 / 20\n Running weight scale 12 / 20\n Running weight scale 13 / 20\n Running weight scale 14 / 20\n Running weight scale 15 / 20\n Running weight scale 16 / 20\n Running weight scale 17 / 20\n Running weight scale 18 / 20\n Running weight scale 19 / 20\n Running weight scale 20 / 20\n\n\n\n```\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers_ws[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers_ws[ws].train_acc_history))\n \n best_val_accs.append(max(solvers_ws[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers_ws[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers_ws[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers_ws[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', 
label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\nplt.gca().set_ylim(1.0, 3.5)\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n## Inline Question 1:\nDescribe the results of this experiment. How does the scale of weight initialization affect models with/without batch normalization differently, and why?\n\n## Answer:\nThe baseline model is much more sensitive to the weight scale and might not train at all for certain scales. The batchnorm model, on the other hand, is more robust and trainable at almost all scales. The weight scale still matters for it, but much less than for the baseline model.\n\nWhile for some scales the regular model has better training accuracy, the batchnorm model consistently generalizes better and has a higher validation accuracy. \nIt is also important to notice that for almost all scales, batchnorm converges to a better minimum than the regular model (smaller final loss).\n\n\n# Batch normalization and batch size\nWe will now run a small experiment to study the interaction of batch normalization and batch size.\n\nThe first cell will train 6-layer networks both with and without batch normalization using different batch sizes. 
The second cell will plot training accuracy and validation set accuracy over time.\n\n\n```\ndef run_batchsize_experiments(normalization_mode):\n np.random.seed(231)\n # Try training a very deep net with batchnorm\n hidden_dims = [100, 100, 100, 100, 100]\n num_train = 1000\n small_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n }\n n_epochs=10\n weight_scale = 2e-2\n batch_sizes = [5,10,50]\n lr = 10**(-3.5)\n solver_bsize = batch_sizes[0]\n\n print('No normalization: batch size = ',solver_bsize)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n solver = Solver(model, small_data,\n num_epochs=n_epochs, batch_size=solver_bsize,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n solver.train()\n \n bn_solvers = []\n for i in range(len(batch_sizes)):\n b_size=batch_sizes[i]\n print('Normalization: batch size = ',b_size)\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=normalization_mode)\n bn_solver = Solver(bn_model, small_data,\n num_epochs=n_epochs, batch_size=b_size,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n bn_solver.train()\n bn_solvers.append(bn_solver)\n \n return bn_solvers, solver, batch_sizes\n\nbatch_sizes = [5,10,50]\nbn_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('batchnorm')\n```\n\n No normalization: batch size = 5\n Normalization: batch size = 5\n Normalization: batch size = 10\n Normalization: batch size = 50\n\n\n\n```\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: 
x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 2:\nDescribe the results of this experiment. What does this imply about the relationship between batch normalization and batch size? Why is this relationship observed?\n\n## Answer:\nBatch size mostly affects the training accuracy, which can determine whether or not batch normalization performs better than the base model (on training data). Usually a larger batch size performs better on training data since it leads to a better estimate of the actual mean and variance of the data (the larger the sample gets, the closer the empirical mean and variance are to the actual ones).\n\nThe validation accuracy is almost unaffected by batch size. This is because the validation set has an altogether different mean and variance, so a better estimate of the training mean and variance does not change much.\n\n\n# Layer Normalization\nBatch normalization has proved to be effective in making networks easier to train, but the dependency on batch size makes it less useful in complex networks which have a cap on the input batch size due to hardware limitations. \n\nSeveral alternatives to batch normalization have been proposed to mitigate this problem; one such technique is Layer Normalization [2]. Instead of normalizing over the batch, we normalize over the features. In other words, when using Layer Normalization, each feature vector corresponding to a single datapoint is normalized based on the sum of all terms within that feature vector.\n\n[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. \"Layer Normalization.\" stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)\n\n## Inline Question 3:\nWhich of these data preprocessing steps is analogous to batch normalization, and which is analogous to layer normalization?\n\n1. 
Scaling each image in the dataset, so that the RGB channels for each row of pixels within an image sums up to 1.\n2. Scaling each image in the dataset, so that the RGB channels for all pixels within an image sums up to 1. \n3. Subtracting the mean image of the dataset from each image in the dataset.\n4. Setting all RGB values to either 0 or 1 depending on a given threshold.\n\n## Answer:\n2. is analogous to layer normalization, since it normalizes per feature vector, which here happens to be an image. \n3. is analogous to batch normalization, since it normalizes w.r.t. the entire dataset, which acts as a single batch.\n\n\n# Layer Normalization: Implementation\n\nNow you'll implement layer normalization. This step should be relatively straightforward, as conceptually the implementation is almost identical to that of batch normalization. One significant difference though is that for layer normalization, we do not keep track of the moving moments, and the testing phase is identical to the training phase, where the mean and variance are directly calculated per datapoint.\n\nHere's what you need to do:\n\n* In `cs231n/layers.py`, implement the forward pass for layer normalization in the function `layernorm_forward`. \n\nRun the cell below to check your results.\n* In `cs231n/layers.py`, implement the backward pass for layer normalization in the function `layernorm_backward`. \n\nRun the second cell below to check your results.\n* Modify `cs231n/classifiers/fc_net.py` to add layer normalization to the `FullyConnectedNet`. When the `normalization` flag is set to `\"layernorm\"` in the constructor, you should insert a layer normalization layer before each ReLU nonlinearity. 
\n\nRun the third cell below to run the batch size experiment on layer normalization.\n\n\n```\n# Check the training-time forward pass by checking means and variances\n# of features both before and after layer normalization \n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 =4, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before layer normalization:')\nprint_mean_std(a,axis=1)\n\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n# Means should be close to zero and stds close to one\nprint('After layer normalization (gamma=1, beta=0)')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n\ngamma = np.asarray([3.0,3.0,3.0])\nbeta = np.asarray([5.0,5.0,5.0])\n# Now means should be close to beta and stds close to gamma\nprint('After layer normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n```\n\n Before layer normalization:\n means: [-59.06673243 -47.60782686 -43.31137368 -26.40991744]\n stds: [10.07429373 28.39478981 35.28360729 4.01831507]\n \n After layer normalization (gamma=1, beta=0)\n means: [ 4.81096644e-16 -7.40148683e-17 2.22044605e-16 -5.92118946e-16]\n stds: [0.99999995 0.99999999 1. 0.99999969]\n \n After layer normalization (gamma= [3. 3. 3.] , beta= [5. 5. 5.] )\n means: [5. 5. 5. 
5.]\n stds: [2.99999985 2.99999998 2.99999999 2.99999907]\n \n\n\n\n```\n# Gradient check layernorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nln_param = {}\nfx = lambda x: layernorm_forward(x, gamma, beta, ln_param)[0]\nfg = lambda a: layernorm_forward(x, a, beta, ln_param)[0]\nfb = lambda b: layernorm_forward(x, gamma, b, ln_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = layernorm_forward(x, gamma, beta, ln_param)\ndx, dgamma, dbeta = layernorm_backward(dout, cache)\n\n# You should expect to see relative errors between 1e-12 and 1e-8\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.4336166798862177e-09\n dgamma error: 4.519489546032799e-12\n dbeta error: 2.276445013433725e-12\n\n\n# Layer Normalization and batch size\n\nWe will now run the previous batch size experiment with layer normalization instead of batch normalization. 
Compared to the previous experiment, you should see a markedly smaller influence of batch size on the training history!\n\n\n```\nln_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('layernorm')\n\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 4:\nWhen is layer normalization likely to not work well, and why?\n\n1. Using it in a very deep network\n2. Having a very small dimension of features\n3. Having a high regularization term\n\n\n## Answer:\nWhen the feature dimension is very small, layer normalization won't work well, since the mean and variance of a small feature vector aren't likely to represent the data distribution properly. 
This means that normalizing per feature vector is a bad idea as it will misrepresent the data.\n\n# PHY321: Work and Energy and conservation theorems\n**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/)**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA\n\nDate: **Feb 4, 2022**\n\nCopyright 1999-2022, [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/). 
Released under CC Attribution-NonCommercial 4.0 license\n\n## Plans for the week January 31-February 4\n\n### Monday\n\nShort repetition from last week on the work-energy theorem with examples. Discussion of momentum and angular momentum.\nReading suggestion: Taylor sections 3.1-3.4 and 4.1-4.3\n\n### Wednesday\n\nExamples and conservation of angular momentum, Taylor sections 3.4 and 3.5. \nDiscussion of exercises 5 and 6. This is also the assignment for Friday's session.\n\n### Friday\n\nSolution of exercises and discussion of homework 3. Focus is exercises 5 and 6, see end of these slides.\n\nIf you wish to read more about conservative forces, Feynman's lectures from 1963 are quite interesting.\nHe states for example that **All fundamental forces in nature appear to be conservative**.\nThis statement was made while developing his argument that *there are no nonconservative forces*.\nYou may enjoy the link to [Feynman's lecture](http://www.feynmanlectures.caltech.edu/I_14.html).\n\n## Work, Energy, Momentum and Conservation laws\n\nThe systems we studied the first three weeks have shown us how to use Newton\u2019s laws of\nmotion to determine the motion of an object based on the forces acting\non it. For some of the cases there is an underlying assumption that we can find an analytical solution to a continuous problem.\nWith a continuous problem we mean a problem where the various variables can take any value within a finite or infinite interval. \n\nUnfortunately, in many cases we\ncannot find an exact solution to the equations of motion we get from\nNewton\u2019s second law. The numerical approach, where we discretize the continuous problem, allows us however to study a much richer set of problems.\nFor problems involving Newton's laws and the various equations of motion we encounter, solving the equations numerically is the standard approach.\n\nIt allows us to focus on the underlying forces. 
Often we end up using the same numerical algorithm for different problems.\n\nHere we introduce a commonly used technique that allows us to find the\nvelocity as a function of position without finding the position as a\nfunction of time\u2014an alternate form of Newton\u2019s second law. The method\nis based on a simple principle: Instead of solving the equations of\nmotion directly, we integrate the equations of motion. Such a method\nis called an integration method. \n\nThis allows us also to introduce the **work-energy** theorem. This\ntheorem allows us to find the velocity as a function of position for\nan object even in cases when we cannot solve the equations of\nmotion. This introduces us to the concept of work and kinetic energy,\nan energy related to the motion of an object.\n\nAnd finally, we will link the work-energy theorem with the principle of conservation of energy.\n\n## The Work-Energy Theorem\n\nLet us define the kinetic energy $K$ for an object with a given velocity $\\boldsymbol{v}$ as\n\n$$\nK=\\frac{1}{2}mv^2,\n$$\n\nwhere $m$ is the mass of the object we are considering.\nWe assume also that there is a force $\\boldsymbol{F}$ acting on the given object\n\n$$\n\\boldsymbol{F}=\\boldsymbol{F}(\\boldsymbol{r},\\boldsymbol{v},t),\n$$\n\nwith $\\boldsymbol{r}$ the position and $t$ the time.\nIn general we assume the force is a function of all these variables.\nMany of the central forces in Nature, however, depend only on the\nposition. Examples are the gravitational force and the force derived\nfrom the Coulomb potential in electromagnetism.\n\n## Rewriting the Kinetic Energy\n\nLet us study the derivative of the kinetic energy with respect to time $t$. 
Its continuous form is\n\n$$\n\\frac{dK}{dt}=\\frac{1}{2}m\\frac{d(\\boldsymbol{v}\\cdot\\boldsymbol{v})}{dt}.\n$$\n\nUsing our results from exercise 3 of homework 1, we can write the derivative of a vector dot product as\n\n$$\n\\frac{dK}{dt}=\\frac{1}{2}m\\frac{d(\\boldsymbol{v}\\cdot\\boldsymbol{v})}{dt}= \\frac{1}{2}m\\left(\\frac{d\\boldsymbol{v}}{dt}\\cdot\\boldsymbol{v}+\\boldsymbol{v}\\cdot\\frac{d\\boldsymbol{v}}{dt}\\right)=m\\frac{d\\boldsymbol{v}}{dt}\\cdot\\boldsymbol{v}.\n$$\n\nWe know also that the acceleration is defined as\n\n$$\n\\boldsymbol{a}=\\frac{\\boldsymbol{F}}{m}=\\frac{d\\boldsymbol{v}}{dt}.\n$$\n\nWe can then rewrite the equation for the derivative of the kinetic energy as\n\n$$\n\\frac{dK}{dt}=m\\frac{d\\boldsymbol{v}}{dt}\\cdot\\boldsymbol{v}=\\boldsymbol{F}\\cdot\\frac{d\\boldsymbol{r}}{dt},\n$$\n\nwhere we defined the velocity as the derivative of the position with respect to time.\n\n## Discretizing\n\nLet us now discretize the above equation by letting the instantaneous terms be replaced by a discrete quantity, that is\nwe let $dK\\rightarrow \\Delta K$, $dt\\rightarrow \\Delta t$, $d\\boldsymbol{r}\\rightarrow \\Delta \\boldsymbol{r}$ and $d\\boldsymbol{v}\\rightarrow \\Delta \\boldsymbol{v}$.\n\nWe have then\n\n$$\n\\frac{\\Delta K}{\\Delta t}=m\\frac{\\Delta \\boldsymbol{v}}{\\Delta t}\\cdot\\boldsymbol{v}=\\boldsymbol{F}\\cdot\\frac{\\Delta \\boldsymbol{r}}{\\Delta t},\n$$\n\nor by multiplying out $\\Delta t$ we have\n\n$$\n\\Delta K=\\boldsymbol{F}\\cdot\\Delta \\boldsymbol{r}.\n$$\n\nWe define this quantity as the **work** done by the force $\\boldsymbol{F}$\nduring the displacement $\\Delta \\boldsymbol{r}$. 
If we study the dimensionality\nof this problem we have mass times length squared divided by time\nsquared, which is just the dimension of energy.\n\n## Difference in kinetic energy\n\nIf we now define a series of such displacements $\\Delta\\boldsymbol{r}$ we have a difference in kinetic energy at a final position $\\boldsymbol{r}_n$ and an \ninitial position $\\boldsymbol{r}_0$ given by\n\n$$\n\\Delta K=\\frac{1}{2}mv_n^2-\\frac{1}{2}mv_0^2=\\sum_{i=0}^n\\boldsymbol{F}_i\\cdot\\Delta \\boldsymbol{r},\n$$\n\nwhere $\\boldsymbol{F}_i$ are the forces acting at every position $\\boldsymbol{r}_i$.\n\nThe work done by acting with a force on a set of displacements can\nthen be expressed as the difference between the initial and final\nkinetic energies.\n\nThis defines the **work-energy** theorem.\n\n## From the discrete version to the continuous version\n\nIf we take the limit $\\Delta \\boldsymbol{r}\\rightarrow 0$, we can rewrite the sum over the various displacements in terms of an integral, that is\n\n$$\n\\Delta K=\\frac{1}{2}mv_n^2-\\frac{1}{2}mv_0^2=\\sum_{i=0}^n\\boldsymbol{F}_i\\cdot\\Delta \\boldsymbol{r}\\rightarrow \\int_{\\boldsymbol{r}_0}^{\\boldsymbol{r}_n}\\boldsymbol{F}(\\boldsymbol{r},\\boldsymbol{v},t)\\cdot d\\boldsymbol{r}.\n$$\n\nThis integral defines a path integral since it will depend on the given path we take between the two end points. We will replace the limits with the symbol $c$ in order to indicate that we take a specific contour in space when the force acts on the system. 
That is the work $W_{n0}$ between two points $\\boldsymbol{r}_n$ and $\\boldsymbol{r}_0$ is labeled as\n\n$$\nW_{n0}=\\frac{1}{2}mv_n^2-\\frac{1}{2}mv_0^2=\\int_{c}\\boldsymbol{F}(\\boldsymbol{r},\\boldsymbol{v},t)\\cdot d\\boldsymbol{r}.\n$$\n\nNote that if the force is perpendicular to the displacement, then the force does not affect the kinetic energy.\n\nLet us now study some examples of forces and how to find the velocity from the integration over a given path.\n\nThereafter we study how to evaluate an integral numerically.\n\n## Studying the Work-energy Theorem numerically\n\nIn order to apply the work-energy theorem, we will normally need to perform\na numerical integration, unless we can integrate analytically. Here we\npresent some of the simpler methods such as the **rectangle** rule, the **trapezoidal** rule and higher-order methods like the Simpson family of methods.\n\n## Example of an Electron moving along a Surface\n\nAs an example, let us consider the following case.\nWe have a classical electron which moves in the $x$-direction along a surface. 
The force from the surface is\n\n$$\n\\boldsymbol{F}(x)=-F_0\\sin{(\\frac{2\\pi x}{b})}\\boldsymbol{e}_1.\n$$\n\nThe constant $b$ represents the distance between atoms at the surface of the material, $F_0$ is a constant and $x$ is the position of the electron.\n\nUsing the work-energy theorem we can find the work $W$ done when moving an electron from a position $x_0$ to a final position $x$ through the\nintegral\n\n$$\nW=\\int_{x_0}^x F_x(x')dx' = -\\int_{x_0}^x F_0\\sin{(\\frac{2\\pi x'}{b})} dx',\n$$\n\nwhich results in\n\n$$\nW=\\frac{F_0b}{2\\pi}\\left[\\cos{(\\frac{2\\pi x}{b})}-\\cos{(\\frac{2\\pi x_0}{b})}\\right].\n$$\n\n## Finding the Velocity\n\nIf we now use the work-energy theorem we can find the velocity at a final position $x$ by setting up\nthe differences in kinetic energies between the final position and the initial position $x_0$.\n\nWe have that the work done by the force is given by the difference in kinetic energies as\n\n$$\nW=\\frac{1}{2}m\\left(v^2(x)-v^2(x_0)\\right)=\\frac{F_0b}{2\\pi}\\left[\\cos{(\\frac{2\\pi x}{b})}-\\cos{(\\frac{2\\pi x_0}{b})}\\right],\n$$\n\nand labeling $v(x_0)=v_0$ (and assuming we know the initial velocity) we have\n\n$$\nv(x)=\\pm \\sqrt{v_0^2+\\frac{F_0b}{m\\pi}\\left[\\cos{(\\frac{2\\pi x}{b})}-\\cos{(\\frac{2\\pi x_0}{b})}\\right]}.\n$$\n\nChoosing $x_0=0$ m and $v_0=0$ m/s we can simplify the above equation to\n\n$$\nv(x)=\\pm \\sqrt{\\frac{F_0b}{m\\pi}\\left[\\cos{(\\frac{2\\pi x}{b})}-1\\right]}.\n$$\n\n## Harmonic Oscillations\n\nAnother well-known force (which we will derive when we come to Harmonic\nOscillations) is the case of a sliding block attached to a wall\nthrough a spring. The block is attached to a spring with spring\nconstant $k$. The other end of the spring is attached to the wall at\nthe origin $x=0$. 
We assume the spring has an equilibrium length\n$L_0$.\n\nThe force $F_x$ from the spring on the block is then\n\n$$\nF_x=-k(x-L_0).\n$$\n\nThe position $x$ where the spring force is zero is called the equilibrium position. In our case this is\n$x=L_0$.\n\nWe can now compute the work done by this force when we move our block from an initial position $x_0$ to a final position $x$\n\n$$\nW=\\int_{x_0}^{x}F_xdx'=-k\\int_{x_0}^{x}(x'-L_0)dx'=\\frac{1}{2}k(x_0-L_0)^2-\\frac{1}{2}k(x-L_0)^2.\n$$\n\nIf we now bring back the definition of the work-energy theorem in terms of the kinetic energy we have\n\n$$\nW=\\frac{1}{2}mv^2(x)-\\frac{1}{2}mv_0^2=\\frac{1}{2}k(x_0-L_0)^2-\\frac{1}{2}k(x-L_0)^2,\n$$\n\nwhich we rewrite as\n\n$$\n\\frac{1}{2}mv^2(x)+\\frac{1}{2}k(x-L_0)^2=\\frac{1}{2}mv_0^2+\\frac{1}{2}k(x_0-L_0)^2.\n$$\n\nWhat does this mean? The total energy, which is the sum of potential and kinetic energy, is conserved.\nWow, this sounds interesting. We will analyze this next week in more detail when we study energy, momentum and angular momentum conservation.\n\n## Work-Energy Theorem and Energy Conservation\n\nWe have made the observation that energy was conserved for a force which\ndepends only on the position.\nIn particular we considered a force acting on a block \nattached to a spring with a so-called spring\nconstant $k$. The other end of the spring was attached to the wall. 
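As a small numerical aside, the spring work integral above is a nice test case for the trapezoidal rule mentioned earlier. The sketch below (not part of the original notes; the values of `k`, `L0`, `x0` and `x1` are arbitrary choices) integrates the spring force numerically and compares the result with the closed-form work.

```python
import numpy as np

# Illustrative sketch: compute the work done by the spring force
# F_x = -k(x - L0) with the trapezoidal rule and compare with the
# analytical result W = k/2*(x0 - L0)^2 - k/2*(x - L0)^2.
k, L0 = 2.0, 1.0      # spring constant and equilibrium length (arbitrary)
x0, x1 = 0.5, 2.0     # initial and final positions of the block (arbitrary)

x = np.linspace(x0, x1, 1001)
Fx = -k * (x - L0)

# Trapezoidal rule: sum of 0.5*(F_i + F_{i+1})*dx over all subintervals
W_numeric = 0.5 * np.sum((Fx[1:] + Fx[:-1]) * np.diff(x))
W_exact = 0.5 * k * (x0 - L0)**2 - 0.5 * k * (x1 - L0)**2

print(W_numeric, W_exact)  # both are -0.75 up to rounding
```

Since the spring force is linear in $x$, the trapezoidal rule is exact here up to rounding; for the sinusoidal surface force of the previous example the error would instead shrink quadratically with the step size.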
\n\nThe force $F_x$ from the spring on the block was defined as\n\n$$\nF_x=-kx.\n$$\n\nThe work done on the block due to a displacement from a position $x_0$ to $x$\n\n$$\nW=\\int_{x_0}^{x}F_xdx'=\\frac{1}{2}kx_0^2-\\frac{1}{2}kx^2.\n$$\n\n## Conservation of energy\nWith the definition of the work-energy theorem in terms of the kinetic energy we obtained\n\n$$\nW=\\frac{1}{2}mv^2(x)-\\frac{1}{2}mv_0^2=\\frac{1}{2}kx_0^2-\\frac{1}{2}kx^2,\n$$\n\nwhich we rewrote as\n\n$$\n\\frac{1}{2}mv^2(x)+\\frac{1}{2}kx^2=\\frac{1}{2}mv_0^2+\\frac{1}{2}kx_0^2.\n$$\n\nThe total energy, which is the sum of potential and kinetic energy, is conserved.\nWe will analyze this interesting result now in more detail when we study energy, momentum and angular momentum conservation.\n\nBut before we start with energy conservation, conservative forces and potential energies, we need to revisit our definitions of momentum and angular momentum.\n\n## What is a Conservative Force?\n\nA conservative force is a force whose property is that the total work\ndone in moving an object between two points is independent of the\ntaken path. This means that the work on an object under the influence\nof a conservative force, is independent on the path of the object. It\ndepends only on the spatial degrees of freedom and it is possible to\nassign a numerical value for the potential at any point. It leads to\nconservation of energy. The gravitational force is an example of a\nconservative force.\n\n## Two important conditions\n\nFirst, a conservative force depends only on the spatial degrees of freedom. 
This is a necessary condition for obtaining a path integral which is independent of path.\nThe important condition for the final work to be independent of the path is that the **curl** of the force is zero, that is\n\n$$\n\\boldsymbol{\\nabla} \\times \\boldsymbol{F}=0.\n$$\n\n## Work-energy theorem to show that energy is conserved with a conservative force\n\nThe work-energy theorem states that the work done $W$ by a force $\\boldsymbol{F}$ that moves an object from a position $\\boldsymbol{r}_0$ to a new position $\\boldsymbol{r}_1$ is\n\n$$\nW=\\int_{\\boldsymbol{r}_0}^{\\boldsymbol{r}_1}\\boldsymbol{F}\\cdot d\\boldsymbol{r}=\\frac{1}{2}mv_1^2-\\frac{1}{2}mv_0^2,\n$$\n\nwhere $v_1^2$ is the velocity squared at a time $t_1$ and $v_0^2$ the corresponding quantity at a time $t_0$.\nThe work done is thus the difference in kinetic energies. We can rewrite the above equation as\n\n$$\n\\frac{1}{2}mv_1^2=\\int_{\\boldsymbol{r}_0}^{\\boldsymbol{r}_1}\\boldsymbol{F}\\cdot d\\boldsymbol{r}+\\frac{1}{2}mv_0^2,\n$$\n\nthat is, the final kinetic energy is equal to the initial kinetic energy plus the work done by the force over a given path from a position $\\boldsymbol{r}_0$ at time $t_0$ to a final position $\\boldsymbol{r}_1$ at a later time $t_1$.\n\n## Conservation of Momentum\n\nBefore we move on, however, we need to remind ourselves about important aspects like the linear momentum and angular momentum. After these considerations, we move back to more details about conservative forces.\n\nAssume we have $N$ objects, each with velocity $\\boldsymbol{v}_i$ with\n$i=1,2,\\dots,N$ and mass $m_i$. 
The momentum of each object is\n$\boldsymbol{p}_i=m_i\boldsymbol{v}_i$ and the total linear (or mechanical) momentum is\ndefined as\n\n$$\n\boldsymbol{P}=\sum_{i=1}^N\boldsymbol{p}_i=\sum_{i=1}^Nm_i\boldsymbol{v}_i.\n$$\n\n## Two objects first\n\nLet us assume we have two objects only that interact with each other and are influenced by an external force.\n\nWe define also the total net force acting on object 1 as\n\n$$\n\boldsymbol{F}_1^{\mathrm{net}}=\boldsymbol{F}_1^{\mathrm{ext}}+\boldsymbol{F}_{12},\n$$\n\nwhere $\boldsymbol{F}_1^{\mathrm{ext}}$ is the external force\n(for example the force due to an electron moving in an electromagnetic field) and $\boldsymbol{F}_{12}$ is the\nforce between object one and two. Similarly for object 2 we have\n\n$$\n\boldsymbol{F}_2^{\mathrm{net}}=\boldsymbol{F}_2^{\mathrm{ext}}+\boldsymbol{F}_{21}.\n$$\n\n## Newton's Third Law\n\nNewton's third law, which we met earlier, states that **for every action there is an equal and opposite reaction**.\nIt is more accurately stated as\n\n**if two bodies exert forces on each other, these forces are equal in magnitude and opposite in direction**.\n\nThis means that for two bodies $i$ and $j$, if the force on $i$ due to $j$ is called $\boldsymbol{F}_{ij}$, then\n\n\n
\n\n$$\n\\begin{equation}\n\\boldsymbol{F}_{ij}=-\\boldsymbol{F}_{ji}. \n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\nFor the abovementioned two objects we have thus $\\boldsymbol{F}_{12}=-\\boldsymbol{F}_{21}$.\n\n## Newton's Second Law and Momentum\n\nWith the net forces acting on each object we can now related the momentum to the forces via\n\n$$\n\\boldsymbol{F}_1^{\\mathrm{net}}=m_1\\boldsymbol{a}_i=m_1\\frac{d\\boldsymbol{v}_1}{dt}=\\boldsymbol{F}_1^{\\mathrm{ext}}+\\boldsymbol{F}_{12},\n$$\n\nand\n\n$$\n\\boldsymbol{F}_2^{\\mathrm{net}}=m_2\\boldsymbol{a}_2=m_2\\frac{d\\boldsymbol{v}_2}{dt}=\\boldsymbol{F}_2^{\\mathrm{ext}}+\\boldsymbol{F}_{21}.\n$$\n\nRecalling our definition for the linear momentum we have then\n\n$$\n\\frac{d\\boldsymbol{p}_1}{dt}=\\boldsymbol{F}_1^{\\mathrm{ext}}+\\boldsymbol{F}_{12},\n$$\n\nand\n\n$$\n\\frac{d\\boldsymbol{p}_2}{dt}=\\boldsymbol{F}_2^{\\mathrm{ext}}+\\boldsymbol{F}_{21}.\n$$\n\n## The total Momentum\n\nThe total momentum $\\boldsymbol{P}$ is defined as the sum of the individual momenta, meaning that we can rewrite\n\n$$\n\\boldsymbol{F}_1^{\\mathrm{net}}+\\boldsymbol{F}_2^{\\mathrm{net}}=\\frac{d\\boldsymbol{p}_1}{dt}+\\frac{d\\boldsymbol{p}_2}{dt}=\\frac{d\\boldsymbol{P}}{dt},\n$$\n\nthat is the derivate with respect to time of the total momentum. If we now\nwrite the net forces as sums of the external plus internal forces\nbetween the objects we have\n\n$$\n\\frac{d\\boldsymbol{P}}{dt}=\\boldsymbol{F}_1^{\\mathrm{ext}}+\\boldsymbol{F}_{12}+\\boldsymbol{F}_2^{\\mathrm{ext}}+\\boldsymbol{F}_{21}=\\boldsymbol{F}_1^{\\mathrm{ext}}+\\boldsymbol{F}_2^{\\mathrm{ext}}.\n$$\n\nThe derivative of the total momentum is just **the sum of the external\nforces**. If we assume that the external forces are zero and that only\ninternal (here two-body forces) are at play, we obtain the important\nresult that the derivative of the total momentum is zero. 
This means\nagain that the total momentum is a constant of the motion, that is, a\nconserved quantity. This is a very important result that we will use\nin many applications to come.\n\n## Newton's Second Law\n\nLet us now generalize to $N$ objects and let us also assume that there are no external forces. We will label such a system as **an isolated system**. \n\nNewton's second law, $\boldsymbol{F}=m\boldsymbol{a}$, can be written for a particle $i$ as\n\n\n
\n\n$$\n\\begin{equation}\n\\boldsymbol{F}_i=\\sum_{j\\ne i}^N \\boldsymbol{F}_{ij}=m_i\\boldsymbol{a}_i,\n\\label{_auto2} \\tag{2}\n\\end{equation}\n$$\n\nwhere $\\boldsymbol{F}_i$ (a single subscript) denotes the net force acting on $i$ from the other objects/particles.\nBecause the mass of $i$ is fixed and we assume it does not change with time, one can see that\n\n\n
\n\n$$\n\\begin{equation}\n\\boldsymbol{F}_i=\\frac{d}{dt}m_i\\boldsymbol{v}_i=\\sum_{j\\ne i}^N\\boldsymbol{F}_{ij}.\n\\label{_auto3} \\tag{3}\n\\end{equation}\n$$\n\n## Summing over all Objects/Particles\n\nNow, one can sum over all the objects/particles and obtain\n\n$$\n\\frac{d}{dt}\\sum_i m_iv_i=\\sum_{ij, i\\ne j}^N\\boldsymbol{F}_{ij}=0.\n$$\n\nHow did we arrive at the last step? We rewrote the double sum as\n\n$$\n\\sum_{ij, i\\ne j}^N\\boldsymbol{F}_{ij}=\\sum_i^N\\sum_{j>i}\\left(\\boldsymbol{F}_{ij}+\\boldsymbol{F}_{ji}\\right),\n$$\n\nand using Newton's third law which states that\n$\\boldsymbol{F}_{ij}=-\\boldsymbol{F}_{ji}$, we obtain that the net sum over all the two-particle\nforces is zero when we only consider so-called **internal forces**.\nStated differently, the last step made use of the fact that for every\nterm $ij$, there is an equivalent term $ji$ with opposite\nforce. Because the momentum is defined as $m\\boldsymbol{v}$, for a system of\nparticles, we have thus\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{d}{dt}\\sum_im_i\\boldsymbol{v}_i=0,~~{\\rm for~isolated~particles}.\n\\label{_auto4} \\tag{4}\n\\end{equation}\n$$\n\n## Conservation of total Momentum\n\nBy \"isolated\" one means that the only force acting on any particle $i$\nare those originating from other particles in the sum, i.e. \"no\nexternal\" forces. Thus, Newton's third law leads to the conservation\nof total momentum,\n\n$$\n\\boldsymbol{P}=\\sum_i m_i\\boldsymbol{v}_i,\n$$\n\nand we have\n\n$$\n\\frac{d}{dt}\\boldsymbol{P}=0.\n$$\n\n## Example: Rocket Science\n\nConsider a rocket of mass $M$ moving with velocity $v$. After a\nbrief instant, the velocity of the rocket is $v+\\Delta v$ and the mass\nis $M-\\Delta M$. Momentum conservation gives\n\n$$\n\\begin{eqnarray*}\nMv&=&(M-\\Delta M)(v+\\Delta v)+\\Delta M(v-v_e)\\\\\n0&=&-\\Delta Mv+M\\Delta v+\\Delta M(v-v_e),\\\\\n0&=&M\\Delta v-\\Delta Mv_e.\n\\end{eqnarray*}\n$$\n\nIn the second step we ignored the term $\\Delta M\\Delta v$ because it is doubly small. The last equation gives\n\n$$\n\\begin{eqnarray}\n\\Delta v&=&\\frac{v_e}{M}\\Delta M,\\\\\n\\nonumber\n\\frac{dv}{dt}&=&\\frac{v_e}{M}\\frac{dM}{dt}.\n\\end{eqnarray}\n$$\n\n## Integrating the Equations\n\nIntegrating the expression with lower limits $v_0=0$ and $M_0$, one finds\n\n$$\n\\begin{eqnarray*}\nv&=&v_e\\int_{M_0}^M \\frac{dM'}{M'}\\\\\nv&=&v_e\\ln(M/M_0)\\\\\n&=&v_e\\ln[(M_0-\\alpha t)/M_0].\n\\end{eqnarray*}\n$$\n\nBecause the total momentum of an isolated system is constant, one can\nalso quickly see that the center of mass of an isolated system is also\nconstant. The center of mass is the average position of a set of\nmasses weighted by the mass,\n\n\n
\n\n$$\n\\begin{equation}\n\\bar{x}=\\frac{\\sum_im_ix_i}{\\sum_i m_i}.\n\\label{_auto5} \\tag{5}\n\\end{equation}\n$$\n\n## Rate of Change\n\nThe rate of change of $\\bar{x}$ is\n\n$$\n\\dot{\\bar{x}}=\\frac{1}{M}\\sum_i m_i\\dot{x}_i=\\frac{1}{M}P_x.\n$$\n\nThus if the total momentum is constant the center of mass moves at a\nconstant velocity, and if the total momentum is zero the center of\nmass is fixed.\n\n## Conservation of Angular Momentum\n\nThe angular momentum is defined as\n\n\n
\n\n$$\n\\begin{equation}\n\\boldsymbol{L}=\\boldsymbol{r}\\times\\boldsymbol{p}=m\\boldsymbol{r}\\times\\boldsymbol{v}.\n\\label{_auto6} \\tag{6}\n\\end{equation}\n$$\n\nIt means that the angular momentum is perpendicular to the plane defined by position $\\boldsymbol{r}$ and the momentum $\\boldsymbol{p}$ via $\\boldsymbol{r}\\times \\boldsymbol{p}$.\n\n## Rate of Change of Angular Momentum\n\nThe rate of change of the angular momentum is\n\n$$\n\\frac{d\\boldsymbol{L}}{dt}=m\\boldsymbol{v}\\times\\boldsymbol{v}+m\\boldsymbol{r}\\times\\dot{\\boldsymbol{v}}=\\boldsymbol{r}\\times{\\boldsymbol{F}}\n$$\n\nThe first term is zero because $\\boldsymbol{v}$ is parallel to itself, and the\nsecond term defines the so-called torque. If $\\boldsymbol{F}$ is parallel to $\\boldsymbol{r}$ then the torque is zero and we say that angular momentum is conserved.\n\nIf the force is not radial, $\\boldsymbol{r}\\times\\boldsymbol{F}\\ne 0$ as above, and angular momentum is no longer conserved,\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{d\\boldsymbol{L}}{dt}=\\boldsymbol{r}\\times\\boldsymbol{F}\\equiv\\boldsymbol{\\tau},\n\\label{_auto7} \\tag{7}\n\\end{equation}\n$$\n\nwhere $\\boldsymbol{\\tau}$ is the torque.\n\n## The Torque, Example 1 (hw 4, exercise 4)\n\nLet us assume we have an initial position $\\boldsymbol{r}_0=x_0\\boldsymbol{e}_1+y_0\\boldsymbol{e}_2$ at a time $t_0=0$.\nWe add now a force in the positive $x$-direction\n\n$$\n\\boldsymbol{F}=F_x\\boldsymbol{e}_1=\\frac{d\\boldsymbol{p}}{dt},\n$$\n\nwhere we used the force as defined by the time derivative of the momentum.\n\nWe can use this force (and its pertinent acceleration) to find the velocity via the relation\n\n$$\n\\boldsymbol{v}(t)=\\boldsymbol{v}_0+\\int_{t_0}^t\\boldsymbol{a}dt',\n$$\n\nand with $\\boldsymbol{v}_0=0$ we have\n\n$$\n\\boldsymbol{v}(t)=\\int_{t_0}^t\\frac{\\boldsymbol{F}}{m}dt',\n$$\n\nwhere $m$ is the mass of the object.\n\n## The Torque, Example 1 (hw 4, exercise 4)\n\nSince the force acts only in the $x$-direction, we have after integration\n\n$$\n\\boldsymbol{v}(t)=\\frac{\\boldsymbol{F}}{m}t=\\frac{F_x}{m}t\\boldsymbol{e}_1=v_x(t)\\boldsymbol{e}_1.\n$$\n\nThe momentum is in turn given by $\\boldsymbol{p}=p_x\\boldsymbol{e}_1=mv_x\\boldsymbol{e}_1=F_xt\\boldsymbol{e}_1$.\n\nIntegrating over time again we find the final position as (note the force depends only on the $x$-direction)\n\n$$\n\\boldsymbol{r}(t)=(x_0+\\frac{1}{2}\\frac{F_x}{m}t^2) \\boldsymbol{e}_1+y_0\\boldsymbol{e}_2.\n$$\n\nThere is no change in the position in the $y$-direction since the force acts only in the $x$-direction.\n\n## The Torque, Example 1 (hw 4, exercise 4)\n\nWe can now compute the angular momentum given by\n\n$$\n\\boldsymbol{l}=\\boldsymbol{r}\\times\\boldsymbol{p}=\\left[(x_0+\\frac{1}{2}\\frac{F_x}{m}t^2) \\boldsymbol{e}_1+y_0\\boldsymbol{e}_2\\right]\\times F_xt\\boldsymbol{e}_1.\n$$\n\nComputing the cross product we 
find\n\n$$\n\boldsymbol{l}=-y_0F_xt\boldsymbol{e}_3=-y_0F_xt\boldsymbol{e}_z.\n$$\n\nThe torque is the time derivative of the angular momentum and we have\n\n$$\n\boldsymbol{\tau}=-y_0F_x\boldsymbol{e}_3=-y_0F_x\boldsymbol{e}_z.\n$$\n\nThe torque is non-zero and angular momentum is not conserved.\n\n## The Torque, Example 2\n\nOne can write the torque about a given axis, which we will denote as $\hat{z}$, in polar coordinates, where\n\n$$\n\begin{eqnarray}\nx&=&r\sin\theta\cos\phi,~~y=r\sin\theta\sin\phi,~~z=r\cos\theta,\n\end{eqnarray}\n$$\n\nto find the $z$ component of the torque (using $\boldsymbol{F}=-\boldsymbol{\nabla}V$ for a potential $V$),\n\n$$\n\begin{eqnarray}\n\tau_z&=&xF_y-yF_x\\\n\nonumber\n&=&-r\sin\theta\left\{\cos\phi \partial_y-\sin\phi \partial_x\right\}V(x,y,z).\n\end{eqnarray}\n$$\n\n## Chain Rule and Partial Derivatives\n\nOne can use the chain rule to write the partial derivative w.r.t. $\phi$ (keeping $r$ and $\theta$ fixed),\n\n$$\n\begin{eqnarray}\n\partial_\phi&=&\frac{\partial x}{\partial\phi}\partial_x+\frac{\partial y}{\partial\phi}\partial_y\n+\frac{\partial z}{\partial\phi}\partial_z\\\n\nonumber\n&=&-r\sin\theta\sin\phi\partial_x+r\sin\theta\cos\phi\partial_y.\n\end{eqnarray}\n$$\n\nCombining the two equations,\n\n$$\n\begin{eqnarray}\n\tau_z&=&-\partial_\phi V(r,\theta,\phi).\n\end{eqnarray}\n$$\n\nThus, if the potential is independent of the azimuthal angle $\phi$,\nthere is no torque about the $z$ axis and $L_z$ is conserved.\n\n## System of Isolated Particles\n\nFor a system of isolated particles, one can write\n\n$$\n\begin{eqnarray}\n\frac{d}{dt}\sum_i\boldsymbol{L}_i&=&\sum_{i\ne j}\boldsymbol{r}_i\times \boldsymbol{F}_{ij}\\\n\nonumber\n&=&\frac{1}{2}\sum_{ij, i\ne j}\left(\boldsymbol{r}_i\times \boldsymbol{F}_{ij}+\boldsymbol{r}_j\times\boldsymbol{F}_{ji}\right)\\\n\nonumber\n&=&\frac{1}{2}\sum_{ij, i\ne j} 
(\\boldsymbol{r}_i-\\boldsymbol{r}_j)\\times\\boldsymbol{F}_{ij}=0,\n\\end{eqnarray}\n$$\n\nwhere the last step used Newton's third law,\n$\\boldsymbol{F}_{ij}=-\\boldsymbol{F}_{ji}$. If the forces between the particles are\nradial, i.e. $\\boldsymbol{F}_{ij} ~||~ (\\boldsymbol{r}_i-\\boldsymbol{r}_j)$, then each term in\nthe sum is zero and the net angular momentum is fixed. Otherwise, you\ncould imagine an isolated system that would start spinning\nspontaneously.\n\n## Homework 3, exercises 5 and 6, numerical solution\n\n\n```python\n%matplotlib inline\n\n# let's start by importing useful packages we are familiar with\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\nWe will choose the following values\n1. mass $m=0,2$ kg\n\n2. accelleration (gravity) $g=9.81$ m/s$^{2}$.\n\n3. initial position is the height $h=2$ m\n\n4. initial velocities $v_{x,0}=v_{y,0}=10$ m/s\n\nYou need also to define an initial time and \nthe step size $\\Delta t$. We can define the step size $\\Delta t$ as the difference between any\ntwo neighboring values in time (time steps) that we analyze within\nsome range. It can be determined by dividing the interval we are\nanalyzing, which in our case is time $t_{\\mathrm{final}}-t_0$, by the number of steps we\nare taking $(N)$. This gives us a step size $\\Delta t = \\dfrac{t_{\\mathrm{final}}-t_0}{N}$.\n\nWith these preliminaries we are now ready to plot our results from exercise 5.\n\nIn setting up our code we need to\n\n1. Define and obtain all initial values, constants, and time to be analyzed with step sizes as done above (you can use the same values)\n\n2. Calculate the velocity using $v_{i+1} = v_{i} + (\\Delta t)*a_{i}$\n\n3. Calculate the position using $pos_{i+1} = r_{i} + (\\Delta t)*v_{i}$\n\n4. Calculate the new acceleration $a_{i+1}$.\n\n5. 
Repeat steps 2-4 for all time steps within a loop.\n\n\n```python\n# Exercise 6, hw3, brute force way with declaration of vx, vy, x and y\n# Common imports\nimport numpy as np\nimport pandas as pd\nfrom math import *\nimport matplotlib.pyplot as plt\nimport os\n\n# Where to save the figures and data files\nPROJECT_ROOT_DIR = \"Results\"\nFIGURE_ID = \"Results/FigureFiles\"\nDATA_ID = \"DataFiles/\"\n\nif not os.path.exists(PROJECT_ROOT_DIR):\n os.mkdir(PROJECT_ROOT_DIR)\n\nif not os.path.exists(FIGURE_ID):\n os.makedirs(FIGURE_ID)\n\nif not os.path.exists(DATA_ID):\n os.makedirs(DATA_ID)\n\ndef image_path(fig_id):\n return os.path.join(FIGURE_ID, fig_id)\n\ndef data_path(dat_id):\n return os.path.join(DATA_ID, dat_id)\n\ndef save_fig(fig_id):\n plt.savefig(image_path(fig_id) + \".png\", format='png')\n\n# Output file\noutfile = open(data_path(\"Eulerresults.dat\"),'w')\n\nfrom pylab import plt, mpl\nplt.style.use('seaborn')\nmpl.rcParams['font.family'] = 'serif'\n\n\ng = 9.80655 #m/s^2\n# The mass and the drag constant D\nD = 0.00245 #mass/length kg/m\nm = 0.2 #kg, mass of falling object\nDeltaT = 0.001\n#set up arrays \ntfinal = 1.4\n# set up number of points for all variables\nn = ceil(tfinal/DeltaT)\n# define scaling constant vT used in analytical solution\nvT = sqrt(m*g/D)\n# set up arrays for t, a, v, and y and arrays for analytical results\n#brute force setting up of arrays for x and y, vx, vy, ax and ay\nt = np.zeros(n)\nvy = np.zeros(n)\ny = np.zeros(n)\nvx = np.zeros(n)\nx = np.zeros(n)\nyanalytic = np.zeros(n)\n# Initial conditions, note that these correspond to an object falling in the y-direction only.\nvx[0] = 0.0 #m/s\nvy[0] = 0.0 #m/s\ny[0] = 10.0 #m\nx[0] = 0.0 #m\nyanalytic[0] = y[0]\n# Start integrating using Euler's method\nfor i in range(n-1):\n # expression for acceleration, note the absolute value and division by mass\n# Note: you need to think of the sign for the drag force as function of the velocity vector \n ax = 
-D*vx[i]*sqrt(vx[i]**2+vy[i]**2)/m\n ay = -g - D*vy[i]*sqrt(vx[i]**2+vy[i]**2)/m\n # update velocity and position\n vx[i+1] = vx[i] + DeltaT*ax\n x[i+1] = x[i] + DeltaT*vx[i]\n vy[i+1] = vy[i] + DeltaT*ay\n y[i+1] = y[i] + DeltaT*vy[i]\n # update time to next time step and compute analytical answer\n t[i+1] = t[i] + DeltaT\n yanalytic[i+1] = y[0]-(vT*vT/g)*log(cosh(g*t[i+1]/vT))+vy[0]*t[i+1]\n if ( y[i+1] < 0.0):\n break\ndata = {'t[s]': t,\n 'Relative error in y': abs((y-yanalytic)/yanalytic),\n 'vy[m/s]': vy,\n 'x[m]': x,\n 'vx[m/s]': vx\n}\nNewData = pd.DataFrame(data)\ndisplay(NewData)\n# save to file\nNewData.to_csv(outfile, index=False)\n#then plot\nfig, axs = plt.subplots(4, 1)\naxs[0].plot(t, y)\naxs[0].set_xlim(0, tfinal)\naxs[0].set_ylabel('y')\naxs[1].plot(t, vy)\naxs[1].set_ylabel('vy[m/s]')\naxs[1].set_xlabel('time[s]')\naxs[2].plot(t, x)\naxs[2].set_xlim(0, tfinal)\naxs[2].set_ylabel('x')\naxs[3].plot(t, vx)\naxs[3].set_ylabel('vx[m/s]')\naxs[3].set_xlabel('time[s]')\nfig.tight_layout()\nsave_fig(\"EulerIntegration\")\nplt.show()\n```\n\n## A more compact version of the code\n\n\n```python\n# Exercise 6, hw3, smarter way with declaration of vx, vy, x and y\n# Common imports\nimport numpy as np\nimport pandas as pd\nfrom math import *\nimport matplotlib.pyplot as plt\nimport os\n\n# Where to save the figures and data files\nPROJECT_ROOT_DIR = \"Results\"\nFIGURE_ID = \"Results/FigureFiles\"\nDATA_ID = \"DataFiles/\"\n\nif not os.path.exists(PROJECT_ROOT_DIR):\n os.mkdir(PROJECT_ROOT_DIR)\n\nif not os.path.exists(FIGURE_ID):\n os.makedirs(FIGURE_ID)\n\nif not os.path.exists(DATA_ID):\n os.makedirs(DATA_ID)\n\ndef image_path(fig_id):\n return os.path.join(FIGURE_ID, fig_id)\n\ndef data_path(dat_id):\n return os.path.join(DATA_ID, dat_id)\n\ndef save_fig(fig_id):\n plt.savefig(image_path(fig_id) + \".png\", format='png')\n\n\nfrom pylab import plt, mpl\nplt.style.use('seaborn')\nmpl.rcParams['font.family'] = 'serif'\n\ng = 9.80655 #m/s^2 g to 6 leading 
digits after decimal point\nD = 0.00245 # drag constant, kg/m\nm = 0.2 # kg\n# Define Gravitational force as a vector in x and y. It is a constant\nG = -m*g*np.array([0.0,1])\nDeltaT = 0.01\n#set up arrays \ntfinal = 1.3\nn = ceil(tfinal/DeltaT)\n# set up arrays for t, a, v, and x\nt = np.zeros(n)\nv = np.zeros((n,2))\nr = np.zeros((n,2))\n# Initial conditions as compact 2-dimensional arrays\nr0 = np.array([0.0,10.0])\nv0 = np.array([10.0,0.0])\nr[0] = r0\nv[0] = v0\n# Start integrating using Euler's method\nfor i in range(n-1):\n # Set up forces, air resistance FD, note now that we need the norm of the vector\n # Here you could have defined your own function for this\n vabs = sqrt(sum(v[i]*v[i]))\n # Note: you need to think of the sign for the drag force\n FD = -D*v[i]*vabs\n # Final net forces acting on falling object\n Fnet = FD+G\n # The acceleration at a given time t_i\n a = Fnet/m\n # update velocity, time and position using Euler's method\n v[i+1] = v[i] + DeltaT*a\n r[i+1] = r[i] + DeltaT*v[i]\n t[i+1] = t[i] + DeltaT\n\nfig, axs = plt.subplots(4, 1)\naxs[0].plot(t, r[:,1])\naxs[0].set_xlim(0, tfinal)\naxs[0].set_ylabel('y')\naxs[1].plot(t, v[:,1])\naxs[1].set_ylabel('vy[m/s]')\naxs[1].set_xlabel('time[s]')\naxs[2].plot(t, r[:,0])\naxs[2].set_xlim(0, tfinal)\naxs[2].set_ylabel('x')\naxs[3].plot(t, v[:,0])\naxs[3].set_ylabel('vx[m/s]')\naxs[3].set_xlabel('time[s]')\n\nfig.tight_layout()\nsave_fig(\"EulerIntegration\")\nplt.show()\n```\n
```python\n# %load ../../preconfig.py\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rcParams['axes.grid'] = False\nimport seaborn as sns\nsns.set(color_codes=True)\n\nimport numpy as np\nimport pandas as pd\n#import itertools\n\nimport sklearn\n\nimport logging\nlogger = logging.getLogger()\n```\n\nExercises\n=========\n\n### Ex 2.1\n##### Question\nSuppose each of K-classes has an associated target $t_k$, which is a vector of all zeros, except a one in the $k$th position. 
Show that classifying to the largest element of $\hat{y}$ amounts to choosing the closest target, $\min_k \|t_k - \hat{y}\|$, if the elements of $\hat{y}$ sum to one.\n\n\n##### Solution\nGiven: \n$t_k = e_k$; \n$\sum_i \hat{y}_i = 1$.\n\nWe need to prove: $\text{arg max}_i \hat{y}_i = \text{arg min}_k \|t_k - \hat{y}\|$.\n\nProof: \n\n\begin{align}\n\text{arg min}_k \|t_k - \hat{y}\| &= \text{arg min}_k \left ( \displaystyle \sum_{j \neq k} \hat{y}_j + | \hat{y}_k - 1 | \right )\\\n &= \text{arg min}_k \left ( \displaystyle \sum_{j \neq k} \hat{y}_j + 1 - \hat{y}_k \right ) \\\n &= \text{arg min}_k \left ( 1 - \hat{y}_k + 1 - \hat{y}_k \right ) \\\n &= \text{arg min}_k 2(1 - \hat{y}_k) \\\n &= \text{arg max}_k \hat{y}_k\n\end{align}\n\n### Ex 2.2\n##### Question\nShow how to compute the Bayes decision boundary for the simulation example in Figure 2.5.\n\n\n##### Solution\n[ref: Elements of Statistical Learning - Andrew Tulloch](https://github.com/ajtulloch/Elements-of-Statistical-Learning/blob/master/ESL-Solutions.pdf)\n\nAs in Eq. (2.22) of the textbook:\n\begin{align}\n \hat{G}(x) &= \text{arg min}_{g \in \mathcal{G}} [ 1 - P(g | X = x) ] \\\n &= \text{arg max}_{g \in \mathcal{G}} P(g | X = x)\n\end{align}\n\nThe optimal Bayes decision boundary is where:\n\begin{align}\n P(\text{orange} | X = x) &= P(\text{blue} | X = x) \\\n \frac{P( X = x | orange ) P( orange )}{P(X = x)} &= \frac{P( X = x | blue ) P( blue )}{P(X = x)} \\\n P( X = x | orange ) P( orange ) &= P( X = x | blue ) P( blue )\n\end{align}\n\nAs described in Sec 2.3.3, $P(orange)$ is the same as $P(blue)$, and $P(X = x | orange)$ and $P(X = x | blue)$ are generated as mixtures of bivariate Gaussian distributions. Hence we can work out the optimal Bayes decision boundary exactly.\n\n### Ex 2.3\n##### Question\nDerive equation (2.24).\n\nSuppose there are $N$ points, uniformly distributed in the unit sphere in $\mathbb{R}^p$. 
What is the median distance from the origin to the closest data point? \n\n##### Solution\n[ref: Example Sheet 1: Solutions](http://www2.stat.duke.edu/~banks/cam-lectures.dir/Examples1-solns.pdf)\n\n(1)\nSuppose $r$ is the median distance from the origin to the closest data point.\n\nLet $r_{\text{closest}}^j$ denote the distance from the origin to the closest data point in the $j$th realization of the sample. \nBecause $r$ is the median case, $\forall j$\n$$P(r_{\text{closest}}^j \geq r) = \frac{1}{2}$$\n\nBecause $r_{\text{closest}}^j$ is the distance to the closest point, all $N$ points have distance $\geq r_{\text{closest}}^j \geq r$. \n\nTogether, we get:\n$$P(\text{all N points have distance } \geq r) = \frac{1}{2}$$\n\n(2)\nFirst, all points are uniformly distributed in the unit sphere in $\mathbb{R}^p$. \n\nSecond, [the p-dimensional volume of a Euclidean ball of radius R in p-dimensional Euclidean space is](https://en.wikipedia.org/wiki/Volume_of_an_n-ball):\n$$V_p(R) = \frac{\pi^{p/2}}{\Gamma(\frac{p}{2} + 1)}R^p$$\n\nTogether, for any point $x$, \n\begin{align}\nP(x \text{ has distance } \geq r) &= 1 - P(x \text{ has distance } < r) \\\n &= 1 - \frac{\pi^{p/2}}{\Gamma(\frac{p}{2} + 1)}r^p \Big{/} \frac{\pi^{p/2}}{\Gamma(\frac{p}{2} + 1)}1^p \\\n &= 1 - r^p\n\end{align}\n\nThen:\n\begin{align}\n P(\text{all N points have distance } \geq r) &= P^N (x \text{ has distance } \geq r) \\\n &= (1 - r^p)^N\n\end{align}\n\n(3)\nIn all,\n$$\frac{1}{2} = P(\text{all N points have distance } \geq r) = (1 - r^p)^N$$\n\nwe get the solution:\n$$r = (1 - (\frac{1}{2})^{1/N})^{1/p}$$\n\n### Ex 2.4\n##### Question\nThe edge effect problem discussed on page 23 is not peculiar to uniform sampling from bounded domains. \n\nConsider inputs drawn from a spherical multinormal distribution $X \sim N(0,I_p)$. The squared distance from any sample point to the origin has a $\chi^2_p$ distribution with mean $p$. 
Consider a prediction point $x_0$ drawn from this distribution, and let $a = x_0 \big{/} \| x_0 \|$ be an associated unit vector. Let $z_i = a^T x_i$ be the projection of each of the training points on this direction.\n\nShow that the $z_i$ are distributed $N(0,1)$ with expected squared distance from the origin 1, while the target point has expected squared distance $p$ from the origin.\n\n##### Solution\n$z_i = a^T x_i$, which is a linear combination. Moreover, $x_i \sim N(0, I_p)$ means that its elements are all independent.\n\nAs [the variance of a linear combination](https://en.wikipedia.org/wiki/Variance#Sum_of_uncorrelated_variables_.28Bienaym.C3.A9_formula.29) is:\n$$\operatorname{Var}\left( \sum_{i=1}^N a_iX_i\right) = \sum_{i=1}^N a_i^2\operatorname{Var}(X_i)$$\n\nWe get:\n\begin{align}\nE(z_i) &= a^T E(x_i) \\\n & = 0\n\end{align}\nand\n\begin{align}\n\operatorname{Var} (z_i) &= \sum a_j^2 \operatorname{Var}(x_j^i) \\\n & = \sum a_j^2 \\\n & = \| a \|^2_2 \\\n & = 1\n\end{align}\n\nThus, $z_i \sim N(0,1)$.\n\nThe target point $x_0$ has expected squared distance $E \| x_0 \|_2^2 = p$ from the origin.\n\n### Ex 2.5\n#### (a)\nDerive equation (2.27). The last line makes use of (3.8) through a conditioning argument. \n\n#### Solution\nFirst, we give:\n\n1. for $y_0 = x_0^T \beta + \epsilon; \ \epsilon \sim N(0, \sigma^2)$:\n + $E_{y_0 | x_0}(y_0) = E(y_0 | x_0) = E(x_0^T \beta + \epsilon) = x_0^T \beta$\n + $\operatorname{Var}_{y_0 | x_0}(y_0) = \operatorname{Var}(y_0 | x_0) = \operatorname{Var}(x_0^T \beta + \epsilon) = \operatorname{Var}(\epsilon) = \sigma^2$\n\n2. 
for $\\hat{y}_0 = x_0^T \\hat{\\beta} = x_0^T \\beta + x_0 (X^T X)^{-1} x_0 \\epsilon$: \n + expected value:\n \\begin{equation}\n E_{\\tau}(\\hat{y_0}) = E(y_0 | x_0) = x_0^T \\beta \\quad \\text{unbiased}\n \\end{equation}\n + variance:\n \\begin{align}\n \\operatorname{Var}_{\\tau}(\\hat{y_0}) &= \\operatorname{Var}_{\\tau}(x_0^T \\hat{\\beta}) \\\\\n &= x_0^T \\operatorname{Var}_{\\tau}(\\hat{\\beta}) x_0 \\\\\n &= x_0^T E_{\\tau}((X^T X)^{-1} \\sigma^2) x_0 \\quad \\text{see Eq 3.8} \\\\\n &= E_{\\tau} x_0^T (X^T X)^{-1} x_0 \\sigma^2\n \\end{align}\n \n3. [Proof of variance and bias relationship](https://en.wikipedia.org/wiki/Mean_squared_error): \n\\begin{align}\n E( (\\hat{\\theta} - \\theta)^2 ) &= E( \\left (\\hat{\\theta} - E(\\hat{\\theta}) \\right )^2 ) + (E(\\hat{\\theta}) - \\theta)^ 2 \\\\\n &= \\operatorname{Var}(\\hat{\\theta}) + \\operatorname{Bias}^2 (\\hat{\\theta}, \\theta)\n\\end{align}\n\nThus, \n\\begin{align}\n \\operatorname{EPE}(x_0) &= E_{y_0 | x_0} E_{\\tau} (y_0 - \\hat{y}_0)^2 \\\\\n &= E_{\\tau} E_{\\color{blue}{y_0 | x_0}} (\\color{blue}{y_0} - \\hat{y}_0)^2 \\\\\n &= E_{\\tau} \\left ( \\operatorname{Var}(y_0 | x_0) + (E_{y_0 | x_0}(y_0) - \\hat{y}_0)^2 \\right ) \\\\\n &= \\operatorname{Var}(y_0 | x_0) + E_{\\color{blue}{\\tau}} \\left( E(y_0 | x_0) - \\color{blue}{\\hat{y}_0} \\right )^2 \\\\\n &= \\operatorname{Var}(y_0 | x_0) + \\operatorname{Var}_{\\tau}(\\hat{y}_0) + \\left ( E_{\\tau}(\\hat{y}_0) - E(y_0 | x_0) \\right)^2 \\\\\n &= \\operatorname{Var}(y_0 | x_0) + \\operatorname{Var}_{\\tau}(\\hat{y}_0) + \\left ( E_{\\tau}(\\hat{y}_0) - x_0^T \\beta \\right)^2 \\\\\n &= \\sigma^2 + E_{\\tau} x_0^T (X^T X)^{-1} x_0 \\sigma^2 + 0^2\n\\end{align}\n\n#### (b)\nDerive equation (2.28), making use of the cyclic property of the trace operator [trace(AB) = trace(BA)], and its linearity (which allows us to interchange the order of trace and expectation).\n\n\n#### Solution\n[ref: A Solution Manual and Notes for: The Elements of 
Statistical Learning by Jerome Friedman, Trevor Hastie, and Robert Tibshirani](https://www.google.com.sg/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwiF6sf2hsfLAhWSco4KHfJQCCwQFggbMAA&url=http%3A%2F%2Fwaxworksmath.com%2FAuthors%2FG_M%2FHastie%2FWriteUp%2Fweatherwax_epstein_hastie_solutions_manual.pdf&usg=AFQjCNH3VN6HgCDHtXNIbJtAjEEQNZFINA&sig2=b_zFhNYsupRwqtY62dGnwA)\n\n1. $x_0$ is a $p \times 1$ vector, and $\mathbf{X}$ is an $N \times p$ matrix, hence $x_0^T (\mathbf{X}^T \mathbf{X})^{-1} x_0 = C_{1 \times 1} = \operatorname{trace}(C_{1 \times 1})$\n\n2. [property of Covariance matrix](https://en.wikipedia.org/wiki/Covariance_matrix) \n\begin{align}\n \operatorname{Cov}(x_0) &= E(x_0 x_0^T) - E(x_0) E(x_0)^T \\\n &= E(x_0 x_0^T) \quad \text{as } E(\mathbf{X}) = 0 \text{ and $x_0$ is picked randomly}\n\end{align}\n\nThus,\n\begin{align}\n E_{x_0} \operatorname{EPE}(x_0) &= E_{x_0} x_0^T (\mathbf{X}^T \mathbf{X})^{-1} x_0 \sigma^2 + \sigma^2 \\\n &= E_{x_0} \operatorname{trace} \left ( x_0^T (\mathbf{X}^T \mathbf{X})^{-1} x_0 \right ) \sigma^2 + \sigma^2 \\\n &= E_{x_0} \operatorname{trace} \left ( (\mathbf{X}^T \mathbf{X})^{-1} x_0 x_0^T \right ) \sigma^2 + \sigma^2 \quad \text{cyclic property} \\\n &\approx E_{x_0} \operatorname{trace} \left ( \operatorname{Cov}^{-1}(\mathbf{X}) x_0 x_0^T \right ) \frac{\sigma^2}{N} + \sigma^2 \quad \text{as } \mathbf{X}^T \mathbf{X} \to N \operatorname{Cov}(\mathbf{X}) \\\n &= \operatorname{trace} \left ( \operatorname{Cov}^{-1}(\mathbf{X}) \, E_{x_0}(x_0 x_0^T) \right ) \frac{\sigma^2}{N} + \sigma^2 \quad \text{linearity, interchange}\\\n &= \operatorname{trace} \left ( \operatorname{Cov}^{-1}(\mathbf{X}) \, \operatorname{Cov}(x_0) \right ) \frac{\sigma^2}{N} + \sigma^2 \quad \text{see 2. 
above}\\\n &= \operatorname{trace} (I_p) \frac{\sigma^2}{N} + \sigma^2 \quad \text{as } \operatorname{Cov}(x_0) \to \operatorname{Cov}(\mathbf{X}) \\\n &= p \frac{\sigma^2}{N} + \sigma^2\n\end{align}\n\n### Ex 2.6\n#### Question\nConsider a regression problem with inputs $x_i$ and outputs $y_i$, and a parameterized model $f_{\theta}(x)$ to be fit by least squares. Show that if there are observations with tied or identical values of $x$, then the fit can be obtained from a reduced weighted least squares problem.\n\n#### Solution\n[ref: A Solution Manual and Notes for: The Elements of Statistical Learning by Jerome Friedman, Trevor Hastie, and Robert Tibshirani](https://www.google.com.sg/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwiF6sf2hsfLAhWSco4KHfJQCCwQFggbMAA&url=http%3A%2F%2Fwaxworksmath.com%2FAuthors%2FG_M%2FHastie%2FWriteUp%2Fweatherwax_epstein_hastie_solutions_manual.pdf&usg=AFQjCNH3VN6HgCDHtXNIbJtAjEEQNZFINA&sig2=b_zFhNYsupRwqtY62dGnwA)\n\nAssume we have $N$ training samples, and let $N_u$ be the number of *unique* inputs $x$. 
For the $i$th unique input $x_i$, let $y_i = \{ y_{i,1}, y_{i,2}, \dotsc, y_{i, n_i} \}$, i.e., it consists of $n_i$ observations.\n\n\begin{align}\n \displaystyle \operatorname{argmin}_{\theta} \sum_{k=1}^{N} (y_k - f_{\theta}(x_k))^2 &= \operatorname{argmin}_{\theta} \sum_{i=1}^{N_u} \sum_{j=1}^{n_i} (y_{ij} - f_{\theta}(x_i))^2 \\\n &= \operatorname{argmin}_{\theta} \sum_{i=1}^{N_u} \sum_{j=1}^{n_i} y_{ij}^2 - 2 f_{\theta}(x_i) y_{ij} + f_{\theta}(x_i)^2 \\\n &= \operatorname{argmin}_{\theta} \sum_{i=1}^{N_u} n_i \left ( \color{blue}{\frac{1}{n_i} \sum_{j=1}^{n_i} y_{ij}^2} - 2 f_{\theta}(x_i) \frac{1}{n_i} \sum_{j=1}^{n_i} y_{ij} + f_{\theta}(x_i)^2 \right ) \\\n &= \operatorname{argmin}_{\theta} \sum_{i=1}^{N_u} n_i \left ( \color{red}{(\frac{1}{n_i} \sum_{j=1}^{n_i} y_{ij})^2} - 2 f_{\theta}(x_i) \frac{1}{n_i} \sum_{j=1}^{n_i} y_{ij} + f_{\theta}(x_i)^2 - \color{red}{(\frac{1}{n_i} \sum_{j=1}^{n_i} y_{ij})^2} + \color{blue}{\frac{1}{n_i} \sum_{j=1}^{n_i} y_{ij}^2} \right ) \\\n &= \operatorname{argmin}_{\theta} \sum_{i=1}^{N_u} n_i \left ( \color{red}{\bar{y}_i^2} - 2 f_{\theta}(x_i) \bar{y}_i + f_{\theta}(x_i)^2 - \color{red}{\bar{y}_i^2} + \frac{1}{n_i} \sum_{j=1}^{n_i} y_{ij}^2 \right ) \quad \text{def: } \bar{y}_i = \frac{1}{n_i} \sum_{j=1}^{n_i} y_{ij}\\\n &= \operatorname{argmin}_{\theta} \sum_{i=1}^{N_u} n_i \left ( \left ( \bar{y}_i - f_{\theta}(x_i) \right )^2 - \bar{y}_i^2 + \frac{1}{n_i} \sum_{j=1}^{n_i} y_{ij}^2 \right ) \\ \n &= \operatorname{argmin}_{\theta} \left ( \sum_{i=1}^{N_u} n_i \left ( \bar{y}_i - f_{\theta}(x_i) \right )^2 - \sum_{i=1}^{N_u} n_i \bar{y}_i^2 + \sum_{i=1}^{N_u} \sum_{j=1}^{n_i} y_{ij}^2 \right ) \\ \n &= \operatorname{argmin}_{\theta} \left ( \sum_{i=1}^{N_u} n_i \left ( \bar{y}_i - f_{\theta}(x_i) \right )^2 + \mathcal{C} \right ) \quad \text{as $y_{ij}$ is fixed} \\ \n &= 
\\operatorname{argmin}_{\\theta} \\sum_{i=1}^{N_u} n_i \\left ( \\bar{y}_i - f_{\\theta}(x_i) \\right )^2 \n\\end{align}\n\nThus it is a *weighted* least squares problem, as every $\\bar{y}_i$ is weighted by $n_i$, and a *reduced* one, since $N_u < N$.\n\n### Ex 2.7\nSuppose we have a sample of $N$ pairs $x_i, y_i$ drawn i.i.d. from the distribution characterized as follows:\n\\begin{align}\n &x_i \\sim h(x), &\\text{the design density} \\\\\n &y_i = f(x_i) + \\epsilon_i, &\\text{$f$ is the regression function} \\\\\n &\\epsilon_i \\sim (0, \\sigma^2), &\\text{mean zero, variance $\\sigma^2$}\n\\end{align}\n\nWe construct an estimator for $f$ *linear* in the $y_i$,\n$$\\hat{f}(x_0) = \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X}) y_i,$$ \nwhere the weights $\\ell_i(x_0; \\mathcal{X})$ do not depend on the $y_i$, but do depend on the entire training sequence of $x_i$, denoted here by $\\mathcal{X}$.\n\n#### (a) \nShow that linear regression and $k$-nearest-neighbor regression are members of this class of estimators. Describe explicitly the weights $\\ell_i(x_0; \\mathcal{X})$ in each of these cases.\n\n#### solution\n1. for linear regression \n $\\hat{\\beta} = (\\mathbf{X}^T \\mathbf{X})^{-1} \\mathbf{X}^T \\mathbf{y}$\n \n so:\n \\begin{align}\n \\hat{f}(x_0) &= x_0^T \\hat{\\beta} \\\\\n &= x_0^T (\\mathbf{X}^T \\mathbf{X})^{-1} \\mathbf{X}^T \\mathbf{y} \\\\\n &= \\displaystyle \\sum_{i=1}^N \\left [ x_0^T (\\mathbf{X}^T \\mathbf{X})^{-1} \\mathbf{X}^T \\right ]_i \\ y_i\n \\end{align}\n \n2. for $k$-nearest-neighbor regression \n \\begin{align}\n \\hat{f}(x_0) &= \\frac{1}{k} \\displaystyle \\sum_{x_i \\in N_k(x_0)} y_i \\\\\n &= \\displaystyle \\sum_{i=1}^N \\frac{1}{k} \\ I(x_i \\in N_k(x_0)) \\ y_i\n \\end{align}\n\n#### (b)\nDecompose the conditional mean-squared error \n$$E_{\\mathcal{Y} | \\mathcal{X}} ( f(x_0) - \\hat{f}(x_0) )^2$$\ninto a conditional squared bias and a conditional variance component. 
Like $\\mathcal{X}$, $\\mathcal{Y}$ represents the entire training sequence of $y_i$.\n\n#### solution\n[ref: Mean squared error (Wikipedia)](https://en.wikipedia.org/wiki/Mean_squared_error)\nHere $\\mathcal{X}$ is fixed, and $\\mathcal{Y}$ varies. Also $x_0$ and $f(x_0)$ are fixed. \nso: \n\\begin{align}\nE_{\\mathcal{Y} | \\mathcal{X}}(\\hat{f}(x_0)) &= E_{\\mathcal{Y} | \\mathcal{X}} \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X}) y_i \\\\\n &= \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X}) \\, E_{\\mathcal{Y} | \\mathcal{X}} ( y_i ) \\\\\n &= \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X}) \\, E_{\\mathcal{Y} | \\mathcal{X}} ( f(x_i) + \\epsilon_i ) \\\\\n &= \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X}) \\, \\left ( E_{\\mathcal{Y} | \\mathcal{X}} ( f(x_i) ) + E_{\\mathcal{Y} | \\mathcal{X}} ( \\epsilon_i ) \\right ) \\\\\n &= \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X}) \\, E_{\\mathcal{Y} | \\mathcal{X}} ( f(x_i) ) \\\\\n &= \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X}) \\, f(x_i) \\\\\n &= C \\quad \\text{constant when $\\mathcal{X}$ is fixed}\n\\end{align}\n\nThus, adding and subtracting $E_{\\mathcal{Y} | \\mathcal{X}}(\\hat{f}(x_0))$ inside the square:\n\n\\begin{align}\n {} & E_{\\mathcal{Y} | \\mathcal{X}} ( f(x_0) - \\hat{f}(x_0) )^2 \\\\\n = & E_{\\mathcal{Y} | \\mathcal{X}} ( \\hat{f}(x_0) - f(x_0) )^2 \\\\\n = & E_{\\mathcal{Y} | \\mathcal{X}} \\left ( \\hat{f}(x_0) - E_{\\mathcal{Y} | \\mathcal{X}}(\\hat{f}(x_0)) + E_{\\mathcal{Y} | \\mathcal{X}}(\\hat{f}(x_0)) - f(x_0) \\right )^2 \\\\\n = & E_{\\mathcal{Y} | \\mathcal{X}} \\left ( \\hat{f}(x_0) - E_{\\mathcal{Y} | \\mathcal{X}}(\\hat{f}(x_0)) \\right )^2 + 2 \\, E_{\\mathcal{Y} | \\mathcal{X}} \\left [ \\left ( \\hat{f}(x_0) - E_{\\mathcal{Y} | \\mathcal{X}}(\\hat{f}(x_0)) \\right ) \\left ( \\underbrace{E_{\\mathcal{Y} | \\mathcal{X}}(\\hat{f}(x_0)) - f(x_0)}_\\text{constant} \\right ) \\right ] + \\left ( E_{\\mathcal{Y} | \\mathcal{X}}(\\hat{f}(x_0)) - f(x_0) \\right )^2 \\\\\n = & \\operatorname{Var}(\\hat{f}(x_0)) + 2 \\left ( C - f(x_0) \\right ) \\underbrace{\\left ( E_{\\mathcal{Y} | \\mathcal{X}} \\hat{f}(x_0) - E_{\\mathcal{Y} | \\mathcal{X}}(\\hat{f}(x_0)) \\right )}_{= 0} + \\operatorname{Bias}^2 (\\hat{f}(x_0), f(x_0)) \\\\\n = & \\operatorname{Var}(\\hat{f}(x_0)) + \\operatorname{Bias}^2 (\\hat{f}(x_0), f(x_0)) \\\\\n\\end{align}\n\n#### (c)\nDecompose the (unconditional) mean-squared error \n$$E_{\\mathcal{Y}, \\mathcal{X}} ( f(x_0) - \\hat{f}(x_0) )^2$$\ninto a squared bias and a variance component. \n\n#### solution\n\\begin{align}\n {} & E_{\\mathcal{Y}, \\mathcal{X}} (f(x_0) - \\hat{f}(x_0))^2 \\\\\n = & E_{\\mathcal{X}} E_{\\mathcal{Y} | \\mathcal{X}} (f(x_0) - \\hat{f}(x_0))^2 \\\\\n\\end{align}\nthen apply the same decomposition as in (b) inside the outer expectation over $\\mathcal{X}$.\n\n#### (d)\nEstablish a relationship between the squared biases and variances in the above two cases.\n\n#### solution\n##### 1. 
for the variance,\nAs shown in (b), \n$$E_{\\mathcal{Y} | \\mathcal{X}}(\\hat{f}(x_0)) = \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X}) \\, f(x_i)$$\n\nand also:\n$$ \\hat{f}(x_0) = \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X}) y_i $$ \n$$ y_i = f(x_i) + \\epsilon_i $$\n\nThus,\n\\begin{align}\n \\operatorname{Var}(\\hat{f}(x_0)) &= E_{\\mathcal{Y} | \\mathcal{X}} \\left ( \\hat{f}(x_0) - E_{\\mathcal{Y} | \\mathcal{X}} \\left ( \\hat{f}(x_0) \\right ) \\right )^2 \\\\\n &= E_{\\mathcal{Y} | \\mathcal{X}} \\left ( \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X}) y_i - \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X}) \\, f(x_i) \\right )^2 \\\\\n &= E_{\\mathcal{Y} | \\mathcal{X}} \\left ( \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X}) \\left ( y_i - f(x_i) \\right ) \\right )^2 \\\\\n &= E_{\\mathcal{Y} | \\mathcal{X}} \\left ( \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X}) \\epsilon_i \\right )^2 \\\\\n &= \\sigma^2 \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X})^2 \\quad \\text{as the $\\epsilon_i$ are independent with variance $\\sigma^2$} \\\\\n\\end{align}\n\n##### 2. for the squared bias,\n\\begin{align}\n \\operatorname{Bias}^2 (\\hat{f}(x_0), f(x_0)) &= \\left ( E_{\\mathcal{Y} | \\mathcal{X}} \\hat{f}(x_0) - f(x_0) \\right )^2 \\\\\n &= \\left( \\displaystyle \\sum_{i=1}^N \\ell_i(x_0; \\mathcal{X}) \\, f(x_i) - f(x_0) \\right )^2 \\\\\n\\end{align}\n\n##### 3. guess\nThe variance is driven only by $\\epsilon$, while the squared bias is driven only by $f(x)$ and the weights. Because $\\epsilon$ is independent of $f(x)$, there is likely no general relation between the variance and the squared bias beyond the shared weights $\\ell_i(x_0; \\mathcal{X})$. \n\n### Ex 2.8\nCompare the classification performance of linear regression and $k$-nearest neighbor classification on the zipcode data. In particular, consider only the 2\u2019s and 3\u2019s, and $k$ = 1, 3, 5, 7 and 15. Show both the training and test error for each choice. 
The zipcode data are available from the book website www-stat.stanford.edu/ElemStatLearn.\n\n\n```python\ntrain_dat = pd.read_csv('./res/zip.train', header=None, sep=' ')\ntrain_dat.rename(columns={0: 'digital'}, inplace=True)\ntrain_dat = train_dat.query('digital == 2 or digital == 3')\ntrain_dat.dropna(axis=1, inplace=True)\n```\n\n\n```python\ntrain_dat.set_index('digital', inplace=True)\n```\n\n\n```python\ntrain_dat.head(3)\n```\n\n\n\n\n
\n\n(train_dat.head(3) output: 3 rows \u00d7 256 columns of pixel values, indexed by digital)\n
\n\n\n\n\n```python\nplt.imshow(np.reshape(train_dat.iloc[0].values, (16, 16)))\n```\n\n\n```python\ntest_dat = pd.read_csv('./res/zip.test', header=None, sep=' ')\ntest_dat.rename(columns={0: 'digital'}, inplace=True)\ntest_dat = test_dat.query('digital == 2 or digital == 3')\ntest_dat.dropna(axis=1, inplace=True)\n```\n\n\n```python\ntest_dat.set_index('digital', inplace=True)\n```\n\n\n```python\ntest_dat.head(3)\n```\n\n\n\n\n
\n\n(test_dat.head(3) output: 3 rows \u00d7 256 columns of pixel values, indexed by digital)\n
\n\n\n\n\n```python\nimport sklearn.metrics\n\ndef eva(conf, train_dat, test_dat):\n    # fit the classifier on the training data and report its test accuracy\n    x_train = train_dat.values\n    y_train = train_dat.index.values\n\n    conf['cls'].fit(x_train, y_train)\n\n    x_test = test_dat.values\n    y_test = test_dat.index.values\n\n    y_pred = conf['cls'].predict(x_test)\n\n    accuracy = sklearn.metrics.accuracy_score(y_test, y_pred)\n    print('{}, parameter: {}, accuracy: {:.4}'.format(conf['name'], conf['parameter'], accuracy))\n```\n\n\n```python\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.neighbors import KNeighborsClassifier\n\n# note: LogisticRegression is used here as the linear classifier\nconfiguration = [\n    {'cls': LogisticRegression(), 'name': 'LogisticRegression', 'parameter': None},\n    {'cls': KNeighborsClassifier(n_neighbors=1), 'name': 'KNeighborsClassifier', 'parameter': 'N=1'},\n    {'cls': KNeighborsClassifier(n_neighbors=3), 'name': 'KNeighborsClassifier', 'parameter': 'N=3'},\n    {'cls': KNeighborsClassifier(n_neighbors=5), 'name': 'KNeighborsClassifier', 'parameter': 'N=5'},\n    {'cls': KNeighborsClassifier(n_neighbors=7), 'name': 'KNeighborsClassifier', 'parameter': 'N=7'},\n    {'cls': KNeighborsClassifier(n_neighbors=15), 'name': 'KNeighborsClassifier', 'parameter': 'N=15'},\n]\n\nfor conf in configuration:\n    eva(conf, train_dat, test_dat)\n```\n\n    LogisticRegression, parameter: None, accuracy: 0.9643\n    KNeighborsClassifier, parameter: N=1, accuracy: 0.9753\n    KNeighborsClassifier, parameter: N=3, accuracy: 0.9698\n    KNeighborsClassifier, parameter: N=5, accuracy: 0.9698\n    KNeighborsClassifier, parameter: N=7, accuracy: 0.967\n    KNeighborsClassifier, parameter: N=15, accuracy: 0.9615\n\n\n### Ex 2.9\nConsider a linear regression model with $p$ parameters, fit by least squares to a set of training data $(x_1,y_1), \\dotsc,(x_N,y_N)$ drawn at random from a population. Let $\\hat{\\beta}$ be the least squares estimate. 
Suppose we have some test data $(\\hat{x}_1, \\hat{y}_1), \\dotsc, (\\hat{x}_M, \\hat{y}_M)$ drawn at random from the same population as the training data.\n\nIf $R_{tr}(\\beta) = \\frac{1}{N} \\sum_1^N (y_i - \\beta^T x_i)^2$, and $R_{te}(\\beta) = \\frac{1}{M} \\sum_1^M (\\hat{y}_i - \\beta^T \\hat{x}_i)^2$, prove that\n$$E[R_{tr}(\\hat{\\beta})] \\leq E[R_{te}(\\hat{\\beta})],$$\nwhere the expectations are over all that is random in each expression.\n\n#### solution\nRef: Hint from [Homework 2 - Hector Corrada Bravo](http://www.cbcb.umd.edu/~hcorrada/PracticalML/assignments/hw2.pdf) \n\ndefine: $\\beta^{\\ast}$ as the least squares estimate on the **test data**.\n\n1. \nAs both the training data and the test data are drawn at random from the same population, we have\n$$E[R_{tr}(\\hat{\\beta})] = E[R_{te}(\\beta^{\\ast})]$$\n\n2. \nOn the other hand, because $\\beta^{\\ast}$ is the least squares estimate on the **test data**, namely, \n$$\\beta^\\ast = \\operatorname{argmin}_\\beta \\frac{1}{M} \\sum_1^M (\\hat{y}_i - \\beta^T \\hat{x}_i)^2 $$\nwe have\n$$R_{te}(\\beta^\\ast) = \\frac{1}{M} \\sum_1^M (\\hat{y}_i - \\left (\\color{blue}{\\beta^{\\ast}} \\right )^T \\hat{x}_i)^2 \\leq \\frac{1}{M} \\sum_1^M (\\hat{y}_i - \\color{blue}{\\hat{\\beta}}^T \\hat{x}_i)^2 = R_{te}(\\hat{\\beta}) $$\nThus:\n$$E[R_{te}(\\beta^\\ast)] \\leq E[R_{te}(\\hat{\\beta})]$$\n\n3. 
\nFinal, we get:\n$$E[R_{tr}(\\hat{\\beta})] = E[R_{te}(\\beta^{\\ast})] \\leq E[R_{te}(\\hat{\\beta})]$$\nnamely,\n$$E[R_{tr}(\\hat{\\beta})] \\leq E[R_{te}(\\hat{\\beta})]$$\n", "meta": {"hexsha": "1ac976d2842c5589e27aa2e7ea12354bfd7263fa", "size": 79585, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "The_Elements_of_Statistical_Learning/Overview_of_Supervised_Learning/exercises.ipynb", "max_stars_repo_name": "ningchi/book_notes", "max_stars_repo_head_hexsha": "c6f8001f7d5f873896c4b3a8b1409b21ef33c328", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-12-31T12:10:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-10T15:49:34.000Z", "max_issues_repo_path": "The_Elements_of_Statistical_Learning/Overview_of_Supervised_Learning/exercises.ipynb", "max_issues_repo_name": "ningchi/book_notes", "max_issues_repo_head_hexsha": "c6f8001f7d5f873896c4b3a8b1409b21ef33c328", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-12-05T13:04:14.000Z", "max_issues_repo_issues_event_max_datetime": "2017-12-07T16:24:50.000Z", "max_forks_repo_path": "The_Elements_of_Statistical_Learning/Overview_of_Supervised_Learning/exercises.ipynb", "max_forks_repo_name": "ningchi/book_notes", "max_forks_repo_head_hexsha": "c6f8001f7d5f873896c4b3a8b1409b21ef33c328", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-06-27T07:19:28.000Z", "max_forks_repo_forks_event_max_datetime": "2017-11-19T08:57:35.000Z", "avg_line_length": 83.4224318658, "max_line_length": 37746, "alphanum_fraction": 0.6958974681, "converted": true, "num_tokens": 11336, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.4148988457967688, "lm_q2_score": 0.26588047891687405, "lm_q1q2_score": 0.11031350382250316}} {"text": "\n# PHY321: Conservative Forces, Examples and start Harmonic Oscillations\n\n \n**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/)**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA and Department of Physics, University of Oslo, Norway\n\nDate: **Feb 14, 2021**\n\nCopyright 1999-2021, [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/). Released under CC Attribution-NonCommercial 4.0 license\n\n\n\n\n## Aims and Overarching Motivation\n\n### Monday\n\nShort repetition from last week about conservative forces. Discussion\nof conditions for conservative forces and the Earth-Sun gravitional\nforce example. **Reading suggestion**: Taylor sections 4.3, 4.4 and 4.8.\n\n### Wednesday\n\nPotential curves and discussion of the Earth-Sun example, analytical and numerical considerations.\n**Reading suggestions**: Taylor section 4.6, 4.8 and 4.9.\n\n### Friday\n\nEarth-Sun, conservative forces and potential energy.\n**Reading suggestion**: Taylor sections 4.8 and 4.9.\n\nIf we get time, we start with harmonic oscillations and Hooke's law. **Reading suggestion**: Taylor section 5.1.\n\n\n## One Figure to Rule All Forces (thx to Julie)\n\n\n\n

**Figure 1**

\n\n\n\n## Repetition from last week: Work, Energy, Momentum and Conservation laws\n\nEnergy conservation is most convenient as a strategy for addressing\nproblems where time does not appear. For example, a particle goes\nfrom position $x_0$ with speed $v_0$, to position $x_f$; what is its\nnew speed? However, it can also be applied to problems where time\ndoes appear, such as in solving for the trajectory $x(t)$, or\nequivalently $t(x)$.\n\n\n\n\n## Energy Conservation\nEnergy is conserved in the case where the potential energy, $V(\\boldsymbol{r})$, depends only on position, and not on time. The force is determined by $V$,\n\n\n
\n\n$$\n\\begin{equation}\n\\boldsymbol{F}(\\boldsymbol{r})=-\\boldsymbol{\\nabla} V(\\boldsymbol{r}).\n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\n## Conservative forces\n\nWe say a force is conservative if it satisfies the following conditions:\n1. The force $\\boldsymbol{F}$ acting on an object only depends on the position $\\boldsymbol{r}$, that is $\\boldsymbol{F}=\\boldsymbol{F}(\\boldsymbol{r})$.\n\n2. For any two points $\\boldsymbol{r}_1$ and $\\boldsymbol{r}_2$, the work done by the force $\\boldsymbol{F}$ on the displacement between these two points is independent of the path taken.\n\n3. Finally, the **curl** of the force is zero $\\boldsymbol{\\nabla}\\times\\boldsymbol{F}=0$.\n\n## Forces and Potentials\n\nThe energy $E$ of a given system is defined as the sum of kinetic and potential energies,\n\n$$\nE=K+V(\\boldsymbol{r}).\n$$\n\nWe define the potential energy at a point $\\boldsymbol{r}$ as the negative work done from a starting point $\\boldsymbol{r}_0$ to a final point $\\boldsymbol{r}$\n\n$$\nV(\\boldsymbol{r})=-W(\\boldsymbol{r}_0\\rightarrow\\boldsymbol{r})= -\\int_{\\boldsymbol{r}_0}^{\\boldsymbol{r}}d\\boldsymbol{r}'\\boldsymbol{F}(\\boldsymbol{r}').\n$$\n\nIf the work done depended on the path taken between these two points, there would be no unique potential.\n\n\n## Example (relevant for homework 5)\n\nWe study a classical electron which moves in the $x$-direction along a surface. The force from the surface is\n\n$$\n\\boldsymbol{F}(x)=-F_0\\sin{(\\frac{2\\pi x}{b})}\\boldsymbol{e}_1.\n$$\n\nThe constant $b$ represents the distance between atoms at the surface of the material, $F_0$ is a constant and $x$ is the position of the electron.\n\nThis is indeed a conservative force since it depends only on position\nand its **curl** is zero, that is $\\boldsymbol{\\nabla}\\times \\boldsymbol{F}=0$ (for this one-dimensional force the curl vanishes trivially). This means that energy is conserved and the\nintegral over the work done by the force is independent of the path\ntaken. 
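As a quick numerical sanity check (a sketch only; the constants $F_0$ and $b$ are set to 1 for illustration and are not fixed by the text), the work done by this force between two points can be integrated numerically and compared with the closed form $\frac{F_0b}{2\pi}\left[\cos{(2\pi x/b)}-\cos{(2\pi x_0/b)}\right]$, which one obtains by integrating $-F_0\sin{(2\pi x/b)}$ directly:

```python
import numpy as np

# Illustrative constants (not fixed by the text): F0 = b = 1
F0, b = 1.0, 1.0

def force(x):
    # Surface force on the electron, F(x) = -F0 sin(2 pi x / b)
    return -F0 * np.sin(2 * np.pi * x / b)

def work_numeric(x0, x, n=100000):
    # W = int_{x0}^{x} F(x') dx', trapezoidal rule with n panels
    xp = np.linspace(x0, x, n + 1)
    f = force(xp)
    h = (x - x0) / n
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

def work_closed_form(x0, x):
    # (F0 b / 2 pi) [cos(2 pi x / b) - cos(2 pi x0 / b)]
    return F0 * b / (2 * np.pi) * (np.cos(2 * np.pi * x / b) - np.cos(2 * np.pi * x0 / b))

x0, x = 0.1, 0.7
print(work_numeric(x0, x), work_closed_form(x0, x))  # the two values agree
```

A second check worth trying is a displacement over one full period $b$: the net work then comes out as zero, as expected for a force deriving from a periodic potential.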
We will come back to this in more detail next week.\n\n## Example Continued\n\n\nUsing the work-energy theorem we can find the work $W$ done when\nmoving an electron from a position $x_0$ to a final position $x$\nthrough the integral\n\n$$\nW=\\int_{x_0}^x \\boldsymbol{F}(x')dx' = -\\int_{x_0}^x F_0\\sin{(\\frac{2\\pi x'}{b})} dx',\n$$\n\nwhich results in\n\n$$\nW=\\frac{F_0b}{2\\pi}\\left[\\cos{(\\frac{2\\pi x}{b})}-\\cos{(\\frac{2\\pi x_0}{b})}\\right].\n$$\n\nSince this is related to the change in kinetic energy we have, with $v_0$ being the initial velocity at a time $t_0$,\n\n$$\nv = \\pm\\sqrt{\\frac{2}{m}\\frac{F_0b}{2\\pi}\\left[\\cos{(\\frac{2\\pi x}{b})}-\\cos{(\\frac{2\\pi x_0}{b})}\\right]+v_0^2}.\n$$\n\n## The potential energy from this example\n\nThe potential energy, due to energy conservation, is\n\n$$\nV(x)=V(x_0)+\\frac{1}{2}mv_0^2-\\frac{1}{2}mv^2,\n$$\n\nwith $v$ given by the velocity from above.\n\nWe can now, in order to find a more explicit expression for the\npotential energy at a given value $x$, define a zero level value for\nthe potential. The potential is defined, using the work-energy\ntheorem, as\n\n$$\nV(x)=V(x_0)+\\int_{x_0}^x (-F(x'))dx',\n$$\n\nand if you recall the definition of the indefinite integral, we can rewrite this as\n\n$$\nV(x)=\\int (-F(x'))dx'+C,\n$$\n\nwhere $C$ is an undefined constant. The force is defined as the\nnegative gradient of the potential, so the undefined constant\ndrops out when we differentiate. 
The constant does not affect the force we derive from the\npotential.\n\nWe have then\n\n$$\nV(x)=V(x_0)-\\int_{x_0}^x \\boldsymbol{F}(x')dx',\n$$\n\nwhich results in\n\n$$\nV(x)=-\\frac{F_0b}{2\\pi}\\left[\\cos{(\\frac{2\\pi x}{b})}-\\cos{(\\frac{2\\pi x_0}{b})}\\right]+V(x_0).\n$$\n\nWe can now define\n\n$$\n-\\frac{F_0b}{2\\pi}\\cos{(\\frac{2\\pi x_0}{b})}=V(x_0),\n$$\n\nwhich gives\n\n$$\nV(x)=-\\frac{F_0b}{2\\pi}\\cos{(\\frac{2\\pi x}{b})},\n$$\n\nso that $-dV/dx$ reproduces the force $-F_0\\sin{(\\frac{2\\pi x}{b})}$.\n\n## Force and Potential\n\nWe have defined work as the energy resulting from a net force acting\non an object (or several objects), that is\n\n$$\nW(\\boldsymbol{r}\\rightarrow \\boldsymbol{r}+d\\boldsymbol{r})= \\boldsymbol{F}(\\boldsymbol{r})d\\boldsymbol{r}.\n$$\n\nIf we write out this for each component we have\n\n$$\nW(\\boldsymbol{r}\\rightarrow \\boldsymbol{r}+d\\boldsymbol{r})=\\boldsymbol{F}(\\boldsymbol{r})d\\boldsymbol{r}=F_xdx+F_ydy+F_zdz.\n$$\n\nThe work done from an initial position to a final one defines also the difference in potential energies\n\n$$\nW(\\boldsymbol{r}\\rightarrow \\boldsymbol{r}+d\\boldsymbol{r})=-\\left[V(\\boldsymbol{r}+d\\boldsymbol{r})-V(\\boldsymbol{r})\\right].\n$$\n\n## Getting to $\\boldsymbol{F}(\\boldsymbol{r})=-\\boldsymbol{\\nabla} V(\\boldsymbol{r})$\n\nWe can write out the differences in potential energies as\n\n$$\nV(\\boldsymbol{r}+d\\boldsymbol{r})-V(\\boldsymbol{r})=V(x+dx,y+dy,z+dz)-V(x,y,z)=dV,\n$$\n\nand using the expression for the differential of a multi-variable function $f(x,y,z)$\n\n$$\ndf=\\frac{\\partial f}{\\partial x}dx+\\frac{\\partial f}{\\partial y}dy+\\frac{\\partial f}{\\partial z}dz,\n$$\n\nwe can write the expression for the work done as\n\n$$\nW(\\boldsymbol{r}\\rightarrow \\boldsymbol{r}+d\\boldsymbol{r})=-dV=-\\left[\\frac{\\partial V}{\\partial x}dx+\\frac{\\partial V}{\\partial y}dy+\\frac{\\partial V}{\\partial z}dz \\right].\n$$\n\n## Final expression\n\nComparing the last equation with\n\n$$\nW(\\boldsymbol{r}\\rightarrow 
\\boldsymbol{r}+d\\boldsymbol{r})=F_xdx+F_ydy+F_zdz,\n$$\n\nwe have\n\n$$\nF_xdx+F_ydy+F_zdz=-\\left[\\frac{\\partial V}{\\partial x}dx+\\frac{\\partial V}{\\partial y}dy+\\frac{\\partial V}{\\partial z}dz \\right],\n$$\n\nleading to\n\n$$\nF_x=-\\frac{\\partial V}{\\partial x},\n$$\n\nand\n\n$$\nF_y=-\\frac{\\partial V}{\\partial y},\n$$\n\nand\n\n$$\nF_z=-\\frac{\\partial V}{\\partial z},\n$$\n\nor just\n\n$$\n\\boldsymbol{F}=-\\frac{\\partial V}{\\partial x}\\boldsymbol{e}_1-\\frac{\\partial V}{\\partial y}\\boldsymbol{e}_2-\\frac{\\partial V}{\\partial z}\\boldsymbol{e}_3=-\\boldsymbol{\\nabla}V(\\boldsymbol{r}).\n$$\n\nAnd this connection is the one we wanted to show.\n\n\n## Net Energy\n\nThe net energy, $E=V+K$ where $K$ is the kinetic energy, is then conserved,\n\n$$\n\\begin{eqnarray}\n\\frac{d}{dt}(K+V)&=&\\frac{d}{dt}\\left(\\frac{m}{2}(v_x^2+v_y^2+v_z^2)+V(\\boldsymbol{r})\\right)\\\\\n\\nonumber\n&=&m\\left(v_x\\frac{dv_x}{dt}+v_y\\frac{dv_y}{dt}+v_z\\frac{dv_z}{dt}\\right)\n+\\partial_xV\\frac{dx}{dt}+\\partial_yV\\frac{dy}{dt}+\\partial_zV\\frac{dz}{dt}\\\\\n\\nonumber\n&=&v_xF_x+v_yF_y+v_zF_z-F_xv_x-F_yv_y-F_zv_z=0.\n\\end{eqnarray}\n$$\n\n## In Vector Notation\n\nThe same proof can be written more compactly with vector notation,\n\n$$\n\\begin{eqnarray}\n\\frac{d}{dt}\\left(\\frac{m}{2}v^2+V(\\boldsymbol{r})\\right)\n&=&m\\boldsymbol{v}\\cdot\\dot{\\boldsymbol{v}}+\\boldsymbol{\\nabla} V(\\boldsymbol{r})\\cdot\\dot{\\boldsymbol{r}}\\\\\n\\nonumber\n&=&\\boldsymbol{v}\\cdot\\boldsymbol{F}-\\boldsymbol{F}\\cdot\\boldsymbol{v}=0.\n\\end{eqnarray}\n$$\n\nInverting the expression for kinetic energy,\n\n\n
\n\n$$\n\\begin{equation}\nv=\\sqrt{2K/m}=\\sqrt{2(E-V)/m},\n\\label{_auto2} \\tag{2}\n\\end{equation}\n$$\n\nallows one to solve for the one-dimensional trajectory $x(t)$, by finding $t(x)$,\n\n\n
\n\n$$\n\\begin{equation}\nt=\\int_{x_0}^x \\frac{dx'}{v(x')}=\\int_{x_0}^x\\frac{dx'}{\\sqrt{2(E-V(x'))/m}}.\n\\label{_auto3} \\tag{3}\n\\end{equation}\n$$\n\nNote this would be much more difficult in higher dimensions, because\nyou would have to determine which points, $x,y,z$, the particles might\nreach in the trajectory, whereas in one dimension you can typically\ntell by simply seeing whether the kinetic energy is positive at every\npoint between the old position and the new position.\n\n\n## Harmonic Oscillator Potential\n\n\nConsider a simple harmonic oscillator potential, $V(x)=kx^2/2$, with a particle emitted from $x=0$ with velocity $v_0$. Solve for the trajectory $t(x)$,\n\n$$\n\\begin{eqnarray}\nt&=&\\int_{0}^x \\frac{dx'}{\\sqrt{2(E-kx^2/2)/m}}\\\\\n\\nonumber\n&=&\\sqrt{m/k}\\int_0^x~\\frac{dx'}{\\sqrt{x_{\\rm max}^2-x^{\\prime 2}}},~~~x_{\\rm max}^2=2E/k.\n\\end{eqnarray}\n$$\n\nHere $E=mv_0^2/2$ and $x_{\\rm max}$ is defined as the maximum\ndisplacement before the particle turns around. This integral is done\nby the substitution $\\sin\\theta=x/x_{\\rm max}$.\n\n$$\n\\begin{eqnarray}\n(k/m)^{1/2}t&=&\\sin^{-1}(x/x_{\\rm max}),\\\\\n\\nonumber\nx&=&x_{\\rm max}\\sin\\omega t,~~~\\omega=\\sqrt{k/m}.\n\\end{eqnarray}\n$$\n\n## The Earth-Sun system\n\nWe will now venture into a study of a system which is energy\nconserving. The aim is to see if we (since it is not possible to solve\nthe general equations analytically) we can develop stable numerical\nalgorithms whose results we can trust!\n\nWe solve the equations of motion numerically. We will also compute\nquantities like the energy numerically.\n\nWe start with a simpler case first, the Earth-Sun system in two dimensions only. 
The gravitational force $F_G$ on the Earth from the Sun is\n\n$$\n\\boldsymbol{F}_G=-\\frac{GM_{\\odot}M_E}{r^3}\\boldsymbol{r},\n$$\n\nwhere $G$ is the gravitational constant,\n\n$$\nM_E=6\\times 10^{24}\\mathrm{kg},\n$$\n\nthe mass of Earth,\n\n$$\nM_{\\odot}=2\\times 10^{30}\\mathrm{kg},\n$$\n\nthe mass of the Sun and\n\n$$\nr=1.5\\times 10^{11}\\mathrm{m},\n$$\n\nis the distance between Earth and the Sun. The latter defines what we call an astronomical unit **AU**.\n\n\n## The Earth-Sun system, Newton's Laws\n\nFrom Newton's second law we have then for the $x$ direction\n\n$$\n\\frac{d^2x}{dt^2}=\\frac{F_{x}}{M_E},\n$$\n\nand\n\n$$\n\\frac{d^2y}{dt^2}=\\frac{F_{y}}{M_E},\n$$\n\nfor the $y$ direction.\n\nHere we will use that $x=r\\cos{(\\theta)}$, $y=r\\sin{(\\theta)}$ and\n\n$$\nr = \\sqrt{x^2+y^2}.\n$$\n\nWe can rewrite\n\n$$\nF_{x}=-\\frac{GM_{\\odot}M_E}{r^2}\\cos{(\\theta)}=-\\frac{GM_{\\odot}M_E}{r^3}x,\n$$\n\nfor the $x$ direction and\n\n$$\nF_{y}=-\\frac{GM_{\\odot}M_E}{r^2}\\sin{(\\theta)}=-\\frac{GM_{\\odot}M_E}{r^3}y,\n$$\n\nfor the $y$ direction.\n\n## The Earth-Sun system, rewriting the Equations\n\nWe can rewrite these two equations\n\n$$\nF_{x}=-\\frac{GM_{\\odot}M_E}{r^2}\\cos{(\\theta)}=-\\frac{GM_{\\odot}M_E}{r^3}x,\n$$\n\nand\n\n$$\nF_{y}=-\\frac{GM_{\\odot}M_E}{r^2}\\sin{(\\theta)}=-\\frac{GM_{\\odot}M_E}{r^3}y,\n$$\n\nas four first-order coupled differential equations\n\n$$\n\\frac{dv_x}{dt}=-\\frac{GM_{\\odot}}{r^3}x,\n$$\n\n$$\n\\frac{dx}{dt}=v_x,\n$$\n\n$$\n\\frac{dv_y}{dt}=-\\frac{GM_{\\odot}}{r^3}y,\n$$\n\nand\n\n$$\n\\frac{dy}{dt}=v_y.\n$$\n\n## Building a code for the solar system, final coupled equations\n\nThe four coupled differential equations\n\n$$\n\\frac{dv_x}{dt}=-\\frac{GM_{\\odot}}{r^3}x,\n$$\n\n$$\n\\frac{dx}{dt}=v_x,\n$$\n\n$$\n\\frac{dv_y}{dt}=-\\frac{GM_{\\odot}}{r^3}y,\n$$\n\nand\n\n$$\n\\frac{dy}{dt}=v_y,\n$$\n\ncan be turned into dimensionless equations or we can introduce astronomical units with $1$ 
AU = $1.5\\times 10^{11}$ m. \n\nUsing the equations from circular motion (with $r =1\\mathrm{AU}$)\n\n$$\n\\frac{M_E v^2}{r} = F = \\frac{GM_{\\odot}M_E}{r^2},\n$$\n\nwe have\n\n$$\nGM_{\\odot}=v^2r,\n$$\n\nand using that the velocity of Earth (assuming circular motion) is\n$v = 2\\pi r/\\mathrm{yr}=2\\pi\\mathrm{AU}/\\mathrm{yr}$, we have\n\n$$\nGM_{\\odot}= v^2r = 4\\pi^2 \\frac{(\\mathrm{AU})^3}{\\mathrm{yr}^2}.\n$$\n\n## Building a code for the solar system, discretized equations\n\nThe four coupled differential equations can then be discretized using Euler's method as (with step length $h$ and $r_i=\\sqrt{x_i^2+y_i^2}$)\n\n$$\nv_{x,i+1}=v_{x,i}-h\\frac{4\\pi^2}{r_i^3}x_i,\n$$\n\n$$\nx_{i+1}=x_i+hv_{x,i},\n$$\n\n$$\nv_{y,i+1}=v_{y,i}-h\\frac{4\\pi^2}{r_i^3}y_i,\n$$\n\nand\n\n$$\ny_{i+1}=y_i+hv_{y,i}.\n$$\n\n## Code Example with Euler's Method\n\nThe code here implements Euler's method for the Earth-Sun system using a more compact way of representing the vectors. Alternatively, you could have spelled out all the variables $v_x$, $v_y$, $x$ and $y$ as one-dimensional arrays.\n\n\n```python\n%matplotlib inline\n\n# Common imports\nimport numpy as np\nimport pandas as pd\nfrom math import *\nimport matplotlib.pyplot as plt\nimport os\n\n# Where to save the figures and data files\nPROJECT_ROOT_DIR = \"Results\"\nFIGURE_ID = \"Results/FigureFiles\"\nDATA_ID = \"DataFiles/\"\n\nif not os.path.exists(PROJECT_ROOT_DIR):\n    os.mkdir(PROJECT_ROOT_DIR)\n\nif not os.path.exists(FIGURE_ID):\n    os.makedirs(FIGURE_ID)\n\nif not os.path.exists(DATA_ID):\n    os.makedirs(DATA_ID)\n\ndef image_path(fig_id):\n    return os.path.join(FIGURE_ID, fig_id)\n\ndef data_path(dat_id):\n    return os.path.join(DATA_ID, dat_id)\n\ndef save_fig(fig_id):\n    plt.savefig(image_path(fig_id) + \".png\", format='png')\n\n\nDeltaT = 0.001\n#set up arrays \ntfinal = 10 # in years\nn = ceil(tfinal/DeltaT)\n# set up arrays for t, a, v, and x\nt = np.zeros(n)\nv = np.zeros((n,2))\nr = np.zeros((n,2))\n# Initial conditions as 
compact 2-dimensional arrays\nr0 = np.array([1.0,0.0])\nv0 = np.array([0.0,2*pi])\nr[0] = r0\nv[0] = v0\nFourpi2 = 4*pi*pi\n# Start integrating using Euler's method\nfor i in range(n-1):\n    # Set up the acceleration\n    # Here you could have defined your own function for this\n    rabs = sqrt(sum(r[i]*r[i]))\n    a = -Fourpi2*r[i]/(rabs**3)\n    # update velocity, time and position using Euler's forward method\n    v[i+1] = v[i] + DeltaT*a\n    r[i+1] = r[i] + DeltaT*v[i]\n    t[i+1] = t[i] + DeltaT\n# Plot the orbit in the xy-plane (units of AU)\nfig, ax = plt.subplots()\n#ax.set_xlim(0, tfinal)\nax.set_xlabel('x [AU]')\nax.set_ylabel('y [AU]')\nax.plot(r[:,0], r[:,1])\nfig.tight_layout()\nsave_fig(\"EarthSunEuler\")\nplt.show()\n```\n\n## Problems with Euler's Method\n\nWe notice here that Euler's method doesn't give a stable orbit. It\nmeans that we cannot trust Euler's method. In a deeper way, as we will\nsee in homework 5, Euler's method does not conserve energy. It is an\nexample of an integrator which is not\n[symplectic](https://en.wikipedia.org/wiki/Symplectic_integrator).\n\nHere we thus present two methods which, with simple changes, allow us to avoid these pitfalls. The simplest possible extension is the so-called Euler-Cromer method.\nThe changes we need to make to our code are indeed marginal here.\nWe need simply to replace\n\n\n```python\n    r[i+1] = r[i] + DeltaT*v[i]\n```\n\nin the above code with the velocity at the new time $t_{i+1}$\n\n\n```python\n    r[i+1] = r[i] + DeltaT*v[i+1]\n```\n\nWith this simple change we get stable orbits.\nBelow we derive the Euler-Cromer method as well as one of the most utilized algorithms for solving the above type of problems, the so-called Velocity-Verlet method. \n\n## Deriving the Euler-Cromer Method\n\nLet us repeat Euler's method.\nWe have a differential equation\n\n\n
\n\n$$\n\\begin{equation}\ny'(t_i)=f(t_i,y_i) \n\\label{_auto4} \\tag{4}\n\\end{equation}\n$$\n\nand if we truncate at the first derivative, we have from the Taylor expansion\n\n\n
\n\n$$\n\\begin{equation}\ny_{i+1}=y(t_i) + (\\Delta t) f(t_i,y_i) + O(\\Delta t^2), \\label{eq:euler} \\tag{5}\n\\end{equation}\n$$\n\nwhich when complemented with $t_{i+1}=t_i+\\Delta t$ forms\nthe algorithm for the well-known Euler method. \nNote that at every step we make an approximation error\nof the order of $O(\\Delta t^2)$, however the total error is the sum over all\nsteps $N=(b-a)/(\\Delta t)$ for $t\\in [a,b]$, yielding thus a global error which goes like\n$NO(\\Delta t^2)\\approx O(\\Delta t)$. \n\nTo make Euler's method more precise we can obviously\ndecrease $\\Delta t$ (increase $N$), but this can lead to loss of numerical precision.\nEuler's method is not recommended for precision calculations,\nalthough it is handy to use in order to get a first\nview of how a solution may look.\n\nEuler's method is asymmetric in time, since it uses information about the derivative at the beginning\nof the time interval. This means that we evaluate the position $y_1$ using the velocity\n$v_0$. A simple variation is to determine $x_{n+1}$ using the velocity\n$v_{n+1}$, that is (in a slightly more generalized form)\n\n\n
\n\n$$\n\\begin{equation} \ny_{n+1}=y_{n}+ (\\Delta t) v_{n+1}+O(\\Delta t^2)\n\\label{_auto5} \\tag{6}\n\\end{equation}\n$$\n\nand\n\n\n
\n\n$$\n\\begin{equation}\nv_{n+1}=v_{n}+(\\Delta t) a_{n}+O(\\Delta t^2).\n\\label{_auto6} \\tag{7}\n\\end{equation}\n$$\n\nThe acceleration $a_n$ is a function of position, velocity and time, $a_n=a(y_n, v_n, t_n)$, and needs to be evaluated\nas well. This is the Euler-Cromer method.\n\n**Exercise**: go back to the above code with Euler's method and add the Euler-Cromer method. \n\n\n## Deriving the Velocity-Verlet Method\n\nLet us stay with $x$ (position) and $v$ (velocity) as the quantities we are interested in.\n\nWe have the Taylor expansion for the position given by\n\n$$\nx_{i+1} = x_i+(\\Delta t)v_i+\\frac{(\\Delta t)^2}{2}a_i+O((\\Delta t)^3).\n$$\n\nThe corresponding expansion for the velocity is\n\n$$\nv_{i+1} = v_i+(\\Delta t)a_i+\\frac{(\\Delta t)^2}{2}v^{(2)}_i+O((\\Delta t)^3).\n$$\n\nVia Newton's second law we normally have an analytical expression for the derivative of the velocity, namely\n\n$$\na_i= \\frac{d^2 x}{dt^2}\\vert_{i}=\\frac{d v}{dt}\\vert_{i}= \\frac{F(x_i,v_i,t_i)}{m}.\n$$\n\nIf we add to this the corresponding expansion for the derivative of the velocity\n\n$$\nv^{(1)}_{i+1} = a_{i+1}= a_i+(\\Delta t)v^{(2)}_i+O((\\Delta t)^2),\n$$\n\nand retain only terms up to the second derivative of the velocity since our error goes as $O((\\Delta t)^3)$, we have\n\n$$\n(\\Delta t)v^{(2)}_i\\approx a_{i+1}-a_i.\n$$\n\nWe can then rewrite the Taylor expansion for the velocity as\n\n$$\nv_{i+1} = v_i+\\frac{(\\Delta t)}{2}\\left( a_{i+1}+a_{i}\\right)+O((\\Delta t)^3).\n$$\n\n## The Velocity-Verlet Method\n\nOur final equations for the position and the velocity then become\n\n$$\nx_{i+1} = x_i+(\\Delta t)v_i+\\frac{(\\Delta t)^2}{2}a_{i}+O((\\Delta t)^3),\n$$\n\nand\n\n$$\nv_{i+1} = v_i+\\frac{(\\Delta t)}{2}\\left(a_{i+1}+a_{i}\\right)+O((\\Delta t)^3).\n$$\n\nNote well that the term $a_{i+1}$ depends on the position at $x_{i+1}$. 
This means that you need to calculate \nthe position at the updated time $t_{i+1}$ before computing the next velocity. Note also that the derivative of the velocity at the time\n$t_i$ used in the updating of the position can be reused in the calculation of the velocity update as well. \n\n\n## Adding the Velocity-Verlet Method\n\nWe can now easily add the Verlet method to our original code as\n\n\n```python\nDeltaT = 0.01\n#set up arrays \ntfinal = 10\nn = ceil(tfinal/DeltaT)\n# set up arrays for t, a, v, and x\nt = np.zeros(n)\nv = np.zeros((n,2))\nr = np.zeros((n,2))\n# Initial conditions as compact 2-dimensional arrays\nr0 = np.array([1.0,0.0])\nv0 = np.array([0.0,2*pi])\nr[0] = r0\nv[0] = v0\nFourpi2 = 4*pi*pi\n# Start integrating using the Velocity-Verlet method\nfor i in range(n-1):\n # Set up the acceleration; note that we need the norm of the position vector\n # Here you could have defined your own function for this\n rabs = sqrt(sum(r[i]*r[i]))\n a = -Fourpi2*r[i]/(rabs**3)\n # update velocity, time and position using the Velocity-Verlet method\n r[i+1] = r[i] + DeltaT*v[i]+0.5*(DeltaT**2)*a\n rabs = sqrt(sum(r[i+1]*r[i+1]))\n anew = -4*(pi**2)*r[i+1]/(rabs**3)\n v[i+1] = v[i] + 0.5*DeltaT*(a+anew)\n t[i+1] = t[i] + DeltaT\n# Plot the orbit (y versus x); distances are in astronomical units\nfig, ax = plt.subplots()\nax.set_xlabel('x[AU]')\nax.set_ylabel('y[AU]')\nax.plot(r[:,0], r[:,1])\nfig.tight_layout()\nsave_fig(\"EarthSunVV\")\nplt.show()\n```\n\nYou can easily generalize the calculation of the forces by defining a function\nwhich takes in as input the various variables. We leave this as a challenge to you.\n\n\n\n\n## Harmonic Oscillator\n\nThe harmonic oscillator is omnipresent in physics. Although you may think \nof this as being related to springs, it, or an equivalent\nmathematical representation, appears in just about any problem where a\nmode is sitting near its potential energy minimum. 
At that point,\n$\\partial_x V(x)=0$, and the first non-zero term (aside from a\nconstant) in the potential energy is that of a harmonic oscillator. In\na solid, sound modes (phonons) are built on a picture of coupled\nharmonic oscillators, and in relativistic field theory the fundamental\ninteractions are also built on coupled oscillators positioned\ninfinitesimally close to one another in space. The phenomenon of\nresonance of an oscillator driven at a fixed frequency plays out\nrepeatedly in atomic, nuclear and high-energy physics, where quantum\nmechanically the evolution of a state oscillates according to\n$e^{-iEt}$ and exciting discrete quantum states has very similar\nmathematics to exciting discrete states of an oscillator.\n\n\n## Harmonic Oscillator, deriving the Equations\nThe potential energy for a single particle as a function of its position $x$ can be written as a Taylor expansion about some point $x_0$\n\n\n
\n\n$$\n\\begin{equation}\nV(x)=V(x_0)+(x-x_0)\\left.\\partial_xV(x)\\right|_{x_0}+\\frac{1}{2}(x-x_0)^2\\left.\\partial_x^2V(x)\\right|_{x_0}\n+\\frac{1}{3!}(x-x_0)^3\\left.\\partial_x^3V(x)\\right|_{x_0}+\\cdots\n\\label{_auto7} \\tag{8}\n\\end{equation}\n$$\n\nIf the position $x_0$ is at the minimum of the potential, the first two non-zero terms of the potential are\n\n$$\n\\begin{eqnarray}\nV(x)&\\approx& V(x_0)+\\frac{1}{2}(x-x_0)^2\\left.\\partial_x^2V(x)\\right|_{x_0},\\\\\n\\nonumber\n&=&V(x_0)+\\frac{1}{2}k(x-x_0)^2,~~~~k\\equiv \\left.\\partial_x^2V(x)\\right|_{x_0},\\\\\n\\nonumber\nF&=&-\\partial_xV(x)=-k(x-x_0).\n\\end{eqnarray}\n$$\n\nPut into Newton's 2nd law (assuming $x_0=0$),\n\n$$\n\\begin{eqnarray}\nm\\ddot{x}&=&-kx,\\\\\nx&=&A\\cos(\\omega_0 t-\\phi),~~~\\omega_0=\\sqrt{k/m}.\n\\end{eqnarray}\n$$\n\n## Harmonic Oscillator, Technicalities\n\nHere $A$ and $\\phi$ are arbitrary. Equivalently, one could have\nwritten this as $A\\cos(\\omega_0 t)+B\\sin(\\omega_0 t)$, or as the real\npart of $Ae^{i\\omega_0 t}$. In this last case $A$ could be an\narbitrary complex constant. Thus, there are 2 arbitrary constants\n(either $A$ and $B$ or $A$ and $\\phi$, or the real and imaginary part\nof one complex constant). This is the expectation for a second order\ndifferential equation, and also agrees with the physical expectation\nthat if you know a particle's initial velocity and position you should\nbe able to define its future motion, and that those two arbitrary\nconditions should translate to two arbitrary constants.\n\nA key feature of harmonic motion is that the system repeats itself\nafter a time $T=1/f$, where $f$ is the frequency, and $\\omega=2\\pi f$\nis the angular frequency. The period of the motion is independent of\nthe amplitude. However, this independence is only exact when one can\nneglect higher terms of the potential, $x^3, x^4\\cdots$. 
One can\nneglect these terms for sufficiently small amplitudes, and for larger\namplitudes the motion is no longer purely sinusoidal, and even though\nthe motion repeats itself, the time for repeating the motion is no\nlonger independent of the amplitude.\n\nOne can also calculate the velocity and the kinetic energy as a function of time,\n\n$$\n\\begin{eqnarray}\n\\dot{x}&=&-\\omega_0A\\sin(\\omega_0 t-\\phi),\\\\\n\\nonumber\nK&=&\\frac{1}{2}m\\dot{x}^2=\\frac{m\\omega_0^2A^2}{2}\\sin^2(\\omega_0t-\\phi),\\\\\n\\nonumber\n&=&\\frac{k}{2}A^2\\sin^2(\\omega_0t-\\phi).\n\\end{eqnarray}\n$$\n\n## Harmonic Oscillator, Total Energy\n\nThe total energy is then\n\n\n
\n\n$$\n\\begin{equation}\nE=K+V=\\frac{1}{2}m\\dot{x}^2+\\frac{1}{2}kx^2=\\frac{1}{2}kA^2.\n\\label{_auto8} \\tag{9}\n\\end{equation}\n$$\n\nThe total energy then goes as the square of the amplitude.\n\n\nA pendulum is an example of a harmonic oscillator. By expanding the\nkinetic and potential energies for small angles, find the frequency for\na pendulum of length $L$ with all the mass $m$ centered at the end by\nwriting the equations of motion in the form of a harmonic oscillator.\n\nThe potential energy and kinetic energies are (for $x$ being the displacement)\n\n$$\n\\begin{eqnarray*}\nV&=&mgL(1-\\cos\\theta)\\approx mgL\\frac{x^2}{2L^2},\\\\\nK&=&\\frac{1}{2}mL^2\\dot{\\theta}^2\\approx \\frac{m}{2}\\dot{x}^2.\n\\end{eqnarray*}\n$$\n\nFor small $x$ Newton's 2nd law becomes\n\n$$\nm\\ddot{x}=-\\frac{mg}{L}x,\n$$\n\nand the spring constant would appear to be $k=mg/L$, which makes the\nangular frequency equal to $\\omega_0=\\sqrt{g/L}$. Note that the frequency is\nindependent of the mass.\n", "meta": {"hexsha": "8ababaca116b5bd441fbba0fea20fad01372a225", "size": 46330, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/pub/week7/ipynb/.ipynb_checkpoints/week7-checkpoint.ipynb", "max_stars_repo_name": "Shield94/Physics321", "max_stars_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2020-01-09T17:41:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T00:48:58.000Z", "max_issues_repo_path": "doc/pub/week7/ipynb/.ipynb_checkpoints/week7-checkpoint.ipynb", "max_issues_repo_name": "Shield94/Physics321", "max_issues_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-01-08T03:47:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-15T15:02:57.000Z", "max_forks_repo_path": 
"doc/pub/week7/ipynb/.ipynb_checkpoints/week7-checkpoint.ipynb", "max_forks_repo_name": "Shield94/Physics321", "max_forks_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 33, "max_forks_repo_forks_event_min_datetime": "2020-01-10T20:40:55.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T20:28:41.000Z", "avg_line_length": 24.9892125135, "max_line_length": 253, "alphanum_fraction": 0.5173105979, "converted": true, "num_tokens": 8454, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4263215925474903, "lm_q2_score": 0.25683199707586785, "lm_q1q2_score": 0.10949302601053636}} {"text": "

Notebook 1A: Exploring the most useful and important features within a Jupyter notebook\n\n***\n_This will be an incomplete and biased run-through of the important features and functions in a Jupyter notebook_\n\nIncomplete, because no one notebook (or set) could cover all of the available features and abilities of the Jupyter project\n*** \n# 1. Cell Types\n# 2. Editing modes\n# 3. Imports and output\n# 4. Help \n\n\n### Great sources of Jupyter notebooks to explore and tick off the list:\n- [Jupyter.org, a range of tools for reproducible computing](https://jupyter.org/try) \n\n- [The Carpentries, great basic training for software and data handling](https://software-carpentry.org/lessons/index.html) \n\n- [And the main Jupyter notebook documentation site](https://jupyter-notebook.readthedocs.io/en/stable/) \n\n- [IPython, information for interactive computing, the predecessor of Jupyter](https://nbviewer.jupyter.org/github/ipython/ipython/blob/master/examples/IPython%20Kernel/Index.ipynb) \n \n\n\n>We love stories. The ability to provide information and **context** to accompany _data_ and `code` means that, in addition to providing comments for the function and behavior of the `code`, you can return the results of each stage of your computation, as well as allowing a more general discussion of motivations and possible findings\n\n# 1. Cell types:\n\n1. Markdown \n2. Code\n3. Heading\n4. Raw NBConvert\n\n\n\n## 1_1: Markdown and Formatting\n\n# Heading 1\n## Heading 2 \n### Heading 3 \n\nbody text \n_italic_ or *italic*\n__Bold__ or **Bold**\n\n

Markdown cells also render HTML

\n
\n

When unsure you can always import 🐼 🐼

\n
\n\n\nNot that I am encouraging even a light dusting of emoji, but a great number of options are available to communicate:\nhttps://www.w3schools.com/charsets/ref_emoji.asp \n\n\nFurther resources:\n- https://daringfireball.net/projects/markdown/\n- https://www.w3schools.com/\n\n\n
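Since code cells in this notebook are Python, it is worth noting that emoji are just ordinary Unicode characters, so you can also produce them programmatically rather than pasting glyphs into markdown. A minimal sketch using only the standard library (`unicodedata` ships with core Python):

```python
import unicodedata

# Emoji are ordinary Unicode code points; Python can build them
# from their official names, no copy-pasting of glyphs required.
panda = "\N{PANDA FACE}"
print(panda, hex(ord(panda)))    # the glyph and its code point (U+1F43C)

# unicodedata looks names up in the other direction
print(unicodedata.name(panda))   # PANDA FACE
```

Markdown cells accept the glyphs (or HTML entities) directly, while code cells can generate them on the fly like this.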
\n
\n Why this matters? \n
\n
\n\n***\n> ### The first thing we may need to look at is the types of information that we want to provide to the reader \n> ### (99% of the time, that will be you). \n***\nOver time these might be:\n \n - Overall aims and research goals\n - Specific tasks to be achieved here\n - Descriptions of data \n - Libraries and code \n\n## 1_2: Code cells\n\n\n```python\n#code and comments\na = 1\nb = 2\na + b\n```\n\n### We can use markdown to format code in a number of ways, using \\`\\`\\` followed by a language (_e.g._ \\`\\`\\`python) then code on lines below finished with \\`\\`\\` on a separate line\n\n```python\ndef TimesTable(val=1, n=10):\n for i in range(1,n):\n print (i*val)\n```\n\n### a formatted html example\n\n```html\n

Hello world!

\n```\n\n### Equations: LaTeX and MathJax options to illustrate \n\nFull LaTeX is an option for those familiar with it\n\n\n```latex\n%%latex\n\\begin{align}\nF(k) = {\\sum}_0^{\\infty}(x) e^{2\\pi}y\n\\end{align}\n```\n\n### MathJax provides another route to rendered equations \nhttps://www.mathjax.org/\n\n\n```python\n# https://www.mathjax.org/\n\nfrom IPython.display import Math\nMath(r'F(k) = \\ {\\sum}_0^{\\infty}(x) e^{2\\pi}y')\n```\n\n#### But markdown can also shorten this process\nby using '$' before and after your text\n\n$F(k) = \\ {\\sum}_0^{\\infty}(x) e^{2\\pi}y$\n\n# 2. Editing Modes\n\n# 3. Imports and output\n\n### Well done for getting this far. You deserve a high-five. \n### Just a little work to earn it...\n---\nWe will import a couple of packages and use these to show live code in the notebook. \nRemember, we are executing each cell in turn (either to render the HTML or Markdown, or to run the code), by hitting the 'Run' button, or by hitting **Ctrl+Enter** / **Shift+Enter** / **Cmd+Enter**\n\nNotice the `In [*]` near the left of each cell as the cell is executed. This changes to a number when finished. 
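One more detail of cell output is worth spelling out before we start importing packages: when a code cell runs, only the value of its final expression is echoed back as `Out[n]`; earlier values are discarded unless you print or display them explicitly. A small sketch (the variable names are just for illustration):

```python
# print() always shows its output, wherever the code runs;
# of the bare expressions, only the last one in a cell is echoed as Out[n].
a = 6
b = 7
print("explicit output:", a * b)   # shown unconditionally
a + b                              # echoed as Out[n] when run in a notebook cell
```

This is why you will sometimes see a notebook cell end with a bare variable name: it is a deliberate request to display that value.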
\n\n---\n\n\n\n\n```python\nimport IPython.display as ipd\nimport random\n```\n\n\n```python\nimage = ipd.Image('https://upload.wikimedia.org/wikipedia/commons/f/fb/High_five%21%21.jpg', width=200)\nipd.display(image) , image.metadata\n```\n\n\n```python\nh_fives = []\nh_fives.append(r'')\nh_fives.append(r'')\nh_fives.append(r'')\nh_fives.append(r'')\n```\n\n\n```python\n# a high-five for you (re-run the cell for another)\n\nwebout = ipd.HTML(h_fives[random.randint(0,3)])\nwebout\n```\n\n###### You also have many options provided by IPython and Jupyter for other media to enrich your presentations and explanations.\n\nMore detailed notes and notebooks are provided here:\n\nhttps://nbviewer.jupyter.org/github/ipython/ipython/blob/master/examples/IPython%20Kernel/Index.ipynb\n\nhttps://jupyter.org/try\n\n\n\n```python\nvid = ipd.YouTubeVideo('3VDw7XIulIk', autoplay='0', width=720, height=400)\n\nipd.display(vid)\n```\n\n\n```python\n# a list of the available cell and line 'magics'\n\n%lsmagic\n```\n\n\n```python\n%%html\n

HTML magics can \nbe used for whole cells

\n
\n```\n\n
\n\n\n```python\ndef TimesTable(val=1, n=10):\n for i in range(1,n):\n print (i*val)\n```\n\n\n```python\n#%timeit\n%time TimesTable(3,10)\n```\n\n# 4. Help\n\n\n```python\n?random.randint\n#core python\n```\n\n\n```python\nimport pandas as pd\n```\n\n\n```python\npd.read_csv?\n\n```\n\n\n```python\n\n```\n\n## Conclusion: Jupyter notebooks are an environment in which you can learn (and recall), explain and explore. Code, data, and context\n\n# Please try:\n - creating and renaming a new notebook for yourself\n - making a copy of an exisitng notebook\n - Search for an example of an interactive notebook from your area of research\n - \n \n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "2d3da21e2c8606f8ab67e2cdd1706cd787b6555a", "size": 12368, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Notebook1A_kicking_tyres.ipynb", "max_stars_repo_name": "LozRiviera/LAB_Intro_Jupyter", "max_stars_repo_head_hexsha": "41f3ff04bbda6d573f1c34a7805fa4794492cdcc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-16T10:21:48.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-16T10:21:48.000Z", "max_issues_repo_path": "Notebook1A_kicking_tyres.ipynb", "max_issues_repo_name": "LozRiviera/LAB_Intro_Jupyter", "max_issues_repo_head_hexsha": "41f3ff04bbda6d573f1c34a7805fa4794492cdcc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notebook1A_kicking_tyres.ipynb", "max_forks_repo_name": "LozRiviera/LAB_Intro_Jupyter", "max_forks_repo_head_hexsha": "41f3ff04bbda6d573f1c34a7805fa4794492cdcc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-06-09T10:08:01.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-09T10:08:01.000Z", "avg_line_length": 25.3963039014, 
"max_line_length": 379, "alphanum_fraction": 0.5494016818, "converted": true, "num_tokens": 1711, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.256831980010821, "lm_q2_score": 0.42632159254749036, "lm_q1q2_score": 0.10949301873533843}} {"text": "```python\n%matplotlib inline\n```\n\n\nWord Embeddings: Encoding Lexical Semantics\n===========================================\n\nWord embeddings are dense vectors of real numbers, one per word in your\nvocabulary. In NLP, it is almost always the case that your features are\nwords! But how should you represent a word in a computer? You could\nstore its ascii character representation, but that only tells you what\nthe word *is*, it doesn't say much about what it *means* (you might be\nable to derive its part of speech from its affixes, or properties from\nits capitalization, but not much). Even more, in what sense could you\ncombine these representations? We often want dense outputs from our\nneural networks, where the inputs are $|V|$ dimensional, where\n$V$ is our vocabulary, but often the outputs are only a few\ndimensional (if we are only predicting a handful of labels, for\ninstance). How do we get from a massive dimensional space to a smaller\ndimensional space?\n\nHow about instead of ascii representations, we use a one-hot encoding?\nThat is, we represent the word $w$ by\n\n\\begin{align}\\overbrace{\\left[ 0, 0, \\dots, 1, \\dots, 0, 0 \\right]}^\\text{|V| elements}\\end{align}\n\nwhere the 1 is in a location unique to $w$. Any other word will\nhave a 1 in some other location, and a 0 everywhere else.\n\nThere is an enormous drawback to this representation, besides just how\nhuge it is. It basically treats all words as independent entities with\nno relation to each other. What we really want is some notion of\n*similarity* between words. Why? Let's see an example.\n\nSuppose we are building a language model. 
Suppose we have seen the\nsentences\n\n* The mathematician ran to the store.\n* The physicist ran to the store.\n* The mathematician solved the open problem.\n\nin our training data. Now suppose we get a new sentence never before\nseen in our training data:\n\n* The physicist solved the open problem.\n\nOur language model might do OK on this sentence, but wouldn't it be much\nbetter if we could use the following two facts:\n\n* We have seen mathematician and physicist in the same role in a sentence. Somehow they\n have a semantic relation.\n* We have seen mathematician in the same role in this new unseen sentence\n as we are now seeing physicist.\n\nand then infer that physicist is actually a good fit in the new unseen\nsentence? This is what we mean by a notion of similarity: we mean\n*semantic similarity*, not simply having similar orthographic\nrepresentations. It is a technique to combat the sparsity of linguistic\ndata, by connecting the dots between what we have seen and what we\nhaven't. This example of course relies on a fundamental linguistic\nassumption: that words appearing in similar contexts are related to each\nother semantically. This is called the `distributional\nhypothesis `__.\n\n\nGetting Dense Word Embeddings\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nHow can we solve this problem? That is, how could we actually encode\nsemantic similarity in words? Maybe we think up some semantic\nattributes. For example, we see that both mathematicians and physicists\ncan run, so maybe we give these words a high score for the \"is able to\nrun\" semantic attribute. 
Think of some other attributes, and imagine\nwhat you might score some common words on those attributes.\n\nIf each attribute is a dimension, then we might give each word a vector,\nlike this:\n\n\\begin{align}q_\\text{mathematician} = \\left[ \\overbrace{2.3}^\\text{can run},\n \\overbrace{9.4}^\\text{likes coffee}, \\overbrace{-5.5}^\\text{majored in Physics}, \\dots \\right]\\end{align}\n\n\\begin{align}q_\\text{physicist} = \\left[ \\overbrace{2.5}^\\text{can run},\n \\overbrace{9.1}^\\text{likes coffee}, \\overbrace{6.4}^\\text{majored in Physics}, \\dots \\right]\\end{align}\n\nThen we can get a measure of similarity between these words by doing:\n\n\\begin{align}\\text{Similarity}(\\text{physicist}, \\text{mathematician}) = q_\\text{physicist} \\cdot q_\\text{mathematician}\\end{align}\n\nAlthough it is more common to normalize by the lengths:\n\n\\begin{align}\\text{Similarity}(\\text{physicist}, \\text{mathematician}) = \\frac{q_\\text{physicist} \\cdot q_\\text{mathematician}}\n {\\| q_\\text{physicist} \\| \\| q_\\text{mathematician} \\|} = \\cos (\\phi)\\end{align}\n\nwhere $\\phi$ is the angle between the two vectors. That way,\nextremely similar words (words whose embeddings point in the same\ndirection) will have similarity 1. Extremely dissimilar words should\nhave similarity -1.\n\n\nYou can think of the sparse one-hot vectors from the beginning of this\nsection as a special case of these new vectors we have defined, where\neach word basically has similarity 0, and we gave each word some unique\nsemantic attribute. These new vectors are *dense*, which is to say their\nentries are (typically) non-zero.\n\nBut these new vectors are a big pain: you could think of thousands of\ndifferent semantic attributes that might be relevant to determining\nsimilarity, and how on earth would you set the values of the different\nattributes? 
Central to the idea of deep learning is that the neural\nnetwork learns representations of the features, rather than requiring\nthe programmer to design them herself. So why not just let the word\nembeddings be parameters in our model, and then be updated during\ntraining? This is exactly what we will do. We will have some *latent\nsemantic attributes* that the network can, in principle, learn. Note\nthat the word embeddings will probably not be interpretable. That is,\nalthough with our hand-crafted vectors above we can see that\nmathematicians and physicists are similar in that they both like coffee,\nif we allow a neural network to learn the embeddings and see that both\nmathematicians and physicists have a large value in the second\ndimension, it is not clear what that means. They are similar in some\nlatent semantic dimension, but this probably has no interpretation to\nus.\n\n\nIn summary, **word embeddings are a representation of the *semantics* of\na word, efficiently encoding semantic information that might be relevant\nto the task at hand**. You can embed other things too: part of speech\ntags, parse trees, anything! The idea of feature embeddings is central\nto the field.\n\n\nWord Embeddings in Pytorch\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nBefore we get to a worked example and an exercise, a few quick notes\nabout how to use embeddings in Pytorch and in deep learning programming\nin general. Similar to how we defined a unique index for each word when\nmaking one-hot vectors, we also need to define an index for each word\nwhen using embeddings. These will be keys into a lookup table. That is,\nembeddings are stored as a $|V| \\times D$ matrix, where $D$\nis the dimensionality of the embeddings, such that the word assigned\nindex $i$ has its embedding stored in the $i$'th row of the\nmatrix. 
In all of my code, the mapping from words to indices is a\ndictionary named word\\_to\\_ix.\n\nThe module that allows you to use embeddings is torch.nn.Embedding,\nwhich takes two arguments: the vocabulary size, and the dimensionality\nof the embeddings.\n\nTo index into this table, you must use torch.LongTensor (since the\nindices are integers, not floats).\n\n\n\n\n\n```python\n# Author: Robert Guthrie\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nimport config\n\ntorch.manual_seed(1)\n```\n\n\n\n\n \n\n\n\n\n```python\nword_to_ix = {\"hello\": 0, \"world\": 1}\nembeds = nn.Embedding(2, 5) # 2 words in vocab, 5 dimensional embeddings\nlookup_tensor = torch.tensor([word_to_ix[\"hello\"]], dtype=torch.long)\nhello_embed = embeds(lookup_tensor)\nprint(hello_embed)\n```\n\n tensor([[-0.8923, -0.0583, -0.1955, -0.9656, 0.4224]], grad_fn=)\n\n\nAn Example: N-Gram Language Modeling\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nRecall that in an n-gram language model, given a sequence of words\n$w$, we want to compute\n\n\\begin{align}P(w_i | w_{i-1}, w_{i-2}, \\dots, w_{i-n+1} )\\end{align}\n\nWhere $w_i$ is the ith word of the sequence.\n\nIn this example, we will compute the loss function on some training\nexamples and update the parameters with backpropagation.\n\n\n\n\n\n```python\nCONTEXT_SIZE = 2\nEMBEDDING_DIM = 10\n# We will use Shakespeare Sonnet 2\ntest_sentence = \"\"\"When forty winters shall besiege thy brow,\nAnd dig deep trenches in thy beauty's field,\nThy youth's proud livery so gazed on now,\nWill be a totter'd weed of small worth held:\nThen being asked, where all thy beauty lies,\nWhere all the treasure of thy lusty days;\nTo say, within thine own deep sunken eyes,\nWere an all-eating shame, and thriftless praise.\nHow much more praise deserv'd thy beauty's use,\nIf thou couldst answer 'This fair child of mine\nShall sum my count, and make my old excuse,'\nProving his beauty by succession 
thine!\nThis were to be new made when thou art old,\nAnd see thy blood warm when thou feel'st it cold.\"\"\".split()\n# we should tokenize the input, but we will ignore that for now\n# build a list of tuples. Each tuple is ([ word_i-2, word_i-1 ], target word)\ntrigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])\n for i in range(len(test_sentence) - 2)]\n# print the first 3, just so you can see what they look like\nprint(trigrams[:3])\n\nvocab = set(test_sentence)\nword_to_ix = {word: i for i, word in enumerate(vocab)}\n\nprint(vocab)\nprint(word_to_ix)\n```\n\n [(['When', 'forty'], 'winters'), (['forty', 'winters'], 'shall'), (['winters', 'shall'], 'besiege')]\n {'say,', 'cold.', 'make', 'Were', 'asked,', 'of', 'see', \"deserv'd\", 'Shall', 'small', 'thriftless', 'winters', 'to', 'old,', 'deep', 'praise.', 'all-eating', 'made', 'This', 'the', 'lusty', 'praise', 'where', 'Proving', 'his', 'lies,', 'in', 'mine', 'thine', 'brow,', 'held:', 'now,', 'If', 'it', 'sunken', 'own', 'sum', 'a', 'fair', \"feel'st\", 'weed', 'count,', 'child', 'besiege', 'my', 'thy', 'And', 'couldst', 'on', 'Where', 'thine!', \"excuse,'\", 'more', 'new', \"youth's\", 'forty', 'dig', 'trenches', 'and', 'Then', 'When', \"beauty's\", 'so', 'answer', 'thou', 'old', 'much', 'livery', 'when', 'How', 'by', 'Will', 'Thy', 'days;', 'proud', \"totter'd\", 'shall', 'were', 'being', 'an', 'all', 'To', 'warm', 'be', 'blood', 'field,', \"'This\", 'treasure', 'succession', 'worth', 'shame,', 'use,', 'art', 'beauty', 'gazed', 'within', 'eyes,'}\n {'say,': 0, 'cold.': 1, 'make': 2, 'Were': 3, 'asked,': 4, 'of': 5, 'see': 6, \"deserv'd\": 7, 'Shall': 8, 'small': 9, 'thriftless': 10, 'winters': 11, 'to': 12, 'old,': 13, 'deep': 14, 'praise.': 15, 'all-eating': 16, 'made': 17, 'This': 18, 'the': 19, 'lusty': 20, 'praise': 21, 'where': 22, 'Proving': 23, 'his': 24, 'lies,': 25, 'in': 26, 'mine': 27, 'thine': 28, 'brow,': 29, 'held:': 30, 'now,': 31, 'If': 32, 'it': 33, 'sunken': 34, 
'own': 35, 'sum': 36, 'a': 37, 'fair': 38, \"feel'st\": 39, 'weed': 40, 'count,': 41, 'child': 42, 'besiege': 43, 'my': 44, 'thy': 45, 'And': 46, 'couldst': 47, 'on': 48, 'Where': 49, 'thine!': 50, \"excuse,'\": 51, 'more': 52, 'new': 53, \"youth's\": 54, 'forty': 55, 'dig': 56, 'trenches': 57, 'and': 58, 'Then': 59, 'When': 60, \"beauty's\": 61, 'so': 62, 'answer': 63, 'thou': 64, 'old': 65, 'much': 66, 'livery': 67, 'when': 68, 'How': 69, 'by': 70, 'Will': 71, 'Thy': 72, 'days;': 73, 'proud': 74, \"totter'd\": 75, 'shall': 76, 'were': 77, 'being': 78, 'an': 79, 'all': 80, 'To': 81, 'warm': 82, 'be': 83, 'blood': 84, 'field,': 85, \"'This\": 86, 'treasure': 87, 'succession': 88, 'worth': 89, 'shame,': 90, 'use,': 91, 'art': 92, 'beauty': 93, 'gazed': 94, 'within': 95, 'eyes,': 96}\n\n\n\n```python\nclass NGramLanguageModeler(nn.Module):\n\n def __init__(self, vocab_size, embedding_dim, context_size):\n super(NGramLanguageModeler, self).__init__()\n self.embeddings = nn.Embedding(vocab_size, embedding_dim)\n self.linear1 = nn.Linear(context_size * embedding_dim, 128)\n self.linear2 = nn.Linear(128, vocab_size)\n\n def forward(self, inputs):\n embeds = self.embeddings(inputs).view((1, -1))\n out = F.relu(self.linear1(embeds))\n out = self.linear2(out)\n log_probs = F.log_softmax(out, dim=1)\n return log_probs\n\n\nlosses = []\nloss_function = nn.NLLLoss()\nmodel = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)\noptimizer = optim.SGD(model.parameters(), lr=0.001)\n\n\n```\n\n\n```python\nfor epoch in range(10):\n total_loss = 0\n for context, target in trigrams:\n\n # Step 1. Prepare the inputs to be passed to the model (i.e, turn the words\n # into integer indices and wrap them in tensors)\n context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)\n\n # Step 2. Recall that torch *accumulates* gradients. 
Before passing in a\n # new instance, you need to zero out the gradients from the old\n # instance\n model.zero_grad()\n\n # Step 3. Run the forward pass, getting log probabilities over next\n # words\n log_probs = model(context_idxs)\n\n # Step 4. Compute your loss function. (Again, Torch wants the target\n # word wrapped in a tensor)\n loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))\n\n # Step 5. Do the backward pass and update the gradient\n loss.backward()\n optimizer.step()\n\n # Get the Python number from a 1-element Tensor by calling tensor.item()\n total_loss += loss.item()\n losses.append(total_loss)\nprint(losses) # The loss decreased every iteration over the training data!\n```\n\nExercise: Computing Word Embeddings: Continuous Bag-of-Words\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe Continuous Bag-of-Words model (CBOW) is frequently used in NLP deep\nlearning. It is a model that tries to predict words given the context of\na few words before and a few words after the target word. This is\ndistinct from language modeling, since CBOW is not sequential and does\nnot have to be probabilistic. Typically, CBOW is used to quickly train\nword embeddings, and these embeddings are used to initialize the\nembeddings of some more complicated model. Usually, this is referred to\nas *pretraining embeddings*. It almost always helps performance a couple\nof percent.\n\nThe CBOW model is as follows. 
Given a target word $w_i$ and an\n$N$ context window on each side, $w_{i-1}, \\dots, w_{i-N}$\nand $w_{i+1}, \\dots, w_{i+N}$, referring to all context words\ncollectively as $C$, CBOW tries to minimize\n\n\\begin{align}-\\log p(w_i | C) = -\\log \\text{Softmax}(A(\\sum_{w \\in C} q_w) + b)\\end{align}\n\nwhere $q_w$ is the embedding for word $w$.\n\nImplement this model in Pytorch by filling in the class below. Some\ntips:\n\n* Think about which parameters you need to define.\n* Make sure you know what shape each operation expects. Use .view() if you need to\n reshape.\n\n\n\n\n\n```python\nCONTEXT_SIZE = 2 # 2 words to the left, 2 to the right\nraw_text = \"\"\"We are about to study the idea of a computational process.\nComputational processes are abstract beings that inhabit computers.\nAs they evolve, processes manipulate other abstract things called data.\nThe evolution of a process is directed by a pattern of rules\ncalled a program. People create programs to direct processes. In effect,\nwe conjure the spirits of the computer with our spells.\"\"\".split()\n\n# By deriving a set from `raw_text`, we deduplicate the array\nvocab = set(raw_text)\nvocab_size = len(vocab)\n\nword_to_ix = {word: i for i, word in enumerate(vocab)}\ndata = []\nfor i in range(2, len(raw_text) - 2):\n context = [raw_text[i - 2], raw_text[i - 1],\n raw_text[i + 1], raw_text[i + 2]]\n target = raw_text[i]\n data.append((context, target))\nprint(data[:5])\n\nprint(word_to_ix)\n```\n\n [(['We', 'are', 'to', 'study'], 'about'), (['are', 'about', 'study', 'the'], 'to'), (['about', 'to', 'the', 'idea'], 'study'), (['to', 'study', 'idea', 'of'], 'the'), (['study', 'the', 'of', 'a'], 'idea')]\n {'computer': 0, 'evolution': 1, 'about': 2, 'they': 3, 'are': 4, 'called': 5, 'People': 6, 'idea': 7, 'computers.': 8, 'pattern': 9, 'of': 10, 'beings': 11, 'spirits': 12, 'other': 13, 'program.': 14, 'we': 15, 'a': 16, 'that': 17, 'to': 18, 'Computational': 19, 'directed': 20, 'our': 21, 'As': 22, 
'effect,': 23, 'direct': 24, 'conjure': 25, 'manipulate': 26, 'In': 27, 'spells.': 28, 'inhabit': 29, 'computational': 30, 'abstract': 31, 'study': 32, 'things': 33, 'processes': 34, 'The': 35, 'rules': 36, 'with': 37, 'process.': 38, 'data.': 39, 'evolve,': 40, 'the': 41, 'We': 42, 'create': 43, 'processes.': 44, 'programs': 45, 'process': 46, 'by': 47, 'is': 48}\n\n\n\n```python\ndef make_context_vector(context, word_to_ix):\n idxs = [word_to_ix[w] for w in context]\n return torch.tensor(idxs, dtype=torch.long)\n\n\nmake_context_vector(data[0][0], word_to_ix) # example\n```\n\n\n\n\n tensor([42, 4, 18, 32])\n\n\n\n\n```python\nclass CBOW(nn.Module):\n\n def __init__(self, vocab_size, embedding_dim, context_size):\n super(CBOW, self).__init__()\n self.embeddings = nn.Embedding(vocab_size, embedding_dim)\n self.linear1 = nn.Linear(context_size * embedding_dim, 128)\n self.linear2 = nn.Linear(128, vocab_size)\n \n self.to(config.HOST_DEVICE)\n\n def forward(self, inputs):\n inputs = inputs.to(config.HOST_DEVICE)\n embeds = self.embeddings(inputs).view((1, -1))\n out = F.relu(self.linear1(embeds))\n out = self.linear2(out)\n log_probs = F.log_softmax(out, dim=1)\n return log_probs\n\n# create your model and train. here are some functions to help you make\n# the data ready for use by your module\n\nlosses = []\nloss_function = nn.NLLLoss()\nmodel = CBOW(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE *2)\noptimizer = optim.SGD(model.parameters(), lr=0.001)\n\n\n```\n\n\n```python\nfor epoch in range(100):\n total_loss = 0\n for context, target in data:\n\n context_idxs = torch.tensor(make_context_vector(context, word_to_ix), dtype=torch.long)\n\n model.zero_grad()\n log_probs = model(context_idxs)\n loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long).to(config.HOST_DEVICE))\n\n # Step 5. 
Do the backward pass and update the gradient\n loss.backward()\n optimizer.step()\n\n # Get the Python number from a 1-element Tensor by calling tensor.item()\n total_loss += loss.item()\n losses.append(total_loss)\nprint(losses) # The loss decreased every iteration over the training data!\n```\n\n [227.96513605117798, 226.33953070640564, 224.7250897884369, 223.12321376800537, 221.5327503681183, 219.95335602760315, 218.38373279571533, 216.82241106033325, 215.27163434028625, 213.73100638389587, 212.1968069076538, 210.6700885295868, 209.1500744819641, 207.6340024471283, 206.12175297737122, 204.61291980743408, 203.10775470733643, 201.60511016845703, 200.10666966438293, 198.61163640022278, 197.1199231147766, 195.63121843338013, 194.14271926879883, 192.65562915802002, 191.16730761528015, 189.68121099472046, 188.1959400177002, 186.71414136886597, 185.23253798484802, 183.75069332122803, 182.26797318458557, 180.7839744091034, 179.30089831352234, 177.81779623031616, 176.33366703987122, 174.85206604003906, 173.36930131912231, 171.88582372665405, 170.40109252929688, 168.91687297821045, 167.43051958084106, 165.9450876712799, 164.45748043060303, 162.96973729133606, 161.48047637939453, 159.991441488266, 158.50047850608826, 157.010014295578, 155.51938772201538, 154.0270857810974, 152.53851675987244, 151.04881501197815, 149.55951476097107, 148.0729103088379, 146.58622431755066, 145.09854412078857, 143.61334228515625, 142.12981605529785, 140.64747190475464, 139.1655023097992, 137.6874279975891, 136.2080373764038, 134.7309467792511, 133.25460290908813, 131.78092551231384, 130.30930876731873, 128.8403356075287, 127.37420415878296, 125.90907168388367, 124.44745779037476, 122.98964381217957, 121.52959251403809, 120.07670259475708, 118.62520599365234, 117.17907500267029, 115.73558521270752, 114.2972764968872, 112.86128568649292, 111.43117451667786, 110.0034122467041, 108.58165979385376, 107.16655611991882, 105.75730538368225, 104.35439801216125, 102.95808625221252, 101.56578516960144, 
100.1802282333374, 98.80347967147827, 97.43173885345459, 96.06761407852173, 94.71109199523926, 93.36294507980347, 92.02087163925171, 90.68952512741089, 89.36666655540466, 88.05137276649475, 86.74717879295349, 85.45002460479736, 84.16323351860046, 82.88799285888672, 81.62128496170044, 80.36572122573853, 79.12079429626465, 77.88800954818726, 76.66643762588501, 75.45351839065552, 74.25549387931824, 73.06729936599731, 71.88959503173828, 70.72631669044495]\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "a02580220d602c2b749e0e176fb506985f8c5977", "size": 26694, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "dl4nlp/word_embedding/word_embeddings_tutorial.ipynb", "max_stars_repo_name": "rdcsung/practical-pytorch", "max_stars_repo_head_hexsha": "6c57013c16eb928232af5e9bbe886a41c4ac9f9e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "dl4nlp/word_embedding/word_embeddings_tutorial.ipynb", "max_issues_repo_name": "rdcsung/practical-pytorch", "max_issues_repo_head_hexsha": "6c57013c16eb928232af5e9bbe886a41c4ac9f9e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dl4nlp/word_embedding/word_embeddings_tutorial.ipynb", "max_forks_repo_name": "rdcsung/practical-pytorch", "max_forks_repo_head_hexsha": "6c57013c16eb928232af5e9bbe886a41c4ac9f9e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.9245960503, "max_line_length": 2163, "alphanum_fraction": 0.6056791788, "converted": true, "num_tokens": 6293, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.43014736319616964, "lm_q2_score": 0.2538610126142736, "lm_q1q2_score": 0.10919764519433933}} {"text": "# SNLP Assignment 1\n\nName 1: Sangeet Sagar
\nStudent id 1: 7009050
\nEmail 1: sasa00001@stud.uni-saarland.de
\n\n\nName 2: Nikhil Paliwal
\nStudent id 2: 7009915
\nEmail 2: nipa00002@stud.uni-saarland.de
\n\n**Instructions:** Read each question carefully.
\nMake sure you appropriately comment your code wherever required. Your final submission should contain the completed Notebook and the respective Python files for exercises 2, 3, and the bonus question (if you attempt it). There is no need to submit the data files.
\nUpload the zipped folder in Teams. Make sure to click on \"Turn-in\" after you upload your submission, otherwise the assignment will not be considered submitted. Only one member of the group should make the submission.\n\n---\n\n## Exercise 1 (0 points)\n\nPlease carefully read the instructions on how to use Jupyter Notebooks and how to hand in the assignments.\n\n## Exercise 2 (4 = 1+1+2 points)\n\nThe **Mandelbrot distribution** is a power-law distribution over ranked data.\n\\begin{equation}\nf(r) \\propto \\frac{m}{(c+r)^B}\n\\end{equation}\nHere $r$ is the rank of the data point and $c$ and $B$ are the parameters that define the distribution. $m$ is a normalizing constant ensuring that the distribution is a true probability distribution. \n\n**Zipf's** law, or rather the Zipfian distribution, is a special case of the Mandelbrot distribution. It holds that the relative frequency of a word in a corpus is inversely proportional to its rank in the frequency table. \n\n1. Which values for $m$, $c$ yield the Zipfian distribution? Explain how you arrived at these values. Show the result in the form of a $\\LaTeX$ formula. What is a reasonable value for $B$? (1 Point)\n\n2. Look again at Chapter 2, Slide 16. Why do the parameters of the distribution ($m$, $c$, $B$) differ in practice, i.e. for a real language, whether natural or artificial, from those obtained in 1? (1 Point)\n\n3. The so-called stick-breaking process is a construction used in the [Dirichlet process](https://en.wikipedia.org/wiki/Dirichlet_process#The_stick-breaking_process). (The following [blog post](https://medium.com/@albertoarrigoni/dirichlet-processes-917f376b02d2) gives a nice introduction to the Dirichlet process; you should at least read the part concerned with stick-breaking.)\nThe function `stick_breaking` in the code cell below draws a sample from a stick-breaking process with intensity $\\alpha$. <br/>
\n * Choose a suitable value of $\\alpha$ such that the distribution follows Zipf's law, and explain how $\\alpha$ affects the distribution. \n * Sample 100 values from the distribution, and plot them on log scale along with the 'ideal' Zipfian distribution obtained in 1. You will have to adjust the exponent $B$ such that it matches the distribution. The plotting code should be added to and imported from `exercise_2.py`. If you make changes to the code block below, please comment on why it was necessary.\n * Relate to your findings in 2.\n\n**Answers**\n1. For the Zipfian distribution, $m$ should be the frequency of the most frequent word and $c = 0$. We arrive at these values from the Zipfian distribution itself, which states that $f \\propto \\frac{1}{r}$, or $f \\cdot r \\propto k$ ($k$ being some constant). Since the denominator $(c+r)^B$ in the Mandelbrot distribution $\\frac{m}{(c+r)^B}$ must reduce to $r$, a reasonable value of $B$ is $1$.\n\\begin{equation}\nf(r) \\propto \\frac{m}{(c+r)^B} \\\\\nf(r) \\propto \\frac{m}{(0+r)^1} \\\\\nf(r) \\propto \\frac{m}{r} \\\\\nf(r) \\cdot r \\propto m \n\\end{equation}\nThis $m$ is analogous to the constant $k$ in the Zipfian distribution.\n\n2. The Zipfian distribution fails to capture some details of real data. When the slope parameter $B$ equals $1$, Zipf's law predicts that the rank-frequency graph (on a log-log scale) is a straight line. This is, however, a bad fit for the lowest- and highest-rank words, where the observed curve bulges slightly away from the line. Hence, to capture these details correctly, the fitted parameters $(m, c, B)$ differ from the ideal Zipfian values.\n\n3. \n    (i) Lower values of $\\alpha$ give a better fit to the \"ideal\" Zipfian distribution<br/>
\n    (ii) With $B=0.9$ and $\\alpha=1$, the resulting curve decays roughly exponentially at the tail. The \"ideal\" Zipfian distribution, however, matches only up to around rank $2^4$.\n\n\n```python\nfrom importlib import reload\nimport exercise_2\nexercise_2 = reload(exercise_2)\n\nn = 100\nalpha = 1 # TODO: choose alpha\nB = 0.9 # TODO: choose B\n\nstick_lengths = exercise_2.stick_breaking(n, alpha)\nexercise_2.plot_stick_lengths(stick_lengths, alpha, B) #TODO: in exercise_2.py\n```\n\n## Exercise 3 (6 = 3+0.5+1+0.5+1 points)\n\nThe following cell executes the function `analysis` from the `exercise_3.py` file. You are given a tokenized input (list of words). \n\n1. Plot the frequencies against rank for the inputs (different languages) along with an 'ideal' curve according to Zipf's law. Use the log-log scale. (3 points)\n\nThen, answer the following questions and elaborate:\n\n2. Does Zipf's law form an accurate prediction of your data? (0.5 point)\n3. What are the differences between the languages? What causes them? (1 point)\n4. In your plot, what causes the vertical gaps (\"steps\") for high-rank words (rightmost)? (0.5 point)\n5. Zipf's law \"predicts\" the frequency of the n-th rank word. Compute the mean squared error of these predictions $\\big(\\frac{1}{n} \\sum (\\hat{y} - y)^2\\big)$, and output the value to 10 decimal digits. (1 point)\n\nPlease extend `exercise_3.py`. Ideally the following cell remains unchanged and outputs your code. If you make changes, please comment on why it was necessary.\n\n**Answers**<br/>
\n1. Below\n\n2. No, the generated plots do not accurately follow Zipf's law, since an exact linear relationship is not observed. A slight bulge can be observed for words with rank < 10. The bulge becomes significant for the formal language, Python.\n\n3. The languages differ on the basis of morphology. English and German, being natural languages, have very frequent words like \"the\", \"are\", \"sie\", \"und\", etc.; however, there are not many such words, so the curve flattens only briefly. For a formal language like Python, there are more of these commonly occurring tokens, such as \".\", \"(\", \"input\", etc., so this flat region persists longer and the curve does not follow the ideal Zipfian distribution.\n\n4. The vertical gaps are caused by the least frequent words, whose frequencies are on the order of 1-10. This is because the $\\log(x)$ curve grows fastest for small $x$ ($\\log(1)=0$ and $\\log(10)=1$) and grows slowly afterwards. Therefore, when these frequencies increase from 1 to 2 (reading the graph right to left), the log plot shows a comparatively large jump, resulting in steps. \n\n5. <br/>
\n- MSE for English: 110.6970613293\n- MSE for German: 183.4465929729\n- MSE for Python: 890.9301358561\n\n\n```python\nimport tokenize\nfrom importlib import reload\nimport exercise_3\nexercise_3 = reload(exercise_3)\n\n# run on English text\nwith open(\"data/alice_in_wonderland.txt\", \"r\") as f:\n exercise_3.analysis(\"English\", f.read().lower().split())\n\n# run on German text\nwith open(\"data/alice_im_wunderland.txt\", \"r\") as f:\n exercise_3.analysis(\"German\", f.read().lower().split())\n\n# run on PyTorch source\nwith open(\"data/torch_activation.py\", \"r\") as f:\n tokens = [\n x.string\n for x in tokenize.generate_tokens(f.readline)\n if x.type not in {\n tokenize.COMMENT, tokenize.STRING, tokenize.INDENT, tokenize.DEDENT, tokenize.NEWLINE\n }\n ]\n exercise_3.analysis(\"Python\", tokens)\n```\n\n# Bonus (1 point)\n\nRepeat exercise 3 but on the character level (as opposed to word level). Your analysis can be much shorter but comment on the differences between the languages. You have to, however, write your own loader similar to the one we provided. 
For this, you may create a file `bonus.py` and import your code from there in a similar fashion to the above questions.\n", "meta": {"hexsha": "bbbd98138272d347cfb3f9191e583df19a95385a", "size": 167554, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "snlp/hw1/Assignment1.ipynb", "max_stars_repo_name": "sangeet2020/ss-21", "max_stars_repo_head_hexsha": "c2dbcf9668cb82b27a76e766a977483dd5fae0d4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-13T21:07:49.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T21:07:49.000Z", "max_issues_repo_path": "snlp/hw1/Assignment1.ipynb", "max_issues_repo_name": "sangeet2020/ss-21", "max_issues_repo_head_hexsha": "c2dbcf9668cb82b27a76e766a977483dd5fae0d4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "snlp/hw1/Assignment1.ipynb", "max_forks_repo_name": "sangeet2020/ss-21", "max_forks_repo_head_hexsha": "c2dbcf9668cb82b27a76e766a977483dd5fae0d4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 551.1644736842, "max_line_length": 43216, "alphanum_fraction": 0.9405266362, "converted": true, "num_tokens": 2090, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4455295350395727, "lm_q2_score": 0.24508501313237172, "lm_q1q2_score": 0.10919261194603314}} {"text": "Probabilistic Programming and Bayesian Methods for Hackers \n========\n\nWelcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). 
The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\n#### Looking for a printed version of Bayesian Methods for Hackers?\n\n_Bayesian Methods for Hackers_ is now a published book by Addison-Wesley, available on [Amazon](http://www.amazon.com/Bayesian-Methods-Hackers-Probabilistic-Addison-Wesley/dp/0133902838)! \n\n\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. 
\n\nThe Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nFor this to be clearer, we consider an alternative interpretation of probability: *Frequentist*, known as the more *classical* version of statistics, assumes that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying that across all these realities, the frequency of occurrences defines the probability. \n\nBayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. <br/>
Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you that candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. <br/>
\n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. 
This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. 
As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. 
Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\" )\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to } )\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. 
Bayesian inference merely uses it to connect prior probabilities $P(A)$ with updated posterior probabilities $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example; I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data? \n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).\n\n\n```python\n\"\"\"\nThe book uses a custom matplotlibrc file, which provides the unique styles for\nmatplotlib plots. If executing this book, and you wish to use the book's\nstyling, provided are two options:\n    1. Overwrite your own matplotlibrc file with the rc-file provided in the\n       book's styles/ dir. See http://matplotlib.org/users/customizing.html\n    2. Also in the styles is bmh_matplotlibrc.json file. This can be used to\n       update the styles in only this notebook. Try running the following code:\n\n        import json, matplotlib\n        s = json.load( open(\"../styles/bmh_matplotlibrc.json\") )\n        matplotlib.rcParams.update(s)\n\n\"\"\"\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. 
LOOK AT PICTURE, MICHAEL!\n%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj. prior.\nfor k, N in enumerate(n_trials):\n    sx = plt.subplot(len(n_trials) // 2, 2, k + 1)  # integer division: subplot needs ints\n    plt.xlabel(\"$p$, probability of heads\") \\\n        if k in [0, len(n_trials) - 1] else None\n    plt.setp(sx.get_yticklabels(), visible=False)\n    heads = data[:N].sum()\n    y = dist.pdf(x, 1 + heads, 1 + N - heads)\n    plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n    plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n    plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n    leg = plt.legend()\n    leg.get_frame().set_alpha(0.4)\n    plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n             y=1.02,\n             fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head). 
As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, since code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: the event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note that this depends on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$?
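Before plotting the full curve, a quick numeric spot check of that algebra in plain Python (the prior $p = 0.2$ used here just anticipates the choice made below):

```python
p = 0.2  # prior probability of bug-free code

# Bayes rule with P(X|A) = 1 and P(X|~A) = 0.5:
posterior = (1 * p) / (1 * p + 0.5 * (1 - p))

print(posterior)  # about 0.333
assert abs(posterior - 2 * p / (1 + p)) < 1e-12  # matches the simplified form
```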
\n\n\n```python\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2 * p / (1 + p), color=\"#348ABD\", lw=3)\n# plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2 * (0.2) / 1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Is my code bug-free?\")\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. \n\n\n\n```python\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1. / 3, 2. 
/ 3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0 + 0.25, .7 + 0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.ylim(0,1)\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. Things like populations, movie ratings, and number of votes are all discrete random variables. 
Discrete random variables become clearer when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous outcomes, i.e. they are a combination of the above two categories. \n\n#### Expected Value\nExpected value (EV) is one of the most important concepts in probability. The EV for a given probability distribution can be described as \"the mean value in the long run for many repeated samples from that distribution.\" To borrow a metaphor from physics, a distribution's EV is like its \"center of mass.\" Imagine repeating the same experiment many times over, and taking the average over each outcome. The more you repeat the experiment, the closer this average will become to the distribution's EV. (Side note: as the number of repeated experiments goes to infinity, the difference between the average outcome and the EV becomes arbitrarily small.)\n\n### Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots, \\; \\; \\lambda \\in \\mathbb{R}_{>0} $$\n\n$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape.
For the Poisson distribution, $\\lambda$ can be any positive number. By increasing $\\lambda$, we add more probability to larger values, and conversely by decreasing $\\lambda$ we add more probability to smaller values. One can describe $\\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. \n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. 
They assign positive probability to every non-negative integer.\n\n\n```python\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\")\n```\n\n### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of a continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. But unlike a Poisson variable, the exponential can take on *any* non-negative value, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values.
\n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```python\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1. / l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1. / l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0, 1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. 
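One way to feel this difficulty is a quick simulation (a sketch, not part of the book's code; the seed and sample sizes are arbitrary): draw data from a *known* $\lambda$ and try to recover it by inverting the sample mean. The point estimate scatters around the truth and only settles down slowly, which is part of why it is so natural to talk about a whole distribution of plausible $\lambda$ values.

```python
import numpy as np

np.random.seed(0)   # arbitrary seed, for reproducibility
true_lambda = 2.0   # in real problems this is hidden from us

for n in [10, 100, 10000]:
    z = np.random.exponential(scale=1.0 / true_lambda, size=n)
    # E[Z | lambda] = 1/lambda, so 1/mean(Z) is a natural point estimate.
    print(n, 1.0 / z.mean())  # estimates scatter around 2.0, tightening as n grows
```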
\n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```python\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. 
So we really have two $\lambda$ parameters: one for the period before $\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s' posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=1}^{N} \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$.
Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC\n-----\n\nPyMC is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why PyMC is so cool.\n\nWe will model the problem above using PyMC. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC framework. \n\nB. 
Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC code is easy to read. The only novel thing should be the syntax, and I will interrupt the code to explain individual sections. Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables:\n\n\n```python\nimport pymc as pm\n\nalpha = 1.0 / count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\nprint(n_count_data)\nlambda_1 = pm.Exponential(\"lambda_1\", alpha)\nlambda_2 = pm.Exponential(\"lambda_2\", alpha)\n\ntau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data)\n```\n\n 74\n\n\n\n```python\nlambda_1\n```\n\n\n\n\n \n\n\n\nIn the code above, we create the PyMC variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC's *stochastic variables*, so-called because they are treated by the back end as random number generators. We can demonstrate this fact by calling their built-in `random()` methods.\n\n\n```python\nprint(\"Random output:\", tau.random(), tau.random(), tau.random())\n```\n\n ('Random output:', array(37), array(16), array(36))\n\n\n\n```python\ntest = np.zeros(10)\nprint(test)\ntest[:3] = 3\ntest[3:] = -9\nprint(test)\n```\n\n [ 0.
0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [ 3. 3. 3. -9. -9. -9. -9. -9. -9. -9.]\n\n\n\n```python\n@pm.deterministic\ndef lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):\n out = np.zeros(n_count_data)\n out[:tau] = lambda_1 # lambda before tau is lambda1\n out[tau:] = lambda_2 # lambda after (and including) tau is lambda2\n return out\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n`@pm.deterministic` is a decorator that tells PyMC this is a deterministic function. That is, if the arguments were deterministic (which they are not), the output would be deterministic as well. Deterministic functions will be covered in Chapter 2. \n\n\n```python\nobservation = pm.Poisson(\"obs\", lambda_, value=count_data, observed=True)\n\nmodel = pm.Model([observation, lambda_1, lambda_2, tau])\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `value` keyword. We also set `observed = True` to tell PyMC that this should stay fixed in our analysis. Finally, PyMC wants us to collect all the variables of interest and create a `Model` instance out of them. This makes our life easier when we retrieve the results.\n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. 
Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```python\n# Mysterious code to be explained in Chapter 3.\nmcmc = pm.MCMC(model)\nmcmc.sample(40000, 10000, 1)\n```\n\n [-----------------100%-----------------] 40000 of 40000 complete in 12.9 sec\n\n\n```python\nlambda_1_samples = mcmc.trace('lambda_1')[:]\nlambda_2_samples = mcmc.trace('lambda_2')[:]\ntau_samples = mcmc.trace('tau')[:]\n```\n\n\n```python\nfigsize(12.5, 10)\n# histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data) - 20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. 
We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. 
For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. \n\n\n```python\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of 
text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```python\nprint(lambda_1_samples.mean())\nprint(lambda_2_samples.mean())\n```\n\n 17.7672607564\n 22.7047356746\n\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```python\nprint(lambda_1_samples.mean()/lambda_2_samples.mean())\nprint((lambda_1_samples/lambda_2_samples).mean())\n```\n\n 0.78253545917\n 0.783728743238\n\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45? That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC part.
Just consider all instances where `tau_samples < 45`.)\n\n\n```python\ny = np.bincount(tau_samples)\nprint(y)\nii = np.nonzero(y)[0]\nlist(zip(ii, y[ii]))\n```\n\n [ 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 4 6 29 1089 3470 10991 14408 3]\n\n\n\n\n\n [(39, 4),\n (40, 6),\n (41, 29),\n (42, 1089),\n (43, 3470),\n (44, 10991),\n (45, 14408),\n (46, 3)]\n\n\n\n\n```python\nix = tau_samples < 45\n\nprint(ix)\nprint(ix.sum())\nprint(len(ix))\n```\n\n [ True True True ..., True True True]\n 15589\n 30000\n\n\n\n```python\nlambda_1_samples[ix].mean()\n```\n\n\n\n\n 17.763521264856525\n\n\n\n### References\n\n\n- [1] Gelman, Andrew. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg/). Blog post. Web. 22 Jan 2013.\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Patil, A., D. Huard and C.J. Fonnesbeck. 2010. PyMC: Bayesian Stochastic Modelling in Python. Journal of Statistical Software, 35(4), pp. 1-81.\n- [4] Lin, Jimmy and Alek Kolcz. 2012. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" Online posting to Google, 24 Mar 2013. Web. 24 Mar 2013.
\n\n\n```python\nfrom IPython.core.display import HTML\n\n\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\n\n\n\n\n\n\n\n```python\n\n```\n
NO", "lm_q1_score": 0.38491214448393346, "lm_q2_score": 0.28140560742914383, "lm_q1q2_score": 0.10831643582535568}} {"text": "# Data Preparation\n\n**DIVE into Deep Learning**\n___\n\n\n```python\n%matplotlib inline\nimport pprint as pp\nimport tensorflow_datasets as tfds\nimport tensorflow.compat.v2 as tf\nfrom matplotlib import pyplot as plt\nimport numpy as np\nfrom IPython import display\n# produce vector inline graphics\nfrom matplotlib_inline.backend_inline import set_matplotlib_formats\nset_matplotlib_formats('svg')\n```\n\n## Loading Data\n\n**What is an example in a dataset?**\n\nA neural network learns from many examples collected together as a *dataset*. For instance, the [MNIST (Modified National Institute of Standards and Technology)](https://en.wikipedia.org/wiki/MNIST_database) dataset consists of labeled handwritten digits.$\\def\\abs#1{\\left\\lvert #1 \\right\\rvert}\n\\def\\Set#1{\\left\\{ #1 \\right\\}}\n\\def\\mc#1{\\mathcal{#1}}\n\\def\\M#1{\\boldsymbol{#1}}\n\\def\\R#1{\\mathsf{#1}}\n\\def\\RM#1{\\boldsymbol{\\mathsf{#1}}}\n\\def\\op#1{\\operatorname{#1}}\n\\def\\E{\\op{E}}\n\\def\\d{\\mathrm{\\mathstrut d}}\n$\n\n\n\nA dataset is a sequence \n\n$$\n\\begin{align}\n(\\RM{x}_1,\\R{y}_1),(\\RM{x}_2,\\R{y}_2), \\dots\\tag{dataset}\n\\end{align}\n$$\n\nof *tuples/instances* $(\\RM{x}_i,\\R{y}_i)$, each of which consists of\n\n- an *input feature vector* $\\RM{x}_i$ such as an image of a handwritten digit and\n\n- a *label* $\\R{y}_i$ such as the digit type of the handwritten digit.\n\nThe goal is to classify the digit type of a handwritten digit.\n\n**How to load the MNIST dataset?**\n\nWe first specify the folder to download the data. 
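The download folder can also be created up front if it does not exist yet — a minimal sketch using only the Python standard library (the `~/data` location simply mirrors this notebook's convention, and the fallback to the system temp directory when `HOME` is unset is an added assumption for portability, not part of the course material):

```python
import os
import tempfile

# Build the download folder path and create it if missing.
# Falling back to the system temp directory when HOME is unset is an
# assumption for portability, not a tensorflow_datasets requirement.
user_home = os.getenv("HOME", tempfile.gettempdir())
data_dir = os.path.join(user_home, "data")
os.makedirs(data_dir, exist_ok=True)  # no error if the folder already exists
print(data_dir)
```

`os.makedirs(..., exist_ok=True)` is idempotent, so the cell can be re-run safely.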
\nPress `Shift+Enter` to evaluate the following cell:\n\n\n```python\nimport os\n\nuser_home = os.getenv(\"HOME\")  # get user home directory\ndata_dir = os.path.join(user_home, \"data\")  # create download folder path\n\ndata_dir  # show the path\n```\n\nThe MNIST dataset can be obtained in many ways due to its popularity in image recognition. \nOne way is to use the package [`tensorflow_datasets`](https://blog.tensorflow.org/2019/02/introducing-tensorflow-datasets.html).\n\n\n```python\nimport tensorflow_datasets as tfds  # give a shorter name tfds for convenience\n\nds, ds_info = tfds.load(\n    'mnist',\n    data_dir=data_dir,  # download location\n    as_supervised=True,  # separate input features and label\n    with_info=True,  # return information of the dataset\n)\n\nds\n```\n\n- The function `tfds.load` downloads the data to `data_dir` and prepares it for loading via the variable `ds`.\n- The data are loaded as [`Tensor`s](https://www.tensorflow.org/guide/tensor), which can be operated on faster by a GPU or TPU than by a CPU.\n\nThe dataset is split into \n- a training set `ds[\"train\"]` and\n- a test set `ds[\"test\"]`.\n\n`tfds.load?` shows more information about the function. E.g., we can control the split ratio using the argument [`split`](https://www.tensorflow.org/datasets/splits).\n\n**Why split the data?**\n\nThe test set is used to evaluate the performance of a neural network trained using the training set (separate from the test set).\n\nThe purpose of separating the test set from the training set is to avoid an *overly-optimistic* performance estimate. Why?\n\nSuppose the final exam questions (test set) are the same as the previous homework questions (training set). 
\n- Students may get a high exam score simply by studying the model answers to the homework instead of understanding the entire subject.\n- The exam score is therefore an overly-optimistic estimate of the students' understanding of the subject.\n\n**How large are the training set and test set?**\n\nBoth the training and test sets are loaded as [`Dataset` objects](https://www.tensorflow.org/api_docs/python/tf/data/Dataset).\n- The loading is lazy, i.e., the data is not yet in memory, so we cannot count the number of instances directly. \n- Instead, we obtain such information from `ds_info`.\n\n**Exercise** Assign to `train_size` and `test_size` the numbers of instances in the training set and test set respectively.\n\nReplace `raise NotImplementedError()` in the solution cell by the following code with the blanks filled with the desired numbers:\n\n```Python\ntrain_size = ___\ntest_size = ___\n```\n\n**Hint** Open a scratchpad with `CTRL+B` and evaluate \n- `ds_info` or\n- `dir(ds_info.splits[\"train\"])` and `dir(ds_info.splits[\"test\"])`\n\n\n```python\n# YOUR CODE HERE\nraise NotImplementedError()\ntrain_size, test_size\n```\n\n\n```python\n# tests\nassert 0 < train_size < 100000\nassert 0 < test_size < 50000\n# hidden tests will be run to check your answers precisely after submission\n```\n\nNote that the training set is often much larger than the test set, especially for deep learning, because \n- training a neural network requires many examples but\n- estimating its performance does not.\n\n## Data Visualization\n\nThe following retrieves an example from the training set.\n\n\n```python\nfor image, label in ds[\"train\"].take(1):\n    print(\n        f'image dtype: {type(image)} shape: {image.shape} element dtype: {image.dtype}'\n    )\n    print(f'label dtype: {label.dtype}')\n```\n\nThe for loop above takes one example from `ds[\"train\"]` using the method [`take`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#take) and prints its data types. 
\n- The handwritten digit is represented by a 28x28x1 [`EagerTensor`](https://www.tensorflow.org/guide/eager), which is essentially a 2D array of bytes (8-bit unsigned integers `uint8`) with a trailing channel dimension of size 1. \n- The digit type is an integer.\n\nThe following code plots the image using the `imshow` function from `matplotlib.pyplot`.\n\n\n```python\nimport matplotlib.pyplot as plt\n\nfor image, label in ds[\"train\"].take(1):  # take 1 example from training set\n    plt.imshow(image)  # plot the image\n    plt.title(label.numpy())  # show digit type as plot title\n```\n\n- The method `numpy()` is needed to convert the label to the correct integer type for `matplotlib`.\n\nThe following function plots the image properly in grayscale labeled by pixel values:\n\n\n```python\ndef plot_mnist_image(example, ax=None, pixel_format=None):\n    (image, label) = example\n    if ax is None:\n        ax = plt.gca()\n    ax.imshow(image, cmap=\"gray_r\")  # show image\n    ax.title.set_text(label.numpy())  # show digit type as plot title\n    # Major ticks\n    ax.set_xticks(np.arange(0, 28, 3))\n    ax.set_yticks(np.arange(0, 28, 3))\n    # Minor ticks\n    ax.set_xticks(np.arange(-.5, 28, 1), minor=True)\n    ax.set_yticks(np.arange(-.5, 28, 1), minor=True)\n    if pixel_format is not None:\n        for i in range(28):\n            for j in range(28):\n                ax.text(\n                    j,\n                    i,\n                    pixel_format.format(image[i, j, 0].numpy()),  # show pixel value\n                    va='center',\n                    ha='center',\n                    color='white',\n                    fontweight='bold',\n                    fontsize='small')\n    ax.grid(color='lightblue', linestyle='-', linewidth=1, which='minor')\n    ax.set_xlabel('2nd dimension')\n    ax.set_ylabel('1st dimension')\n    ax.title.set_text('Image with label ' + ax.title.get_text())\n\n\nif input('Execute? 
[Y/n]').lower() != 'n':\n    plt.figure(figsize=(11, 11), dpi=80)\n    for example in ds[\"train\"].take(1):\n        plot_mnist_image(example, pixel_format='{}')\n    plt.show()\n```\n\n- We set the parameter `cmap` to `gray_r` so the color is darker if the pixel value is larger.\n\n**Exercise** Complete the following code to generate a matrix plot of the first 50 examples from the training set. \nThe parameters `nrows` and `ncols` specify the numbers of rows and columns respectively. Your code may look like\n```Python\n...\n    for ax, example in zip(axes.flat, ds[\"train\"].____(nrows * ncols)):\n        plot_mnist_image(_______, ax)\n        ax.axes.xaxis.set_visible(False)\n        ax.axes.yaxis.set_visible(False)\n...\n```\nand the output image should look like\n\n\n\n```python\nif input('Execute? [Y/n]').lower() != 'n':\n    def plot_mnist_image_matrix(ds, nrows=5, ncols=10):\n        fig, axes = plt.subplots(nrows=nrows, ncols=ncols)\n\n        # YOUR CODE HERE\n        raise NotImplementedError()\n\n        fig.tight_layout()  # adjust spacing between subplots automatically\n        return fig, axes\n\n\n    fig, axes = plot_mnist_image_matrix(ds, nrows=5)\n    fig.set_figwidth(9)\n    fig.set_figheight(6)\n    fig.set_dpi(80)\n    # plt.savefig('mnist_examples.svg')\n    plt.show()\n```\n\n## Data Preprocessing\n\nWe will use the [`tensorflow`](https://www.tensorflow.org/) library to process the data and train the neural network. (Another popular library is [PyTorch](https://pytorch.org/).)\n\n\n```python\nimport tensorflow.compat.v2 as tf  # explicitly use tensorflow version 2\n```\n\nEach pixel is stored as an integer from $\\{0,\\dots,255\\}$ ($2^8$ possible values). However, for computations by the neural network, we need to convert it to a floating point number. 
We will also normalize each pixel value to be within the unit interval $[0,1]$:\n\n\\begin{align} \nv \\mapsto \\frac{v - v_{\\min}}{v_{\\max} - v_{\\min}} = \\frac{v}{255}\\tag{min-max normalization}\n\\end{align}\n\n\n\n**Exercise** Using the function `map`, normalize each element of an image to the unit interval $[0,1]$ after converting them to `tf.float32` using [`tf.cast`](https://www.tensorflow.org/api_docs/python/tf/cast).\n\nYour code may look like\n```Python\n...\n ds_n[part] = ds[part].map(\n lambda image, label: (_____(image, _____) / ___, label),\n num_parallel_calls=tf.data.experimental.AUTOTUNE)\n...\n```\n`map` applies the conversion to each example in the dataset.\n\n\n```python\ndef normalize_mnist(ds):\n \"\"\"\n Returns:\n MNIST Dataset with image pixel values normalized to float32 in [0,1].\n \"\"\"\n ds_n = dict.fromkeys(ds.keys()) # initialize the normalized dataset\n for part in ds.keys():\n # normalize pixel values to [0,1]\n # YOUR CODE HERE\n raise NotImplementedError()\n return ds_n\n\n\nds_n = normalize_mnist(ds)\nds_n\n```\n\n\n```python\n# Plot the normalized digit\nif input('Execute? 
[Y/n]').lower() != 'n':\n    plt.figure(figsize=(11, 11), dpi=80)\n    for example in ds_n[\"train\"].take(1):\n        plot_mnist_image(example,\n                         pixel_format='{:.2f}')  # show pixel values to 2 d.p.\n    # plt.savefig('mnist_example_normalized.svg')\n    plt.show()\n```\n\n\n```python\n# tests\n```\n\nTo avoid overfitting, the training of a neural network uses *stochastic gradient descent*, which\n- divides the training into many steps where\n- each step uses a *randomly* selected minibatch of samples \n- to improve the neural network *bit-by-bit*.\n\n\n```python\ndef batch_mnist(ds_n):\n    ds_b = dict.fromkeys(ds_n.keys())  # initialize the batched dataset\n    for part in ds_n.keys():\n        ds_b[part] = (\n            ds_n[part].batch(128)  # use a minibatch of examples for each training step\n            .shuffle(\n                ds_info.splits[part].num_examples,\n                reshuffle_each_iteration=True)  # shuffle data for each epoch\n            .cache()  # cache current elements\n            .prefetch(tf.data.experimental.AUTOTUNE))  # preload subsequent elements\n    return ds_b\n\n\nds_b = batch_mnist(ds_n)\nds_b\n```\n\nThe above code \n- specifies the batch size (128) and \n- enables caching and prefetching to reduce the latency in loading examples repeatedly for training and testing.\n\n**Exercise** The output to the above cell should look like\n```Python\n{'test': <PrefetchDataset shapes: ((None, 28, 28, 1), (None,)), types: (tf.float32, tf.int64)>,\n 'train': <PrefetchDataset shapes: ((None, 28, 28, 1), (None,)), types: (tf.float32, tf.int64)>}\n```\nwith a new first dimension of unknown size `None`. Why?\n\n*Hint:* Is the total number of examples divisible by the batch size?\n\nYOUR ANSWER HERE\n\n## Release Memory\n\nYou cannot run a notebook if you have insufficient memory. 
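One way to see how much memory the current kernel has been using from inside Python itself is the standard-library `resource` module (a sketch, not part of the original notebook's workflow; it is Unix-only, and the `ru_maxrss` unit differs between Linux and macOS):

```python
import resource
import sys

# Peak resident set size of the current process so far.
usage = resource.getrusage(resource.RUSAGE_SELF)
peak = usage.ru_maxrss  # kibibytes on Linux, bytes on macOS
unit = "bytes" if sys.platform == "darwin" else "KiB"
print(f"peak resident memory: {peak} {unit}")
```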
It is important to shut down a notebook to release the memory: \n- `Kernel`->`Shut Down Kernel`.\n\nThe JupyterLab interface also contains tools to help you monitor your memory consumption.\n", "meta": {"hexsha": "6608f83d7d590ea3694c2dd01e08043ad7c816ec", "size": 29514, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "part2/preparation.ipynb", "max_stars_repo_name": "ccha23/divedeep", "max_stars_repo_head_hexsha": "dd9c5e0a589613fa37c467b7863e58c5d22d8d3f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-29T00:46:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-29T00:46:39.000Z", "max_issues_repo_path": "part2/preparation.ipynb", "max_issues_repo_name": "ccha23/divedeep", "max_issues_repo_head_hexsha": "dd9c5e0a589613fa37c467b7863e58c5d22d8d3f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "part2/preparation.ipynb", "max_forks_repo_name": "ccha23/divedeep", "max_forks_repo_head_hexsha": "dd9c5e0a589613fa37c467b7863e58c5d22d8d3f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-07-03T02:44:06.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-03T02:44:06.000Z", "avg_line_length": 25.6643478261, "max_line_length": 310, "alphanum_fraction": 0.5443857153, "converted": true, "num_tokens": 3045, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.48047867804790706, "lm_q2_score": 0.2254166210386804, "lm_q1q2_score": 0.10830788008669119}} {"text": "```julia\nHTML(\"\"\"\n\n\"\"\")\n```\n\n\n\n\n\n\n\n\n\n## 1.1 \u4e3a\u4ec0\u4e48\u9700\u8981\u91cf\u5b50\u8ba1\u7b97\n\n### \u540e\u6469\u5c14\u65f6\u4ee3\n\n\u6211\u4eec\u5bf9\u7b97\u529b\u7684\u9700\u6c42\u5728\u4e0d\u65ad\u589e\u957f\uff0c\u4f46\u662f\u6469\u5c14\u5b9a\u5f8b\u5374\u4e0d\u518d\u6709\u6548\u4e86\u3002\n\n#### \u6469\u5c14\u5b9a\u5f8b\n\n
\n\n
\n\n- \u65b0\u7684\u601d\u8def\uff08\u4e09\u7ef4\u5806\u53e0\uff0cetc\uff09\n- \u5b9a\u5236\u82af\u7247\uff08GPU\uff0cTPU\uff0cFPGA\uff0cetc\uff09\n- \u66f4\u6362\u8ba1\u7b97\u6a21\u578b\uff08\u91cf\u5b50\u8ba1\u7b97\uff0cetc\uff09\n\n### \u91cf\u5b50\u591a\u4f53\u7269\u7406\u7684\u7ef4\u5ea6\u8bc5\u5492\n\n**\u91cf\u5b50\u591a\u4f53\u7269\u7406**\u662f\u6307\u591a\u4e2a\u5177\u6709\u91cf\u5b50\u6548\u5e94\u7684\u7269\u7406\u5bf9\u8c61\u6240\u6784\u6210\u7684\u7cfb\u7edf\u3002\u8fd9\u79cd\u7cfb\u7edf\u6240\u6784\u6210\u7684\u72b6\u6001\u7a7a\u95f4\u5f80\u5f80\u968f\u7740\u5b83\u7684\u7c92\u5b50\u6570\u76ee\u7684\u589e\u957f\u800c\u6307\u6570\u589e\u957f\u3002\u800c\u91cf\u5b50\u591a\u4f53\u7269\u7406\u7684\u7406\u8bba\u7814\u7a76\uff0c\u5c06\u76f4\u63a5\u5bf9\u65b0\u578b\u6750\u6599\uff0c\u65b0\u578b\u836f\u7269\u7684\u7814\u7a76\u9020\u6210\u5f71\u54cd\u3002\n\n
\n\n
\n\n\u800c\u7531\u4e8e\u6307\u6570\u589e\u957f\u7684\u7a7a\u95f4\u5927\u5c0f\uff0c\u7ecf\u5178\u8ba1\u7b97\u673a\u96be\u4ee5\u7cbe\u786e\u6a21\u62df\u590d\u6742\u7684\u591a\u4f53\u7cfb\u7edf\uff0c\u800c\u9700\u8981\u8fdb\u884c\u4e00\u4e9b\u5047\u8bbe\uff0c\u7b80\u5316\u7b49\uff0c\u5e38\u89c1\u7684\u65b9\u6cd5\u5305\u62ec\u4f46\u4e0d\u9650\u4e8e\uff1a\n\n- DFT\uff0c\u5bc6\u5ea6\u6cdb\u51fd\n\n- DMRG\uff0c\u5bc6\u5ea6\u77e9\u9635\u91cd\u6574\u5316\uff0c\u5f20\u91cf\u7f51\u7edc\n\n- QMC\uff0c\u91cf\u5b50\u8499\u7279\u5361\u6d1b\uff0c\u53d8\u5206\u91cf\u5b50\u8499\u7279\u5361\u6d1b\n\n\u5176\u4e2d\u7684\u5f88\u591a\u65b9\u6cd5\u5bf9\u4e8e\u8ba1\u7b97\u8d44\u6e90\u7684\u6d88\u8017\u662f\u5de8\u5927\u7684\u3002\u4f8b\u5982[PEPS++](https://arxiv.org/abs/1806.03761)\u8fd9\u4e2a\u5de5\u4f5c\u751a\u81f3\u7528\u4e0a\u4e86\u4e2d\u56fd\u6700\u5f3a\u7684\u8d85\u7ea7\u8ba1\u7b97\u673a\u795e\u5a01\u592a\u6e56\u4e4b\u5149\u6574\u673a\u3002\n\n\u6240\u4ee5\u6709\u4eba\u5c31\u60f3\uff1a**\u80fd\u4e0d\u80fd\u7528\u91cf\u5b50\u7cfb\u7edf\u6a21\u62df\u91cf\u5b50\u7cfb\u7edf\u5462\uff1f**\n\n\u8fd9\u4e2a\u4eba\u5c31\u662f\uff1a\u8d39\u6069\u66fc\uff08Feynman\uff09[Simulating Physics with Computers](https://people.eecs.berkeley.edu/~christos/classics/Feynman.pdf)\n\n\n\n\u5f53\u7136\u540e\u6765\u5728\u8fd9\u4e2a\u60f3\u6cd5\u4e0a\u6709\u5f88\u591a\u5de5\u4f5c\uff0c\u7528\u4e00\u4e2a\u53ef\u4ee5\u7cbe\u786e\u64cd\u7eb5\u7684\u91cf\u5b50\u7cfb\u7edf\u6a21\u62df\u53e6\u5916\u4e00\u4e2a\u91cf\u5b50\u7cfb\u7edf\u4e5f\u79f0\u4e3a\u91cf\u5b50\u6a21\u62df\uff08Quantum Simulation\uff09\u3002\u8fd9\u4e5f\u662f\u88ab\u8ba4\u4e3a\u4e00\u4e2a\u5728\u8fd1\u671f\u6700\u6709\u53ef\u80fd\u6210\u4e3a\u91cf\u5b50\u8ba1\u7b97\u673a\u7684\u6740\u624b\u7ea7\u5e94\u7528\u7684\u65b9\u5411\u3002\n\n### \u4f5c\u4e3a\u57fa\u7840\u5b66\u79d1\u7684\u542f\u53d1\u6027\u7814\u7a76\n\n#### \u542f\u53d1\u7ecf\u5178\u7b97\u6cd5\u8bbe\u8ba1\n\n- [Quantum Inspired Recommendation System](https://arxiv.org/abs/1807.04271)\n- 
[Quantum Inspired PCA](https://arxiv.org/abs/1811.00414)\n- [Simulated Quantum Annealing Can Be Exponentially Faster than Classical Simulated Annealing](https://arxiv.org/abs/1601.03030)\n\n#### Helping us gain the ability to control quantum systems precisely\n\n- controlling single-qubit systems\n- controlling multi-qubit systems\n- etc...\n\n#### Helping us understand the world\n\n- quantum machine learning\n- cosmology: [Quantum Circuit Cosmology: The Expansion of the Universe Since the First Qubit](https://arxiv.org/abs/1702.06959)\n- black holes: [Quantum Circuit Model of Black Hole Evaporation](https://arxiv.org/abs/1807.07672)\n- etc.\n\n### Reading material (a VPN may be required)\n\n- [Why we need quantum computing](https://www.research.ibm.com/ibm-q/learn/what-is-quantum-computing/#)\n- [Introduction to the Julia language](https://www.bilibili.com/video/av28178443/)\n\n\n### Materials used in this course\n\n- Michael A. Nielsen & Isaac L. Chuang, Quantum Computation & Quantum Information\n- Yao.jl - Extensible Efficient Quantum Algorithm Design for Humans\n\n## 1.2 Classical Logic Circuits\n\nClassical logic circuits form the foundation of modern computers. In this chapter we focus on some basics of classical logic circuits; afterwards I will show how to move from classical logic circuits to quantum circuits.\n\n### Logical bits\n\nIn general we compute with bits, i.e. with two distinct physical states, which we call **bits**. In theory we abstract these two states away and denote them with mathematical symbols, which simplifies the problem. We call this abstraction a **logical bit**; one theoretical logical bit may correspond to several **physical bits** in an actual physical implementation.\n\nWe can use our **Yao** to define a group of logical bits\n\n\n```julia\nusing Yao\n\nArrayReg(bit\"000\")\n```\n\n    \u250c Info: Recompiling stale cache file /Users/roger/.julia/compiled/v1.1/Yao/TDiQQ.ji for Yao [5872b779-8223-5990-8dd0-5abbb0748c8c]\n    \u2514 @ Base loading.jl:1184\n\n\n\n\n\n    ArrayReg{1, Complex{Float64}, Array...}\n    active qubits: 3/3\n\n\n\nAs for the vector representation of logical bits: in fact we can also represent logical bits with a one-
hot vector. For example, the logical bits 01 can be represented as\n\n\n| element of the vector | corresponding logical bits |\n| ---------- | ------------ |\n| 0 | 00 |\n| 1 | 01 |\n| 0 | 10 |\n| 0 | 11 |\n\n### Logic gates\n\nA logic gate is a special kind of function operating on logical bits: its input is a set of binary bits, and its output is another set of binary bits. The common logical operations are AND, OR and NOT.\n\nAND:\n\n- if both input bits are 0, output 0\n- if both input bits are 1, output 1\n- if the two input bits differ, also output 0\n\n### Boolean algebra and truth tables\n\nDescribing the operations above in words is cumbersome; mathematically we have Boolean algebra and truth tables to describe logic circuits. For example, the NOT gate:\n\nNOT:\n\n- if the input bit is 0, output 1\n- if the input bit is 1, output 0\n\nTruth table\n\n| $A$ | $\\neg A$ |\n|-----|----------|\n| 0 | 1 |\n| 1 | 0 |\n\nIf we represent a logical bit with a one-hot encoded vector, then we can express the action of logic gates using linear algebra. For example, we can represent a single bit as\n\n$$\n0 = \\begin{pmatrix} 1\\\\ 0 \\end{pmatrix}, \\quad 1 = \\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix}\n$$\n\nWe have\n\n$$\n\\neg 0 = \\begin{pmatrix} 0 & 1\\\\1 & 0 \\end{pmatrix} \\begin{pmatrix} 1\\\\ 0 \\end{pmatrix} = \\begin{pmatrix} 0\\\\ 1\\end{pmatrix} = 1\n$$\n\nand likewise\n\n$$\n\\neg 1 = \\begin{pmatrix} 0 & 1\\\\1 & 0 \\end{pmatrix} \\begin{pmatrix} 0\\\\ 1 \\end{pmatrix} = \\begin{pmatrix} 1\\\\ 0\\end{pmatrix}\n$$\n\nFor brevity, we will denote this matrix by **X** from now on.\n\nYou can verify the statements above with Yao, where **X** is exactly the NOT gate we described\n\n\n```julia\napply!(ArrayReg(bit\"0\"), X) == ArrayReg(bit\"1\")\n```\n\n\n\n\n    true\n\n\n\n#### Exercises\n\n1. The OR gate (**OR**) is a two-bit gate defined to output 1 whenever at least one of the two input bits is 1. Write down its truth table and matrix form.\n2. The Toffoli gate is a three-bit gate with inputs **A, B, C**; **A, B** are control bits, and **C** is flipped when **A** and **B** are both 1.\n\n#### Answers\n\n#### 1. 
The OR gate (**OR**)\n\nTruth table\n\n| AB | **OR**(A, B) |\n| -- | ------------ |\n| 00 | 0 |\n| 01 | 1 |\n| 10 | 1 |\n| 11 | 1 |\n\nMatrix form\n\n$$\nOR = \\begin{pmatrix}\n1 & 0 & 0 & 0\\\\\n0 & 1 & 1 & 1\n\\end{pmatrix}\n$$\n\nWe have\n\n$$\nOR \\begin{pmatrix}1\\\\0\\\\0\\\\0 \\end{pmatrix} = \\begin{pmatrix}1\\\\0\\end{pmatrix}\\quad\nOR \\begin{pmatrix}0\\\\1\\\\0\\\\0 \\end{pmatrix} = \\begin{pmatrix}0\\\\1\\end{pmatrix}\\quad\nOR \\begin{pmatrix}0\\\\0\\\\1\\\\0 \\end{pmatrix} = \\begin{pmatrix}0\\\\1\\end{pmatrix}\\quad\nOR \\begin{pmatrix}0\\\\0\\\\0\\\\1 \\end{pmatrix} = \\begin{pmatrix}0\\\\1\\end{pmatrix}\n$$\n\n#### 2. The Toffoli gate\n\nTruth table\n\n| ABC | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111 |\n|:-----------:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n| Toffoli(ABC) | 000 | 001 | 010 | 011 | 100 | 101 | 111 | 110 |\n\nMatrix\n\n\n```julia\nusing Latexify\n\nlatexarray(Int.(mat(ConstGate.Toffoli)))\n```\n\n\n\n\n\\begin{equation}\n\\left[\n\\begin{array}{cccccccc}\n1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\\\\n0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\\n\\end{array}\n\\right]\n\\end{equation}\n\n\n\n\n### Reversible computation\n\nReversible computation, as the name suggests, is computation that loses no information: from the result one can reconstruct the initial state of the computation.\n\n\n```julia\nInt.(inv(mat(ConstGate.Toffoli)))\n```\n\n\n\n\n    8\u00d78 Array{Int64,2}:\n    1 0 0 0 0 0 0 0\n    0 1 0 0 0 0 0 0\n    0 0 1 0 0 0 0 0\n    0 0 0 0 0 0 0 1\n    0 0 0 0 1 0 0 0\n    0 0 0 0 0 1 0 0\n    0 0 0 0 0 0 1 0\n    0 0 0 1 0 0 0 0\n\n\n\nFor a single bit, which gates are reversible?\n\nWhat about two bits?\n\nCNOT (controlled NOT) is one, because flipping twice in a row returns the original state\n\n\n```julia\nCNOT = control(2, 2, 1=>X)\nInt.(mat(CNOT))\n```\n\n\n\n\n    4\u00d74 Array{Int64,2}:\n    1 0 0 0\n    0 1 0 0\n    0 0 0 1\n    0 0 1 0\n\n\n\nWhat about three bits?\n\nThe Toffoli gate is a reversible three-bit gate. Try applying the Toffoli gate twice in a row and see what happens:\n\n\n```julia\nusing Yao\n\nr = rand_state(3)\nr1 = copy(r) |> ConstGate.Toffoli |> ConstGate.Toffoli\nr1 \u2248 r\n```\n\n\n\n\n    true\n\n\n\nReversible computation requires that the corresponding matrix has an inverse: for the matrix form $A$ of a gate, there exists a $B$ such that their product is the identity\n\n$$\nA B = I, \\quad \\exists B\n$$\n\n### From classical to quantum\n\nWhat if we no longer restrict these gate matrices to contain only the real numbers 0 and 1, and no longer restrict the state vectors to be one-hot?\n\n## 1.3 The Quantum Circuit Model\n\nIn the previous section we studied the classical circuit model. The quantum circuit model can be seen as a generalization of the classical circuit model.\n\n### 
Quantum States\n\nA quantum circuit uses a quantum state to represent the current state of the computation. We relax the one-hot requirement on states from classical circuits:\n\nwe allow **any complex vector with norm 1 as the representation of the current state of the computation.**\n\nIts physical meaning is exactly a quantum state in quantum mechanics. Naturally, such states can be superposed like this\n\n\n```julia\nr = ArrayReg(bit\"010\") + ArrayReg(bit\"110\")\n```\n\n\n\n\n    ArrayReg{1, Complex{Float64}, Array...}\n    active qubits: 3/3\n\n\n\n\n```julia\nstate(r)\n```\n\n\n\n\n    8\u00d71 Array{Complex{Float64},2}:\n    0.0 + 0.0im\n    0.0 + 0.0im\n    1.0 + 0.0im\n    0.0 + 0.0im\n    0.0 + 0.0im\n    0.0 + 0.0im\n    1.0 + 0.0im\n    0.0 + 0.0im\n\n\n\nPhysically, a quantum state evolves under unitary matrices.\n\nSo we also require the gates in a quantum circuit to be unitary. Recall the conclusion of the previous section: **quantum computation is a kind of reversible computation**; the inverse of each quantum gate is its conjugate transpose (adjoint, dagger).\n\nConveniently, the reversible gates introduced in the previous section are all unitary matrices, e.g. CNOT and Toffoli\n\n\n```julia\nisunitary(CNOT)\n```\n\n\n\n\n    true\n\n\n\n\n```julia\nisunitary(ConstGate.Toffoli)\n```\n\n\n\n\n    true\n\n\n\n## The Bloch Sphere\n\nFor a single qubit, the vector representation is\n\n$$\n\\begin{pmatrix}\na\\\\\nb\n\\end{pmatrix}\n$$\n\nSince we require its norm to be 1, from $|a|^2 + |b|^2 = 1$ we naturally have\n\n$$\na = \\cos{\\theta} e^{i\\delta}, \\quad b = \\sin{\\theta} e^{i(\\phi + \\delta)}\n$$\n\nIgnoring the global phase (why?) we obtain\n\n$$\n\\Psi = \\cos{\\theta} |0\\rangle + \\sin{\\theta} e^{i\\phi} |1\\rangle\n$$\n\nThis means that any operation on a single qubit can be viewed as a rotation on the Bloch sphere. Let us try a few single-qubit gates\n\n(open [bloch_sphere.jl](https://github.com/QuantumBFS/SSSS/blob/master/4_quantum/bloch_sphere.jl))\n\n### 1. Pauli gates\n\n\n```julia\nmat(X)\n```\n\n\n\n\n    2\u00d72 LuxurySparse.PermMatrix{Complex{Float64},Int64,Array{Complex{Float64},1},Array{Int64,1}}:\n    0 1.0+0.0im\n    1.0+0.0im 0 \n\n\n\nPauli gates are constant gates. Yao allocates the memory needed for a constant gate's matrix ahead of time, so there is no need to worry about extra memory allocations when computing with constant gates\n\n\n```julia\n@allocated mat(X)\n```\n\n\n\n\n    0\n\n\n\nDefining a new constant gate also takes just one line\n\n\n```julia\n@const_gate MyConstGate = rand(4, 4)\n```\n\n\n```julia\n@allocated mat(MyConstGate)\n```\n\n\n\n\n    0\n\n\n\n\n```julia\nnqubits(MyConstGate)\n```\n\n\n\n\n    2\n\n\n\n### 2. Phase gates\n\n### 3. Rotation gates\n\n1. Rx\n\n2. Ry\n\n3. 
Rz\n\n\u65cb\u8f6c\u95e8\u7684\u77e9\u9635\u5f62\u5f0f\u662f\uff1a\n\n$$\ncos{\\frac{\\theta}{2}} \\mathbf{I} - i sin{\\frac{\\theta}{2}} \\mathbf{U}\n$$\n\n\n\n```julia\nmat(Rx(0.1))\n```\n\n\n\n\n 2\u00d72 StaticArrays.SArray{Tuple{2,2},Complex{Float64},2,4}:\n 0.99875+0.0im 0.0-0.0499792im\n 0.0-0.0499792im 0.99875+0.0im \n\n\n\n### 4. Hadmard\u95e8\n\n## 2.1 \u5236\u5907\u4e00\u4e2aGHZ\u6001\n\n\n\n\u5728Yao\u91cc\u6211\u4eec\u7528block\u6765\u63cf\u8ff0\u91cf\u5b50\u7ebf\u8def\uff0c\u6700\u57fa\u672c\u7684\u95e8\u662fprimitive block\uff0c\u4f8b\u5982\u6211\u4eec\u8fd9\u91cc\u8981\u7528\u5230\u7684X\u95e8\u548cH\u95e8\uff0cprimitive block\u662f\u6307\u6ca1\u6709subblock\uff08\u5b50block\uff09\u7684block\n\n\n```julia\nusing Yao\nsubblocks(X)\n```\n\n\n\n\n ()\n\n\n\n\u800c\u6211\u4eec\u4f7f\u7528\u4e0d\u540c\u7684composite block\u5c06block\u7ec4\u88c5\u8d77\u6765\u5c31\u53ef\u4ee5\u6784\u5efa\u51fa\u66f4\u5927\u7684quantum circuit\u3002\u4f8b\u5982\u6211\u4eec\u53ef\u4ee5\u5c06H\u95e8\u4e32\u8d77\u6765\n\n\n```julia\nchain(H, H, H, H)\n```\n\n\n\n\n \u001b[36mnqubits: 1, datatype: Complex{Float64}\u001b[39m\n \u001b[34m\u001b[1mchain\u001b[22m\u001b[39m\n \u251c\u2500 H gate\n \u251c\u2500 H gate\n \u251c\u2500 H gate\n \u2514\u2500 H gate\n\n\n\n\u8fd9\u5c31\u6784\u6210\u6765\u4e00\u4e2a\u975e\u5e38\u7b80\u5355\u7684\u91cf\u5b50\u7ebf\u8def\uff0c\u5b83\u4f1a\u4e0d\u65ad\u5bf9\u8f93\u5165\u7684\u5355\u6bd4\u7279\u4f5c\u7528Hadmard\u95e8\n\n\u800c\u6709\u4e86\u6a2a\u5411\u7ec4\u88c5\u95e8\u7684\u65b9\u6cd5\uff0c\u6211\u4eec\u5982\u4f55\u5728\u7eb5\u5411\u7ec4\u88c5\u5462\uff1f\u4f60\u53ef\u4ee5\u7528kron\uff0c\u7eb5\u5411\u6392\u5217\u7684\u95e8\u76f8\u5f53\u4e8e\u8fd9\u4e9b\u95e8\u7684\u5f20\u91cf\u79ef\u3002\u522b\u5fd8\u4e86\u5229\u7528\u8bed\u6cd5\u7cd6\n\n\n```julia\nkron(H for _ in 1:4)\n```\n\n\n\n\n \u001b[36mnqubits: 4, datatype: Complex{Float64}\u001b[39m\n \u001b[36m\u001b[1mkron\u001b[22m\u001b[39m\n \u251c\u2500 
1=>H gate
    ├─ 2=>H gate
    ├─ 3=>H gate
    └─ 4=>H gate

Of course you can also specify the locations, but then you have to remember to give the number of input qubits, since the program cannot infer the total qubit count by itself:


```julia
kron(4, 1=>X, 3=>H)
```

    nqubits: 4, datatype: Complex{Float64}
    kron
    ├─ 1=>X gate
    └─ 3=>H gate

But what if I forget the number of input qubits, or I haven't yet decided how many qubits to use? No problem!


```julia
kron(1=>X, 3=>H)
```

    (n -> kron(n, 1 => X gate, 3 => H gate))

When Yao cannot infer the number of qubits, it returns an anonymous function that takes the total qubit count as input. You can still use it as a normal block; Yao will fill in the qubit count automatically once it can be inferred. For example:


```julia
chain(kron(1=>X, 3=>H), kron(H for _ in 1:4))
```

    nqubits: 4, datatype: Complex{Float64}
    chain
    ├─ kron
    │  ├─ 1=>X gate
    │  └─ 3=>H gate
    └─ 
kron
       ├─ 1=>H gate
       ├─ 2=>H gate
       ├─ 3=>H gate
       └─ 4=>H gate

But the GHZ circuit also contains controlled gates. How do we write those? Similarly, in Yao we use Julia's built-in `Pair` type to specify locations. A control block where qubit 1 controls an X gate on qubit 2 can be written as:


```julia
control(4, 1, 2=>X)
```

    nqubits: 4, datatype: Complex{Float64}
    control(1)
    └─ (2,) X gate

Note that, as before, if you omit the total number of qubits you get back a function:


```julia
control(1, 2=>X)
```

    (n -> control(n, 1, 2 => X gate))

With all this in hand, we can now build the GHZ circuit drawn above:


```julia
circuit = chain(
    kron(1=>X, (k=>H for k in 2:4)...), # first layer
    control(2, 1=>X),                   # first CNOT
    control(4, 3=>X),
    control(3, 1=>X),
    control(4, 3=>X),
    kron(H for _ in 1:4),               # final layer of H gates
)
```

    nqubits: 4, datatype: Complex{Float64}
    chain
    ├─ kron
    │  ├─ 1=>X gate
    │  ├─ 2=>H gate
    │  ├─ 3=>H gate
    
│  └─ 4=>H gate
    ├─ control(2)
    │  └─ (1,) X gate
    ├─ control(4)
    │  └─ (3,) X gate
    ├─ control(3)
    │  └─ (1,) X gate
    ├─ control(4)
    │  └─ (3,) X gate
    └─ kron
       ├─ 1=>H gate
       ├─ 2=>H gate
       ├─ 3=>H gate
       └─ 4=>H gate

Note that the order in which the gates act differs slightly from the circuit diagram (in the diagram the two controls act simultaneously), but the two are equivalent.

Next, let's verify that this circuit really prepares a GHZ state from 0000:


```julia
r = ArrayReg(bit"0000") |> circuit
```

    ArrayReg{1, Complex{Float64}, Array...}
        active qubits: 4/4


```julia
using Plots

results = measure(r; nshots=2000);
histogram(results; nbins=16, legend=nothing, xlabel="bit configuration", xticks=((0:15).+0.5, 0:15))
```

(histogram of the 2000 measured bit configurations)
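The same state can be cross-checked outside of Yao. The sketch below is a minimal NumPy simulation of the textbook GHZ preparation (an H gate followed by a chain of CNOTs, not the exact circuit above), assuming the convention that qubit 1 is the least significant bit of the state index:

```python
import numpy as np

def embed(gate, qubit, n):
    # Embed a single-qubit `gate` on `qubit` (1-indexed, qubit 1 = least
    # significant bit of the state index) into an n-qubit operator.
    op = np.eye(1)
    for q in range(n, 0, -1):               # most significant qubit first in kron
        op = np.kron(op, gate if q == qubit else np.eye(2))
    return op

def apply_cnot(state, control, target, n):
    # Controlled-X built from projectors: |0><0|_c ⊗ I + |1><1|_c ⊗ X_t.
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    return (embed(P0, control, n) + embed(P1, control, n) @ embed(X, target, n)) @ state

n = 4
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
state = np.zeros(2**n); state[0] = 1.0       # |0000>
state = embed(H, 1, n) @ state               # superpose qubit 1
for t in range(2, n + 1):                    # entangle the remaining qubits
    state = apply_cnot(state, 1, t, n)
print(np.round(state, 3))
```

The only nonzero amplitudes are at indices 0 and 15, i.e. $(|0000\rangle + |1111\rangle)/\sqrt{2}$, so sampling this state gives only the two outcomes 0000 and 1111, each with probability 1/2.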
Although in principle the three composite blocks `control`, `chain` and `kron` are all you need to build any circuit, in practice we provide many other composite blocks to help you define models quickly. For example, for the row of Hadamard gates above you can simply use the `repeat` block:


```julia
repeat(4, H)
```

    nqubits: 4, datatype: Complex{Float64}
    repeat on (1, 2, 3, 4)
    └─ H gate

Let's have a look at the documentation of `repeat`; the documentation and examples of most of Yao's functions can be viewed in help mode:


```julia
?repeat(4, H)
```

```
repeat(n, x::AbstractBlock[, locs]) -> RepeatedBlock{n}
```

Create a [`RepeatedBlock`](@ref) with total number of qubits `n` and the block to repeat on given location or on all the locations.

# Example

This will create a repeat block which puts 4 X gates on each location.

```jldoctest
julia> repeat(4, X)
nqubits: 4, datatype: Complex{Float64}
repeat on (1, 2, 3, 4)
└─ X gate
```

You can also specify the location

```jldoctest
julia> repeat(4, X, (1, 2))
nqubits: 4, datatype: Complex{Float64}
repeat on (1, 2)
└─ X gate
```

But repeat won't copy the gate, thus, if it is a gate with parameter, e.g. a 
`phase(0.1)`, the parameter will change simultaneously.

```jldoctest
julia> g = repeat(4, phase(0.1))
nqubits: 4, datatype: Complex{Float64}
repeat on (1, 2, 3, 4)
└─ phase(0.1)

julia> g.content
phase(0.1)

julia> g.content.theta = 0.2
0.2

julia> g
nqubits: 4, datatype: Complex{Float64}
repeat on (1, 2, 3, 4)
└─ phase(0.2)
```


For a single block we can also use `put` directly, without a dedicated `kron`:


```julia
put(4, 1=>X)
```

    nqubits: 4, datatype: Complex{Float64}
    put on (1)
    └─ X gate

Although `put` can act on blocks of arbitrary size, it is more efficient for small blocks; for large blocks we need the concentrator, which we will introduce shortly. For now, we can rewrite the circuit above as:


```julia
circuit2 = chain(
    4,
    put(1=>X),
    repeat(H, 2:4),
    control(2, 1=>X),
    control(4, 3=>X),
    control(3, 1=>X),
    control(4, 3=>X),
    repeat(H, 1:4),
)
```

    nqubits: 4, datatype: Complex{Float64}
    chain
    ├─ put on (1)
    │  └─ X gate
    ├─ repeat on (2, 3, 4)
    │  └─ H gate
    ├─ 
control(2)
    │  └─ (1,) X gate
    ├─ control(4)
    │  └─ (3,) X gate
    ├─ control(3)
    │  └─ (1,) X gate
    ├─ control(4)
    │  └─ (3,) X gate
    └─ repeat on (1, 2, 3, 4)
       └─ H gate


## 2.2 Quantum Fourier transform

The quantum Fourier transform can be represented by the circuit below; try defining it in Yao yourself!

The quantum Fourier transform is a basic building block of many quantum algorithms. It is the quantum version of the classical fast Fourier transform. The classical Fourier transform can be written as


$$
y_k = \sum_{j=0}^{N-1} e^{\frac{2\pi i k j}{N}} x_j
$$

while the quantum Fourier transform is defined as

$$
\sum_j \alpha_j |j\rangle 
\rightarrow \sum_k \hat{\alpha}_k |k\rangle, \quad \text{where} \quad \hat{\alpha}_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1} e^{2\pi ijk/N}\alpha_j
$$

It transforms a function into frequency space:


```julia
using FFTW, Interact, Plots

xs = LinRange(-5, 5, 10000)
l = @layout (1, 2)

f(x) = sin(x^2) * exp(x^2/10)


@manipulate for k in 1:0.1:2
    ys = f.(k * xs)
    plot(xs, [ys, abs.(fft(ys))], layout=l, ylims=(-10, 10), size=(1000, 200))
end
```

(interactive plot of the signal and the magnitude of its FFT)
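To make the frequency-space picture concrete without the interactive widget, here is a small NumPy sketch; the sampling grid and the 5-cycle test tone are arbitrary choices for illustration, not taken from the notebook:

```python
import numpy as np

# Sample a pure tone and locate its peak in the DFT spectrum.
N = 1024
t = np.arange(N) / N
signal = np.sin(2 * np.pi * 5 * t)        # 5 cycles across the window
spectrum = np.abs(np.fft.fft(signal))
peak = np.argmax(spectrum[: N // 2])      # look at the positive-frequency half
print(peak)                               # prints 5
```

The transform concentrates all the signal's weight into the bin matching its frequency (plus the mirrored negative-frequency bin), which is exactly the structure the interactive plot above displays.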
We can naturally implement the quantum Fourier transform with phase shifts, since the phase shift gate has the form

$$
\begin{pmatrix}
1 & 0\\
0 & e^{i\theta}
\end{pmatrix}
$$

and the (quantum) Fourier transform we defined above can be written in the matrix form

$$
\begin{pmatrix}
1 & 1 & 1 & \dots & 1 \\
1 & \omega & \omega^2 & \dots & \omega^{N-1} \\
1 & \omega^2 & \omega^4 & \dots & \omega^{2(N-1)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \omega^{N-1} & \omega^{2(N-1)} & \dots & \omega^{(N-1)(N-1)}
\end{pmatrix} \quad \omega^N = 1, \quad \omega = e^{2\pi i/N}
$$

We now introduce some notation. For binary integers there are two conventions:

$$
\begin{aligned}
(k)_{(10)} &= \text{LSB}[k_1 k_2 \dots k_{n}]_{2} \\
&= k_1 2^{n-1} + k_2 2^{n-2} + \dots + k_{n} 2^{0}\\
&= \sum_{l=1}^{n} k_l 2^{n - l}
\end{aligned}
$$

or

$$
\begin{aligned}
(k)_{(10)} &= \text{MSB}[k_1 k_2 \dots k_n]_{2}\\
&= k_1 2^{0} + k_2 2^{1} + \dots + k_{n} 2^{n-1}\\
&= \sum_{l=1}^{n} k_l 2^{l-1}
\end{aligned}
$$

We call the latter MSB (Most Significant Bit numbering) and the former LSB (Least Significant Bit numbering).

Binary fractions have analogous notations:

$$
\begin{aligned}
(0.k)_{(10)} &= \text{LSB}[k_1 k_2 \dots k_n]_{(2)} / 2^n\\
&= \text{MSB} [0.k_1 k_2 \dots k_n]_{(2)}\\
&= k_1 2^{-1} + k_2 2^{-2} + \cdots + k_n 2^{-n}\\ 
&= \sum_{l=1}^{n} k_l 2^{-l} \\
\end{aligned}
$$

$$
\begin{aligned}
(0.k)_{(10)} &= \text{MSB}[k_1 k_2 \cdots k_n]_{(2)} / 2^n \\
&= \text{LSB} 
[0.k_1 k_2 \dots k_n]_{(2)}\\
&= k_1 2^{-n} + k_2 2^{-(n-1)} + \cdots + k_n 2^{-1}\\
&= \sum_{l=1}^{n} k_l 2^{-(n-l+1)}
\end{aligned}
$$

More generally, we can define

$$
\text{LSB}[0.k_{j+1} \cdots k_{j+n}]_{(2)} = \sum_{l=1}^n k_{j+l} 2^{-(j+l)} = \text{LSB}[0.k_1 \cdots k_{n}]_{(2)} / 2^j\\
\text{MSB}[0.k_{j+1} \cdots k_{j+n}]_{(2)} = \sum_{l=1}^n k_{j+l} 2^{-(n-(j+l)+1)} = \text{MSB}[0.k_1 \cdots k_{n}]_{(2)} / 2^j
$$

Observe that for the quantum Fourier transform above (assume without loss of generality that $j$ and $k$ are both $n$-bit binary numbers),

$$
\begin{aligned}
|x\rangle &= \frac{1}{2^{n/2}} \sum_{k=0}^{2^n-1} e^{2\pi i \cdot \text{LSB}[x_1 x_2\cdots x_n]_{(2)} k/2^n} |k\rangle\\
&= \frac{1}{2^{n/2}} \sum_{k=0}^{2^n-1} e^{2\pi i \cdot \text{LSB}[0.x_1 x_2 \cdots x_n]_{(2)} k} |k\rangle\\
&= \frac{1}{2^{n/2}} \sum_{\{k_1, k_2, \cdots k_n\}\in \{0, 1\}^n} e^{2\pi i \cdot \text{LSB}[0.x_1 x_2 \cdots x_n]_{(2)} \text{MSB}[k_1 k_2 \cdots k_n]_{(2)}} |\text{MSB}[k_1 k_2 \cdots k_n]_{(2)} \rangle \\
\end{aligned}
$$

$$
\begin{aligned}
|x\rangle &= \frac{1}{2^{n/2}} \sum_{\{k_1, k_2, \cdots k_n\}\in \{0, 1\}^n} e^{2\pi i \cdot \sum_{l=1}^n k_l 2^{l-1} \cdot \text{LSB}[0.x_1 x_2 \cdots x_n]_{(2)}} |\text{MSB}[k_1 k_2 \cdots k_n]_{(2)} \rangle \\
&= \frac{1}{2^{n/2}} \sum_{\{k_1, k_2, \cdots k_n\}\in \{0, 1\}^n} e^{2\pi i \cdot \sum_{l=1}^n k_{l} \cdot \text{LSB}[0.x_{l} x_{l+1} \cdots x_{n}]_{(2)}} |\text{MSB}[k_1 k_2 \cdots k_n]_{(2)}\rangle
\end{aligned}
$$

$$
\begin{aligned}
|x\rangle &= \frac{1}{2^{n/2}} \sum_{\{k_1, k_2, \cdots k_n\}\in \{0, 1\}^n} \bigotimes_{l=1}^n e^{2\pi i k_{l} \cdot \text{LSB}[0.x_{l} x_{l+1} \cdots x_{n}]_{(2)}} |k_l^{\text{MSB}}\rangle\\
&= \frac{1}{2^{n/2}} \bigotimes_{l=1}^n \sum_{k_{l} \in \{0, 1\}} e^{2\pi i k_{l} \cdot \text{LSB}[0.x_{l} 
x_{l+1} \cdots x_{n}]_{(2)}} |k_l^{\text{MSB}}\rangle\\
&= \frac{1}{2^{n/2}} \bigotimes_{l=1}^n [|0^{\text{MSB}}\rangle + e^{2\pi i \cdot \text{LSB}[0.x_{l} x_{l+1} \cdots x_{n}]_{(2)}} |1^{\text{MSB}}\rangle ] \\
&= \frac{1}{2^{n/2}} \bigotimes_{l=1}^n [|0^{\text{MSB}}\rangle + \omega_{n-l+1}^{\text{LSB}[x_{l} x_{l+1} \cdots x_{n}]_{(2)}} |1^{\text{MSB}}\rangle ] \quad \text{where} \quad \omega_k = e^{2\pi i / 2^k}
\end{aligned}
$$

By default Yao uses MSB, which is consistent with array index order. Let's first look at $|\text{MSB}[01]\rangle$ to see what we need to do:

$$
\begin{aligned}
|\text{LSB}[10] \rangle &= \frac{1}{2} [|0^{MSB}\rangle + \omega_2^2 |1^{MSB}\rangle] \otimes [|0^{MSB}\rangle + |1^{MSB}\rangle] = \frac{1}{2} ( |0\rangle - |1\rangle + |2\rangle - |3\rangle )
\end{aligned}
$$

This structure can be realized with an H gate, a controlled shift gate and another H gate:


```julia
c = chain(2, put(1=>H), control(2, 1=>shift(2π / 1 << 2)), put(2=>H))
```

    nqubits: 2, datatype: Complex{Float64}
    chain
    ├─ put on (1)
    │  └─ H gate
    ├─ control(2)
    │  └─ (1,) shift(1.5707963267948966)
    └─ put on (2)
       └─ H gate


```julia
ArrayReg(bit"01") |> c |> state
```

    4×1 Array{Complex{Float64},2}:
      0.4999999999999999 + 0.0im
     -0.4999999999999999 + 
0.0im
      0.4999999999999999 + 0.0im
     -0.4999999999999999 + 0.0im

For a QFT of arbitrary size, we can implement it recursively as follows:


```julia
A(i, j) = control(i, j=>shift(2π/(1<<(i-j+1))))
B(n, i) = chain(n, i==j ? put(i=>H) : A(j, i) for j in i:n)
qft(n) = chain(B(n, i) for i in 1:n)
```

    qft (generic function with 1 method)


```julia
qft(4)
```

    nqubits: 4, datatype: Complex{Float64}
    chain
    ├─ chain
    │  ├─ put on (1)
    │  │  └─ H gate
    │  ├─ control(2)
    │  │  └─ (1,) shift(1.5707963267948966)
    │  ├─ control(3)
    │  │  └─ (1,) shift(0.7853981633974483)
    │  └─ control(4)
    │     └─ (1,) shift(0.39269908169872414)
    ├─ chain
    │  ├─ put on (2)
    │  │  └─ H gate
    │  ├─ control(3)
    │  │  └─ 
(2,) shift(1.5707963267948966)
    │  └─ control(4)
    │     └─ (2,) shift(0.7853981633974483)
    ├─ chain
    │  ├─ put on (3)
    │  │  └─ H gate
    │  └─ control(4)
    │     └─ (3,) shift(1.5707963267948966)
    └─ chain
       └─ put on (4)
          └─ H gate


### The relation between FFT and QFT

From the derivation above we see that the QFT is in fact a discrete Fourier transform (DFT) with the bit numbering reversed. From the circuit diagram it is easy to count that the QFT has complexity $O(n^2)$, whereas for $n$ bits the classical fast Fourier transform costs $O(n 2^n)$. In practice, since there is no real quantum hardware to run on at the moment, in a classical simulation we can simply use the FFT to simulate the QFT. Yao's extensibility provides a very convenient interface to do exactly that.

We first define a subtype of 
`AbstractBlock`:


```julia
struct QFT{N, T} <: PrimitiveBlock{N, T} end

QFT(::Type{T}, n::Int) where T = QFT{n, T}()
QFT(n::Int) = QFT(ComplexF64, n)
```

    QFT

Next we define its matrix form. In fact, at this point `QFT` is already usable as a block: even though we have not defined a specialized way to compute it, Yao will fall back to computing with its matrix.


```julia
qft_circuit(x::QFT{N}) where N = qft(N)
YaoBlocks.mat(x::QFT) = mat(qft_circuit(x))
```

So next let's define how to actually compute the QFT:


```julia
using FFTW, LinearAlgebra

function YaoBlocks.apply!(r::ArrayReg, x::QFT)
    α = sqrt(length(statevec(r)))
    invorder!(r)
    lmul!(α, ifft!(statevec(r)))
    return r
end
```

In Yao, how a block acts on a register is determined by the `apply!` function. In other words, once we have defined `apply!` for QFT, we can simulate the QFT with the FFT.

Now let's check that our custom QFT agrees with the result computed directly from the quantum circuit:


```julia
r = rand_state(5)
r1 = r |> copy |> QFT(5)
r2 = r |> copy |> qft(5)
r1 ≈ r2
```

    true

At this point we have defined all the necessary methods. Simple, isn't it?

But perhaps you are a perfectionist and would like your QFT 
block to print more nicely? No problem: just define the following method.


```julia
YaoBlocks.print_block(io::IO, x::QFT{N}) where N = print(io, "QFT($N)")
```


```julia
QFT(5)
```

    QFT(5)

Yao takes care of composing it with other blocks:


```julia
chain(QFT(5), put(1=>H))
```

    nqubits: 5, datatype: Complex{Float64}
    chain
    ├─ QFT(5)
    └─ put on (1)
       └─ H gate


And likewise of the inverse QFT (why? recall what we discussed earlier):


```julia
QFT(5)'
```

     [†]QFT(5)

## 2.3 Phase estimation

In the previous section we packaged up a QFT block; now we can use it to do something useful, such as phase estimation. Phase estimation is the following problem:

Given a unitary $U$ acting on $m$ qubits with $U |\psi\rangle = e^{2\pi i \theta}|\psi\rangle, \quad 0 \leq \theta < 1$, estimate the phase $\theta$.

This problem can be solved with a circuit like the one below; try defining it in Yao yourself!

Phase estimation actually needs two registers. Let's follow the circuit above and see what each of them does.

First, on register 
1, we apply a row of H gates. How do we write that?


```julia
PE(n) = chain(n, repeat(H))
```

    PE (generic function with 1 method)

Apply it:


```julia
ArrayReg(bit"00") |> PE(2) |> state
```

    4×1 Array{Complex{Float64},2}:
     0.4999999999999999 + 0.0im
     0.4999999999999999 + 0.0im
     0.4999999999999999 + 0.0im
     0.4999999999999999 + 0.0im

Then let's construct a fake state (with known $\theta$) and a $U$ to see what happens, in a few steps:


```julia
N, M = 3, 5
```

    (3, 5)

1. Get a random unitary, and find all its eigenvectors with an eigensolver:


```julia
P = eigen(rand_unitary(1<<M))
```

But here comes the question: how do we apply the iQFT only on the first $n$ qubits? In Yao we define the notion of active qubits: every block acts only on the currently active qubits, and we can use the `focus!` and `relax!` functions to adjust which qubits are active. For example:


```julia
r = ArrayReg(bit"10101")
```

    ArrayReg{1, Complex{Float64}, Array...}
        active qubits: 5/5


```julia
focus!(r, 1)
```

    ArrayReg{1, Complex{Float64}, Array...}
        active qubits: 1/5


```julia
r |> X
```

    ArrayReg{1, Complex{Float64}, Array...}
        active qubits: 1/5


```julia
relax!(r, 1)
```

    ArrayReg{1, Complex{Float64}, Array...}
        active qubits: 5/5


```julia
state(r)[bit"10100"]
```

    1.0 + 0.0im

To make defining circuits easier, Yao also provides the `concentrate` 
block, which focuses the register as it enters the block, applies the smaller block, and after it has acted relaxes the register back to its original size and qubit order. So all we have to do is put the iQFT inside a `concentrate`, and our PE circuit can be defined as:


```julia
PE(n, m, U) =
    chain(n+m, repeat(H, 1:n),                                           # H
        chain(control(k, n+1:n+m=>matblock(U^(2^(k-1)))) for k in 1:n),  # C-U
        concentrate(QFT(n)', 1:n)                                        # iQFT
    )
```

    PE (generic function with 2 methods)

Finally, let's verify it:


```julia
r = join(ArrayReg(psi), zero_state(N))
r |> PE(N, M, U)
results = measure(r, 1:N; nshots=1)
```

    1-element Array{Int64,1}:
     3

Don't forget that the bits we obtain are reversed (because of the FFT), so we reverse them back:


```julia
using BitBasis
estimated_phase = bfloat(results[]; nbits=N)
```

    0.75

### 3 Motivation

- The rise of variational quantum circuit algorithms

- The performance disadvantage of Python-based simulators

- Ease of use and extensibility

### 4 Overview of quantum architectures

### 5. Yao's design ideas and architecture

### 5.1 Born for algorithm research

### 5.2 Tree representation of circuits and the IR

### 5.3 Heterogeneous computing via multiple dispatch

### 5.4 Customizing circuit execution behavior via multiple dispatch

### 5.5 Future and ongoing work

### 6. 
\u5982\u4f55\u53c2\u4e0eYao\u7684\u5f00\u53d1\n", "meta": {"hexsha": "1033080893ab61a3c9a7cd240f8c26a1c4d087ac", "size": 822023, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "4_quantum/QC-with-Yao.ipynb", "max_stars_repo_name": "Ben1008611/SSSS", "max_stars_repo_head_hexsha": "ae2932da2096216032789144e95e353f8801d4e0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 165, "max_stars_repo_stars_event_min_datetime": "2019-03-28T08:46:17.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-20T11:09:52.000Z", "max_issues_repo_path": "4_quantum/QC-with-Yao.ipynb", "max_issues_repo_name": "Ben1008611/SSSS", "max_issues_repo_head_hexsha": "ae2932da2096216032789144e95e353f8801d4e0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-03-31T12:15:55.000Z", "max_issues_repo_issues_event_max_datetime": "2019-05-09T09:59:47.000Z", "max_forks_repo_path": "4_quantum/QC-with-Yao.ipynb", "max_forks_repo_name": "Ben1008611/SSSS", "max_forks_repo_head_hexsha": "ae2932da2096216032789144e95e353f8801d4e0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 64, "max_forks_repo_forks_event_min_datetime": "2019-04-22T14:41:07.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T13:25:09.000Z", "avg_line_length": 200.9836185819, "max_line_length": 354082, "alphanum_fraction": 0.6852569819, "converted": true, "num_tokens": 15459, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.3557748798522984, "lm_q2_score": 0.30404167496654744, "lm_q1q2_score": 0.10817039038131497}} {"text": "# Finite Elements Lecture 1\n\n\n```\nfrom IPython.core.display import HTML\ncss_file = '../ipython_notebook_styles/ngcmstyle.css'\nHTML(open(css_file, \"r\").read())\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n##### Georges Limbert\n\nThis will be an introduction:\n\n* Fundamental aspects\n* Examples of applications\n* The direct stiffness matrix method\n* The weak form\n\n###### References:\n\n* [The Finite Element Method (Hughes, 2003)](http://www.amazon.co.uk/gp/product/B00CB2MK8K?btkr=1)\n* [Belytschko, Liu & Moran](http://www.amazon.co.uk/Nonlinear-Elements-Structures-Mechanical-Engineering/dp/0471987743)\n* [Simo and Hughes](http://www.springer.com/mathematics/computational+science+%26+engineering/book/978-0-387-97520-7)\n* [Crisfield](http://www.amazon.com/Non-Linear-Finite-Element-Analysis-Structures/dp/047197059X)\n\n## Background\n\nTypical applications for GL:\n\n* Background in applied maths and engineering mechanics\n* Nonlinear continuum mechanics of biological tissues/biomaterials\n* Multiphysics finite element techniques (theory/implementation/analysis)\n* Models used in industry, academia and the military\n\n## Aims\n\n* Introduce the general \"philosophy\"\n* Raise awareness about the applications\n* Highlight the concept through simple examples\n* Introduce variational formulations of PDEs\n\n## Computational Engineering\n\n* Modelling complex engineering systems\n* Assessment of performance and safety before physical prototypes are built and tested\n* Explore many design alternatives\n* Optimise performance and safety\n* Certification requirements\n\n## Finite Element Method\n\nPowerful method to solve PDEs over *complex* geometrical domains.\n\n* Rooted in variational methods\n* Principles developed in the 40's, driven by Boeing and large advances in the 60s and 70s.\n* Mainly driven by the need to solve elasticity and structural 
analysis problems encountered in civil and aeronautical engineering.\n\n## Domain of applications\n\nVery wide:\n\n* Biophysics\n* Mechanics\n* EM\n* Aerodynamics\n* Structural dynamics\n* Heat transfer\n* Biology\n* Porous media\n* Hydrodynamics\n* Structural mechanics\n* Nanomechanics\n* Behavioural sciences\n* and on to any domain of natural and physical sciences.\n\n## Example codes\n\n**Commercial**\n\n* ABAQUS\n* ANSYS\n* COMSOL\n* MARC\n* NASTRAN\n* LS-DYNA\n\n**Open source**\n\n* FEniCS\n* CalculiX\n* FEAPpv\n* WARP3D\n* Matlab toolboxes\n\n## Example applications\n\n**Simulated knee flexion**\n\nFibrous models of ligament - elastic and solid mechanics, stress models, etc.\n\n**Wrinkling analyses**\n\nCompression induced - calculation of first principal strains in, eg, paper crinkling, skin wrinkles in aging, etc. Note importance of interpolation techniques for this application.\n\n**Bi-layer structure**\n\nImportance of thickness and structure for results.\n\n**Simulation of in-plane compression of the epidermis**\n\nAnother skin application; the importance of topography, swelling with moisture, etc.\n\n**Oral implants**\n\nDentistry applications, in combination with imaging techniques.\n\n**Artificial organs and implants**\n\nImportance of mechanics, material, electronics and chemistry.\n\n**Biomedical (stents)**\n\nCombination of health and engineering aspects, with combinations of FEA and CFD.\n\n**Consumer goods**\n\nHow the feel of the packaging affects the way you feel about the goods: make it feel expensive to trigger an emotional response.\n\n**EM radiation**\n\nMicrowave effects on tissues, and UV-induced damage.\n\n**Movies**\n\nPhysics-based computer graphics.\n\n## FEM in a nutshell\n\n1. Transform PDE into a variational problem\n2. Introduce a piecewise approximation to the field variables (eg displacement, temperature, electric charge) in the equations.\n3. 
Discretise the physical domain into elements and write approximate equations for each element (**meshing** process). Local equations in each element expressed in matrix form.\n4. Assemble local equations into a global matrix.\n5. Solve.\n\n## Step 2.\n\n(Step 1 later).\n\n* In each element, choose **interpolation** (or **shape**) functions to approximate the field within the element in terms of **nodal values** (the **degrees of freedom**).\n* In each element, value of each variable anywhere in the element is a linear combination of the shape functions and the nodal values\n* Interpolation functions are usually polynomials (Lagrange, Hermite, B-spline, ...)\n\n## Step 3 and 4\n\nStep 3:\n\n* Field approximations are injected into the variational form.\n* Ran out of time.\n\n## Direct stiffness approach\n\n### Equation for a single elastic spring\n\nEquivalent to a single element.\n\nForces $f_1$ (left end) and $f_2$ (right end).\n\nEquilibrium equation\n\n$$\n f_1 + f_2 = 0.\n$$\n\nCombine with Hooke's law for the spring\n\n$$\n\\begin{align}\n f_2 & = k (d_2 - d_1) \\\\ f_1 & = -k (d_2 - d_1) = -f_2\n\\end{align}\n$$\n\nto get the equilibrium equations in matrix form\n\n$$\n\\begin{pmatrix} f_1 \\\\ f_2 \\end{pmatrix} = \\begin{pmatrix} k & -k \\\\ -k & k \\end{pmatrix} \\begin{pmatrix} d_1 \\\\ d_2 \\end{pmatrix}\n$$\n\nThe stiffness matrix is symmetric but singular. The problem is that this doesn't constrain spatial translations; can take an infinite number of positions in space. Need to remember this and work around.\n\nGo to a system with two elastic springs.\n\nEquilibrium equation\n\n$$\n F_1 + F_2 + F_3 = 0\n$$\n\nConstitutive equations\n\n\n\nSplit the original structure into elemental components\n\n$$\n{\\bf f}^{(1)} = k^{(1)} {\\bf d}^{(1)}\n$$ \n\nand similarly for the second element.\n\nUsing the link between the displacements, that $d_2^{(1)} = d_1^{(2)}$, can set up a **connectivity matrix** linking the nodes in the different elements. 
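The element-by-element assembly described here can be sketched in a few lines of NumPy. This is a sketch of my own, not lecture code: the node numbering and spring constants are illustrative choices. Each spring's 2×2 element stiffness is scattered into the global matrix according to the two global nodes it connects, and the global matrix only becomes invertible once a boundary condition such as $d_1 = 0$ is applied.

```python
import numpy as np

def assemble_global_stiffness(springs, n_nodes):
    """Scatter each spring's 2x2 element stiffness into the global matrix."""
    K = np.zeros((n_nodes, n_nodes))
    for i, j, k in springs:
        K[i, i] += k
        K[j, j] += k
        K[i, j] -= k
        K[j, i] -= k
    return K

# Two springs in series: element 1 joins nodes 0-1 (k = 100),
# element 2 joins nodes 1-2 (k = 200).  Nodes 0, 1, 2 here play the
# roles of d_1, d_2, d_3 in the text.
K = assemble_global_stiffness([(0, 1, 100.0), (1, 2, 200.0)], 3)

# K is symmetric but singular until a boundary condition is applied.
# Ground the first node (d_1 = 0) by partitioning out its row and column,
# then solve for the free displacements under a unit load at the last node.
K_ff = K[1:, 1:]
f_free = np.array([0.0, 1.0])
d_free = np.linalg.solve(K_ff, f_free)
```

With a unit load at the free end, each spring stretches by $F/k$, so the free displacements come out as $1/100 = 0.01$ and $0.01 + 1/200 = 0.015$.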
Do something similar for the forces, getting\n\n$$\n\\begin{pmatrix} F_1 \\\\ F_2 \\\\ F_3 \\end{pmatrix} = \\begin{pmatrix} f_1^{(1)} \\\\ f_1^{(2)} + f_2^{(1)} \\\\ f_2^{(2)} \\end{pmatrix}\n$$\n\nExpanded local equations\n\nThe local equations for each spring can be rewritten in matrix form, or they can be expanded into larger matrices and vectors. Local versions have size 2, global version has size 3.\n\nFinal version from adding up all the expanded versions\n\n$$\n K {\\bf d} = {\\bf f}.\n$$\n\nDirect assembly of the global stiffness matrix $K$.\n\nUse the connectivity to measure the contributions or connections.\n\nSpecial case: grounded spring\n\nPartition and apply boundary condition $d_1 = 0$.\n\nGlobal equilibrium equations\n\n**Q1** What is the physical meaning of $K_{ij}$?\n\nIt denotes the force felt at node $i$ due to a unit displacement at node $j$ (all others fixed)\n\n**Q2** Which elements contribute to $K_{ij}$?\n\nThose between $i$ and $j$, connected to $i$.\n\nThe stiffness matrix is invertible only when suitable boundary conditions are applied.\n\n## Weak form\n\n### Turning PDEs into variational problems\n\nMultiply by an arbitrary **test function** and integrate over the domain. Perform integration by parts to get rid of second order derivatives.\n\nVery powerful technique - not restricted to conservative systems.\n\nTake the **strong form** of the initial boundary value problem.\n\n$$\n\\nabla \\cdot \\sigma + b = \\rho \\dot{v} = \\rho \\ddot{u}.\n$$\n\nAdd prescribed displacement $u = \\bar{u}$ and traction $t = \\bar{t}$ on the boundaries, together with initial conditions on displacement and velocity at $t=0$.\n\nConsider an arbitrary vector valued function $\\eta = \\eta(x) = \\eta(\\chi(X, t))$. 
This is the **test** or **weighting** function.\n\n* Time is assumed to be fixed\n* $\\eta$ vanishes on the boundary where displacements are fixed.\n\nWrite the functional obtained by multiplying the strong form by the test function and integrating over the domain:\n\n$$\n f(u, \\eta) = \\int_{\\Omega} ( -\\nabla \\cdot \\sigma - b + \\rho \\ddot{u} ) \\cdot \\eta \\, dv = 0.\n$$\n\nExpand the divergence term as\n\n$$\n (\\nabla \\cdot \\sigma) \\cdot \\eta = \\nabla \\cdot (\\sigma \\eta) - \\sigma : \\nabla \\eta = \\nabla \\cdot (\\sigma \\eta) - \\sigma : \\nabla_x \\eta.\n$$\n\nHence the functional becomes\n\n$$\n f(u, \\eta) = \\int_{\\Omega} \\left[ \\sigma : \\nabla \\eta - (b - \\rho \\ddot{u}) \\cdot \\eta \\right] \\, dv - \\int_{\\partial \\Omega} \\sigma \\eta \\cdot n \\, ds\n$$\n\nUsing that $\\eta$ vanishes on the displacement boundary, and that $\\sigma n = \\bar{t}$ on the traction boundary, you get\n\n$$\n \\int_{\\partial \\Omega} \\sigma \\eta \\cdot n \\, ds = \\int_{\\partial \\Omega} \\bar{t} \\cdot \\eta \\, ds.\n$$\n\n### Variational problem\n\nFinally rewrite the problem as\n\n$$\n f(u, \\eta) = \\int_{\\Omega} \\left[ \\sigma : \\nabla \\eta - (b - \\rho \\ddot{u})\\cdot \\eta \\right] \\, dv - \\int_{\\partial \\Omega} \\bar{t} \\cdot \\eta \\, ds = 0,\n$$\n\nsubject to conditions on the initial data.\n\n### Special choice of the test function\n\nChoose the test function to be a **virtual** displacement, $\\eta = \\delta u$.\n\nThen we get the **principle of virtual work**: the first term in the volume integral is the **internal mechanical virtual work**, while the second term (less the acceleration term) and the surface integral make up the **external mechanical virtual work**. Hence the variational principle is equivalent to the virtual work balancing off against the acceleration.\n\n### Initial boundary value problem\n\nSummarizing, the weak form has the integral plus the initial conditions (in integral form).\n\n### Additional comments\n\nNote that 80% of the time can be spent in meshing. 
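To see the whole pipeline (weak form → shape functions → assembly → solve) end to end, here is a hedged 1D sketch of my own, not from the lecture: the problem $-u'' = 1$ on $[0,1]$ with $u(0)=u(1)=0$, whose Galerkin weak form is $\int_0^1 u' \eta' \, dx = \int_0^1 \eta \, dx$ for all test functions $\eta$ vanishing at the ends. With piecewise-linear shape functions on a uniform mesh, each element contributes the stiffness block $(1/h)\begin{pmatrix}1 & -1\\ -1 & 1\end{pmatrix}$, and the assembly is structurally identical to the spring-chain case.

```python
import numpy as np

n_el = 8                     # number of linear elements
n_nodes = n_el + 1
h = 1.0 / n_el               # uniform element size

K = np.zeros((n_nodes, n_nodes))   # global stiffness matrix
f = np.zeros(n_nodes)              # global load vector

for e in range(n_el):
    # Element stiffness of a linear element for -u'': (1/h) [[1, -1], [-1, 1]].
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    # Element load for the constant source f(x) = 1: h/2 at each node.
    fe = (h / 2.0) * np.array([1.0, 1.0])
    nodes = (e, e + 1)             # local-to-global connectivity
    for a in range(2):
        f[nodes[a]] += fe[a]
        for b in range(2):
            K[nodes[a], nodes[b]] += ke[a, b]

# Apply u(0) = u(1) = 0 by solving only for the interior degrees of freedom.
u = np.zeros(n_nodes)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], f[1:-1])

x = np.linspace(0.0, 1.0, n_nodes)
```

For this particular problem, linear elements with exact load integration reproduce the exact solution $u(x) = x(1-x)/2$ at the nodes.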
In cases where the interpolation functions are NURBS or B-splines, the geometry is the mesh, so using these functions can greatly speed things up.\n", "meta": {"hexsha": "f5739c2045a5cd6bca2680fdf9553f85a0df11e2", "size": 22627, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "FEEG6016 Simulation and Modelling/2014/Finite Elements Lecture 1.ipynb", "max_stars_repo_name": "ngcm/training-public", "max_stars_repo_head_hexsha": "e5a0d8830df4292315c8879c4b571eef722fdefb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2015-06-23T05:50:49.000Z", "max_stars_repo_stars_event_max_datetime": "2016-06-22T10:29:53.000Z", "max_issues_repo_path": "FEEG6016 Simulation and Modelling/2014/Finite Elements Lecture 1.ipynb", "max_issues_repo_name": "Jhongesell/training-public", "max_issues_repo_head_hexsha": "e5a0d8830df4292315c8879c4b571eef722fdefb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-11-28T08:29:55.000Z", "max_issues_repo_issues_event_max_datetime": "2017-11-28T08:29:55.000Z", "max_forks_repo_path": "FEEG6016 Simulation and Modelling/2014/Finite Elements Lecture 1.ipynb", "max_forks_repo_name": "Jhongesell/training-public", "max_forks_repo_head_hexsha": "e5a0d8830df4292315c8879c4b571eef722fdefb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 24, "max_forks_repo_forks_event_min_datetime": "2015-04-18T21:44:48.000Z", "max_forks_repo_forks_event_max_datetime": "2019-01-09T17:35:58.000Z", "avg_line_length": 30.2904953146, "max_line_length": 365, "alphanum_fraction": 0.4951164538, "converted": true, "num_tokens": 3285, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.41489884579676883, "lm_q2_score": 0.25982563796098374, "lm_q1q2_score": 0.10780135729842127}} {"text": "# Homework 5\n## Due Date: Tuesday, October 3rd at 11:59 PM\n\n# Problem 1\nWe discussed documentation and testing in lecture and also briefly touched on code coverage. You must write tests for your code for your final project (and in life). There is a nice way to automate the testing process called continuous integration (CI).\n\nThis problem will walk you through the basics of CI and show you how to get up and running with some CI software.\n\n### Continuous Integration\nThe idea behind continuous integration is to automate away the testing of your code.\n\nWe will be using it for our projects.\n\nThe basic workflow goes something like this:\n\n1. You work on your part of the code in your own branch or fork\n2. On every commit you make and push to GitHub, your code is automatically tested on a fresh machine on Travis CI. This ensures that there are no specific dependencies on the structure of your machine that your code needs to run and also ensures that your changes are sane\n3. Now you submit a pull request to `master` in the main repo (the one you're hoping to contribute to). The repo manager creates a branch off `master`. \n4. This branch is also set to run tests on Travis. If all tests pass, then the pull request is accepted and your code becomes part of master.\n\nWe use GitHub to integrate our roots library with Travis CI and Coveralls. Note that this is not the only workflow people use. 
Google "git GitHub workflow" and feel free to choose another one for your group.\n\n### Part 1: Create a repo\nCreate a public GitHub repo called `cs207test` and clone it to your local machine.\n\n**Note:** No need to do this in Jupyter.\n\n### Part 2: Create a roots library\nUse the example from lecture 7 to create a file called `roots.py`, which contains the `quad_roots` and `linear_roots` functions (along with their documentation).\n\nAlso create a file called `test_roots.py`, which contains the tests from lecture.\n\nAll of these files should be in your newly created `cs207test` repo. **Don't push yet!!!**\n\n### Part 3: Create an account on Travis CI and Start Building\n\n#### Part A:\nCreate an account on Travis CI and set your `cs207test` repo up for continuous integration once this repo can be seen on Travis.\n\n#### Part B:\nCreate an instruction to Travis to make sure that\n\n1. python is installed\n2. it's python 3.5\n3. pytest is installed\n\nThe file should be called `.travis.yml` and should have the contents:\n```yml\nlanguage: python\npython:\n - \"3.5\"\nbefore_install:\n - pip install pytest pytest-cov\nscript:\n - pytest\n```\n\nYou should also create a configuration file called `setup.cfg`:\n```cfg\n[tool:pytest]\naddopts = --doctest-modules --cov-report term-missing --cov roots\n```\n\n#### Part C:\nPush the new changes to your `cs207test` repo.\n\nAt this point you should be able to see your build on Travis and if and how your tests pass.\n\n### Part 4: Coveralls Integration\nIn class, we also discussed code coverage. Just like Travis CI runs tests automatically for you, Coveralls automatically checks your code coverage. One minor drawback of Coveralls is that it can only work with public GitHub accounts. 
However, this isn't too big of a problem since your projects will be public.\n\n#### Part A:\nCreate an account on [`Coveralls`](https://coveralls.zendesk.com/hc/en-us), connect your GitHub, and turn Coveralls integration on.\n\n#### Part B:\nUpdate your `.travis.yml` file as follows:\n```yml\nlanguage: python\npython:\n - \"3.5\"\nbefore_install:\n - pip install pytest pytest-cov\n - pip install coveralls\nscript:\n - py.test\nafter_success:\n - coveralls\n```\n\nBe sure to push the latest changes to your new repo.\n\n### Part 5: Update README.md in repo\nYou can have your GitHub repo reflect the build status on Travis CI and the code coverage status from Coveralls. To do this, you should modify the `README.md` file in your repo to include some badges. Put the following at the top of your `README.md` file:\n\n```\n[](https://travis-ci.org/dsondak/cs207testing.svg?branch=master)\n\n[](https://coveralls.io/github/dsondak/cs207testing?branch=master)\n```\n\nOf course, you need to make sure that the links are to your repo and not mine. You can find embed code on the Coveralls and Travis CI sites.\n\n---\n\n# Problem 2\nWrite a Python module for reaction rate coefficients. Your module should include functions for constant reaction rate coefficients, Arrhenius reaction rate coefficients, and modified Arrhenius reaction rate coefficients. Here are their mathematical forms:\n\\begin{align}\n &k_{\\textrm{const}} = k \\tag{constant} \\\\\n &k_{\\textrm{arr}} = A \\exp\\left(-\\frac{E}{RT}\\right) \\tag{Arrhenius} \\\\\n &k_{\\textrm{mod arr}} = A T^{b} \\exp\\left(-\\frac{E}{RT}\\right) \\tag{Modified Arrhenius}\n\\end{align}\n\nTest your functions with the following parameters: $A = 10^7$, $b=0.5$, $E=10^3$. Use $T=10^2$.\n\nA few additional comments / suggestions:\n* The Arrhenius prefactor $A$ is strictly positive\n* The modified Arrhenius parameter $b$ must be real \n* $R = 8.314$ is the ideal gas constant. 
It should never be changed (except to convert units)\n* The temperature $T$ must be positive (assuming a Kelvin scale)\n* You may assume that units are consistent\n* Document each function!\n* You might want to check for overflows and underflows\n\n**Recall:** A Python module is a `.py` file which is not part of the main execution script. The module contains several functions which may be related to each other (like in this problem). Your module will be importable via the execution script. For example, suppose you have called your module `reaction_coeffs.py` and your execution script `kinetics.py`. Inside of `kinetics.py` you will write something like:\n```python\nimport reaction_coeffs\n# Some code to do some things\n# :\n# :\n# :\n# Time to use a reaction rate coefficient:\nreaction_coeffs.const() # Need appropriate arguments, etc\n# Continue on...\n# :\n# :\n# :\n```\nBe sure to include your module in the same directory as your execution script.\n\n---\n\n# Problem 3\nWrite a function that returns the **progress rate** for a reaction of the following form:\n\\begin{align}\n \\nu_{A} A + \\nu_{B} B \\longrightarrow \\nu_{C} C.\n\\end{align}\nOrder your concentration vector so that \n\\begin{align}\n \\mathbf{x} = \n \\begin{bmatrix}\n \\left[A\\right] \\\\\n \\left[B\\right] \\\\\n \\left[C\\right]\n \\end{bmatrix}\n\\end{align}\n\nTest your function with\n\\begin{align}\n \\nu_{i} = \n \\begin{bmatrix}\n 2.0 \\\\\n 1.0 \\\\\n 1.0\n \\end{bmatrix}\n \\qquad \n \\mathbf{x} = \n \\begin{bmatrix}\n 1.0 \\\\ \n 2.0 \\\\ \n 3.0\n \\end{bmatrix}\n \\qquad \n k = 10.\n\\end{align}\n\nYou must document your function and write some tests in addition to the one suggested. 
You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.\n\n### Answer:\n\nAnswers to Problems 3-6 are saved as .py files in the cs207test repo created in Problems 1-2.\n\n\n```python\nimport doctest\nfrom typing import Iterable, List, Sequence\n\ndef progress_rate (x: Sequence[float], v: Sequence[float], k: float) -> float:\n \"\"\" Calculates progress rate for single reaction represented by inputs.\n\n Args:\n x: Concentration vector.\n v: Stoichiometric coefficients of reactants.\n k: Forward reaction rate coefficient.\n\n Notes:\n Raises ValueError if sequences x and v are not the same length.\n\n Returns:\n Reaction progress rate.\n\n Examples:\n >>> progress_rate([1.0, 2.0, 3.0], [2.0, 1.0, 1.0], 10)\n 60.0\n \"\"\"\n if len(x) != len(v):\n raise ValueError('x and v must be same length.')\n\n result = 1\n for i, x_i in enumerate(x):\n result *= pow(x_i, v[i])\n return k * result\n```\n\n\n```python\nimport unittest\n\nclass TestProgressRate(unittest.TestCase):\n \"\"\" Tests progress_rate() function under various circumstances. \"\"\"\n\n def test_unequal_dim_args (self):\n \"\"\" Ensures func raises ValueError with unequally-dimensioned args. \"\"\"\n x = [1.0, 2.0, 3.0]\n v = [2.0, 1.0]\n k = 10\n\n try:\n rate = progress_rate(x, v, k)\n except Exception as e:\n if not isinstance(e, ValueError):\n self.fail('Expected ValueError but caught different exception.')\n else:\n pass\n else:\n self.fail('No exception raised even though args unequally-dimensioned.')\n\n def test_x_not_seq (self):\n \"\"\" Ensures that TypeError is raised when x is not a sequence. 
\"\"\"\n x = 1.0\n v = [2.0, 1.0]\n k = 10\n try:\n rate = progress_rate(x, v, k)\n except Exception as e:\n if not isinstance(e, TypeError):\n self.fail('Expected TypeError but caught different exception.')\n else:\n pass\n else:\n self.fail('No exception raised even though x not sequence.')\n```\n\n---\n# Problem 4\nWrite a function that returns the **progress rate** for a system of reactions of the following form:\n\\begin{align}\n \\nu_{11}^{\\prime} A + \\nu_{21}^{\\prime} B \\longrightarrow \\nu_{31}^{\\prime\\prime} C \\\\\n \\nu_{12}^{\\prime} A + \\nu_{32}^{\\prime} C \\longrightarrow \\nu_{22}^{\\prime\\prime} B + \\nu_{32}^{\\prime\\prime} C\n\\end{align}\nNote that $\\nu_{ij}^{\\prime}$ represents the stoichiometric coefficient of reactant $i$ in reaction $j$ and $\\nu_{ij}^{\\prime\\prime}$ represents the stoichiometric coefficient of product $i$ in reaction $j$. Therefore, in this convention, I have ordered my vector of concentrations as \n\\begin{align}\n \\mathbf{x} = \n \\begin{bmatrix}\n \\left[A\\right] \\\\\n \\left[B\\right] \\\\\n \\left[C\\right]\n \\end{bmatrix}.\n\\end{align}\n\nTest your function with \n\\begin{align}\n \\nu_{ij}^{\\prime} = \n \\begin{bmatrix}\n 1.0 & 2.0 \\\\\n 2.0 & 0.0 \\\\\n 0.0 & 2.0\n \\end{bmatrix}\n \\qquad\n \\nu_{ij}^{\\prime\\prime} = \n \\begin{bmatrix}\n 0.0 & 0.0 \\\\\n 0.0 & 1.0 \\\\\n 2.0 & 1.0\n \\end{bmatrix}\n \\qquad\n \\mathbf{x} = \n \\begin{bmatrix}\n 1.0 \\\\\n 2.0 \\\\\n 1.0\n \\end{bmatrix}\n \\qquad\n k = 10.\n\\end{align}\n\nYou must document your function and write some tests in addition to the one suggested. 
You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.\n\n\n```python\ndef progress_rates (x: Sequence[float], v_react: Sequence[Sequence[float]],\n k: float) -> List[float]:\n \"\"\" Calculates progress rates for system of reactions.\n\n Args:\n x: Concentration vector.\n v_react: Matrix of stoichiometric coefficients for reactants.\n k: Forward reaction rate coefficient.\n\n Notes:\n v_react is an m-by-n matrix, where the number of rows (m) must equal the number\n of species (i.e., length of concentration vector x) and the number of columns\n must equal the number of reactions.\n\n Returns:\n List of progress rates for system of reactions.\n\n Examples:\n >>> progress_rates([1.0, 2.0, 1.0], [[1.0, 2.0], [2.0, 0.0], [0.0, 2.0]], 10)\n [40.0, 10.0]\n \"\"\"\n # Test inputs\n # Ensure all rows in v_react have same number of columns (corresponding to number\n # of reactions).\n M = len(v_react[0])\n for row in v_react:\n if len(row) != M:\n raise ValueError('Not all rows in v_react share the same dimension.')\n\n if len(x) != len(v_react):\n raise ValueError('Length of x != number of rows in v_react.')\n\n # Call progress_rate() on reaction formed from concentration vector x and the\n # reactant coefficients in each column of v_react.\n result = []\n for j in range(0, M):\n v = [row[j] for row in v_react]\n rate = progress_rate(x, v, k)\n result.append(rate)\n\n return result\n```\n\n\n```python\nclass TestProgressRates(unittest.TestCase):\n \"\"\" Tests progress_rates() function under various circumstances. 
\"\"\"\n\n def test_unequal_dim_args (self):\n \"\"\" Ensures func raises ValueError when number of rows in stoichiometric\n coefficient matrix is fewer than len(x).\n \"\"\"\n x = [1.0, 2.0, 1.0]\n v = [[1.0, 2.0], [2.0, 0.0]]\n k = 10\n\n try:\n rate = progress_rates(x, v, k)\n except Exception as e:\n if not isinstance(e, ValueError):\n self.fail('Expected ValueError but different exception caught.')\n else:\n pass\n else:\n self.fail('No exception raised even though args unequally-dimensioned.')\n\n def test_unequal_dim_v_cols (self):\n \"\"\" Ensures func raises ValueError when not all of the rows within v share the\n same dimension.\n \"\"\"\n x = [1.0, 2.0, 1.0]\n # 3rd row has 1 fewer element than other 2 rows.\n v = [[1.0, 2.0], [2.0, 0.0], [0.0]]\n k = 10\n\n try:\n rate = progress_rates(x, v, k)\n except Exception as e:\n if not isinstance(e, ValueError):\n self.fail('Expected ValueError but different exception caught.')\n else:\n pass\n else:\n self.fail(\n 'No exception raised even though not all rows within v share same '\n 'dimension.')\n```\n\n---\n# Problem 5\nWrite a function that returns the **reaction rate** of a system of irreversible reactions of the form:\n\\begin{align}\n \\nu_{11}^{\\prime} A + \\nu_{21}^{\\prime} B &\\longrightarrow \\nu_{31}^{\\prime\\prime} C \\\\\n \\nu_{32}^{\\prime} C &\\longrightarrow \\nu_{12}^{\\prime\\prime} A + \\nu_{22}^{\\prime\\prime} B\n\\end{align}\n\nOnce again $\\nu_{ij}^{\\prime}$ represents the stoichiometric coefficient of reactant $i$ in reaction $j$ and $\\nu_{ij}^{\\prime\\prime}$ represents the stoichiometric coefficient of product $i$ in reaction $j$. 
In this convention, I have ordered my vector of concentrations as \n\\begin{align}\n \\mathbf{x} = \n \\begin{bmatrix}\n \\left[A\\right] \\\\\n \\left[B\\right] \\\\\n \\left[C\\right]\n \\end{bmatrix}\n\\end{align}\n\nTest your function with \n\\begin{align}\n \\nu_{ij}^{\\prime} = \n \\begin{bmatrix}\n 1.0 & 0.0 \\\\\n 2.0 & 0.0 \\\\\n 0.0 & 2.0\n \\end{bmatrix}\n \\qquad\n \\nu_{ij}^{\\prime\\prime} = \n \\begin{bmatrix}\n 0.0 & 1.0 \\\\\n 0.0 & 2.0 \\\\\n 1.0 & 0.0\n \\end{bmatrix}\n \\qquad\n \\mathbf{x} = \n \\begin{bmatrix}\n 1.0 \\\\\n 2.0 \\\\\n 1.0\n \\end{bmatrix}\n \\qquad\n k = 10.\n\\end{align}\n\nYou must document your function and write some tests in addition to the one suggested. You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.\n\n\n```python\nimport numpy as np\n\ndef reaction_rate (x: Sequence[float], v_react: Sequence[Sequence[float]],\n v_prod: Sequence[Sequence[float]], k: float) -> Iterable:\n \"\"\" Calculates reaction rates for species for a given system of reactions.\n\n Args:\n x: Concentration vector.\n v_react: Matrix of stoichiometric coefficients for reactants.\n v_prod: Matrix of stoichiometric coefficients for products.\n k: Forward reaction rate coefficient.\n\n Notes:\n v_react is an m-by-n matrix, where the number of rows (m) must equal the number\n of species (i.e., length of concentration vector x) and the number of columns\n must equal the number of reactions.\n\n v_react and v_prod must be of the same dimensions.\n\n Returns:\n List of reaction rates for species represented by concentration rates in x.\n\n Examples:\n >>> reaction_rate([1.0, 2.0, 1.0], [[1.0, 0.0], [2.0, 0.0], [0.0, 2.0]], [[0.0, 1.0], [0.0, 2.0], [1.0, 0.0]], 10)\n [-30.0, -60.0, 20.0]\n \"\"\"\n reaction_rates = progress_rates(x, v_react, k)\n\n # Convert stoichiometric coefficients for reactants and products to np matrices to\n # make matrix transformations simpler.\n if not isinstance(v_react, 
np.matrix):\n v_react = np.matrix(v_react)\n if not isinstance(v_prod, np.matrix):\n v_prod = np.matrix(v_prod)\n\n if v_react.shape != v_prod.shape:\n raise ValueError('Dimensions of v_react and v_prod not equal.')\n\n v_ij = v_prod - v_react\n result = v_ij.dot(reaction_rates)\n # Convert 1D matrix to a simple list.\n return np.array(result).flatten().tolist()\n```\n\n\n```python\nclass TestReactionRate(unittest.TestCase):\n \"\"\" Tests reaction_rate() function under various circumstances. \"\"\"\n\n def test_unequal_dim_stoichiometric_coeff (self):\n \"\"\" Ensures func raises ValueError when v_react and v_prod do not share the\n same dimensions, both in terms of number of rows and columns.\n \"\"\"\n x = [1.0, 2.0, 1.0]\n k = 10\n\n # First test with v_react and v_prod having different number of rows.\n v_react = [[1.0, 0.0], [2.0, 0.0], [0.0, 2.0]]\n v_prod = [[0.0, 1.0], [0.0, 2.0]] # 1 row less than v_react\n\n try:\n rate = reaction_rate(x, v_react, v_prod, k)\n except Exception as e:\n if not isinstance(e, ValueError):\n self.fail('Expected ValueError but different exception caught.')\n else:\n pass\n else:\n self.fail(\n 'No exception raised even though v_prod contains fewer rows than '\n 'v_react.')\n\n # Next test with v_react and v_prod having same number of rows but one of their\n # rows containing fewer elements than the rest.\n v_react = [[1.0, 0.0], [2.0], [0.0, 2.0]] # 2nd row only contains 1 element\n v_prod = [[0.0, 1.0], [0.0, 2.0], [1.0, 0.0]]\n\n try:\n rate = reaction_rate(x, v_react, v_prod, k)\n except Exception as e:\n if not isinstance(e, ValueError):\n self.fail('Expected ValueError but different exception caught.')\n else:\n pass\n else:\n self.fail(\n 'No exception raised even though v_react contains fewer elements than '\n 'required in 2nd row.')\n```\n\n---\n# Problem 6\nPut parts 3, 4, and 5 in a module called `chemkin`.\n\nNext, pretend you're a client who needs to compute the reaction rates at three different temperatures ($T = 
\\left\\{750, 1500, 2500\\right\\}$) of the following system of irreversible reactions:\n\\begin{align}\n 2H_{2} + O_{2} \\longrightarrow 2OH + H_{2} \\\\\n OH + HO_{2} \\longrightarrow H_{2}O + O_{2} \\\\\n H_{2}O + O_{2} \\longrightarrow HO_{2} + OH\n\\end{align}\n\nThe client also happens to know that reaction 1 is a modified Arrhenius reaction with $A_{1} = 10^{8}$, $b_{1} = 0.5$, $E_{1} = 5\\times 10^{4}$, reaction 2 has a constant reaction rate parameter $k = 10^{4}$, and reaction 3 is an Arrhenius reaction with $A_{3} = 10^{7}$ and $E_{3} = 10^{4}$.\n\nYou should write a script that imports your `chemkin` module and returns the reaction rates of the species at each temperature of interest.\n\nYou may assume that these are elementary reactions.\n\n### Answer:\n\nThe order for the species I chose in my vectors/matrices is:\n\n\\begin{align}\n \\mathbf{x} = \n \\begin{bmatrix}\n H_2 \\\\\n O_2 \\\\\n OH \\\\\n HO_2 \\\\\n H_2O\n \\end{bmatrix}\n\\end{align}\n\n\n```python\n\"\"\"\nclient.py\n---------\n\nSimulating client script that will use chemkin and reaction_coeffs in order to\ncalculate the reaction rates of species at various temperatures.\n\"\"\"\nimport chemkin\nimport numpy as np\nimport reaction_coeffs\n\n# Store params in lists so we can iterate through them simultaneously.\nT = [750, 1500, 2500]\n\nk1 = reaction_coeffs.modified_arrhenius(A=1E8, E=5 * 1E4, T=T[0], b=0.5)\nk2 = 1E4\nk3 = reaction_coeffs.arrhenius(A=1E7, E=1E4, T=T[2])\nk = [k1, k2, k3]\n\nv_react = [[2, 0, 0], [1, 0, 1], [0, 1, 0], [0, 1, 0], [0, 0, 1]]\nv_prod = [[1, 0, 0], [0, 1, 0], [2, 0, 1], [0, 0, 1], [0, 1, 0]]\n\n# Client must update concentration rates. 
Random numbers used here for demo.\nx = np.random.randint(1, 5, size=len(v_react))\n\n# Loop through temperatures and calculate reaction rate at each temperature and\n# corresponding k value.\nrates = []\nfor i, t in enumerate(T):\n rate = chemkin.reaction_rate(x, v_react, v_prod, k[i])\n rates.append(rate)\n print('At T = {0}, species reaction rates = {1}'.format(t, rate))\n```\n\n---\n# Problem 7\nGet together with your project team, form a GitHub organization (with a descriptive team name), and give the teaching staff access. You can have as many repositories as you like within your organization. However, we will grade the repository called **`cs207-FinalProject`**.\n\nWithin the `cs207-FinalProject` repo, you must set up Travis CI and Coveralls. Make sure your `README.md` file includes badges indicating how many tests are passing and the coverage of your code.\n\n\n```python\n\n```\n", "meta": {"hexsha": "a35aabdd1d0f42bd13d15a510de2777b5b5bb75c", "size": 28583, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "homeworks/HW5/HW5-final.ipynb", "max_stars_repo_name": "nate-stein/cs207_nate_stein", "max_stars_repo_head_hexsha": "f8ce68f9d839a0bd0ab4a2e1ebaa7ae985f6b5d0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "homeworks/HW5/HW5-final.ipynb", "max_issues_repo_name": "nate-stein/cs207_nate_stein", "max_issues_repo_head_hexsha": "f8ce68f9d839a0bd0ab4a2e1ebaa7ae985f6b5d0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "homeworks/HW5/HW5-final.ipynb", "max_forks_repo_name": "nate-stein/cs207_nate_stein", "max_forks_repo_head_hexsha": "f8ce68f9d839a0bd0ab4a2e1ebaa7ae985f6b5d0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, 
"max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.8082010582, "max_line_length": 424, "alphanum_fraction": 0.5369625302, "converted": true, "num_tokens": 5668, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.25982562649804053, "lm_q2_score": 0.41489884579676883, "lm_q1q2_score": 0.10780135254245937}} {"text": "# Introduction\n\n**Prerequisites**\n\n- [Python Fundamentals](https://datascience.quantecon.org/../python_fundamentals/index.html) \n\n\n**Outcomes**\n\n- Understand the core pandas objects (`Series` and `DataFrame`) \n- Index into particular elements of a Series and DataFrame \n- Understand what `.dtype`/`.dtypes` do \n- Make basic visualizations \n\n\n**Data**\n\n- US regional unemployment data from Bureau of Labor Statistics \n\n## Outline\n\n- [Introduction](#Introduction) \n - [pandas](#pandas) \n - [Series](#Series) \n - [DataFrame](#DataFrame) \n - [Data Types](#Data-Types) \n - [Changing DataFrames](#Changing-DataFrames) \n - [Exercises](#Exercises) \n\n\n```python\n# Uncomment following line to install on colab\n! 
pip install qeds \n#qeds = quantecon data science\n```\n\n Requirement already satisfied: qeds in c:\\users\\asus\\anaconda3\\lib\\site-packages (0.6.2)\n Requirement already satisfied: quandl in c:\\users\\asus\\anaconda3\\lib\\site-packages (from qeds) (3.5.0)\n Requirement already satisfied: plotly in c:\\users\\asus\\anaconda3\\lib\\site-packages (from qeds) (4.5.4)\n Requirement already satisfied: pandas-datareader in c:\\users\\asus\\anaconda3\\lib\\site-packages (from qeds) (0.8.1)\n Requirement already satisfied: numpy in c:\\users\\asus\\anaconda3\\lib\\site-packages (from qeds) (1.16.5)\n Requirement already satisfied: requests in c:\\users\\asus\\anaconda3\\lib\\site-packages (from qeds) (2.22.0)\n Requirement already satisfied: scipy in c:\\users\\asus\\anaconda3\\lib\\site-packages (from qeds) (1.3.1)\n Requirement already satisfied: pandas in c:\\users\\asus\\anaconda3\\lib\\site-packages (from qeds) (0.25.1)\n Requirement already satisfied: matplotlib in c:\\users\\asus\\anaconda3\\lib\\site-packages (from qeds) (3.1.1)\n Requirement already satisfied: quantecon in c:\\users\\asus\\anaconda3\\lib\\site-packages (from qeds) (0.4.6)\n Requirement already satisfied: openpyxl in c:\\users\\asus\\anaconda3\\lib\\site-packages (from qeds) (3.0.0)\n Requirement already satisfied: scikit-learn in c:\\users\\asus\\anaconda3\\lib\\site-packages (from qeds) (0.21.3)\n Requirement already satisfied: seaborn in c:\\users\\asus\\anaconda3\\lib\\site-packages (from qeds) (0.9.0)\n Requirement already satisfied: statsmodels in c:\\users\\asus\\anaconda3\\lib\\site-packages (from qeds) (0.10.1)\n Requirement already satisfied: pyarrow in c:\\users\\asus\\anaconda3\\lib\\site-packages (from qeds) (0.16.0)\n Requirement already satisfied: inflection>=0.3.1 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from quandl->qeds) (0.3.1)\n Requirement already satisfied: six in c:\\users\\asus\\anaconda3\\lib\\site-packages (from quandl->qeds) (1.12.0)\n Requirement already 
satisfied: more-itertools in c:\\users\\asus\\anaconda3\\lib\\site-packages (from quandl->qeds) (7.2.0)\n Requirement already satisfied: python-dateutil in c:\\users\\asus\\anaconda3\\lib\\site-packages (from quandl->qeds) (2.8.0)\n Requirement already satisfied: retrying>=1.3.3 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from plotly->qeds) (1.3.3)\n Requirement already satisfied: lxml in c:\\users\\asus\\anaconda3\\lib\\site-packages (from pandas-datareader->qeds) (4.4.1)\n Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from requests->qeds) (1.24.2)\n Requirement already satisfied: idna<2.9,>=2.5 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from requests->qeds) (2.8)\n Requirement already satisfied: certifi>=2017.4.17 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from requests->qeds) (2019.9.11)\n Requirement already satisfied: chardet<3.1.0,>=3.0.2 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from requests->qeds) (3.0.4)\n Requirement already satisfied: pytz>=2017.2 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from pandas->qeds) (2019.3)\n Requirement already satisfied: cycler>=0.10 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from matplotlib->qeds) (0.10.0)\n Requirement already satisfied: kiwisolver>=1.0.1 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from matplotlib->qeds) (1.1.0)\n Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from matplotlib->qeds) (2.4.2)\n Requirement already satisfied: sympy in c:\\users\\asus\\anaconda3\\lib\\site-packages (from quantecon->qeds) (1.4)\n Requirement already satisfied: numba>=0.38 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from quantecon->qeds) (0.45.1)\n Requirement already satisfied: jdcal in c:\\users\\asus\\anaconda3\\lib\\site-packages (from openpyxl->qeds) (1.4.1)\n Requirement already satisfied: et-xmlfile in 
c:\\users\\asus\\anaconda3\\lib\\site-packages (from openpyxl->qeds) (1.0.1)\n Requirement already satisfied: joblib>=0.11 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from scikit-learn->qeds) (0.13.2)\n Requirement already satisfied: patsy>=0.4.0 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from statsmodels->qeds) (0.5.1)\n Requirement already satisfied: setuptools in c:\\users\\asus\\anaconda3\\lib\\site-packages (from kiwisolver>=1.0.1->matplotlib->qeds) (41.4.0)\n Requirement already satisfied: mpmath>=0.19 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from sympy->quantecon->qeds) (1.1.0)\n Requirement already satisfied: llvmlite>=0.29.0dev0 in c:\\users\\asus\\anaconda3\\lib\\site-packages (from numba>=0.38->quantecon->qeds) (0.29.0)\n\n\n## pandas\n\nThis lecture begins the material on `pandas`.\n\nTo start, we will import the pandas package and give it the alias\n`pd`, which is conventional practice.\n\n\n```python\nimport pandas as pd\n\n# Don't worry about this line for now!\n%matplotlib inline\n# activate plot theme\nimport qeds\nqeds.themes.mpl_style();\n```\n\nSometimes, knowing which pandas version we are\nusing is helpful.\n\nWe can check this by running the code below.\n\n\n```python\npd.__version__\n```\n\n\n\n\n '0.25.1'\n\n\n\n## Series\n\nThe first main pandas type we will introduce is called Series.\n\nA Series is a single column of data, with row labels for each\nobservation.\n\npandas refers to the row labels as the *index* of the Series.\n\n \nBelow, we create a Series which contains the US unemployment rate every\nother year starting in 1995.\n\n\n```python\nvalues = [5.6, 5.3, 4.3, 4.2, 5.8, 5.3, 4.6, 7.8, 9.1, 8., 5.7]\nyears = list(range(1995, 2017, 2))\n\nunemp = pd.Series(data=values, index=years, name=\"Unemployment\")\n```\n\n\n```python\nunemp\n```\n\n\n\n\n 1995 5.6\n 1997 5.3\n 1999 4.3\n 2001 4.2\n 2003 5.8\n 2005 5.3\n 2007 4.6\n 2009 7.8\n 2011 9.1\n 2013 8.0\n 2015 5.7\n Name: Unemployment, dtype: 
float64\n\n\n\nWe can look at the index and values in our Series.\n\n\n```python\nunemp.index\n```\n\n\n\n\n Int64Index([1995, 1997, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, 2015], dtype='int64')\n\n\n\n\n```python\nunemp.values\n```\n\n\n\n\n array([5.6, 5.3, 4.3, 4.2, 5.8, 5.3, 4.6, 7.8, 9.1, 8. , 5.7])\n\n\n\n### What Can We Do with a Series object?\n\n#### `.head` and `.tail`\n\nOften, our data will have many rows, and we won\u2019t want to display it all\nat once.\n\nThe methods `.head` and `.tail` show rows at the beginning and end\nof our Series, respectively.\n\n\n```python\nunemp.head() #default 5, but we can put in brackets other number\n```\n\n\n\n\n 1995 5.6\n 1997 5.3\n 1999 4.3\n 2001 4.2\n 2003 5.8\n Name: Unemployment, dtype: float64\n\n\n\n\n```python\nunemp.tail() #default 5, but we can put in brackets other number\n```\n\n\n\n\n 2007 4.6\n 2009 7.8\n 2011 9.1\n 2013 8.0\n 2015 5.7\n Name: Unemployment, dtype: float64\n\n\n\n#### Basic Plotting\n\nWe can also plot data using the `.plot` method.\n\n\n```python\nunemp.plot() #check if there's a way to do the graph for a period of time, etc.\n```\n\n>**Note**\n>\n>This is why we needed the `%matplotlib inline` \u2014 it tells the notebook\nto display figures inside the notebook itself. Also, pandas has much greater visualization functionality than this, but we will study that later on.\n\n#### Unique Values\n\nThough it doesn\u2019t make sense in this data set, we may want to find the\nunique values in a Series \u2013 which can be done with the `.unique` method.\n\n\n```python\nunemp.unique()\n```\n\n\n\n\n array([5.6, 5.3, 4.3, 4.2, 5.8, 4.6, 7.8, 9.1, 8. 
, 5.7])\n\n\n\n#### Indexing\n\nSometimes, we will want to select particular elements from a Series.\n\nWe can do this using `.loc[index_items]`; where `index_items` is\nan item from the index, or a list of items in the index.\n\nWe will see this more in-depth in a coming lecture, but for now, we\ndemonstrate how to select one or multiple elements of the Series.\n\n\n```python\nunemp.loc[1995]\n```\n\n\n\n\n 5.6\n\n\n\n\n```python\nunemp.loc[[1995, 2005, 2015]]\n```\n\n\n\n\n 1995 5.6\n 2005 5.3\n 2015 5.7\n Name: Unemployment, dtype: float64\n\n\n\n## DataFrame\n\nA DataFrame is how pandas stores one or more columns of data.\n\nWe can think a DataFrames a multiple Series stacked side by side as\ncolumns.\n\nThis is similar to a sheet in an Excel workbook or a table in a SQL\ndatabase.\n\nIn addition to row labels (an index), DataFrames also have column labels.\n\nWe refer to these column labels as the columns or column names.\n\n \nBelow, we create a DataFrame that contains the unemployment rate every\nother year by region of the US starting in 1995.\n\n\n```python\ndata = {\n \"NorthEast\": [5.9, 5.6, 4.4, 3.8, 5.8, 4.9, 4.3, 7.1, 8.3, 7.9, 5.7],\n \"MidWest\": [4.5, 4.3, 3.6, 4. , 5.7, 5.7, 4.9, 8.1, 8.7, 7.4, 5.1],\n \"South\": [5.3, 5.2, 4.2, 4. , 5.7, 5.2, 4.3, 7.6, 9.1, 7.4, 5.5],\n \"West\": [6.6, 6., 5.2, 4.6, 6.5, 5.5, 4.5, 8.6, 10.7, 8.5, 6.1],\n \"National\": [5.6, 5.3, 4.3, 4.2, 5.8, 5.3, 4.6, 7.8, 9.1, 8., 5.7]\n}\n\nunemp_region = pd.DataFrame(data, index=years)\nunemp_region\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
NorthEastMidWestSouthWestNational
19955.94.55.36.65.6
19975.64.35.26.05.3
19994.43.64.25.24.3
20013.84.04.04.64.2
20035.85.75.76.55.8
20054.95.75.25.55.3
20074.34.94.34.54.6
20097.18.17.68.67.8
20118.38.79.110.79.1
20137.97.47.48.58.0
20155.75.15.56.15.7
\n
\n\n\n\nWe can retrieve the index and the DataFrame values as we\ndid with a Series.\n\n\n```python\nunemp_region.index\n```\n\n\n\n\n Int64Index([1995, 1997, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, 2015], dtype='int64')\n\n\n\n\n```python\nunemp_region.values\n```\n\n\n\n\n array([[ 5.9, 4.5, 5.3, 6.6, 5.6],\n [ 5.6, 4.3, 5.2, 6. , 5.3],\n [ 4.4, 3.6, 4.2, 5.2, 4.3],\n [ 3.8, 4. , 4. , 4.6, 4.2],\n [ 5.8, 5.7, 5.7, 6.5, 5.8],\n [ 4.9, 5.7, 5.2, 5.5, 5.3],\n [ 4.3, 4.9, 4.3, 4.5, 4.6],\n [ 7.1, 8.1, 7.6, 8.6, 7.8],\n [ 8.3, 8.7, 9.1, 10.7, 9.1],\n [ 7.9, 7.4, 7.4, 8.5, 8. ],\n [ 5.7, 5.1, 5.5, 6.1, 5.7]])\n\n\n\n### What Can We Do with a DataFrame?\n\nPretty much everything we can do with a Series.\n\n#### `.head` and `.tail`\n\nAs with Series, we can use `.head` and `.tail` to show only the\nfirst or last `n` rows.\n\n\n```python\nunemp_region.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
NorthEastMidWestSouthWestNational
19955.94.55.36.65.6
19975.64.35.26.05.3
19994.43.64.25.24.3
20013.84.04.04.64.2
20035.85.75.76.55.8
\n
\n\n\n\n\n```python\nunemp_region.tail(3)\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
NorthEastMidWestSouthWestNational
20118.38.79.110.79.1
20137.97.47.48.58.0
20155.75.15.56.15.7
\n
\n\n\n\n#### Plotting\n\nWe can generate plots with the `.plot` method.\n\nNotice we now have a separate line for each column of data.\n\n\n```python\nunemp_region.plot()\n```\n\n#### Indexing\n\nWe can also do indexing using `.loc`.\n\nThis is slightly more advanced than before because we can choose\nsubsets of both row and columns.\n\n\n```python\nunemp_region.loc[1995, \"NorthEast\"]\n```\n\n\n\n\n 5.9\n\n\n\n\n```python\nunemp_region.loc[[1995, 2005], \"South\"]\n```\n\n\n\n\n 1995 5.3\n 2005 5.2\n Name: South, dtype: float64\n\n\n\n\n```python\nunemp_region.loc[1995, [\"NorthEast\", \"National\"]]\n```\n\n\n\n\n NorthEast 5.9\n National 5.6\n Name: 1995, dtype: float64\n\n\n\n\n```python\nunemp_region.loc[:, \"NorthEast\"]\n```\n\n\n\n\n 1995 5.9\n 1997 5.6\n 1999 4.4\n 2001 3.8\n 2003 5.8\n 2005 4.9\n 2007 4.3\n 2009 7.1\n 2011 8.3\n 2013 7.9\n 2015 5.7\n Name: NorthEast, dtype: float64\n\n\n\n\n```python\n# `[string]` with no `.loc` extracts a whole column\nunemp_region[\"MidWest\"]\n```\n\n\n\n\n 1995 4.5\n 1997 4.3\n 1999 3.6\n 2001 4.0\n 2003 5.7\n 2005 5.7\n 2007 4.9\n 2009 8.1\n 2011 8.7\n 2013 7.4\n 2015 5.1\n Name: MidWest, dtype: float64\n\n\n\n### Computations with Columns\n\npandas can do various computations and mathematical operations on\ncolumns.\n\nLet\u2019s take a look at a few of them.\n\n\n```python\n# Divide by 100 to move from percent units to a rate\nunemp_region[\"West\"] / 100\n```\n\n\n\n\n 1995 0.066\n 1997 0.060\n 1999 0.052\n 2001 0.046\n 2003 0.065\n 2005 0.055\n 2007 0.045\n 2009 0.086\n 2011 0.107\n 2013 0.085\n 2015 0.061\n Name: West, dtype: float64\n\n\n\n\n```python\n# Find maximum\nunemp_region[\"West\"].max()\n```\n\n\n\n\n 10.7\n\n\n\n\n```python\n# Find the difference between two columns\n# Notice that pandas applies `-` to _all rows_ at once\n# We'll see more of this throughout these materials\nunemp_region[\"West\"] - unemp_region[\"MidWest\"]\n```\n\n\n\n\n 1995 2.1\n 1997 1.7\n 1999 1.6\n 2001 0.6\n 2003 0.8\n 2005 
-0.2\n 2007 -0.4\n 2009 0.5\n 2011 2.0\n 2013 1.1\n 2015 1.0\n dtype: float64\n\n\n\n\n```python\n# Find correlation between two columns\nunemp_region.West.corr(unemp_region[\"MidWest\"])\n```\n\n\n\n\n 0.9006381255384481\n\n\n\n\n```python\n# find correlation between all column pairs\nunemp_region.corr()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
NorthEastMidWestSouthWestNational
NorthEast1.0000000.8756540.9644150.9678750.976016
MidWest0.8756541.0000000.9513790.9006380.952389
South0.9644150.9513791.0000000.9872590.995030
West0.9678750.9006380.9872591.0000000.981308
National0.9760160.9523890.9950300.9813081.000000
\n
\n\n\n\n## Data Types\n\nWe asked you to run the commands `unemp.dtype` and\n`unemp_region.dtypes` and think about the outputs.\n\nYou might have guessed that they return the type of the values inside\neach column.\n\nOccasionally, you might need to investigate what types you have in your\nDataFrame when an operation isn\u2019t behaving as expected.\n\n\n```python\nunemp.dtype\n```\n\n\n\n\n dtype('float64')\n\n\n\n\n```python\nunemp_region.dtypes\n```\n\n\n\n\n NorthEast float64\n MidWest float64\n South float64\n West float64\n National float64\n dtype: object\n\n\n\nDataFrames will only distinguish between a few types.\n\n- Booleans (`bool`) \n- Floating point numbers (`float64`) \n- Integers (`int64`) \n- Dates (`datetime`) \u2014 we will learn this soon \n- Categorical data (`categorical`) \n- Everything else, including strings (`object`) \n\n\nIn the future, we will often refer to the type of data stored in a\ncolumn as its `dtype`.\n\nLet\u2019s look at an example for when having an incorrect `dtype` can\ncause problems.\n\nSuppose that when we imported the data the `South` column was\ninterpreted as a string.\n\n\n```python\nstr_unemp = unemp_region.copy()\nstr_unemp[\"South\"] = str_unemp[\"South\"].astype(str)\nstr_unemp.dtypes\n```\n\n\n\n\n NorthEast float64\n MidWest float64\n South object\n West float64\n National float64\n dtype: object\n\n\n\nEverything *looks* ok\u2026\n\n\n```python\nstr_unemp.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
NorthEastMidWestSouthWestNational
19955.94.55.36.65.6
19975.64.35.26.05.3
19994.43.64.25.24.3
20013.84.04.04.64.2
20035.85.75.76.55.8
\n
\n\n\n\nBut if we try to do something like compute the sum of all the columns,\nwe get unexpected results\u2026\n\n\n```python\nstr_unemp.sum()\n```\n\n\n\n\n NorthEast 63.7\n MidWest 62\n South 5.35.24.24.05.75.24.37.69.17.45.5\n West 72.8\n National 65.7\n dtype: object\n\n\n\nThis happened because `.sum` effectively calls `+` on all rows in\neach column.\n\nRecall that when we apply `+` to two strings, the result is the two\nstrings concatenated.\n\nSo, in this case, we saw that the entries in all rows of the South\ncolumn were stitched together into one long string.\n\n## Changing DataFrames\n\nWe can change the data inside of a DataFrame in various ways:\n\n- Adding new columns \n- Changing index labels or column names \n- Altering existing data (e.g. doing some arithmetic or making a column\n of strings lowercase) \n\n\nSome of these \u201cmutations\u201d will be topics of future lectures, so we will\nonly briefly discuss a few of the things we can do below.\n\n### Creating New Columns\n\nWe can create new data by assigning values to a column similar to how\nwe assign values to a variable.\n\nIn pandas, we create a new column of a DataFrame by writing:\n\n```python\ndf[\"New Column Name\"] = new_values\n```\n\n\nBelow, we create an unweighted mean of the unemployment rate across the\nfour regions of the US \u2014 notice that this differs from the national\nunemployment rate.\n\n\n```python\nunemp_region[\"UnweightedMean\"] = (unemp_region[\"NorthEast\"] +\n unemp_region[\"MidWest\"] +\n unemp_region[\"South\"] +\n unemp_region[\"West\"])/4\n```\n\n\n```python\nunemp_region.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
NorthEastMidWestSouthWestNationalUnweightedMean
19955.94.55.36.65.65.575
19975.64.35.26.05.35.275
19994.43.64.25.24.34.350
20013.84.04.04.64.24.100
20035.85.75.76.55.85.925
\n
\n\n\n\n### Changing Values\n\nChanging the values inside of a DataFrame should be done sparingly.\n\nHowever, it can be done by assigning a value to a location in the\nDataFrame.\n\n`df.loc[index, column] = value`\n\n\n```python\nunemp_region.loc[1995, \"UnweightedMean\"] = 0.0\n```\n\n\n```python\nunemp_region.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
NorthEastMidWestSouthWestNationalUnweightedMean
19955.94.55.36.65.60.000
19975.64.35.26.05.35.275
19994.43.64.25.24.34.350
20013.84.04.04.64.24.100
20035.85.75.76.55.85.925
\n
\n\n\n\n### Renaming Columns\n\nWe can also rename the columns of a DataFrame, which is helpful because the names that sometimes come with datasets are\nunbearable\u2026\n\nFor example, the original name for the North East unemployment rate\ngiven by the Bureau of Labor Statistics was `LASRD910000000000003`\u2026\n\nThey have their reasons for using these names, but it can make our job\ndifficult since we often need to type it repeatedly.\n\nWe can rename columns by passing a dictionary to the `rename` method.\n\nThis dictionary contains the old names as the keys and new names as the\nvalues.\n\nSee the example below.\n\n\n```python\nnames = {\"NorthEast\": \"NE\",\n \"MidWest\": \"MW\",\n \"South\": \"S\",\n \"West\": \"W\"}\nunemp_region.rename(columns=names)\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
NEMWSWNationalUnweightedMean
19955.94.55.36.65.60.000
19975.64.35.26.05.35.275
19994.43.64.25.24.34.350
20013.84.04.04.64.24.100
20035.85.75.76.55.85.925
20054.95.75.25.55.35.325
20074.34.94.34.54.64.500
20097.18.17.68.67.87.850
20118.38.79.110.79.19.200
20137.97.47.48.58.07.800
20155.75.15.56.15.75.600
\n
\n\n\n\n\n```python\nunemp_region.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
NorthEastMidWestSouthWestNationalUnweightedMean
19955.94.55.36.65.60.000
19975.64.35.26.05.35.275
19994.43.64.25.24.34.350
20013.84.04.04.64.24.100
20035.85.75.76.55.85.925
\n
\n\n\n\nWe renamed our columns\u2026 Why does the DataFrame still show the old\ncolumn names?\n\nMany pandas operations create a copy of your data by\ndefault to protect your data and prevent you from overwriting\ninformation you meant to keep.\n\nWe can make these operations permanent by either:\n\n1. Assigning the output back to the variable name\n `df = df.rename(columns=rename_dict)` \n1. Looking into whether the method has an `inplace` option. For\n example, `df.rename(columns=rename_dict, inplace=True)` \n\n\nSetting `inplace=True` will sometimes make your code faster\n(e.g. if you have a very large DataFrame and you don\u2019t want to copy all\nthe data), but that doesn\u2019t always happen.\n\nWe recommend using the first option until you get comfortable with\npandas because operations that don\u2019t alter your data are (usually)\nsafer.\n\n\n```python\nnames = {\"NorthEast\": \"NE\",\n \"MidWest\": \"MW\",\n \"South\": \"S\",\n \"West\": \"W\"}\n\nunemp_shortname = unemp_region.rename(columns=names)\nunemp_shortname.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
NEMWSWNationalUnweightedMean
19955.94.55.36.65.60.000
19975.64.35.26.05.35.275
19994.43.64.25.24.34.350
20013.84.04.04.64.24.100
20035.85.75.76.55.85.925
\n
\n\n\n", "meta": {"hexsha": "43fe21922867b808a6253bd6a0e1ba25f5ccf3d7", "size": 130861, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Session_7/1_intro.ipynb", "max_stars_repo_name": "remi-sudo/Classes", "max_stars_repo_head_hexsha": "71497927ed4d54ddf6fd5abe2ddabb5966eb0304", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Session_7/1_intro.ipynb", "max_issues_repo_name": "remi-sudo/Classes", "max_issues_repo_head_hexsha": "71497927ed4d54ddf6fd5abe2ddabb5966eb0304", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Session_7/1_intro.ipynb", "max_forks_repo_name": "remi-sudo/Classes", "max_forks_repo_head_hexsha": "71497927ed4d54ddf6fd5abe2ddabb5966eb0304", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.8453478625, "max_line_length": 46260, "alphanum_fraction": 0.6897242112, "converted": true, "num_tokens": 11932, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.37754066879814546, "lm_q2_score": 0.28457599814899737, "lm_q1q2_score": 0.10743901266507228}} {"text": "```python\nfrom IPython.display import Image \nImage('../../../python_for_probability_statistics_and_machine_learning.jpg')\n```\n\n\n\n\n \n\n \n\n\n\n[Python for Probability, Statistics, and Machine Learning](https://www.springer.com/fr/book/9783319307152)\n\n\n```python\nfrom __future__ import division\n%pylab inline\n```\n\n Populating the interactive namespace from numpy and matplotlib\n\n\n# Monte Carlo Sampling Methods\n\nSo far, we have studied analytical ways to transform random variables and how\nto augment these methods using Python. In spite of all this, we frequently must\nresort to purely numerical methods to solve real-world problems. Hopefully,\nnow that we have seen the deeper theory, these numerical methods feel more\nconcrete. Suppose we want to generate samples of a given density, $f(x)$,\ngiven we already can generate samples from a uniform distribution,\n$\\mathcal{U}[0,1]$. How do we know a random sample $v$ comes from the $f(x)$\ndistribution? One approach is to look at how a histogram of samples of $v$\napproximates $f(x)$. Specifically,\n\n\n
\n\n$$\n\\begin{equation}\n\\mathbb{P}( v \\in N_{\\Delta}(x) ) = f(x) \\Delta x \n\\end{equation}\n\\label{eq:mc01} \\tag{1}\n$$\n\n\n\n
\n\n

The histogram approximates the target probability density.

\n\n\n\n\n\n which says that the probability that a sample is in some $N_\\Delta$\nneighborhood of $x$ is approximately $f(x)\\Delta x$. [Figure](#fig:Sampling_Monte_Carlo_000) shows the target probability density function\n$f(x)$ and a histogram that approximates it. The histogram is generated from\nsamples $v$. The hatched rectangle in the center illustrates Equation\nref{eq:mc01}. The area of this rectangle is approximately $f(x)\\Delta x$ where\n$x=0$, in this case. The width of the rectangle is $N_{\\Delta}(x)$ The quality\nof the approximation may be clear visually, but to know that $v$ samples are\ncharacterized by $f(x)$, we need the statement of Equation ref{eq:mc01}, which\nsays that the proportion of samples $v$ that fill the hatched rectangle is\napproximately equal to $f(x)\\Delta x$.\n\nNow that we know how to evaluate samples $v$ that are characterized by the density\n$f(x)$, let's consider how to create these samples for both discrete and\ncontinuous random variables.\n\n## Inverse CDF Method for Discrete Variables\n\nSuppose we want to generate samples from a fair six-sided die. Our workhouse\nuniform random variable is defined continuously over the unit interval and the\nfair six-sided die is discrete. We must first create a mapping between the\ncontinuous random variable $u$ and the discrete outcomes of the die. This\nmapping is shown in [Figure](#fig:Sampling_Monte_Carlo_0001) where the unit\ninterval is broken up into segments, each of length $1/6$. Each individual\nsegment is assigned to one of the die outcomes. For example, if $u \\in\n[1/6,2/6)$, then the outcome for the die is $2$. Because the die is fair, all\nsegments on the unit interval are the same length. Thus, our new random\nvariable $v$ is derived from $u$ by this assignment.\n\n\n\n
\n\n

A uniform distribution random variable on the unit interval is assigned to the six outcomes of a fair die using these segements.

\n\n\n\n\n\nFor example, for $v=2$, we have,\n\n$$\n\\mathbb{P}(v=2) = \\mathbb{P}(u\\in [1/6,2/6)) = 1/6\n$$\n\n where, in the language of the Equation ref{eq:mc01}, $f(x)=1$\n(uniform distribution), $\\Delta x = 1/6$, and $N_\\Delta (2)=[1/6,2/6)$.\nNaturally, this pattern holds for all the other die outcomes in\n$\\left\\{1,2,3,..,6\\right\\}$. Let's consider a quick simulation to make this\nconcrete. The following code generates uniform random samples and stacks them\nin a Pandas dataframe.\n\n\n```python\nimport pandas as pd\nimport numpy as np\nfrom pandas import DataFrame\nu= np.random.rand(100)\ndf = DataFrame(data=u,columns=['u'])\n```\n\n The next block uses `pd.cut` to map the individual samples to\nthe set $\\left\\{1,2,\\ldots,6\\right\\}$ labeled `v`.\n\n\n```python\nlabels = [1,2,3,4,5,6]\ndf['v']=pd.cut(df.u,np.linspace(0,1,7),\n include_lowest=True,labels=labels)\n```\n\n This is what the dataframe contains. The `v` column contains\nthe samples drawn from the fair die.\n\n\n```python\n>>> df.head()\n \n```\n\n\n\n\n
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
uv
00.6053684
10.5259414
20.1419851
30.7295105
40.2405052
\n
\n\n\n\n The following is a count of the number of samples in each group. There\nshould be roughly the same number of samples in each group because the die is fair.\n\n\n```python\n>>> df.groupby('v').count()\n \n```\n\n\n\n\n
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
u
v
119
215
315
416
523
612
\n
\n\n\n\n So far, so good. We now have a way to simulate a fair\ndie from a uniformly distributed random variable.\n\nTo extend this to unfair die, we need only make some small adjustments to this\ncode. For example, suppose that we want an unfair die so that\n$\\mathbb{P}(1)=\\mathbb{P}(2)=\\mathbb{P}(3)=1/12$ and\n$\\mathbb{P}(4)=\\mathbb{P}(5)=\\mathbb{P}(6)=1/4$. The only change we have to\nmake is with `pd.cut` as follows,\n\n\n```python\ndf['v']=pd.cut(df.u,[0,1/12,2/12,3/12,2/4,3/4,1],\n include_lowest=True,labels=labels)\n```\n\n\n```python\n>>> df.groupby('v').count()/df.shape[0]\n \n```\n\n\n\n\n
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
u
v
10.05
20.14
30.07
40.23
50.32
60.19
\n
\n\n\nu\nv \n1 0.08\n2 0.10\n3 0.09\n4 0.23\n5 0.19\n6 0.31\n where now these are the individual probabilities of each digit. You\ncan take more than `100` samples to get a clearer view of the individual\nprobabilities but the mechanism for generating them is the same. The method is\ncalled the inverse CDF [^CDF] method because the CDF\n(namely,$\\texttt{[0,1/12,2/12,3/12,2/4,3/4,1]}$) in the last example has been\ninverted (using the `pd.cut` method) to generate the samples. \nThe inversion is easier to see for continuous variables, which we consider\nnext.\n\n[^CDF]: Cumulative density function. Namely, $F(x)=\\mathbb{P}(X < x)$.\n\n\n## Inverse CDF Method for Continuous Variables\n\nThe method above applies to continuous random variables, but now we have to use\nsqueeze the intervals down to individual points. In the example above, our\ninverse function was a piecewise function that operated on uniform random\nsamples. In this case, the piecewise function collapses to a continuous inverse\nfunction. We want to generate random samples for a CDF that is invertible.\nAs before, the criterion for generating an appropriate sample $v$ is the\nfollowing,\n\n$$\n\\mathbb{P}(F(x) < v < F(x+\\Delta x)) = F(x+\\Delta x) - F(x) = \\int_x^{x+\\Delta x} f(u) du \\approx f(x) \\Delta x\n$$\n\n which is saying that the probability that the sample $v$ is contained\nin a $\\Delta x$ interval is approximately equal to the density function, $f(x)\n\\Delta x$, at that point. Once again, the trick is to use a uniform random\nsample $u$ and an invertible CDF $F(x)$ to construct these samples. 
Note\nthat for a uniform random variable $u \\sim \\mathcal{U}[0,1]$, we have,\n\n$$\n\\begin{align*}\n\\mathbb{P}(x < F^{-1}(u) < x+\\Delta x) & = \\mathbb{P}(F(x) < u < F(x+\\Delta x)) \\\\\\\n & = F(x+\\Delta x) - F(x) \\\\\\\n & = \\int_x^{x+\\Delta x} f(p) dp \\approx f(x) \\Delta x\n\\end{align*}\n$$\n\n This means that $ v=F^{-1}(u) $ is distributed according to $f(x)$,\nwhich is what we want. \n\nLet's try this to generate samples from the\nexponential distribution,\n\n$$\nf_{\\alpha}(x) = \\alpha e^{ -\\alpha x }\n$$\n\n which has the following CDF,\n\n$$\nF(x) = 1-e^{ -\\alpha x }\n$$\n\n and corresponding inverse,\n\n$$\nF^{-1}(u) = \\frac{1}{\\alpha}\\ln \\frac{1}{(1-u)}\n$$\n\n Now, all we have to do is generate some uniformly distributed\nrandom samples and then feed them into $F^{-1}$.\n\n\n```python\nfrom numpy import array, log\nimport scipy.stats\nalpha = 1. # distribution parameter\nnsamp = 1000 # num of samples\n# define uniform random variable\nu=scipy.stats.uniform(0,1)\n# define inverse function\nFinv=lambda u: 1/alpha*log(1/(1-u))\n# apply inverse function to samples\nv = array(map(Finv,u.rvs(nsamp)))\n```\n\n Now, we have the samples from the exponential distribution, but how\ndo we know the method is correct with samples distributed accordingly?\nFortunately, `scipy.stats` already has a exponential distribution, so we can\ncheck our work against the reference using a *probability plot* (i.e., also\nknown as a *quantile-quantile* plot). 
The following code sets up the\nprobability plot from `scipy.stats`.\n\n\n```python\n%matplotlib inline\n\nfrom matplotlib.pylab import setp, subplots\nfig,ax = subplots()\nfig.set_size_inches((7,5))\n_=scipy.stats.probplot(v,(1,),dist='expon',plot=ax)\nline=ax.get_lines()[0]\n_=setp(line,'color','k')\n_=setp(line,'alpha',.1)\nline=ax.get_lines()[1]\n_=setp(line,'color','gray')\n_=setp(line,'lw',3.0)\n_=setp(ax.yaxis.get_label(),'fontsize',18)\n_=setp(ax.xaxis.get_label(),'fontsize',18)\n_=ax.set_title('Probability Plot',fontsize=18)\n_=ax.grid()\nfig.tight_layout()\n#fig.savefig('fig-probability/Sampling_Monte_Carlo_005.png')\n```\n\n\n```python\nfig,ax=subplots()\nscipy.stats.probplot(v,(1,),dist='expon',plot=ax)\n```\n\n Note that we have to supply an axes object (`ax`) for it to draw on.\nThe result is [Figure](#fig:Sampling_Monte_Carlo_005). The more closely the\nsamples track the diagonal line, the better they match the reference distribution\n(i.e., the exponential distribution in this case). You may also want to try\n`dist='norm'` in the code above to see what happens when the normal distribution\nis the reference distribution.\n\n\n\n
\n\n

The samples created using the inverse CDF method match the exponential reference distribution.

\n\n\n\n\n\n## Rejection Method\n\nIn some cases, inverting the CDF may be impossible. The *rejection*\nmethod can handle this situation. The idea is to pick two uniform random\nvariables, $u_1 \sim \mathcal{U}[a,b]$ and $u_2 \sim \mathcal{U}[0,1]$, so that\n\n$$\n\mathbb{P}\left(u_1 \in N_{\Delta}(x) \bigwedge u_2 < \frac{f(u_1)}{M} \right) \hspace{0.5em} \approx \frac{\Delta x}{b-a} \frac{f(u_1)}{M}\n$$\n\n where we take $x=u_1$ and $f(x) < M $. This is a two-step process.\nFirst, draw $u_1$ uniformly from the interval $[a,b]$. Second, feed it into\n$f(x)$ and if $u_2 < f(u_1)/M$, then you have a valid sample for $f(x)$. Thus,\n$u_1$ is the proposed sample from $f$ that may or may not be rejected depending\non $u_2$. The only job of the $M$ constant is to scale down $f(x)$ so that\nthe $u_2$ variable can span its range. The *efficiency* of this method is the\nprobability of accepting $u_1$, which comes from integrating out the above\napproximation,\n\n$$\n\int \frac{f(x)}{M(b-a)} dx = \frac{1}{M(b-a)} \int f(x)dx =\frac{1}{M(b-a)}\n$$\n\n This means that we don't want an unnecessarily large $M$ because that\nmakes it more likely that samples will be discarded. \n\nLet's try this method for a density that does not have a continuous inverse [^normalization]. \n\n[^normalization]: Note that this example density does not *exactly* integrate\nout to one like a probability density function should, but the normalization\nconstant is a distraction for our purposes here.\n\n$$\nf(x) = \exp\left(-\frac{(x-1)^2}{2x} \right) (x+1)/12\n$$\n\n where $x>0$. 
The following code implements the rejection plan.\n\n\n```python\nimport numpy as np\nx = np.linspace(0.001,15,100)\nf = lambda x: np.exp(-(x-1)**2/2./x)*(x+1)/12.\nfx = f(x)\nM = 0.3 # scale factor so that f(x)/M <= 1\nu1 = np.random.rand(10000)*15 # uniform random samples scaled to [0,15]\nu2 = np.random.rand(10000) # uniform random samples on [0,1]\nidx, = np.where(u2<=f(u1)/M) # rejection criterion\nv = u1[idx] # accepted samples\n```\n\n\n```python\nfig,ax=subplots()\nfig.set_size_inches((9,5))\n_=ax.hist(v,density=True,bins=40,alpha=.3,color='gray')\n_=ax.plot(x,fx,'k',lw=3.,label='$f(x)$')\n_=ax.set_title('Estimated Efficiency=%3.1f%%'%(100*len(v)/len(u1)),\n fontsize=18)\n_=ax.legend(fontsize=18)\n_=ax.set_xlabel('$x$',fontsize=24)\nfig.tight_layout()\n#fig.savefig('fig-probability/Sampling_Monte_Carlo_007.png')\n```\n\n [Figure](#fig:Sampling_Monte_Carlo_007) shows a histogram of the\nsamples generated this way, which nicely fits the probability density function.\nThe title in the figure shows the efficiency, which is poor. It means that we\nthrew away most of the proposed samples. Thus, even though there is nothing\nconceptually wrong with this result, the low efficiency must be fixed, as a\npractical matter. [Figure](#fig:Sampling_Monte_Carlo_008) shows where the\nproposed samples were rejected. Samples under the curve were retained (i.e.,\n$u_2 < \frac{f(u_1)}{M}$) but the vast majority of the samples are outside this\numbrella.\n\n\n\n
\n\n

The rejection method generates samples whose histogram nicely matches the target distribution. Unfortunately, the efficiency is not so good.

\n\n\n\n\n\n```python\nfig,ax=subplots()\nfig.set_size_inches((9,5))\n_=ax.plot(u1,u2,'+',label='rejected',alpha=.3,color='gray')\n_=ax.plot(u1[idx],u2[idx],'.',label='accepted',alpha=.3,color='k')\n_=ax.legend(fontsize=22)\nfig.tight_layout()\n#fig.savefig('fig-probability/Sampling_Monte_Carlo_008.png')\n```\n\n\n\n
\n\n

The proposed samples under the curve were accepted and the others were not. This shows that the majority of samples were rejected.

\n\n\n\n\n\nThe rejection method uses $u_1$ to select along the domain of $f(x)$ and the\nother $u_2$ uniform random variable decides whether to accept or not. One idea\nwould be to choose $u_1$ so that $x$ values are coincidentally those that are\nnear the peak of $f(x)$, instead of uniformly anywhere in the domain,\nespecially near the tails, which are low probability anyway. Now, the trick is\nto find a new density function $g(x)$ to sample from that has a similar\nconcentration of probability density. One way is to familiarize oneself with\nthe probability density functions that have adjustable parameters and fast random\nsample generators already. There are lots of places to look and, chances are,\nthere is already such a generator for your problem. Otherwise, the\nfamily of $\beta$ densities is a good place to start. \n\nTo be explicit, what we want is $u_1 \sim g(x)$ so that, returning to our\nearlier argument,\n\n$$\n\mathbb{P}\left( u_1 \in N_{\Delta}(x) \bigwedge u_2 < \frac{f(u_1)}{M} \right) \approx g(x) \Delta x \frac{f(u_1)}{M}\n$$\n\n but this is *not* what we need here. The problem is with the\nsecond part of the logical $\bigwedge$ conjunction. We need to put\nsomething there that will give us something proportional to $f(x)$.\nLet us define the following,\n\n\n
\n\n$$\n\begin{equation}\n h(x) = \frac{f(x)}{g(x)} \n\label{eq:rej01} \tag{2}\n\end{equation}\n$$\n\n with corresponding maximum on the domain as $h_{\max}$ and\nthen go back and construct the second part of the clause as\n\n$$\n\mathbb{P}\left(u_1 \in N_{\Delta}(x) \bigwedge u_2 < \frac{h(u_1)}{h_{\max}} \right) \approx g(x) \Delta x \frac{h(u_1)}{h_{\max}} = \frac{f(x)}{h_{\max}} \Delta x\n$$\n\n Recall that satisfying this criterion means that $u_1=x$. As before,\nwe can estimate the probability of acceptance of the $u_1$ as $1/h_{\max}$.\n\nNow, how to construct the $g(x)$ function in the denominator of Equation\n\ref{eq:rej01}? Here's where familiarity with some standard probability densities\npays off. For this case, we choose the chi-squared distribution. The following\nplots the $g(x)$ and $f(x)$ (left plot) and the corresponding $h(x)=f(x)/g(x)$\n(right plot). Note that $g(x)$ and $f(x)$ have peaks that almost coincide,\nwhich is what we are looking for.\n\n\n```python\nch = scipy.stats.chi2(4) # chi-squared distribution with 4 degrees of freedom\nh = lambda x: f(x)/ch.pdf(x) # h-function\n```\n\n\n```python\nfig,axs=subplots(1,2,sharex=True)\nfig.set_size_inches(12,4)\n_=axs[0].plot(x,fx,label='$f(x)$',color='k')\n_=axs[0].plot(x,ch.pdf(x),'--',lw=2,label='$g(x)$',color='gray')\n_=axs[0].legend(loc=0,fontsize=24)\n_=axs[0].set_xlabel(r'$x$',fontsize=22)\n_=axs[1].plot(x,h(x),'-k',lw=3)\n_=axs[1].set_title('$h(x)=f(x)/g(x)$',fontsize=24)\n_=axs[1].set_xlabel(r'$x$',fontsize=22)\nfig.tight_layout()\n#fig.savefig('fig-probability/Sampling_Monte_Carlo_009.png')\n```\n\n\n\n
\n\n

The plot on the right shows $h(x)=f(x)/g(x)$ and the one on the left shows $f(x)$ and $g(x)$ separately.

\n\n\n\n\n\n Now, let's generate some samples from this $\chi^2$\ndistribution with the rejection method.\n\n\n```python\nhmax=h(x).max()\nu1 = ch.rvs(5000) # samples from chi-squared distribution\nu2 = np.random.rand(5000) # uniform random samples\nidx = (u2 <= h(u1)/hmax) # rejection criterion\nv = u1[idx] # keep these only\n```\n\n\n```python\nfig,ax=subplots()\nfig.set_size_inches((7,3))\n_=ax.hist(v,density=True,bins=40,alpha=.3,color='gray')\n_=ax.plot(x,fx,color='k',lw=3.,label='$f(x)$')\n_=ax.set_title('Estimated Efficiency=%3.1f%%'%(100*len(v)/len(u1)))\n_=ax.axis(xmax=15)\n_=ax.legend(fontsize=18)\n#fig.savefig('fig-probability/Sampling_Monte_Carlo_010.png')\n```\n\n\n\n
\n\n

Using the updated method, the histogram matches the target probability density function with high efficiency.

\n\n\n\n\n\nUsing the $\\chi^2$ distribution with the rejection method results in throwing\naway less than 10% of the generated samples compared with our prior example\nwhere we threw out at least 80%. This is dramatically more\nefficient. [Figure](#fig:Sampling_Monte_Carlo_010) shows that the histogram\nand the probability density function match. For completeness, [Figure](#fig:Sampling_Monte_Carlo_011) shows the samples with the corresponding\nthreshold $h(x)/h_{\\max}$ that was used to select them.\n\n\n```python\nfig,ax=subplots()\nfig.set_size_inches((7,4))\n_=ax.plot(u1,u2,'+',label='rejected',alpha=.3,color='gray')\n_=ax.plot(u1[idx],u2[idx],'g.',label='accepted',alpha=.3,color='k')\n_=ax.plot(x,h(x)/hmax,color='k',lw=3.,label='$h(x)$')\n_=ax.legend(fontsize=16,loc=0) \n_=ax.set_xlabel('$x$',fontsize=24)\n_=ax.set_xlabel('$h(x)$',fontsize=24)\n_=ax.axis(xmax=15,ymax=1.1)\nfig.tight_layout()\n#fig.savefig('fig-probability/Sampling_Monte_Carlo_011.png')\n```\n\n\n\n
\n\n

Fewer proposed points were rejected in this case, which means better efficiency.

\n\n\n\n\n\nIn this section, we investigated how to generate random samples from a given\ndistribution, be it discrete or continuous. For the continuous case, the key\nissue was whether or not the cumulative distribution function had a continuous\ninverse. If not, we had to turn to the rejection method, and find an\nappropriate related density that we could easily sample from to use as part of\na rejection threshold. Finding such a function is an art, but many families of\nprobability densities have been studied over the years that already have fast\nrandom number generators.\n\nThe rejection method has many complicated extensions that involve careful\npartitioning of the domains and lots of special methods for corner cases.\nNonetheless, all of these advanced techniques are still variations on the same\nfundamental theme we illustrated here [[dunn2011exploring]](#dunn2011exploring),[[johnson1995continuous]](#johnson1995continuous).\n", "meta": {"hexsha": "9f6bd05b8f84b5934cd9261de379476b97c2b4b5", "size": 706141, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chapters/probability/notebooks/Sampling_Monte_Carlo.ipynb", "max_stars_repo_name": "nsydn/Python-for-Probability-Statistics-and-Machine-Learning", "max_stars_repo_head_hexsha": "d3e0f8ea475525a694a975dbfd2bf80bc2967cc6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 570, "max_stars_repo_stars_event_min_datetime": "2016-05-05T19:08:27.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T05:09:19.000Z", "max_issues_repo_path": "chapters/probability/notebooks/Sampling_Monte_Carlo.ipynb", "max_issues_repo_name": "crlsmcl/https-github.com-unpingco-Python-for-Probability-Statistics-and-Machine-Learning", "max_issues_repo_head_hexsha": "6fd69459a28c0b76b37fad79b7e8e430d09a86a5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-05-12T22:18:58.000Z", "max_issues_repo_issues_event_max_datetime": "2019-11-06T14:37:06.000Z", 
"max_forks_repo_path": "chapters/probability/notebooks/Sampling_Monte_Carlo.ipynb", "max_forks_repo_name": "crlsmcl/https-github.com-unpingco-Python-for-Probability-Statistics-and-Machine-Learning", "max_forks_repo_head_hexsha": "6fd69459a28c0b76b37fad79b7e8e430d09a86a5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 276, "max_forks_repo_forks_event_min_datetime": "2016-05-27T01:42:05.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-27T11:20:27.000Z", "avg_line_length": 409.1199304751, "max_line_length": 220340, "alphanum_fraction": 0.9068712906, "converted": true, "num_tokens": 6682, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.48828339529583464, "lm_q2_score": 0.22000710486009023, "lm_q1q2_score": 0.10742581615029158}} {"text": "```python\n%matplotlib notebook\n%matplotlib inline\nimport math\nimport matplotlib.pyplot as plt\n```\n\n\n# Nuclear Power Economics and Fuel Management\n\n\n\n## Syllabus\n\nThroughout the semester, you can always find the syllabus online at [https://github.com/katyhuff/npre412/blob/master/syllabus/syllabus.pdf](https://github.com/katyhuff/npre412/blob/master/syllabus/syllabus.pdf).\n\n\n```python\nfrom IPython.display import IFrame\nIFrame(\"../syllabus/syllabus.pdf\", width=1000, height=1000)\n```\n\n\n\n\n\n\n\n\n\n\n\n```python\nfrom IPython.display import IFrame\nIFrame(\"http://katyhuff.github.io\", width=1000, height=700)\n```\n\n\n\n\n\n\n\n\n\n\n## Assessment\n\n\nMy goal (and, hopefully your goal) is for you to learn this material. If I have done my job right, your grade in this class will reflect just that -- how much you have learned. 
To accommodate many learning styles, your comprehension of the readings will be assessed with quizzes, your ability to apply what you've learned from class on your own will be assessed with the homeworks and projects, and your holistic retention of the material will be assessed with tests.\n\n\n\n\n### Monday Background Reading Assignments\n\nRather than introducing you to concepts during class, I think our time is better spent if we focus on exploring those concepts through demonstration and discussion together. **This 'active learning' educational strategy is backed by science, but is also just more respectful of your ability to learn things on your own.** Therefore, the lectures will assume you have studied the background materials ahead of time. This will include book sections, government reports, videos, and other resources. You will be expected to study the material outside of class before the start of each week. I recommend you take notes on this material as it may be part of the tests.\n\n**On Monday of each week I'll assign a list of material. You'll have 7 days to study that material before we start covering those concepts in class.**\n\n\n\n### Monday Quizzes\n\nTo help me calibrate the in-class discussion, a weekly quiz will assess your comprehension of the background material. The quizzes can be taken online through [Compass2g](https://compass2g.illinois.edu) at any time during the week, but they must be completed by Monday at 10am, 7 days after the material was assigned.\n\n### Friday Homework Assignments\n\nHomeworks will be assigned each Friday concerning the material covered that week. You will have 7 days to do the homework, so it will be due at 10am on the following Friday. You'll notice that my office hours are also on Friday. 
This is intentional, because I feel office hours are most effective if you come to them after handing in your homework to discuss the parts that you didn't get. \n\n### Projects\n\nThe class will involve a longer project only assigned to graduate students. \n\n### Tests\n\nThe midterms will take place in class. They will be independent of one another. The final will be comprehensive.\n\n### Participation\n\nI will notice when you are not in class, but attendance won't directly affect your grades. It may, however, indirectly affect your grades. If you miss something I demonstrate in class, you'll have a lot more trouble proving that you've learned it. \n\n## How to get an A\n\nMy dear friend, mathematician Kathryn Mann, has a great summary of [how to get an A in her class.](https://math.berkeley.edu/~kpmann/getanA.pdf). Everything she says about her math classes is true for this class as well. You should expect to spend 3 hours outside of class for every hour you spend in class. So, for a 3 credit class, you'll need to spend 3 hours a week in class and 9 hours outside of class on the coursework. If you find you're spending much less or much more time on this class, please let me know. \n\n## Late Work\n\n**Late work has a halflife of 1 hour.** That is, adjusted for lateness, your grade $G(t)$ is a decaying percentage of the raw grade $G_0$. 
An assignment turned in $t$ hours late will receive a grade according to the following relation:\n\n$$\n\begin{align}\n G(t) &= G_0e^{-\lambda t}\\\n\end{align}\n$$\n\nwhere\n\n$$\n\begin{align}\n G(t) &= \mbox{grade adjusted for lateness}\\\n G_0 &= \mbox{raw grade}\\\n \lambda &= \frac{\ln(2)}{t_{1/2}} = \mbox{decay constant} \\\n t &= \mbox{time elapsed since due [hours]}\\\n t_{1/2} &= 1 = \mbox{half-life [hours]} \\\n\end{align}\n$$\n\n\n```python\nimport math\ndef late_grade(hours_late, grade=100, half_life=1):\n    \"\"\"This function describes how much credit you will get for late work\"\"\"\n    lam = math.log(2)/half_life\n    return grade*math.exp(-lam*hours_late)\n```\n\n\n```python\n# This code plots how much credit you'll get over time\nimport numpy as np\nx = np.arange(24)\ny = np.zeros(24)  # floats, so fractional credit isn't truncated\nfor h in range(0,24):\n    x[h] = h\n    y[h] = late_grade(h)\n\n# creates a figure and axes with matplotlib\nfig, ax = plt.subplots()\nscatter = plt.scatter(x, y, color='blue', s=y*20, alpha=0.4) \nax.plot(x, y, color='red') \n\n# adds labels to the plot\nax.set_ylabel('Percent of Grade Earned')\nax.set_xlabel('Hours Late')\nax.set_title('Grade Decay')\n\n# adds tooltips\nimport mpld3\nlabels = ['{0:.2f}% earned'.format(i) for i in y]\ntooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels)\nmpld3.plugins.connect(fig, tooltip)\n\nmpld3.display()\n```\n\n\n\n
\n\n\n\n\n\n```python\nprint(\"If you turn in your homework an hour late, you'll get \", round(late_grade(1),2), \"% credit.\")\nprint(\"If you turn in your homework six hours late, you'll get \", round(late_grade(6),2), \"% credit.\")\nprint(\"If you turn in your homework a day late, you'll get \", round(late_grade(24),2), \"% credit.\")\nprint(\"If you turn in your homework two days late, you'll get \", round(late_grade(48),2), \"% credit.\")\nprint(\"If you turn in your homework three days late, you'll get \", round(late_grade(72),2), \"% credit.\")\n```\n\n If you turn in your homework an hour late, you'll get 50.0 % credit.\n If you turn in your homework six hours late, you'll get 1.56 % credit.\n If you turn in your homework a day late, you'll get 0.0 % credit.\n If you turn in your homework two days late, you'll get 0.0 % credit.\n If you turn in your homework three days late, you'll get 0.0 % credit.\n\n\n**There will be no negotiation about late work except in the case of absence documented by an absence letter from the Dean of Students.** The university policy for requesting such a letter is [here](http://studentcode.illinois.edu/article1_part5_1-501.html) . Please note that such a letter is appropriate for many types of conflicts, but that religious conflicts require special early handling. In accordance with university policy, students seeking an excused absence for religious reasons should complete the Request for Accommodation for Religious Observances Form, which can be found on the Office of the Dean of Students website. 
The student should submit this form to the instructor and the Office of the Dean of Students by the end of the second week of the course to which it applies.\n\n## Communications\n\n\nThings to try when you have a question:\n\n- Be Persistent: [Try just one more time.](https://s-media-cache-ak0.pinimg.com/736x/03/54/ce/0354ce58a7a4308edcc46dd9238e12d7.jpg)\n- Google: [You might be surprised at its depth.](https://devhumor.com/content/uploads//images/April2016/google-errors.jpg)\n- Piazza: Try this first, your student colleagues probably know the answer.\n- TA email: A quick question can usually be answered by your TA via email.\n- TA office hours: Your TA is there for you at a regularly scheduled time.\n- Prof. email: Questions not appropriate for your TA can be directed to me.\n- Prof. office hours: I will be in my office once a week for your convenience.\n- Prof. appointment: For private matters or when office hours conflict with your schedule, schedule an appointment with me.\n\n### A note on email\n\n[Email tips for dealing with fussy professor types.](https://medium.com/@lportwoodstacer/how-to-email-your-professor-without-being-annoying-af-cf64ae0e4087)\n\n## Python, IPython, Jupyter, git, and the Notebooks\n\nRather than reading equations off of slides, I will display lecture notes, equations, and images in \"Jupyter notebooks\" like this one. Sometimes, I will call them by their old name, \"IPython notebooks,\" but I'm talking about \"Jupyter notebooks\". Interleaved with the course notes, we will often write small functions in the Python programming language to represent the equations we are talking about. This will allow you to interact with the math, changing variables, modifying the models, and exploring the parameter space. \n\n### But I don't know Python!\n\n*You don't have to know Python to take this class.* However, you will need to learn a little along the way. 
I will provide lots of example code to support your completion of homework assignments and I will never ask you to write functioning code as part of any written exam. Programming is really hard without the internet. \n\n### Exercises\n\nWatch for blocks titled **Exercise** in the notebooks. Those mark moments when I will ask you, during class, to try something out, explore an equation, or arrive at an answer. These are short and are not meant to be difficult. They exist to quickly solidify an idea before we move on to the next one. I will often randomly call on students (with a random number generator populated with the enrollment list) to give solutions to the exercises, so **a failure to show up and participate will be noticed.** \n\n### Installing Python, IPython, Jupyter, git, and the Notebooks\n\nBecause engaging in the exercises will be really helpful for you to study, you should try to gain access to a computer equipped with Python (a version greater than 3.0) and a basic set of scientific python libraries. If you have a computer already, I encourage you to install [anaconda](https://www.continuum.io/downloads).\n\nThese notebooks are stored \"in the cloud,\" which is to say that they are stored on someone else's computers. Those computers are servers at GitHub, a sometimes silly but also very important company in the beautiful city of San Francisco. GitHub stores \"git repositories\" which are collections of files that are \"version controlled\" by the program \"git.\" This is a lot to keep track of, and I won't require that you learn git to participate in this class. However, I strongly recommend using git and GitHub to keep track of your research code. So, I encourage you to use git to access the notebooks. 
\n\n**More information about the things you might want to install can be found in the [README](https://github.com/katyhuff/npre412/blob/master/README.md).**\n\n\n\n\n\n\n\n", "meta": {"hexsha": "38bda0263be6f2d51a410bbdfeb3b81139d6640f", "size": 32542, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "introduction/00-intro-syllabus.ipynb", "max_stars_repo_name": "atomicaristides/NPRE412", "max_stars_repo_head_hexsha": "b2ae552303f3e4894628c8401d3bedd2db85a551", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "introduction/00-intro-syllabus.ipynb", "max_issues_repo_name": "atomicaristides/NPRE412", "max_issues_repo_head_hexsha": "b2ae552303f3e4894628c8401d3bedd2db85a551", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "introduction/00-intro-syllabus.ipynb", "max_forks_repo_name": "atomicaristides/NPRE412", "max_forks_repo_head_hexsha": "b2ae552303f3e4894628c8401d3bedd2db85a551", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 81.5588972431, "max_line_length": 5105, "alphanum_fraction": 0.573689386, "converted": true, "num_tokens": 2561, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.40356685373537454, "lm_q2_score": 0.2658804672827598, "lm_q1q2_score": 0.10730054365099458}} {"text": "##### Copyright 2020 The TensorFlow Authors.\n\n\n```\n#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n```\n\n# Quantum Convolutional Neural Network\n\n\n \n \n \n \n
\n View on TensorFlow.org\n \n Run in Google Colab\n \n View source on GitHub\n \n Download notebook\n
\n\nThis tutorial implements a simplified Quantum Convolutional Neural Network (QCNN), a proposed quantum analogue to a classical convolutional neural network that is also *translationally invariant*.\n\nThis example demonstrates how to detect certain properties of a quantum data source, such as a quantum sensor or a complex simulation from a device. The quantum data source is a cluster state that may or may not have an excitation\u2014which is what the QCNN will learn to detect. (The dataset used in the paper was SPT phase classification.)\n\n## Setup\n\n\n```\n!pip install tensorflow==2.3.1\n```\n\nInstall TensorFlow Quantum:\n\n\n```\n!pip install tensorflow-quantum\n```\n\nNow import TensorFlow and the module dependencies:\n\n\n```\nimport tensorflow as tf\nimport tensorflow_quantum as tfq\n\nimport cirq\nimport sympy\nimport numpy as np\n\n# visualization tools\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom cirq.contrib.svg import SVGCircuit\n```\n\n## 1. Build a QCNN\n\n### 1.1 Assemble circuits in a TensorFlow graph\n\nTensorFlow Quantum (TFQ) provides layer classes designed for in-graph circuit construction. One example is the `tfq.layers.AddCircuit` layer that inherits from `tf.keras.Layer`. 
This layer can either prepend or append to the input batch of circuits, as shown in the following figure.\n\n\n\nThe following snippet uses this layer:\n\n\n```\nqubit = cirq.GridQubit(0, 0)\n\n# Define some circuits.\ncircuit1 = cirq.Circuit(cirq.X(qubit))\ncircuit2 = cirq.Circuit(cirq.H(qubit))\n\n# Convert to a tensor.\ninput_circuit_tensor = tfq.convert_to_tensor([circuit1, circuit2])\n\n# Define a circuit that we want to append\ny_circuit = cirq.Circuit(cirq.Y(qubit))\n\n# Instantiate our layer\ny_appender = tfq.layers.AddCircuit()\n\n# Run our circuit tensor through the layer and save the output.\noutput_circuit_tensor = y_appender(input_circuit_tensor, append=y_circuit)\n```\n\nExamine the input tensor:\n\n\n```\nprint(tfq.from_tensor(input_circuit_tensor))\n```\n\nAnd examine the output tensor:\n\n\n```\nprint(tfq.from_tensor(output_circuit_tensor))\n```\n\nWhile it is possible to run the examples below without using `tfq.layers.AddCircuit`, it's a good opportunity to understand how complex functionality can be embedded into TensorFlow compute graphs.\n\n### 1.2 Problem overview\n\nYou will prepare a *cluster state* and train a quantum classifier to detect if it is \"excited\" or not. The cluster state is highly entangled but not necessarily difficult for a classical computer. For clarity, this is a simpler dataset than the one used in the paper.\n\nFor this classification task you will implement a deep MERA-like QCNN architecture since:\n\n1. Like the QCNN, the cluster state on a ring is translationally invariant.\n2. The cluster state is highly entangled.\n\nThis architecture should be effective at reducing entanglement, obtaining the classification by reading out a single qubit.\n\n\n\nAn \"excited\" cluster state is defined as a cluster state that had a `cirq.rx` gate applied to any of its qubits. 
Qconv and QPool are discussed later in this tutorial.\n\n### 1.3 Building blocks for TensorFlow\n\n\n\nOne way to solve this problem with TensorFlow Quantum is to implement the following:\n\n1. The input to the model is a circuit tensor\u2014either an empty circuit or an X gate on a particular qubit indicating an excitation.\n2. The rest of the model's quantum components are constructed with `tfq.layers.AddCircuit` layers.\n3. For inference a `tfq.layers.PQC` layer is used. This reads $\\langle \\hat{Z} \\rangle$ and compares it to a label of 1 for an excited state, or -1 for a non-excited state.\n\n### 1.4 Data\nBefore building your model, you can generate your data. In this case it's going to be excitations to the cluster state (The original paper uses a more complicated dataset). Excitations are represented with `cirq.rx` gates. A large enough rotation is deemed an excitation and is labeled `1` and a rotation that isn't large enough is labeled `-1` and deemed not an excitation.\n\n\n```\ndef generate_data(qubits):\n \"\"\"Generate training and testing data.\"\"\"\n n_rounds = 20 # Produces n_rounds * n_qubits datapoints.\n excitations = []\n labels = []\n for n in range(n_rounds):\n for bit in qubits:\n rng = np.random.uniform(-np.pi, np.pi)\n excitations.append(cirq.Circuit(cirq.rx(rng)(bit)))\n labels.append(1 if (-np.pi / 2) <= rng <= (np.pi / 2) else -1)\n\n split_ind = int(len(excitations) * 0.7)\n train_excitations = excitations[:split_ind]\n test_excitations = excitations[split_ind:]\n\n train_labels = labels[:split_ind]\n test_labels = labels[split_ind:]\n\n return tfq.convert_to_tensor(train_excitations), np.array(train_labels), \\\n tfq.convert_to_tensor(test_excitations), np.array(test_labels)\n```\n\nYou can see that just like with regular machine learning you create a training and testing set to use to benchmark the model. 
You can quickly look at some datapoints with:\n\n\n```\nsample_points, sample_labels, _, __ = generate_data(cirq.GridQubit.rect(1, 4))\nprint('Input:', tfq.from_tensor(sample_points)[0], 'Output:', sample_labels[0])\nprint('Input:', tfq.from_tensor(sample_points)[1], 'Output:', sample_labels[1])\n```\n\n### 1.5 Define layers\n\nNow define the layers shown in the figure above in TensorFlow.\n\n#### 1.5.1 Cluster state\n\nThe first step is to define the cluster state using Cirq, a Google-provided framework for programming quantum circuits. Since this is a static part of the model, embed it using the `tfq.layers.AddCircuit` functionality.\n\n\n```\ndef cluster_state_circuit(bits):\n \"\"\"Return a cluster state on the qubits in `bits`.\"\"\"\n circuit = cirq.Circuit()\n circuit.append(cirq.H.on_each(bits))\n for this_bit, next_bit in zip(bits, bits[1:] + [bits[0]]):\n circuit.append(cirq.CZ(this_bit, next_bit))\n return circuit\n```\n\nDisplay a cluster state circuit for a rectangle of cirq.GridQubits:\n\n\n```\nSVGCircuit(cluster_state_circuit(cirq.GridQubit.rect(1, 4)))\n```\n\n#### 1.5.2 QCNN layers\n\nDefine the layers that make up the model using the Cong and Lukin QCNN paper. 
There are a few prerequisites:\n\n* The one- and two-qubit parameterized unitary matrices from the Tucci paper.\n* A general parameterized two-qubit pooling operation.\n\n\n```\ndef one_qubit_unitary(bit, symbols):\n \"\"\"Make a Cirq circuit enacting a rotation of the bloch sphere about the X,\n Y and Z axis, that depends on the values in `symbols`.\n \"\"\"\n return cirq.Circuit(\n cirq.X(bit)**symbols[0],\n cirq.Y(bit)**symbols[1],\n cirq.Z(bit)**symbols[2])\n\n\ndef two_qubit_unitary(bits, symbols):\n \"\"\"Make a Cirq circuit that creates an arbitrary two qubit unitary.\"\"\"\n circuit = cirq.Circuit()\n circuit += one_qubit_unitary(bits[0], symbols[0:3])\n circuit += one_qubit_unitary(bits[1], symbols[3:6])\n circuit += [cirq.ZZ(*bits)**symbols[6]]\n circuit += [cirq.YY(*bits)**symbols[7]]\n circuit += [cirq.XX(*bits)**symbols[8]]\n circuit += one_qubit_unitary(bits[0], symbols[9:12])\n circuit += one_qubit_unitary(bits[1], symbols[12:])\n return circuit\n\n\ndef two_qubit_pool(source_qubit, sink_qubit, symbols):\n \"\"\"Make a Cirq circuit to do a parameterized 'pooling' operation, which\n attempts to reduce entanglement down from two qubits to just one.\"\"\"\n pool_circuit = cirq.Circuit()\n sink_basis_selector = one_qubit_unitary(sink_qubit, symbols[0:3])\n source_basis_selector = one_qubit_unitary(source_qubit, symbols[3:6])\n pool_circuit.append(sink_basis_selector)\n pool_circuit.append(source_basis_selector)\n pool_circuit.append(cirq.CNOT(control=source_qubit, target=sink_qubit))\n pool_circuit.append(sink_basis_selector**-1)\n return pool_circuit\n```\n\nTo see what you created, print out the one-qubit unitary circuit:\n\n\n```\nSVGCircuit(one_qubit_unitary(cirq.GridQubit(0, 0), sympy.symbols('x0:3')))\n```\n\nAnd the two-qubit unitary circuit:\n\n\n```\nSVGCircuit(two_qubit_unitary(cirq.GridQubit.rect(1, 2), sympy.symbols('x0:15')))\n```\n\nAnd the two-qubit pooling circuit:\n\n\n```\nSVGCircuit(two_qubit_pool(*cirq.GridQubit.rect(1, 2), 
sympy.symbols('x0:6')))\n```\n\n##### 1.5.2.1 Quantum convolution\n\nAs in the Cong and Lukin paper, define the 1D quantum convolution as the application of a two-qubit parameterized unitary to every pair of adjacent qubits with a stride of one.\n\n\n```\ndef quantum_conv_circuit(bits, symbols):\n \"\"\"Quantum Convolution Layer following the above diagram.\n Return a Cirq circuit with the cascade of `two_qubit_unitary` applied\n to all pairs of qubits in `bits` as in the diagram above.\n \"\"\"\n circuit = cirq.Circuit()\n for first, second in zip(bits[0::2], bits[1::2]):\n circuit += two_qubit_unitary([first, second], symbols)\n for first, second in zip(bits[1::2], bits[2::2] + [bits[0]]):\n circuit += two_qubit_unitary([first, second], symbols)\n return circuit\n```\n\nDisplay the (very horizontal) circuit:\n\n\n```\nSVGCircuit(\n quantum_conv_circuit(cirq.GridQubit.rect(1, 8), sympy.symbols('x0:15')))\n```\n\n##### 1.5.2.2 Quantum pooling\n\nA quantum pooling layer pools from $N$ qubits to $\\frac{N}{2}$ qubits using the two-qubit pool defined above.\n\n\n```\ndef quantum_pool_circuit(source_bits, sink_bits, symbols):\n \"\"\"A layer that specifies a quantum pooling operation.\n A Quantum pool tries to learn to pool the relevant information from two\n qubits onto 1.\n \"\"\"\n circuit = cirq.Circuit()\n for source, sink in zip(source_bits, sink_bits):\n circuit += two_qubit_pool(source, sink, symbols)\n return circuit\n```\n\nExamine a pooling component circuit:\n\n\n```\ntest_bits = cirq.GridQubit.rect(1, 8)\n\nSVGCircuit(\n quantum_pool_circuit(test_bits[:4], test_bits[4:], sympy.symbols('x0:6')))\n```\n\n### 1.6 Model definition\n\nNow use the defined layers to construct a purely quantum CNN. 
Start with eight qubits, pool down to one, then measure $\\langle \\hat{Z} \\rangle$.\n\n\n```\ndef create_model_circuit(qubits):\n \"\"\"Create a sequence of alternating convolution and pooling operators \n which gradually shrink over time.\"\"\"\n model_circuit = cirq.Circuit()\n symbols = sympy.symbols('qconv0:63')\n # Cirq uses sympy.Symbols to map learnable variables. TensorFlow Quantum\n # scans incoming circuits and replaces these with TensorFlow variables.\n model_circuit += quantum_conv_circuit(qubits, symbols[0:15])\n model_circuit += quantum_pool_circuit(qubits[:4], qubits[4:],\n symbols[15:21])\n model_circuit += quantum_conv_circuit(qubits[4:], symbols[21:36])\n model_circuit += quantum_pool_circuit(qubits[4:6], qubits[6:],\n symbols[36:42])\n model_circuit += quantum_conv_circuit(qubits[6:], symbols[42:57])\n model_circuit += quantum_pool_circuit([qubits[6]], [qubits[7]],\n symbols[57:63])\n return model_circuit\n\n\n# Create our qubits and readout operators in Cirq.\ncluster_state_bits = cirq.GridQubit.rect(1, 8)\nreadout_operators = cirq.Z(cluster_state_bits[-1])\n\n# Build a sequential model enacting the logic in 1.3 of this notebook.\n# Here the static cluster state prep is made part of the AddCircuit layer, and\n# the \"quantum datapoints\" come in the form of excitations.\nexcitation_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)\ncluster_state = tfq.layers.AddCircuit()(\n excitation_input, prepend=cluster_state_circuit(cluster_state_bits))\n\nquantum_model = tfq.layers.PQC(create_model_circuit(cluster_state_bits),\n readout_operators)(cluster_state)\n\nqcnn_model = tf.keras.Model(inputs=[excitation_input], outputs=[quantum_model])\n\n# Show the keras plot of the model\ntf.keras.utils.plot_model(qcnn_model,\n show_shapes=True,\n show_layer_names=False,\n dpi=70)\n```\n\n### 1.7 Train the model\n\nTrain the model over the full batch to simplify this example.\n\n\n```\n# Generate some training data.\ntrain_excitations, 
train_labels, test_excitations, test_labels = generate_data(\n cluster_state_bits)\n\n\n# Custom accuracy metric.\n@tf.function\ndef custom_accuracy(y_true, y_pred):\n y_true = tf.squeeze(y_true)\n y_pred = tf.map_fn(lambda x: 1.0 if x >= 0 else -1.0, y_pred)\n return tf.keras.backend.mean(tf.keras.backend.equal(y_true, y_pred))\n\n\nqcnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),\n loss=tf.losses.mse,\n metrics=[custom_accuracy])\n\nhistory = qcnn_model.fit(x=train_excitations,\n y=train_labels,\n batch_size=16,\n epochs=25,\n verbose=1,\n validation_data=(test_excitations, test_labels))\n```\n\n\n```\nplt.plot(history.history['loss'][1:], label='Training')\nplt.plot(history.history['val_loss'][1:], label='Validation')\nplt.title('Training a Quantum CNN to Detect Excited Cluster States')\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()\n```\n\n## 2. Hybrid models\n\nYou don't have to go from eight qubits to one qubit using quantum convolution\u2014you could have done one or two rounds of quantum convolution and fed the results into a classical neural network. 
This section explores quantum-classical hybrid models.\n\n### 2.1 Hybrid model with a single quantum filter\n\nApply one layer of quantum convolution, reading out $\\langle \\hat{Z}_n \\rangle$ on all bits, followed by a densely-connected neural network.\n\n\n\n#### 2.1.1 Model definition\n\n\n```\n# 1-local operators to read out\nreadouts = [cirq.Z(bit) for bit in cluster_state_bits[4:]]\n\n\ndef multi_readout_model_circuit(qubits):\n \"\"\"Make a model circuit with fewer quantum pool and conv operations.\"\"\"\n model_circuit = cirq.Circuit()\n symbols = sympy.symbols('qconv0:21')\n model_circuit += quantum_conv_circuit(qubits, symbols[0:15])\n model_circuit += quantum_pool_circuit(qubits[:4], qubits[4:],\n symbols[15:21])\n return model_circuit\n\n\n# Build a model enacting the logic in 2.1 of this notebook.\nexcitation_input_dual = tf.keras.Input(shape=(), dtype=tf.dtypes.string)\n\ncluster_state_dual = tfq.layers.AddCircuit()(\n excitation_input_dual, prepend=cluster_state_circuit(cluster_state_bits))\n\nquantum_model_dual = tfq.layers.PQC(\n multi_readout_model_circuit(cluster_state_bits),\n readouts)(cluster_state_dual)\n\nd1_dual = tf.keras.layers.Dense(8)(quantum_model_dual)\n\nd2_dual = tf.keras.layers.Dense(1)(d1_dual)\n\nhybrid_model = tf.keras.Model(inputs=[excitation_input_dual], outputs=[d2_dual])\n\n# Display the model architecture\ntf.keras.utils.plot_model(hybrid_model,\n show_shapes=True,\n show_layer_names=False,\n dpi=70)\n```\n\n#### 2.1.2 Train the model\n\n\n```\nhybrid_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),\n loss=tf.losses.mse,\n metrics=[custom_accuracy])\n\nhybrid_history = hybrid_model.fit(x=train_excitations,\n y=train_labels,\n batch_size=16,\n epochs=25,\n verbose=1,\n validation_data=(test_excitations,\n test_labels))\n```\n\n\n```\nplt.plot(history.history['val_custom_accuracy'], label='QCNN')\nplt.plot(hybrid_history.history['val_custom_accuracy'], label='Hybrid CNN')\nplt.title('Quantum vs Hybrid CNN 
performance')\nplt.xlabel('Epochs')\nplt.legend()\nplt.ylabel('Validation Accuracy')\nplt.show()\n```\n\nAs you can see, with very modest classical assistance, the hybrid model will usually converge faster than the purely quantum version.\n\n### 2.2 Hybrid convolution with multiple quantum filters\n\nNow let's try an architecture that uses multiple quantum convolutions and a classical neural network to combine them.\n\n\n\n#### 2.2.1 Model definition\n\n\n```\nexcitation_input_multi = tf.keras.Input(shape=(), dtype=tf.dtypes.string)\n\ncluster_state_multi = tfq.layers.AddCircuit()(\n excitation_input_multi, prepend=cluster_state_circuit(cluster_state_bits))\n\n# apply 3 different filters and measure expectation values\n\nquantum_model_multi1 = tfq.layers.PQC(\n multi_readout_model_circuit(cluster_state_bits),\n readouts)(cluster_state_multi)\n\nquantum_model_multi2 = tfq.layers.PQC(\n multi_readout_model_circuit(cluster_state_bits),\n readouts)(cluster_state_multi)\n\nquantum_model_multi3 = tfq.layers.PQC(\n multi_readout_model_circuit(cluster_state_bits),\n readouts)(cluster_state_multi)\n\n# concatenate outputs and feed into a small classical NN\nconcat_out = tf.keras.layers.concatenate(\n [quantum_model_multi1, quantum_model_multi2, quantum_model_multi3])\n\ndense_1 = tf.keras.layers.Dense(8)(concat_out)\n\ndense_2 = tf.keras.layers.Dense(1)(dense_1)\n\nmulti_qconv_model = tf.keras.Model(inputs=[excitation_input_multi],\n outputs=[dense_2])\n\n# Display the model architecture\ntf.keras.utils.plot_model(multi_qconv_model,\n show_shapes=True,\n show_layer_names=True,\n dpi=70)\n```\n\n#### 2.2.2 Train the model\n\n\n```\nmulti_qconv_model.compile(\n optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),\n loss=tf.losses.mse,\n metrics=[custom_accuracy])\n\nmulti_qconv_history = multi_qconv_model.fit(x=train_excitations,\n y=train_labels,\n batch_size=16,\n epochs=25,\n verbose=1,\n validation_data=(test_excitations,\n 
test_labels))\n```\n\n\n```\nplt.plot(history.history['val_custom_accuracy'][:25], label='QCNN')\nplt.plot(hybrid_history.history['val_custom_accuracy'][:25], label='Hybrid CNN')\nplt.plot(multi_qconv_history.history['val_custom_accuracy'][:25],\n label='Hybrid CNN \\n Multiple Quantum Filters')\nplt.title('Quantum vs Hybrid CNN performance')\nplt.xlabel('Epochs')\nplt.legend()\nplt.ylabel('Validation Accuracy')\nplt.show()\n```\n", "meta": {"hexsha": "2030cda3cf9e3b402c6c5e0767cfbef820261d3d", "size": 35806, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "site/en-snapshot/quantum/tutorials/qcnn.ipynb", "max_stars_repo_name": "secsilm/docs-l10n", "max_stars_repo_head_hexsha": "2acda8cb1671a826f44115e2fa6dd593756ba969", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-12T18:02:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-18T19:32:41.000Z", "max_issues_repo_path": "site/en-snapshot/quantum/tutorials/qcnn.ipynb", "max_issues_repo_name": "secsilm/docs-l10n", "max_issues_repo_head_hexsha": "2acda8cb1671a826f44115e2fa6dd593756ba969", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "site/en-snapshot/quantum/tutorials/qcnn.ipynb", "max_forks_repo_name": "secsilm/docs-l10n", "max_forks_repo_head_hexsha": "2acda8cb1671a826f44115e2fa6dd593756ba969", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.1728880157, "max_line_length": 418, "alphanum_fraction": 0.5180695973, "converted": true, "num_tokens": 4879, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.3849121444839335, "lm_q2_score": 0.2782567937024021, "lm_q1q2_score": 0.10710441918121508}} {"text": "\n\n[](https://colab.research.google.com/github/m2lschool/tutorials2021/blob/main/3_generative/VAE_Tutorial_Start.ipynb)\n\nContact: {fviola@google.com, marco.ciccone@me.com}\n\n## Contents\n* Enabling TPUs in colab\n* Handling nested data structures using tree utilities in JAX\n* Distributing computation over multiple devices using *pmap*\n* Amortized variational inference (VAEs)\n * Training VAEs optimizing ELBO\n * Training $\\beta$-VAEs\n * Training VAEs using constraint optimization (GECO)\n\n\n# Set up your environment!\n\n\n```\n#@title Download and install all the missing packages required for this tutorial { display-mode: \"code\" }\n! pip install ipdb -q\n! pip install chex -q\n! pip install optax -q\n! pip install dm_haiku -q\n! pip install tfp-nightly[jax] -q\n! pip install tf-nightly -q\n! pip install livelossplot -q\nprint(\"All packages installed!\")\n```\n\n\n```\n# @title Imports\nimport inspect\nimport os\n\nimport chex\nimport dill\nimport functools\nimport haiku as hk\nimport jax\nimport jax.numpy as jnp\n\nimport numpy as np\nimport optax as tx\nimport requests\nfrom pprint import pprint\n\nimport seaborn as sns\nfrom matplotlib import pyplot as plt\nfrom livelossplot import PlotLosses\nfrom livelossplot.outputs import MatplotlibPlot\n\nimport tensorflow as tf\nimport tensorflow_probability\nfrom tensorflow_probability.substrates import jax as tfp\n\nsns.set(rc={\"lines.linewidth\": 2.8}, font_scale=2)\nsns.set_style(\"whitegrid\")\n\n# Returns the code of the python implementation of a given funciton as a string.\nget_code_as_string = lambda fn: dill.source.getsource(fn.__code__)\n```\n\n\n```\n# ------------------ #\n# Enable TPU support #\n# ------------------ #\n\n# This cell execution might take a while! 
don't worry :)\n# Don't forget to select a TPU or GPU runtime environment in\n# Runtime -> Change runtime type\ntry:\n if 'TPU_DRIVER_MODE' not in globals():\n url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver_nightly'\n resp = requests.post(url)\n TPU_DRIVER_MODE = 1\n\n # The following is required to use TPU Driver as JAX's backend.\n from jax.config import config\n config.FLAGS.jax_xla_backend = \"tpu_driver\"\n config.FLAGS.jax_backend_target = \"grpc://\" + os.environ['COLAB_TPU_ADDR']\nexcept:\n print('TPUs not found. Enable a TPU runtime going to: '\n '\"Runtime -> Change runtime type\"')\ndevices = jax.devices()\nprint(\"Available devices:\", devices)\n# Should print something like\n# Available devices: [TpuDevice(id=0, host_id=0, coords=(0,0,0), core_on_chip=0), TpuDevice(id=1, host_id=0, coords=(0,0,0), core_on_chip=1), TpuDevice(id=2, host_id=0, coords=(1,0,0), core_on_chip=0), TpuDevice(id=3, host_id=0, coords=(1,0,0), core_on_chip=1), TpuDevice(id=4, host_id=0, coords=(0,1,0), core_on_chip=0), TpuDevice(id=5, host_id=0, coords=(0,1,0), core_on_chip=1), TpuDevice(id=6, host_id=0, coords=(1,1,0), core_on_chip=0), TpuDevice(id=7, host_id=0, coords=(1,1,0), core_on_chip=1)]\n```\n\n# Warmup: bits and bobs\n\nLet's take a minute to look at some JAX functionality we will be using in this tutorial.\n\n## How to work with nested data with `jax.tree_util`\n\nIt is fairly common to structure data and parameters into nested formats, for example (nested) dictionaries, namedtuples, lists, dataclasses and variants thereof.
For example, we might want to cleanly scope and group our model's variables by their component name into some dictionary-like format, like in Haiku, or in a Reinforcement Learning setting we might want to explicitly label the components of a rollout.\n\nTypically in our code we don't want to assume a priori any structure of the containers we are manipulating, and prefer code that can transparently handle arbitrary nested structures. Luckily JAX can natively do this for us, and we only need to familiarize ourselves with its `jax.tree_util` package, and make sure that our custom objects are registered with it (not to worry, we have libraries that do this for us!).\n\nYou can find out more in the JAX `tree_util` package [documentation](https://jax.readthedocs.io/en/latest/jax.tree_util.html).\n\n\n```\nfrom collections import namedtuple \ndata_container = namedtuple('data_box', 'component_a component_b')\ndata = dict(\n a=jnp.ones(shape=()),\n b=[jnp.ones(shape=()),\n data_container(jnp.ones(shape=()), jnp.ones(shape=()))],\n c=(jnp.ones(shape=()), jnp.ones(shape=())))\nprint('Structured data\\n', data)\n\n# We can use `jax.tree_map` to apply the same function to all the tensors\n# contained in a nested data structure.\nfn = lambda x: x * 2\noutput = jax.tree_map(fn, data)\nprint('Structured data, after {}'.format(get_code_as_string(fn)), output)\n\n# We can also call functions with multiple structured inputs, for example\n# parameters and gradients in an update step.\nfn = lambda x, y, delta=0.1: x + delta * y\noutput = jax.tree_multimap(fn, data, data)\nprint('Structured data, after {}'.format(get_code_as_string(fn)), output)\n\n# We can also 'flatten' the data to get a list of all the tensors contained in\n# the nested data structure.\nentries = jax.tree_leaves(data)\nprint('Tree leaves\\n', entries)\n\n# We can 'unflatten' flattened data to get back the original structure if we keep\n# around the original structure definition.\nentries, tree_def = 
jax.tree_flatten(data) # tree_def captures the structure\ndata = jax.tree_unflatten(tree_def, entries)\nprint('Flattened data\\n', entries)\nprint('Unflattened data\\n', data)\n```\n\n\n```\n# ------------------------ #\n# WARMUP OPTIONAL EXERCISE #\n# ------------------------ #\n\n# You have some batched data, structured in an unknown way, and you want\n# to recover a list of unbatched data, structured the same way.\n# Write the code to do that using jax.tree_util\n\n\ninput_data = dict(\n a=jnp.arange(3),\n b=[jnp.arange(3),\n data_container(jnp.arange(3), jnp.arange(3))],\n c=(jnp.arange(3), jnp.arange(3)))\n\nprint('Input data')\npprint(input_data)\n\ndef unbatch(data):\n flattened_data, tree_def = jax.tree_flatten(data)\n # ADD CODE BELOW \n # ----------------------- \n split_data = ... \n unbatched_data = ...\n # -----------------------\n return unbatched_data\n\nprint('\\nSplit data')\npprint(unbatch(input_data))\n\n# Expected output: \n# Input data\n# {'a': DeviceArray([0, 1, 2], dtype=int32),\n# 'b': [DeviceArray([0, 1, 2], dtype=int32),\n# data_box(component_a=DeviceArray([0, 1, 2], dtype=int32), component_b=DeviceArray([0, 1, 2], dtype=int32))],\n# 'c': (DeviceArray([0, 1, 2], dtype=int32),\n# DeviceArray([0, 1, 2], dtype=int32))}\n\n# Split data\n# [{'a': DeviceArray([0], dtype=int32),\n# 'b': [DeviceArray([0], dtype=int32),\n# data_box(component_a=DeviceArray([0], dtype=int32), component_b=DeviceArray([0], dtype=int32))],\n# 'c': (DeviceArray([0], dtype=int32), DeviceArray([0], dtype=int32))},\n# {'a': DeviceArray([1], dtype=int32),\n# 'b': [DeviceArray([1], dtype=int32),\n# data_box(component_a=DeviceArray([1], dtype=int32), component_b=DeviceArray([1], dtype=int32))],\n# 'c': (DeviceArray([1], dtype=int32), DeviceArray([1], dtype=int32))},\n# {'a': DeviceArray([2], dtype=int32),\n# 'b': [DeviceArray([2], dtype=int32),\n# data_box(component_a=DeviceArray([2], dtype=int32), component_b=DeviceArray([2], dtype=int32))],\n# 'c': (DeviceArray([2],
dtype=int32), DeviceArray([2], dtype=int32))}]\n```\n\n## How to parallelize gradient computation over multiple devices using JAX\n\nIn JAX you can use the `pmap` primitive to parallelize computation over multiple devices; using reduction methods in `jax.lax` you can also have access to values aggregated across devices during the distributed computation, giving you a lot of flexibility over what you can achieve!\n\nFor example, you can split a *large* batch over 8 TPU cores, compute partial \ngradients over the split batches and average them prior to updating\nthe model parameters - in parallel - across all devices.\n\nLet's look at some example code of how this can be done.\n\n\n\n```\n# ------- #\n# EXAMPLE #\n# ------- #\n\n# First off, let's verify that in this colab you have access to\n# multiple TPU cores.\nnum_dev = jax.local_device_count()\nprint('Number of devices:\\n', num_dev, '\\n')\n\n# Let's define a simple data generation function, and an MSE loss for linear\n# regression.\ndef get_data(prng_key, batch_size):\n x_key, noise_key = jax.random.split(prng_key, 2)\n x = jax.random.uniform(x_key, shape=(batch_size,1))\n true_model = lambda x: x * 3 + 2 \n noise = lambda key: jax.random.normal(key, shape=(batch_size, 1)) * 0.2\n return jnp.concatenate([x, true_model(x) + noise(noise_key)], axis=1)\n\ndef loss_fn(params, data):\n prediction = params[0] * data[:, 0] + params[1]\n target = data[:, 1]\n return jnp.mean((prediction - target) ** 2)\n\nbatch_size = 1024\nprng_key = jax.random.PRNGKey(0)\ndata = get_data(prng_key, batch_size)\nparams = jax.random.normal(prng_key, shape=(2,))\n\n# On a single core we can compute the gradients over a minibatch with:\ngrad_fn = jax.grad(loss_fn)\ngradients = grad_fn(params, data)\n# and update the parameters using any update rule, like those you can find in\n# the optax package. \n\n# We now want to take advantage of the multiple cores made available to us.
\n# To do so, we batch computation _over cores_.\n\n# We format the data and batch it twice, over cores and per-core batch: \ndata = get_data(prng_key, batch_size * num_dev)\ndata = jnp.reshape(data, (num_dev, batch_size) + data.shape[1:])\n# Note that the leading dimension is now the number of cores!\nprint('Data shape:\\n', data.shape, '\\n')\n\n# We provide each core with a copy of the parameters. Given an instance of the \n# params we can use JAX's broadcasting utility to achieve this.\nbroadcast = lambda params: jnp.broadcast_to(params, (num_dev,) + params.shape)\nmapped_params = broadcast(params)\n# All the parameter copies are synced at the start of the model fitting; since\n# the updates to the params will be the same, params will stay synced during\n# optimization.\nprint('Params:\\n', params, '\\n')\nprint('Mapped params:\\n', mapped_params, '\\n')\n\n# We then pmap the gradient fn, averaging the gradients computed across devices\n# using the pmap primitive and the jax.lax.pmean function.\ndef get_averaged_grads(params, data):\n grads = grad_fn(params, data)\n grads = jax.lax.pmean(grads, axis_name='i')\n return grads\nget_averaged_grads = jax.pmap(\n get_averaged_grads, axis_name='i', devices=jax.devices())\naveraged_grads = get_averaged_grads(mapped_params, data)\nprint('Averaged_grads:\\n', averaged_grads, '\\n')\n\n# Note that this is equivalent to mapping the gradient function and manually\n# averaging the result!\nget_mapped_grads = jax.pmap(grad_fn, axis_name='i', devices=jax.devices())\nmapped_grads = get_mapped_grads(mapped_params, data)\nprint('Mapped_grads:\\n', mapped_grads, '\\n')\nprint('Manually averaged_grads:\\n', jnp.mean(mapped_grads, axis=0), '\\n')\n\n# Of course we can cleanly pmap much more complicated logic in one step.\n# For example, the whole update step:\noptimizer = tx.sgd(1e-1)\ndef update(params, opt_state, data): \n loss, grads = jax.value_and_grad(loss_fn)(params, data)\n grads = jax.lax.pmean(grads, axis_name='i')\n
raw_updates, opt_state = optimizer.update(grads, opt_state)\n params = tx.apply_updates(params, raw_updates)\n return params, opt_state, loss\n# Note that pmap also compiles the function to XLA, akin to JIT!\nupdate = jax.pmap(update, axis_name='i', devices=jax.devices())\nopt_state = jax.tree_map(broadcast, optimizer.init(params))\n\nlosses = []\nfor _ in range(250):\n prng_key, data_key = jax.random.split(prng_key)\n data = get_data(prng_key, batch_size * num_dev)\n data = jnp.reshape(data, (num_dev, batch_size) + data.shape[1:])\n mapped_params, opt_state, loss = update(mapped_params, opt_state, data)\n losses.append(jnp.mean(loss))\n \nsz = 5; plt.figure(figsize=(4*sz,sz))\nplt.subplot(121)\nplt.plot(losses)\nplt.title('loss')\nplt.subplot(122)\ndata = get_data(prng_key, 100)\nplt.scatter(data[:,0], data[:,1], label='data')\nplt.plot(\n [0, 1],\n [mapped_params[0, 1], mapped_params[0, 0] + mapped_params[0, 1]],\n 'r', label='prediction')\nplt.legend();\n```\n\n# Get the data\n\nIn this tutorial we will use the MNIST and Fashion MNIST datasets (and variations thereof). \nWe can use TensorFlow data to download the data from the cloud.\n\n\n```\nimport tensorflow_datasets as tfds\nmnist = tfds.load(\"mnist\")\nfashion_mnist = tfds.load(\"fashion_mnist\")\n```\n\n[Chex](https://github.com/deepmind/chex) is a library of utilities for helping to write reliable JAX code.\n\nWithin `chex` you will find a `dataclass` object definition, which will automatically register new class instances into JAX, so you can easily apply JAX's tree utilities out of the box. 
We will use it to define a labelled data object type.\n\n\n```\n@chex.dataclass\nclass ContextualData(): \n target: chex.Array\n context: chex.Array\n```\n\n\n```\n# Here we provide some utilities for experiment and data visualization.\n\ndef gallery(array, ncols=None):\n \"\"\"Rearrange an array of images into a tiled layout.\"\"\"\n nindex, height, width, num_channels = array.shape \n if ncols is None:\n ncols = int(np.sqrt(nindex)) \n nrows = int(np.ceil(nindex/ncols)) \n pad = np.zeros((nrows*ncols-nindex, height, width, num_channels))\n array = np.concatenate([array, pad], axis=0)\n result = (array.reshape(nrows, ncols, height, width, num_channels)\n .swapaxes(1,2)\n .reshape(height*nrows, width*ncols, num_channels))\n return result\n\ndef imshow(x, title=''):\n \"\"\"Shorthand for imshow.\"\"\"\n plt.imshow(x[..., 0], cmap='gist_yarg', interpolation=None)\n plt.axis('off')\n plt.title(title)\n\ndef custom_after_subplot(ax: plt.Axes, group_name: str, x_label: str):\n \"\"\"Disable Legend in LiveLossPlot (interactive Matplotlib)\"\"\"\n ax.set_title(group_name)\n ax.set_xlabel(x_label)\n ax.legend().set_visible(False)\n```\n\n# The datasets\n\nIn this tutorial we will train _conditional_ models.\nAs such, and as suggested by the `ContextualData` definition, the dataset will provide targets and contexts. We will use two types of context, depending on the dataset.\n\n- **Simple**: for FashionMNIST the context will be the object class label, in one-hot format. \n- **Hard**: for MNIST digits we will make things a bit more interesting: the user will specify a simple function, mapping tuples of integers in the range `[0,9]` to an integer in `[0, 9]`, for example:\n```\nlambda i, j: (i + j) % 10\n```\nThe context will be a tuple of images from MNIST whose labels will match the context integers, and the target will be an MNIST image of the corresponding\noutput label.
Hence, the result will be a dataset whose conditional context is potentially very rich and challenging to capture.\n\n```\n# --------------------------------- #\n# SIMPLE context dataset generation #\n# --------------------------------- #\n\ndef get_fashion_mnist(batch_size, data_split='train', seed=1, conditional=True):\n def _preprocess(sample):\n image = tf.cast(sample[\"image\"], tf.float32) / 255.0\n context = tf.one_hot(sample[\"label\"], 10) \n # We can optionally make context constant, effectively making the dataset\n # unconditional. This can be useful for debugging purposes.\n if not conditional: \n context *= 0\n return ContextualData(target=image, context=context)\n \n ds = fashion_mnist[data_split]\n ds = ds.map(map_func=_preprocess, \n num_parallel_calls=tf.data.experimental.AUTOTUNE)\n ds = ds.cache()\n ds = ds.shuffle(100000, seed=seed).repeat().batch(batch_size)\n return iter(tfds.as_numpy(ds))\n\n# Visualize a sample of the data\nds = next(get_fashion_mnist(batch_size=12))\nd = gallery(ds.target)\nimshow(d, title='Fashion MNIST targets')\n```\n\n\n```\n# ------------------------------- #\n# HARD context dataset generation #\n# ------------------------------- #\n\ndef get_raw_data(data_split='train'):\n def _preprocess(sample):\n image = tf.cast(sample[\"image\"], tf.float32) / 255.0\n id = sample[\"label\"]\n return image, id\n \n ds = mnist[data_split]\n ds = ds.map(map_func=_preprocess, \n num_parallel_calls=tf.data.experimental.AUTOTUNE)\n ds = ds.cache()\n ds = ds.shuffle(100000, seed=0).batch(2048)\n images, labels = next(iter(tfds.as_numpy(ds)))\n\n data_by_label = []\n for i in range(10):\n data_by_label.append(images[labels==i])\n min_num = min([d.shape[0] for d in data_by_label])\n return np.stack([d[:min_num] for d in data_by_label])\n\n\ndef get_mnist_digits(batch_size,\n fn=None,\n data_split='train',\n max_num_examplars=None):\n \"\"\"Instantiates a data generation function implementing the input fn in mnist.\n\n Args:\n batch_size: 
(int) batch size of the returned data\n fn: (python function) function from n digit labels to a digit; defaults to\n the successor function `lambda i: (i + 1) % 10`. The generator will\n return a batch of n + 1 digit images whose labels correspond to the\n inputs and output of the function.\n data_split: (string) default 'train', crop of mnist to use for the data.\n max_num_examplars: (int) maximum number of exemplars to be used - even when\n not specified all digits will be represented by the same number of\n exemplars.\n\n Returns:\n Function of JAX PRNG returning samples from mnist related to each other via\n the user input fn function.\n \"\"\"\n raw_data = get_raw_data(data_split)\n sz = raw_data.shape[1]\n if max_num_examplars is not None:\n sz = min(max_num_examplars, sz)\n\n if fn is None:\n # Examples\n # fn = lambda i, j: (i + j) % 10\n # fn = lambda i: (i + 1) % 10\n fn = lambda i: (i + 1) % 10\n\n num_inputs = len(inspect.signature(fn).parameters)\n def data_fn(prng):\n digit_prng, sample_prng = jax.random.split(prng, 2)\n digit_indices = jax.random.randint(\n digit_prng, shape=[batch_size, num_inputs], minval=0, maxval=10)\n digit_indices = digit_indices.split(num_inputs, axis=1)\n digit_indices += [fn(*digit_indices)]\n\n sample_indices = jax.random.randint(\n sample_prng, shape=[batch_size, num_inputs+1], minval=0, maxval=sz)\n sample_indices = sample_indices.split(num_inputs + 1, axis=1)\n samples = [raw_data[i[..., 0], j[..., 0], ...]\n for i, j in zip(digit_indices, sample_indices)]\n return samples\n\n def generator(key=None):\n if key is None:\n key = jax.random.PRNGKey(0)\n while True:\n key, sample_key = jax.random.split(key)\n data = data_fn(sample_key)\n yield ContextualData(target=data[-1], context=data[:-1])\n\n return generator\n\ndef tile_context(context, filler_value=.2):\n entries = [gallery(d) for d in context] # use the `context` argument, not a global\n entry_shape = entries[0].shape\n separator = np.full((entry_shape[0], 1, 1), filler_value) \n entries = [\n np.concatenate([e, separator], axis=1) for e 
in entries[:-1]] + entries[-1:] \n return np.concatenate(entries, axis=1)\n\n# Visualize samples of target with hard context function \ncontext_fn = lambda x, y, z: (x + y + z) % 10\ndata = next(get_mnist_digits(fn=context_fn, batch_size=16)())\nsz = 5\nplt.figure(figsize=(4 * sz, sz))\nplt.subplot(122)\nimshow(gallery(data.target),\n 'Target with ' + get_code_as_string(context_fn))\nplt.subplot(121)\nimshow(tile_context(data.context), 'Context')\n```\n\nFor convenience, we define a dataset constructor function taking a `hard` flag switching between the MNIST and Fashion MNIST datasets, as well as a dummy dataset\nconstructor that we will use for the purpose of retrieving shape information at graph construction time.\n\n\n```\ndef get_dataset(batch_size, num_dev, hard=False, data_split='train'):\n # Instantiates the dataset adjusting the batch size to support training on \n # multiple devices.\n if hard: \n dataset = get_mnist_digits(\n batch_size=batch_size*num_dev, \n data_split=data_split)()\n else:\n dataset = get_fashion_mnist(\n batch_size=batch_size*num_dev, \n data_split=data_split)\n return dataset\n\n\ndef get_dummy_data(hard=False):\n # Returns an instance of the data to gather shape info.\n return next(get_dataset(batch_size=1, num_dev=1, hard=hard))\n```\n\n# Amortized variational inference (VAEs)\n\nConsider a joint distribution $p(x, z)$ over a set of latent variables $z \\in \\mathcal{Z}$ and observed variable $x \\in \\mathcal{X}$ (for instance, the images of our dataset).\n\nPerforming inference given the observed variable $x$ involves computing the posterior distribution $p(z|x) = p(x,z)/p(x)$, which is often intractable to compute, as the _marginal likelihood_ $p(x) = \\int_z p(x, z)dz$ requires integrating over a potentially exponential number of configurations of $z$. \n\n**Variational Inference (VI)** can be used to approximate the posterior $p(z|x)$ in a tractable fashion.
VI casts the problem of computing the posterior as an optimization problem, introducing a family of tractable (simpler) distributions $\\mathcal{Q}$ parametrized by $\\lambda$. The objective is to find the best approximation of the true posterior $q_{\\lambda^*} \\in \\mathcal{Q}$ that minimizes the Kullback-Leibler (KL) divergence with the exact posterior: \n$$\nq_{\\lambda^*}(z) = \\underset{q_{\\lambda}}{\\operatorname{arg\\,min}} \\ \\ D_{KL}(q_{\\lambda}(z) \\,\\|\\, p(z|x))\n$$\n\n$q_{\\lambda^*}(z)$ can serve as a proxy for the true posterior distribution. Note that the solution depends on the specific value of the observed (evidence) variables $x_i$ we are conditioning on, so computing the posterior requires solving an optimization problem for each sample independently.\n\nIn this tutorial, we use a much more efficient approach. Rather than solving an optimization problem per data point, we can **amortize the cost of inference** by leveraging the power of function approximation and learn a deterministic mapping to predict the distributional parameters as a function of $x$. Specifically, the posterior parameters for $x_i$ will be the output of a *learned* function $f_\\theta(x_i)$, where $\\theta$ are parameters shared across all data points.\n\n\n\nFor more information, see: \n * [Kingma and Welling, (2013), Auto-Encoding Variational Bayes](https://arxiv.org/abs/1312.6114)\n * [Kingma and Welling, (2019), An Introduction to Variational Autoencoders](https://arxiv.org/abs/1906.02691)\n\n# Implement and train a (Conditional) Variational AutoEncoder\n\nVariational AutoEncoders (VAEs) are a powerful class of deep generative models with latent variables, comprising
\n\nThe encoder, also called the *inference* or *recognition model*, computes the approximate posterior distribution $q_{\\phi}(z|x)$ of a given sample $x$, while the decoder, or *generative model*, reconstructs the sample starting from the latent variables $z$. In the conditional setting, the encoder and decoder will potentially also receive an additional conditioning input that we will refer to as _context_.\n\nWhen we have an unconditional generative model, we generally don't have control over the specific outputs produced by the model, and the generated samples could be anything depending on the sampled latent variable. A more useful generative process would allow one to influence specific characteristics of the generated samples, for instance based on a context variable $c$, e.g. a label or category. For this reason, in this tutorial, we focus on Conditional VAEs (CVAEs) where both the encoder and the decoder are conditioned on a context variable $c$.\n\nThe main components of our model will be: \n\n* $P^*$ is the true data distribution. We have some samples from this in the form of a dataset.\n* $p(z)$ is a *prior* distribution over the latent space. In general the prior is simply $\\mathcal{N}(0, 1)$, but in our case we will *learn* a conditional prior distribution $p(z|c)$ based on the context.\n* $E(x, c)$ the encoder outputs distributions over the latent space $Z$, not just elements of it. The produced distribution is denoted $q_\\phi(z|x, c)$ and is the (approximate) *posterior* distribution.\n* $D(z, c)$ the decoder may be stochastic as well, modeling the output distribution $p_\\theta(x|z, c)$.\n\nNow we go over the reconstruction and sampling process, and finally motivate the losses used to train VAEs in this tutorial.\n\n### Reconstruction\nThe process for reconstruction is:\n\n1. Take $x, c \\sim P^*$.\n2. Encode it $E_\\phi(x, c)$, yielding $q_\\phi(z|x, c)$.\n3. Sample a latent $z \\sim q_\\phi(z|x, c)$.\n4. 
Decode the latent $p_\\theta(\\hat{x}|z, c) = D_\\theta(z, c)$.\n5. Sample a reconstruction: $\\hat{x} \\sim p_\\theta(\\hat{x}|z, c)$.\n\nThe prior has not shown up here; it plays a role in sampling.\n\n
\n\n
Illustration of the conditional VAE architecture.
Credits: https://ijdykeman.github.io/ml/2016/12/21/cvae.html
\n
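The five reconstruction steps above can be sketched end to end with toy components (`encode` and `decode` below are hypothetical stand-ins for the networks defined later in this notebook); step 3 draws the latent as `mu + sigma * eps`, a common way to keep the sampling step differentiable:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, c):
    # Hypothetical encoder: posterior mean and std of q(z|x, c) for 2 latents.
    h = np.concatenate([x, c])
    return h[:2], np.exp(h[2:4])

def decode(z, c):
    # Hypothetical decoder: mean of p(x_hat|z, c), same shape as x.
    return np.full(4, np.concatenate([z, c]).sum())

x = rng.normal(size=(4,))          # 1. take x, c from the dataset
c = np.array([1.0])
mu, sigma = encode(x, c)           # 2. encode, yielding q(z|x, c)
eps = rng.normal(size=mu.shape)
z = mu + sigma * eps               # 3. sample z ~ q(z|x, c)
x_hat = decode(z, c)               # 4./5. decode and take the mean reconstruction
```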
\n\n### Sampling\nThe sampling process is:\n\n1. Given a context $c$, sample a latent $z \\sim p(z|c)$ from the conditional prior.\n2. Decode the latent $p_\\theta(x|z, c) = D_\\theta(z, c)$.\n3. Sample an output: $x \\sim p_\\theta(x|z, c)$.\n\nIn practice we usually use simple, parametrizable distributions in the encoder and decoder. More specifically:\n\n**Encoder**\nEach latent dimension is a (univariate) Gaussian, parametrized by mean and standard deviation. Note that this is the same as a multivariate Gaussian over the latent space with a diagonal covariance matrix.\n\n**Decoder**\nWe will quantize the pixels to 0 and 1, which allows us to model each pixel with a Bernoulli distribution. For visualizations, though, we will continue to use grayscale values.\n\n\n## The Loss\n\nWe use maximum likelihood for training, that is, ideally we would like to maximize:\n\n$$\\mathbb{E}_{x,c \\sim P^*}\\log p_{\\theta}(x|c).$$\n\nNote that $p_{\\theta}(x|c)$ is the marginal probability distribution $p_{\\theta}(x|c) = \\int p_\\theta(x, z|c) dz$. We can rewrite this in familiar terms as $\\int p_\\theta(x|z,c) p(z|c) dz$. However, computing (and maximizing) the above marginal is computationally infeasible.\n\nInstead, we can show\n\n$$\\log p_{\\theta}(x|c) \\ge \\mathbb{E}_{z \\sim q(z|x,c)} \\big[\\log p_\\theta(x | z,c)\\big] - \\mathbb{KL}\\big(q_\\phi(z | x,c) || p(z|c)\\big).$$\n\nThe right-hand side is called the evidence lower bound (ELBO). Broadly speaking, the term *variational* in methods like variational inference refers to this technique of using an approximate posterior distribution and the ELBO; this is also where the Variational Autoencoder gets its name.\n\nIn order to maximize the likelihood, we maximize the ELBO instead. Recall from the lecture that under some conditions (that are not going to apply to us) the inequality is actually an equality. This yields the following loss used with Variational AutoEncoders:\n\n\n
\n\n\n$$ \\mathcal{L}(x|c) = - \\Big( \\mathbb{E}_{z \\sim q(z|x, c)} \\big[\\log p_\\theta(x | z, c)\\big] - \\mathbb{KL}\\big(q_\\phi(z | x, c) || p(z|c)\\big) \\Big).$$\n
\n\nObserve that:\n* The first term encourages the model to reconstruct the input faithfully. This part is similar to the Vanilla AutoEncoder.\n* The second term can be seen as a *regularization term* pulling the encoder towards the prior.\n* Encoder, Decoder and Prior are conditioned on the context $c$. Removing the conditioning recovers the original VAE formulation.\n\n(The formula contains an expectation; in practice it is approximated with one or more samples.)\n\n## Implementation\nWe start by defining the main VAE object class.\n\nWe want to decouple its interface from how its components are defined; for example, at this point we don't really care if we are using neural networks to implement the mapping from inputs to posterior.\n\nSome of the implementation will have to be `haiku` specific, but we will make an effort to restrict these details to the initialization functions. \n\n\n\n\n```\ndef _transform(component):\n return hk.without_apply_rng(hk.transform(component))\n\n# Sum-reduce on all dimensions but batch size\n_batch_sum = jax.vmap(jnp.sum) \n\n\nclass VAE():\n \"\"\"A (conditional) Variational Autoencoder.\n \n The class expects at construction time haiku modules implementing the\n conditional {encoder, decoder, prior}, as well as a context projector module.\n \n The encoder, decoder and prior use the output of the context projector\n together with their other inputs to define distributions (posterior, output\n and conditional prior respectively), implemented as tfp.distributions.\n\n The context projector is used to map a potentially nested context to a single\n condition tensor.\n \"\"\"\n\n def __init__(self, *, encoder, decoder, prior, context_projector):\n self._encoder = _transform(encoder)\n self._decoder = _transform(decoder)\n self._prior = _transform(prior)\n self._context_projector = _transform(context_projector)\n\n def init_params(self, prng, data):\n prng_encoder, prng_decoder, prng_prior, prng_proj = 
jax.random.split(\n prng, 4)\n \n # Initialize the context mapping.\n # ADD CODE BELOW \n # -----------------------\n context_projector_params = ... \n projected_context = ...\n # -----------------------\n\n # Initialize the conditional prior.\n # ADD CODE BELOW \n # -----------------------\n prior_params = ...\n z = ...\n # -----------------------\n\n # Initialize the encoder.\n # ADD CODE BELOW \n # -----------------------\n encoder_params = ...\n # -----------------------\n\n # Initialize the decoder.\n # ADD CODE BELOW \n # -----------------------\n decoder_params = ...\n # -----------------------\n\n # Merge all the parameters into a single data structure.\n params = hk.data_structures.merge(\n context_projector_params, prior_params, encoder_params, decoder_params)\n return params\n \n def sample(self, params, prng, context, mean=True):\n prior_prng, decoder_prng = jax.random.split(prng)\n # Map the context.\n # ADD CODE BELOW \n # ----------------------- \n projected_context = ...\n # -----------------------\n\n # Get the conditional prior distribution, and take a sample from it.\n # ADD CODE BELOW \n # -----------------------\n conditional_prior = ...\n z = ...\n # -----------------------\n\n # Get the conditional output distribution.\n # ADD CODE BELOW \n # -----------------------\n output_distribution = ...\n # -----------------------\n if mean:\n return output_distribution.mean()\n else:\n return output_distribution.sample(seed=decoder_prng)\n\n def sample_prior(self, params, prng, context):\n projected_context = self._context_projector.apply(params, context)\n conditional_prior = self._prior.apply(params, projected_context)\n return conditional_prior.sample(seed=prng)\n\n def decode(self, params, prng, z, context, mean=True):\n projected_context = self._context_projector.apply(params, context)\n output_distribution = self._decoder.apply(params, z, projected_context)\n if mean:\n return output_distribution.mean()\n else:\n return 
output_distribution.sample(seed=prng)\n\n def reconstruct(self, params, prng, x, context, mean=True):\n posterior_prng, decoder_prng = jax.random.split(prng)\n # Map the context.\n # ADD CODE BELOW \n # ----------------------- \n projected_context = ...\n # -----------------------\n \n # Get the conditional posterior distribution p(z|x, context), and sample\n # from it.\n # ADD CODE BELOW\n # -----------------------\n posterior = ...\n z = ...\n # -----------------------\n\n # Get the conditional output distribution. \n # ADD CODE BELOW\n # -----------------------\n output_distribution = ...\n # -----------------------\n if mean:\n return output_distribution.mean()\n else:\n return output_distribution.sample(seed=decoder_prng)\n\n def stochastic_elbo(self, params, prng, x, context):\n \"\"\"Single-sample estimate of ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)).\"\"\"\n projected_context = self._context_projector.apply(params, context) \n z, kl = self._z_and_kl(params, prng, x, projected_context)\n output_distribution = self._decoder.apply(params, z, projected_context)\n # Sum reduce over signal domain (but not over batch!)\n log_p_x = _batch_sum(output_distribution.log_prob(x))\n elbo = log_p_x - kl\n # Assemble all the stats that we want to log in an extra output dictionary.\n extra_outputs = dict(kl=kl, log_p=log_p_x, elbo=elbo)\n return elbo, extra_outputs\n\n def kl(self, params, prng, x, context):\n projected_context = self._context_projector.apply(params, context)\n _, kl = self._z_and_kl(params, prng, x, projected_context)\n return kl\n\n def _z_and_kl(self, params, prng, x, projected_context):\n \"\"\"Sample z and estimate KL(q(z|x) || p(z)) = E_q[log q(z|x) - log p(z)].\"\"\"\n\n # Get the conditional prior, given the pre-projected context\n # ADD CODE BELOW\n # -----------------------\n prior = ...\n # -----------------------\n \n # Get the conditional posterior distribution p(z|x, context), and sample\n # from it. 
\n # ADD CODE BELOW\n #\u00a0-----------------------\n posterior = ...\n z = ...\n #\u00a0-----------------------\n\n # Compute the posterior log probability for the sampled z.\n # Note _batch_sum: what is this doing?\n log_q_z = _batch_sum(posterior.log_prob(z))\n \n # Compute prior log probability for the sampled z.\n # ADD CODE BELOW\n #\u00a0-----------------------\n log_p_z = ...\n #\u00a0-----------------------\n\n # Compute the KL (see formula)\n # ADD CODE BELOW\n #\u00a0-----------------------\n kl = ...\n #\u00a0-----------------------\n return z, kl\n\n```\n\nHere we define all the utilities we will need to implement Haiku modules for the prior, encoder and decoder required to instantiate a VAE.\n\n\n```\ndef _make_positive(x):\n \"\"\"Transforms elementwise unconstrained inputs into positive outputs.\"\"\"\n # The offset is such that the output will be equal to 1 whenever the input is\n # equal to 0.\n offset = jnp.log(jnp.exp(1.) - 1.)\n return jax.nn.softplus(x + offset)\n \n\nclass DiagonalNormal(tfp.distributions.Normal):\n \"\"\"Normal distribution with diagonal covariance.\"\"\"\n\n def __init__(self, params, name='diagonal_normal'):\n if params.shape[-1] != 2:\n raise ValueError(\n f'The last dimension of `params` must be 2, got {params.shape[-1]}.')\n super().__init__(\n loc=params[..., 0], scale=_make_positive(params[..., 1]), name=name)\n\n\nclass ConditionalPrior(hk.Module):\n \"\"\"A prior distribution whose parameters are computed by a conditioner.\"\"\"\n\n def __init__(self, map_ctor, distribution, name='prior_net'):\n super().__init__(name=name) \n # This function will map: (bs,) + context shape --> (bs, num_latents, 2)\n self._map = map_ctor()\n self._distribution = distribution\n\n def __call__(self, context):\n return self._distribution(self._map(context))\n\n\nclass ConditionalEncoder(hk.Module):\n \"\"\"A posterior distribution whose parameters are computed by a conditioner.\"\"\"\n\n def __init__(self, map_ctor, distribution, 
name='posterior_net'):\n super().__init__(name=name)\n # This function will map inputs to the posterior distribution parameters.\n self._map = map_ctor() \n self._distribution = distribution\n\n def __call__(self, x, context): \n # We assume that x is an image and the context is flat. \n chex.assert_rank(x, 4)\n chex.assert_rank(context, 2) \n \n # Tile the context so that it can be concatenated to the input tensor x.\n # ADD CODE BELOW \n # -----------------------\n bs, height, width, _ = x.shape\n context = ... \n x_and_context = ...\n # -----------------------\n\n # Compute the posterior q(z|x, context) \n return self._distribution(self._map(x_and_context))\n\n\nclass ConditionalDecoder(hk.Module):\n \"\"\"An output distribution whose parameters are computed by a conditioner.\"\"\"\n\n def __init__(self, map_ctor, distribution, name='output_net'):\n super().__init__(name=name)\n # This function will map (z, context) to the output distribution\n # parameters.\n self._map = map_ctor()\n self._distribution = distribution\n\n def __call__(self, z, context):\n chex.assert_equal_shape_prefix([z, context], -1)\n # Concatenate the context to the latents \n # ADD CODE BELOW \n # -------------------------------------------------------------\n z_and_context = ...\n # -------------------------------------------------------------\n return self._distribution(self._map(z_and_context))\n\n\n```\n\nHere we define how to assemble the VAE.\n\n\n```\n# -- Flat model \ndef get_model(num_latents=50, num_hiddens=500, data_shape=[28, 28, 1]):\n \"\"\"Creates a fully-connected VAE model.\"\"\"\n\n def _build_map_to_latent_dist_params_net():\n return hk.Sequential([\n hk.Flatten(),\n hk.nets.MLP([num_hiddens, num_hiddens, num_latents * 2]),\n hk.Reshape([num_latents, 2]),\n ])\n\n def encoder_fn(*args, **kwargs):\n return ConditionalEncoder(\n map_ctor=_build_map_to_latent_dist_params_net,\n distribution=DiagonalNormal)(*args, **kwargs)\n \n def prior_fn(*args, **kwargs):\n 
return ConditionalPrior(\n map_ctor=_build_map_to_latent_dist_params_net,\n distribution=DiagonalNormal)(*args, **kwargs)\n\n def _build_map_to_output_dist_params_net():\n return hk.Sequential([\n hk.Flatten(),\n hk.nets.MLP([num_hiddens, num_hiddens, np.prod(data_shape)]),\n hk.Reshape(data_shape),\n ])\n\n def decoder_fn(*args, **kwargs):\n return ConditionalDecoder(\n map_ctor=_build_map_to_output_dist_params_net,\n distribution=tfp.distributions.Bernoulli)(*args, **kwargs) \n \n def _concatenate(tensors):\n proj = hk.nets.MLP([num_hiddens, num_latents])\n data = []\n for x in jax.tree_leaves(tensors):\n if x.ndim > 2:\n x = proj(hk.Flatten()(x))\n # Note: this is not commutative!\n # What could we do to make this projection commutative?\n data.append(x)\n data = jnp.concatenate(data, axis=-1)\n return data\n\n def proj_fn(*args):\n \"\"\"Maps context inputs to conditioning tensor.\"\"\"\n return hk.to_module(_concatenate)('context_projector')(*args)\n\n return VAE(encoder=encoder_fn,\n decoder=decoder_fn,\n prior=prior_fn,\n context_projector=proj_fn)\n\n\n```\n\n# Training\n\n## Shared hyper-parameters\n\n\n```\n# @title Shared hyper-parameters\n\n# Model hyper-parameters: you can play around with these and see what changes!\nBATCH_SIZE = 128 #@param {type:'integer'}\nVIS_BATCH_SIZE = 64 #@param {type:'integer'}\nNUM_LATENTS = 30 #@param {type:'integer'}\nNUM_HIDDENS = 50 #@param {type:'integer'}\nTRAINING_STEPS = 1000 #@param {type:'integer'}\nUSE_HARD_DATA = False #@param {type:'boolean'}\nNUM_DEV = len(jax.local_devices())\n\n# Plot configs: no need to change them \nPLOT_REFRESH_EVERY = 50 #@param {type:'integer'}\nPLOT_EVERY = 5 #@param {type:'integer'}\n```\n\n\n```\nmodel = get_model(num_latents=NUM_LATENTS, num_hiddens=NUM_HIDDENS)\n\n# Once we have trained a model, we can explore its latent space by interpolating\n# (and decoding) between latents.\ndef get_latent_interpolations(model, params, prng_key, num_steps, context):\n start_key, stop_key, 
decoder_key = jax.random.split(prng_key, 3)\n z_start = model.sample_prior(params, start_key, context)\n z_stop = model.sample_prior(params, stop_key, context)\n # To ensure that the interpolation is still likely under the Gaussian prior,\n # we use Gaussian interpolation - rather than linear interpolation.\n a = jnp.linspace(.0, 1.0, num_steps)\n a = jnp.expand_dims(a, axis=1)\n interpolations = (jnp.sqrt(a) * z_start + jnp.sqrt(1 - a) * z_stop)\n context = jax.tree_map(\n lambda x: jnp.tile(x, [num_steps] + [1] * (x.ndim-1)), context)\n samples_from_interpolations = model.decode(\n params, decoder_key, interpolations, context, mean=True)\n return samples_from_interpolations\n```\n\n\n```\n# These are utilities for shape manipulation to facilitate training using pmap.\n@jax.jit\ndef format_data(data):\n return jax.tree_map(\n lambda x: x.reshape((NUM_DEV, BATCH_SIZE) + x.shape[1:]), data)\n\ndef setup_for_distributed_training(params, prng_key, opt_state, num_dev):\n broadcast = lambda x: jnp.broadcast_to(x, (num_dev,) + x.shape)\n (params, opt_state) = jax.tree_map(broadcast, (params, opt_state))\n prng_key = jax.random.split(prng_key, num_dev)\n return params, prng_key, opt_state\n\nget_slice = lambda x: jax.tree_map(lambda x: x[0], x)\n\ndef reconstruct_and_sample(params, prng_key, model, data):\n prng_key = prng_key[0]\n params = get_slice(params)\n sample_key, reconstruction_key, prng = jax.random.split(prng_key, 3)\n reconstruction = model.reconstruct(\n params, reconstruction_key, data.target, data.context)\n sample = model.sample(params, sample_key, data.context)\n return data.target, reconstruction, sample\n```\n\n# Training a VAE optimizing ELBO\n\n\n```\ndef get_elbo_loss_fn(model):\n def elbo_loss_fn(params, prng, data):\n elbo, stats = jax.tree_map(\n jnp.mean, model.stochastic_elbo(params, prng, data.target, data.context)) \n return -elbo, stats\n return elbo_loss_fn\n\n```\n\n\n```\nelbo_loss_fn = 
get_elbo_loss_fn(model)\n\n# Initialize the model\nprng_key = jax.random.PRNGKey(0)\nparams = model.init_params(prng_key, get_dummy_data(hard=USE_HARD_DATA))\n\n# Instantiate and initialize the optimizer\nelbo_optimizer = tx.adam(1e-2)\nopt_state = elbo_optimizer.init(params)\n\n# Set up the optimizer state and params for distributed training \nparams, prng_key, opt_state = setup_for_distributed_training(\n params, prng_key, opt_state, num_dev=NUM_DEV)\n\ndataset = get_dataset(\n batch_size=BATCH_SIZE, num_dev=NUM_DEV, hard=USE_HARD_DATA)\n\n# Define and pmap the update function\ndef elbo_update(params, prng_key, opt_state, data): \n loss_key, prng_key = jax.random.split(prng_key)\n\n # Use jax.value_and_grad to compute and return all the outputs and the\n # gradients for elbo_loss_fn (tip: what does the has_aux kwarg do?)\n # ADD CODE BELOW \n # ------------------------------------------------------------- \n loss_outputs, grads = ... \n # -------------------------------------------------------------\n\n # Reduce-mean the gradients across devices.\n # ADD CODE BELOW \n # ------------------------------------------------------------- \n grads = ...\n # ------------------------------------------------------------- \n\n # Perform an update step on the elbo_optimizer to produce parameter updates,\n # then apply the updates to the model params.\n # ADD CODE BELOW \n # ------------------------------------------------------------- \n raw_updates, opt_state = ...\n params = ...\n # ------------------------------------------------------------- \n return params, prng_key, opt_state, loss_outputs\nelbo_update = jax.pmap(elbo_update, axis_name='i', devices=jax.devices())\n\n\n# -- Set up logging and interactive plotting\nlosses = []\nkls = []\nlogps = []\n\nplot = PlotLosses(\n groups={\n 'log p(x)': ['log_p'], \n 'KL': ['kl'],\n 'negative ELBO': ['negative ELBO']}, \n outputs=[MatplotlibPlot(max_cols=3, after_subplot=custom_after_subplot)],\n 
step_names='Iterations')\n```\n\n\n```\nfor step in range(TRAINING_STEPS): \n params, prng_key, opt_state, stats = elbo_update(\n params, prng_key, opt_state, format_data(next(dataset)))\n\n elbo = stats[1]['elbo'].mean()\n kl = stats[1]['kl'].mean()\n log_p = stats[1]['log_p'].mean()\n\n if (step + 1) % PLOT_EVERY == 0:\n plot.update({\n 'negative ELBO': -elbo,\n 'kl': kl,\n 'log_p': log_p\n }, current_step=step)\n if (step + 1) % PLOT_REFRESH_EVERY == 0:\n plot.send()\n \n losses.append(elbo)\n kls.append(kl)\n logps.append(log_p)\n```\n\n## Visualize reconstructions and samples\n\n\n\n\n\n```\ntargets, reconstructions, samples = reconstruct_and_sample(\n params, prng_key, model,\n next(get_dataset(batch_size=VIS_BATCH_SIZE, num_dev=1, hard=USE_HARD_DATA,\n data_split='test')))\n\nsz = 6\n_ = plt.figure(figsize=((3*sz, 1*sz)))\nplt.subplot(131)\nimshow(gallery(targets), 'Targets')\nplt.subplot(132)\nimshow(gallery(reconstructions), 'Reconstructions')\nplt.subplot(133)\nimshow(gallery(samples), 'Conditional samples');\n```\n\n## Explore the latent space\n\n\n\n\n```\ndata = get_dummy_data(hard=USE_HARD_DATA)\nnum_steps = 7 * 7\nsamples_from_interpolations = get_latent_interpolations(\n model, get_slice(params), jax.random.PRNGKey(1), num_steps, data.context)\n\nsz = 6\n_ = plt.figure(figsize=((2*sz, sz)))\nplt.subplot(121)\nimshow(data.target[0,...], 'Label example')\nplt.subplot(122)\nimshow(gallery(samples_from_interpolations), 'Latent space interpolation')\n```\n\n# Training a VAE with KL annealing\n\nStochastic Variational Inference can get stuck in local optima, since there is a certain tension between the likelihood and the KL terms. To ease the optimization, we could use a schedule for $\\beta$ where the KL coefficient is slowly annealed from $0$ to $1$ throughout training. This corresponds to weighting the reconstruction term more heavily at the beginning of training and then moving towards the fully stochastic variational objective. 
The modified objective becomes: \n\n\n
\n\n\n$$ \\mathcal{L}(x|c) = - \\Big( \\mathbb{E}_{z \\sim q(z|x, c)} \\big[\\log p_\\theta(x | z, c)\\big] - \\beta \\ \\mathbb{KL}\\big(q_\\phi(z | x, c) || p(z|c)\\big) \\Big).$$\n
\n\nwhere the hyper-parameter $\\beta$ is annealed throughout optimization.\n\n**Tasks:**\n- Modify the training objective with a linear KL annealing schedule.\n- Look at the loss terms, what differences do you notice in their behaviours compared to the standard VAE objective?\n- Look at the reconstructions and samples during training. What do you see?\n\n
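A linear annealing schedule like the one the tasks ask for can be written directly in plain Python (a reference sketch; the exercise cell itself asks for optax's polynomial schedule):

```python
def linear_beta_schedule(step, transition_steps, beta_min=0.0, beta_max=1.0):
    """Linearly anneals the KL coefficient beta over transition_steps updates."""
    frac = min(max(step / transition_steps, 0.0), 1.0)
    return beta_min + frac * (beta_max - beta_min)

# beta starts at beta_min, reaches beta_max at transition_steps, then stays there.
```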
\n\nFor more information, see: \n* [Bowman et al, (2015), Generating Sentences from a Continuous Space, (Section 3.1)](https://arxiv.org/abs/1511.06349)\n* [S\u00f8nderby et al, (2016), Ladder Variational Autoencoders, (Section 2.3)](https://arxiv.org/abs/1602.02282)\n* [Burgess et al, (2018), Understanding disentangling in \u03b2-VAE](https://arxiv.org/abs/1804.03599)\n\n\n\n\n```\ndef get_beta_vae_loss_fn(model): \n def beta_vae_loss_fn(params, prng, beta, data):\n _, loss_components = jax.tree_map(\n jnp.mean, model.stochastic_elbo(\n params, prng, data.target, data.context))\n # The beta-VAE training loss is a function of the beta hyper-parameter.\n # Implement the loss using the components returned by the stochastic_elbo\n # method of the model class.\n # ADD CODE BELOW \n # ------------------------------------------------------------- \n loss = ... \n # ------------------------------------------------------------- \n return loss, loss_components\n return beta_vae_loss_fn\n```\n\n\n```\nprng_key = jax.random.PRNGKey(0)\nparams = model.init_params(prng_key, get_dummy_data(hard=USE_HARD_DATA))\n\nbeta_vae_loss_fn = get_beta_vae_loss_fn(model)\n\n# Set up the optimizer and model like you did for the ELBO optimization\n# ADD CODE BELOW \n# ------------------------------------------------------------- \nbeta_vae_optimizer = ... \nopt_state = ...\nparams, prng_key, opt_state = ...\ndataset = ...\n \ndef beta_vae_update(params, prng_key, opt_state, beta, data): \n ... \n return params, prng_key, opt_state, beta_vae_loss\n\nbeta_vae_update = ...\n# ------------------------------------------------------------- \n\n\n# During the optimization of the beta-VAE we will be scheduling the value of \n# beta. 
Use optax's polynomial schedule to set up a linear schedule, e.g.\n# from 0.01 to 1.0 over TRAINING_STEPS//2 steps.\n# ADD CODE BELOW \n# ------------------------------------------------------------- \nbeta_vae_current_step = ...\ninitial_beta, final_beta = ...\nbeta_schedule = ...\n# ------------------------------------------------------------- \n\nbeta_vae_losses = []\nbeta_vae_kls = []\nbeta_vae_logps = []\n\nbeta_vae_groups = {\n 'log p(x)': ['log_p'], \n 'kl': ['kl'],\n 'negative ELBO': ['negative ELBO'],\n 'beta': ['beta']\n}\n\nbeta_vae_plot = PlotLosses(\n groups=beta_vae_groups, \n outputs=[MatplotlibPlot(max_cols=4, after_subplot=custom_after_subplot)], \n step_names='Iterations')\n```\n\n\n```\nbroadcast = lambda x: jnp.broadcast_to(x, (NUM_DEV,) + x.shape)\n\nfor _ in range(TRAINING_STEPS):\n beta_vae_current_step += 1\n beta = beta_schedule(beta_vae_current_step)\n # Since beta varies during training, it cannot be statically compiled into the\n # update function and needs to be broadcast prior to being passed to the \n # beta_vae_update.\n params, prng_key, opt_state, stats = beta_vae_update(\n params, prng_key, opt_state, broadcast(beta), format_data(next(dataset)))\n \n elbo = stats[1]['elbo'].mean()\n kl = stats[1]['kl'].mean()\n log_p = stats[1]['log_p'].mean()\n\n if (beta_vae_current_step + 1) % PLOT_EVERY == 0:\n beta_vae_plot.update({\n 'negative ELBO': -elbo,\n 'kl': kl,\n 'log_p': log_p,\n 'beta': beta\n }, current_step=beta_vae_current_step)\n if beta_vae_current_step % PLOT_REFRESH_EVERY == 0:\n beta_vae_plot.send()\n \n beta_vae_losses.append(elbo)\n beta_vae_kls.append(kl)\n beta_vae_logps.append(log_p)\n```\n\n## Visualize reconstructions and samples\n\n\n```\ntargets, reconstructions, samples = reconstruct_and_sample(\n params, prng_key, model,\n next(get_dataset(batch_size=VIS_BATCH_SIZE, num_dev=1, hard=USE_HARD_DATA,\n data_split='test')))\n\nsz = 6\n_ = plt.figure(figsize=((3*sz, 
1*sz)))\nplt.subplot(131)\nimshow(gallery(targets), 'Targets')\nplt.subplot(132)\nimshow(gallery(reconstructions), 'Reconstructions')\nplt.subplot(133)\nimshow(gallery(samples), 'Conditional samples');\n```\n\n## Explore the latent space\n\n\n\n```\ndata = get_dummy_data(hard=USE_HARD_DATA)\nnum_steps = 7 * 7\nsamples_from_interpolations = get_latent_interpolations(\n model, get_slice(params), jax.random.PRNGKey(1), num_steps, data.context)\n\nsz = 6\n_ = plt.figure(figsize=((2*sz, sz)))\nplt.subplot(121)\nimshow(data.target[0,...], 'Label example')\nplt.subplot(122)\nimshow(gallery(samples_from_interpolations), 'Latent space interpolation')\n```\n\n# Constrained optimization\n\nConstrained optimization can be used to dynamically tune the relative weight of the likelihood and KL terms during optimization. \nThis removes the need to manually tune $\\beta$, or to create an optimization schedule, which can be problem specific.\n\nThe objective now becomes:\n\n\\begin{equation}\n \\text{minimize } \\mathbb{E}_{p^*(x)} KL(q(z|x)||p(z)) \\text{ such that } \\mathbb{E}_{p^*(x)} \\mathbb{E}_{q(z|x)} \\log p_\\theta(x|z) > \\kappa \n\\end{equation}\n\nThis can be solved using Lagrange multipliers. The objective then becomes:\n\n\\begin{equation}\n \\text{minimize } \\mathbb{E}_{p^*(x)} KL(q(z|x)||p(z)) + \\lambda (\\mathbb{E}_{p^*(x)} \\mathbb{E}_{q(z|x)} (\\kappa - \\log p_\\theta(x|z)))\n\\end{equation}\n\n\nThe differences compared to KL annealing are:\n\n * $\\lambda$ is a learned parameter: it will be learned using stochastic gradients, like the network parameters. The difference is that the multiplier has to solve a maximization problem. You can see this intuitively: the gradient with respect to $\\lambda$ in the objective above is $\\mathbb{E}_{p^*(x)} \\mathbb{E}_{q(z|x)} (\\kappa - \\log p_\\theta(x|z))$. If $\\mathbb{E}_{p^*(x)} \\mathbb{E}_{q(z|x)} (\\kappa - \\log p_\\theta(x|z)) > 0$, the constraint is not being satisfied, so the value of the multiplier needs to increase. This is done by performing gradient ascent, instead of gradient descent. Note that for $\\lambda$ to be a valid Lagrange multiplier in a minimization problem, it has to be positive.\n * The practitioner has to specify the hyperparameter $\\kappa$, which determines the reconstruction quality of the model.\n * The coefficient is in front of the likelihood term, not the KL term. This is mainly for convenience, as it is easier to specify the hyperparameter $\\kappa$ for the likelihood (reconstruction loss).\n\nFor more assumptions made by this method, see the Karush\u2013Kuhn\u2013Tucker conditions.\n\nFor more information, see: \n * http://bayesiandeeplearning.org/2018/papers/33.pdf\n\n\n## Lagrange multipliers for constrained optimization\n\nThis is a reimplementation in JAX of the [constrained optimization tools](https://github.com/deepmind/sonnet/blob/master/sonnet/python/modules/optimization_constraints.py) found in Sonnet v1.\n\nNote the peculiar implementation of the `clip` function, which is used to limit the valid range of the multipliers; this is used, for example, to force their values to be non-negative, or to limit their maximum values to control the interaction of the loss components during training.\n\nIn this implementation, the gradients flowing through the clipping function are not zeroed out beyond the thresholds if they would move the inputs back towards the valid range. 
This is particularly useful when setting a maximum valid value, since it allows for the multipliers to become smaller once the constraints are (eventually) satisfied.\n\n\n```\n@functools.partial(jax.custom_vjp, nondiff_argnums=(1, 2, 3))\ndef clip(x, min_val, max_val, maximize=True):\n del maximize\n return jax.tree_map(lambda e: jnp.clip(e, min_val, max_val), x)\n\n\ndef clip_fwd(x, min_val, max_val, maximize):\n return clip(x, min_val, max_val, maximize), x\n\n\ndef clip_bwd(min_val, max_val, maximize, x, co_tangents):\n zeros = jax.tree_map(jnp.zeros_like, co_tangents)\n if min_val is not None:\n get_mask = lambda x, v, t: (x < v) & (t < 0 if maximize else t > 0)\n mask = jax.tree_multimap(get_mask, x, min_val, co_tangents)\n co_tangents = jax.tree_multimap(jnp.where, mask, zeros, co_tangents)\n if max_val is not None:\n get_mask = lambda x, v, t: (x > v) & (t > 0 if maximize else t < 0)\n mask = jax.tree_multimap(get_mask, x, max_val, co_tangents)\n co_tangents = jax.tree_multimap(jnp.where, mask, zeros, co_tangents)\n return co_tangents,\n\n\nclip.defvjp(clip_fwd, clip_bwd)\n\n\nclass LagrangeMultiplier(hk.Module):\n \"\"\"Lagrange Multiplier module for constrained optimization.\"\"\"\n\n def __init__(self,\n shape=(),\n initializer=1.0,\n maximize=True,\n valid_range=None,\n name='lagrange_multiplier'):\n super().__init__(name=name)\n self._shape = shape\n if callable(initializer):\n self._initializer = initializer\n else:\n self._initializer = lambda *args: jnp.ones(*args) * initializer\n self._maximize = maximize\n self._valid_range = valid_range if valid_range is not None else (0.0, None)\n assert self._valid_range[0] >= 0\n\n def __call__(self):\n lag_mul = hk.get_parameter('w', self._shape, init=self._initializer)\n lag_mul = clip(lag_mul, *self._valid_range, maximize=self._maximize)\n return lag_mul\n```\n\n## Training a VAE using GECO\n\n\n```\ndef get_geco_loss(prng, model, kappa, valid_range, init_lambda):\n\n def constraint_term(x, target): \n 
lag_mul = LagrangeMultiplier(\n shape=x.shape, valid_range=valid_range, initializer=init_lambda)()\n return jnp.sum(lag_mul * (x - target)), lag_mul\n constraint_term = _transform(constraint_term)\n lagmul_params = constraint_term.init(prng, jnp.ones(()), jnp.ones(()))\n\n def geco_loss(params, prng, data):\n \"\"\"Loss using constrained optimization as in GECO.\"\"\"\n # Compute the KL term (make sure to average across the batch!)\n # ADD CODE BELOW \n # -----------------------\n kl = ... \n # -----------------------\n\n # Reconstruct the inputs and compute the reconstruction error (e.g. MSE)\n # ADD CODE BELOW \n # ----------------------- \n reconstruction = ...\n reconstruction_err = ...\n # -----------------------\n\n # If you didn't use MSE before, you need to adjust the kappa term in the\n # constraint definition.\n constraint_satisfaction, lag_mul = constraint_term.apply(\n params, reconstruction_err, np.prod(data.target.shape[1:]) * kappa**2)\n loss = constraint_satisfaction + kl\n metrics = {\n 'loss': loss, \n 'mse': jnp.mean((reconstruction - data.target)**2),\n 'kl': kl,\n 'lag_mul': lag_mul}\n return loss, metrics\n\n return geco_loss, lagmul_params\n```\n\n\n```\nprng_key = jax.random.PRNGKey(0)\nkappa = 0.18\nvalid_range = (0, 4.0)\ninit_lambda = 2.0\n\nparams = model.init_params(prng_key, get_dummy_data(hard=USE_HARD_DATA))\ngeco_loss_fn, lagmul_params = get_geco_loss(\n prng_key, model, kappa, valid_range, init_lambda=init_lambda)\n```\n\n## Using multiple optimizers jointly\n\nWe will now see how we can optimize joinlty the model parameters `params` and the Lagrange multipliers `lagmul_params`.\n\nWe want to update the Lagrange multipliers via stochastic gradient *ascent*; for example we could consider using the logic in `tx.sgd` with a *negative* learning rate. 
The model params will be instead optimized using ADAM, just like in the previous cells.\n\nIn the next cell you will find a handy wrapper to capture multiple optimizers into a single optax object. \n\nWhat is left to do is:\n\n1. Merge all the parameters into a single tree; you can do so using the `hk.data_structures.merge` utility in Haiku:\n\n ```\n params = hk.data_structures.merge(params, lagmul_params)\n ```\n2. Specify how to filter the resulting `params` structure to select which variables should be mapped to which optimizer. Tip: you can filter the Lagrange multipliers with:\n ```\n multiplier_filter = lambda m, n, p: 'lagrange' in m\n ```\n and the model variables with:\n ```\n model_params_filter = lambda m, n, p: 'lagrange' not in m\n```\n\n\n```\ndef multi_opt(**filter_optimizer_map):\n \"\"\"Wraps multiple optimizers within an object with the optax interface.\n\n Args:\n **filter_optimizer_map: kwargs used to map optimizer names to\n (predicate, optimizer) tuples, where predicate is a function which will be\n passed to haiku.data_structure.filter in order to select which variables\n are to be updated by the corresponding optimizer.\n\n Returns:\n An optax.InitUpdate tuple.\n \"\"\"\n def filter_(predicate, params):\n if params is None:\n return None\n return hk.data_structures.filter(predicate, params)\n\n merge_ = hk.data_structures.merge\n\n def _init(params):\n opt_state = dict()\n for opt_name, (predicate, opt) in filter_optimizer_map.items():\n opt_state[opt_name] = opt.init(filter_(predicate, params))\n return opt_state\n\n def _update(updates, state, params=None):\n new_updates = {}\n new_state = dict()\n for opt_name, (predicate, opt) in filter_optimizer_map.items():\n update, new_state[opt_name] = opt.update(\n filter_(predicate, updates), state[opt_name],\n filter_(predicate, params))\n new_updates = merge_(new_updates, update)\n return new_updates, new_state\n\n return tx.InitUpdate(_init, _update)\n```\n\n\n```\n# Merge them using 
hk.data_structures.merge\n# function.\n# ADD CODE BELOW \n# -----------------------\nparams = ... \n# -----------------------\n\n# Optimize the Lagrange multipliers via stochastic gradient ascent.\n# Optimize the model params will be optimized using ADAM.\n# ADD CODE BELOW \n# -----------------------\ngeco_optimizer = ...\nopt_state = ...\n# -----------------------\n\nparams, prng_key, opt_state = setup_for_distributed_training(\n params, prng_key, opt_state, num_dev=NUM_DEV)\n\ndataset = get_dataset(\n batch_size=BATCH_SIZE, num_dev=NUM_DEV, hard=USE_HARD_DATA)\n\ndef geco_update(params, prng_key, opt_state, data):\n loss_key, prng_key = jax.random.split(prng_key)\n geco_loss, grads = jax.value_and_grad(\n geco_loss_fn, has_aux=True)(params, loss_key, data)\n grads = jax.lax.pmean(grads, axis_name='i')\n raw_updates, opt_state = geco_optimizer.update(grads, opt_state)\n params = tx.apply_updates(params, raw_updates)\n return params, prng_key, opt_state, geco_loss\ngeco_update = jax.pmap(geco_update, axis_name='i', devices=jax.devices())\n\ngeco_current_step = 0\nmses = []\nkls = []\nlag_muls = []\n\ngeco_groups = {\n 'mse': ['mse'],\n 'kl': ['kl'],\n 'lag_mul': ['lag_mul'],\n}\n\ngeco_plot = PlotLosses(\n groups=geco_groups,\n outputs=[MatplotlibPlot(max_cols=4, after_subplot=custom_after_subplot)], \n step_names='Iterations')\n\n```\n\n\n```\nfor _ in range(TRAINING_STEPS):\n geco_current_step += 1\n params, prng_key, opt_state, stats = geco_update(\n params, prng_key, opt_state, format_data(next(dataset)))\n mse = stats[1]['mse'].mean()\n kl = stats[1]['kl'].mean()\n lag_mul = stats[1]['lag_mul'].mean()\n\n geco_plot.update({\n 'mse': mse,\n 'kl': kl,\n 'lag_mul': lag_mul\n })\n if geco_current_step % PLOT_REFRESH_EVERY == 0:\n geco_plot.send()\n \n mses.append(mse)\n kls.append(kl) \n lag_muls.append(lag_mul)\n```\n\n## Visualize reconstructions and samples\n\n\n```\ntargets, reconstructions, samples = reconstruct_and_sample(\n params, prng_key, model,\n 
next(get_dataset(batch_size=VIS_BATCH_SIZE, num_dev=1, hard=USE_HARD_DATA,\n data_split='test')))\n\nsz = 6\n_ = plt.figure(figsize=((3*sz, 1*sz)))\nplt.subplot(131)\nimshow(gallery(targets), 'Targets')\nplt.subplot(132)\nimshow(gallery(reconstructions), 'Reconstructions')\nplt.subplot(133)\nimshow(gallery(samples), 'Conditional samples');\n```\n\n## Explore the latent space\n\n\n```\ndummy_data = get_dummy_data(hard=USE_HARD_DATA)\nnum_steps = 7 * 7\nsamples_from_interpolations = get_latent_interpolations(\n model, get_slice(params), jax.random.PRNGKey(0), num_steps, dummy_data.context)\n\nsz = 6\n_ = plt.figure(figsize=((2*sz, sz)))\nplt.subplot(121)\nimshow(dummy_data.target[0,...], 'Label example')\nplt.subplot(122)\nimshow(gallery(samples_from_interpolations), 'Latent space interpolation')\n```\n", "meta": {"hexsha": "2d61b02fb7391c3e4701e3c44c884b64956b4a0d", "size": 91533, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "VAE_Tutorial_Start.ipynb", "max_stars_repo_name": "linker81/tutorials2021", "max_stars_repo_head_hexsha": "f3479e7510545e4ef94aa3795683c8db7039f786", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "VAE_Tutorial_Start.ipynb", "max_issues_repo_name": "linker81/tutorials2021", "max_issues_repo_head_hexsha": "f3479e7510545e4ef94aa3795683c8db7039f786", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "VAE_Tutorial_Start.ipynb", "max_forks_repo_name": "linker81/tutorials2021", "max_forks_repo_head_hexsha": "f3479e7510545e4ef94aa3795683c8db7039f786", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.7956937799, "max_line_length": 
733, "alphanum_fraction": 0.524073285, "converted": true, "num_tokens": 16007, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.45713670203584295, "lm_q2_score": 0.23370635691404026, "lm_q1q2_score": 0.10683575324449598}} {"text": "\n\n\n```python\nname = \"Daniel Silva Lopes da Costa\" # write YOUR NAME\n\nhonorPledge = \"I affirm that I have not given or received any unauthorized \" \\\n \"help on this assignment, and that this work is my own.\\n\"\n\n\nprint(\"\\nName: \", name)\nprint(\"\\nHonor pledge: \", honorPledge)\n```\n\n \n Name: Daniel Silva Lopes da Costa\n \n Honor pledge: I affirm that I have not given or received any unauthorized help on this assignment, and that this work is my own.\n \n\n\n# MAC0460 / MAC5832 (2021)\n
\n\n# EP2: Linear regression - analytic solution\n\n### Objectives:\n\n- to implement and test the analytic solution for the linear regression task (see, for instance, Slides of Lecture 03 and Lecture 03 of *Learning from Data*)\n- to understand the core idea (*optimization of a loss or cost function*) for parameter adjustment in machine learning\n
\n\n# Linear regression\n\nGiven a dataset $\\{(\\mathbf{x}^{(1)}, y^{(1)}), \\dots ,(\\mathbf{x}^{(N)}, y^{(N)})\\}$ with $\\mathbf{x}^{(i)} \\in \\mathbb{R}^{d}$ and $y^{(i)} \\in \\mathbb{R}$, we would like to approximate the unknown function $f:\\mathbb{R}^{d} \\rightarrow \\mathbb{R}$ (recall that $y^{(i)} =f(\\mathbf{x}^{(i)})$) by means of a linear model $h$:\n$$\nh(\\mathbf{x}^{(i)}; \\mathbf{w}, b) = \\mathbf{w}^\\top \\mathbf{x}^{(i)} + b\n$$\n\nNote that $h(\\mathbf{x}^{(i)}; \\mathbf{w}, b)$ is, in fact, an [affine transformation](https://en.wikipedia.org/wiki/Affine_transformation) of $\\mathbf{x}^{(i)}$. As commonly done, we will use the term \"linear\" to refer to an affine transformation.\n\nThe output of $h$ is a linear transformation of $\\mathbf{x}^{(i)}$. We use the notation $h(\\mathbf{x}^{(i)}; \\mathbf{w}, b)$ to make clear that $h$ is a parametric model, i.e., the transformation $h$ is defined by the parameters $\\mathbf{w}$ and $b$. We can view vector $\\mathbf{w}$ as a *weight* vector that controls the effect of each *feature* in the prediction.\n\nBy adding one component with value equal to 1 to the observations $\\mathbf{x}$ (an artificial coordinate), we have:\n\n$$\\tilde{\\mathbf{x}} = (1, x_1, \\ldots, x_d) \\in \\mathbb{R}^{1+d}$$\n\nand then we can simplify the notation:\n$$\nh(\\mathbf{x}^{(i)}; \\mathbf{w}) = \\hat{y}^{(i)} = \\mathbf{w}^\\top \\tilde{\\mathbf{x}}^{(i)}\n$$\n\nWe would like to determine the optimal parameters $\\mathbf{w}$ such that prediction $\\hat{y}^{(i)}$ is as closest as possible to $y^{(i)}$ according to some error metric. 
Adopting the *mean square error* as such metric we have the following cost function:\n\n\\begin{equation}\nJ(\\mathbf{w}) = \\frac{1}{N}\\sum_{i=1}^{N}\\big(\\hat{y}^{(i)} - y^{(i)}\\big)^{2}\n\\end{equation}\n\nThus, the task of determining a function $h$ that is closest to $f$ is reduced to the task of finding the values $\\mathbf{w}$ that minimize $J(\\mathbf{w})$.\n\n**Now we will explore this model, starting with a simple dataset.**\n\n\n### Auxiliary functions\n\n\n```python\n# some imports\nimport numpy as np\nimport time\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n```\n\n\n```python\n# An auxiliary function\ndef get_housing_prices_data(N, verbose=True):\n \"\"\"\n Generates artificial linear data,\n where x = square meter, y = house price\n\n :param N: data set size\n :type N: int\n \n :param verbose: param to control print\n :type verbose: bool\n :return: design matrix, regression targets\n :rtype: np.array, np.array\n \"\"\"\n cond = False\n while not cond:\n x = np.linspace(90, 1200, N)\n gamma = np.random.normal(30, 10, x.size)\n y = 50 * x + gamma * 400\n x = x.astype(\"float32\")\n x = x.reshape((x.shape[0], 1))\n y = y.astype(\"float32\")\n y = y.reshape((y.shape[0], 1))\n cond = min(y) > 0\n \n xmean, xsdt, xmax, xmin = np.mean(x), np.std(x), np.max(x), np.min(x)\n ymean, ysdt, ymax, ymin = np.mean(y), np.std(y), np.max(y), np.min(y)\n if verbose:\n print(\"\\nX shape = {}\".format(x.shape))\n print(\"y shape = {}\\n\".format(y.shape))\n print(\"X: mean {}, sdt {:.2f}, max {:.2f}, min {:.2f}\".format(xmean,\n xsdt,\n xmax,\n xmin))\n print(\"y: mean {:.2f}, sdt {:.2f}, max {:.2f}, min {:.2f}\".format(ymean,\n ysdt,\n ymax,\n ymin))\n return x, y\n```\n\n\n```python\n# Another auxiliary function\ndef plot_points_regression(x,\n y,\n title,\n xlabel,\n ylabel,\n prediction=None,\n legend=False,\n r_squared=None,\n position=(90, 100)):\n \"\"\"\n Plots the data points and the prediction,\n if there is one.\n\n :param x: design matrix\n 
:type x: np.array\n :param y: regression targets\n :type y: np.array\n :param title: plot's title\n :type title: str\n :param xlabel: x axis label\n :type xlabel: str\n :param ylabel: y axis label\n :type ylabel: str\n :param prediction: model's prediction\n :type prediction: np.array\n :param legend: param to control print legends\n :type legend: bool\n :param r_squared: r^2 value\n :type r_squared: float\n :param position: text position\n :type position: tuple\n \"\"\"\n fig, ax = plt.subplots(1, 1, figsize=(8, 8))\n line1, = ax.plot(x, y, 'bo', label='Real data')\n if prediction is not None:\n line2, = ax.plot(x, prediction, 'r', label='Predicted data')\n if legend:\n plt.legend(handles=[line1, line2], loc=2)\n ax.set_title(title,\n fontsize=20,\n fontweight='bold')\n if r_squared is not None:\n bbox_props = dict(boxstyle=\"square,pad=0.3\",\n fc=\"white\", ec=\"black\", lw=0.2)\n t = ax.text(position[0], position[1], \"$R^2 ={:.4f}$\".format(r_squared),\n size=15, bbox=bbox_props)\n\n ax.set_xlabel(xlabel, fontsize=20)\n ax.set_ylabel(ylabel, fontsize=20)\n plt.show()\n\n```\n\n### The dataset \n\nThe first dataset we will use is a toy dataset. We will generate $N=100$ observations with only one *feature* and a real value associated to each of them. We can view these observations as being pairs *(area of a real state in square meters, price of the real state)*. 
Our task is to construct a model that is able to predict the price of a real state, given its area.\n\n\n```python\nX, y = get_housing_prices_data(N=100)\n```\n\n \n X shape = (100, 1)\n y shape = (100, 1)\n \n X: mean 645.0, sdt 323.65, max 1200.00, min 90.00\n y: mean 44003.19, sdt 17195.94, max 79998.91, min 5441.34\n\n\n### Ploting the data\n\n\n```python\nplot_points_regression(X,\n y,\n title='Real estate prices prediction',\n xlabel=\"m\\u00b2\",\n ylabel='$')\n```\n\n### The solution\n\nGiven $f:\\mathbb{R}^{N\\times M} \\rightarrow \\mathbb{R}$ and $\\mathbf{A} \\in \\mathbb{R}^{N\\times M}$, we define the gradient of $f$ with respect to $\\mathbf{A}$ as:\n\n\\begin{equation*}\n\\nabla_{\\mathbf{A}}f = \\frac{\\partial f}{\\partial \\mathbf{A}} = \\begin{bmatrix}\n\\frac{\\partial f}{\\partial \\mathbf{A}_{1,1}} & \\dots & \\frac{\\partial f}{\\partial \\mathbf{A}_{1,m}} \\\\\n\\vdots & \\ddots & \\vdots \\\\\n\\frac{\\partial f}{\\partial \\mathbf{A}_{n,1}} & \\dots & \\frac{\\partial f}{\\partial \\mathbf{A}_{n,m}}\n\\end{bmatrix}\n\\end{equation*}\n\nLet $\\mathbf{X} \\in \\mathbb{R}^{N\\times d}$ be a matrix (sometimes also called the *design matrix*) whose rows are the observations of the dataset and let $\\mathbf{y} \\in \\mathbb{R}^{N}$ be the vector consisting of all values $y^{(i)}$ (i.e., $\\mathbf{X}^{(i,:)} = \\mathbf{x}^{(i)}$ and $\\mathbf{y}^{(i)} = y^{(i)}$). 
It can be verified that: \n\n\\begin{equation}\nJ(\\mathbf{w}) = \\frac{1}{N}(\\mathbf{X}\\mathbf{w} - \\mathbf{y})^{T}(\\mathbf{X}\\mathbf{w} - \\mathbf{y})\n\\end{equation}\n\nUsing basic matrix derivative concepts we can compute the gradient of $J(\\mathbf{w})$ with respect to $\\mathbf{w}$:\n\n\\begin{equation}\n\\nabla_{\\mathbf{w}}J(\\mathbf{w}) = \\frac{2}{N} (\\mathbf{X}^{T}\\mathbf{X}\\mathbf{w} -\\mathbf{X}^{T}\\mathbf{y}) \n\\end{equation}\n\nThus, when $\\nabla_{\\mathbf{w}}J(\\mathbf{w}) = 0$ we have \n\n\\begin{equation}\n\\mathbf{X}^{T}\\mathbf{X}\\mathbf{w} = \\mathbf{X}^{T}\\mathbf{y}\n\\end{equation}\n\nHence,\n\n\\begin{equation}\n\\mathbf{w} = (\\mathbf{X}^{T}\\mathbf{X})^{-1}\\mathbf{X}^{T}\\mathbf{y}\n\\end{equation}\n\nNote that this solution has a high computational cost. As the number of variables (*features*) increases, the cost for matrix inversion becomes prohibitive. See [this text](https://sgfin.github.io/files/notes/CS229_Lecture_Notes.pdf) for more details.\n\n# Exercise 1\nUsing only **NumPy** (a quick introduction to this library can be found [here](http://cs231n.github.io/python-numpy-tutorial/)), complete the two functions below. Recall that $\\mathbf{X} \\in \\mathbb{R}^{N\\times d}$; thus you will need to add a component of value 1 to each of the observations in $\\mathbf{X}$ before performing the computation described above.\n\nNOTE: Although the dataset above has data of dimension $d=1$, your code must be generic (it should work for $d\\geq1$)\n\n## 1.1. 
Weight computation function\n\n\n```python\ndef normal_equation_weights(X, y):\n \"\"\"\n Calculates the weights of a linear function using the normal equation method.\n You should add into X a new column with 1s.\n\n :param X: design matrix\n :type X: np.ndarray(shape=(N, d))\n :param y: regression targets\n :type y: np.ndarray(shape=(N, 1))\n :return: weight vector\n :rtype: np.ndarray(shape=(d+1, 1))\n \"\"\"\n \n # START OF YOUR CODE:\n X = np.hstack(( np.ones((X.shape[0],1)), X ) )\n #print(X.T)\n Xt = np.linalg.inv(np.dot(X.T, X))\n w = np.dot(np.dot(Xt, X.T), y)\n return w\n raise NotImplementedError(\"Function normal_equation_weights() is not implemented\")\n # END OF YOUR CODE\n \n```\n\n\n```python\n# test of function normal_equation_weights()\n\nw = 0 # this is not necessary\nw = normal_equation_weights(X, y)\nprint(\"Estimated w =\\n\", w)\n```\n\n Estimated w =\n [[10804.41058626]\n [ 51.47097573]]\n\n\n## 1.2. Prediction function\n\n\n```python\ndef normal_equation_prediction(X, w):\n \"\"\"\n Calculates the prediction over a set of observations X using the linear function\n characterized by the weight vector w.\n You should add into X a new column with 1s.\n\n :param X: design matrix\n :type X: np.ndarray(shape=(N, d))\n :param w: weight vector\n :type w: np.ndarray(shape=(d+1, 1))\n :param y: regression prediction\n :type y: np.ndarray(shape=(N, 1))\n \"\"\"\n \n # START OF YOUR CODE:\n X = np.hstack(( np.ones((X.shape[0],1)), X ) )\n Y = np.dot(X, w)\n return Y\n raise NotImplementedError(\"Function normal_equation_prediction() is not implemented\")\n # END OF YOUR CODE\n\n```\n\n## 1.3. 
Coefficient of determination\nWe can use the [$R^2$](https://pt.wikipedia.org/wiki/R%C2%B2) metric (Coefficient of determination) to evaluate how well the linear model fits the data.\n\n**Which $\ud835\udc45^2$ value would you expect to observe ?**\n\n\n```python\nfrom sklearn.metrics import r2_score\n\n# test of function normal_equation_prediction()\nprediction = normal_equation_prediction(X, w)\n\n# compute the R2 score using the r2_score function from sklearn\n# Replace 0 with an appropriate call of the function\n\n# START OF YOUR CODE:\nr_2 = r2_score(y, prediction)\n# END OF YOUR CODE\n\nplot_points_regression(X,\n y,\n title='Real estate prices prediction',\n xlabel=\"m\\u00b2\",\n ylabel='$',\n prediction=prediction,\n legend=True,\n r_squared=r_2)\n```\n\n## Additional tests\n\nLet us compute a prediction for $x=650$\n\n\n\n```python\n# Let us use the prediction function\nx = np.asarray([650]).reshape(1,1)\nprediction = normal_equation_prediction(x, w)\nprint(\"Area = %.2f Predicted price = %.4f\" %(x[0], prediction))\n```\n\n Area = 650.00 Predicted price = 44260.5448\n\n\n## 1.4. 
Processing time\n\nExperiment with different nummber of samples $N$ and observe how processing time varies.\n\nBe careful not to use a too large value; it may make jupyter freeze ...\n\n\n```python\n# Add other values for N\n# START OF YOUR CODE:\nN = [1600] \n# END OF YOUR CODE\n\nfor i in N:\n X, y = get_housing_prices_data(N=i)\n init = time.time()\n w = normal_equation_weights(X, y)\n prediction = normal_equation_prediction(X,w)\n init = time.time() - init\n \n print(\"\\nExecution time = {:.8f}(s)\\n\".format(init))\n\n\n# Tempos de algumas an\u00e1lises:\n# N Tempo(s)\n# 100 0.00376582\n# 200 0.00273514\n# 400 0.00603080\n# 800 0.00363159\n# 1600 0.00614405\n\n```\n\n \n X shape = (1600, 1)\n y shape = (1600, 1)\n \n X: mean 645.0, sdt 320.63, max 1200.00, min 90.00\n y: mean 44313.57, sdt 16460.92, max 81999.68, min 7103.05\n \n Execution time = 0.00114775(s)\n \n\n\n# Exercise 2\n\nLet us test the code with $\ud835\udc51>1$. \nWe will use the data we have collected in our first class. The [file](https://edisciplinas.usp.br/pluginfile.php/5982803/course/section/6115454/QT1data.csv) can be found on e-disciplinas. \n\nLet us try to predict the weight based on one or more features.\n\n\n```python\nimport pandas as pd\n\n# load the dataset\ndf = pd.read_csv('QT1data.csv')\ndf.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
SexAgeHeightWeightShoe numberTrouser number
0Female53154593640
1Male23170564038
2Female23167633740
3Male21178784040
4Female25153583638
\n
\n\n\n\n\n```python\ndf.describe()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
AgeHeightWeightShoe number
count130.000000130.000000130.000000130.000000
mean28.238462170.68461570.23846239.507692
std12.38704211.56849115.5348092.973386
min3.000000100.00000015.00000024.000000
25%21.000000164.25000060.00000038.000000
50%23.000000172.00000069.50000040.000000
75%29.000000178.00000080.00000041.000000
max62.000000194.000000130.00000046.000000
\n
\n\n\n\n\n```python\n# Our target variable is the weight\ny = df.pop('Weight').values\ny\n```\n\n\n\n\n array([ 59, 56, 63, 78, 58, 89, 68, 83, 70, 56, 65, 66, 78,\n 75, 47, 68, 65, 99, 80, 62, 60, 84, 91, 60, 15, 85,\n 56, 62, 69, 78, 60, 48, 66, 85, 101, 74, 52, 52, 80,\n 72, 75, 78, 61, 74, 70, 90, 66, 79, 80, 65, 90, 69,\n 58, 63, 62, 73, 55, 65, 62, 75, 48, 59, 74, 80, 51,\n 90, 58, 117, 77, 75, 56, 50, 67, 93, 70, 76, 85, 50,\n 86, 96, 63, 56, 90, 95, 130, 70, 83, 70, 64, 57, 54,\n 69, 53, 28, 62, 68, 73, 54, 75, 85, 62, 69, 55, 82,\n 84, 52, 64, 73, 86, 77, 64, 65, 55, 50, 98, 77, 51,\n 66, 83, 61, 80, 81, 76, 78, 70, 75, 72, 80, 90, 53])\n\n\n\n## 2.1. One feature ($d=1$)\n\nWe will use 'Height' as the input feature and predict the weight\n\n\n```python\nfeature_cols = ['Height']\nX = df.loc[:, feature_cols]\nX.shape\n```\n\n\n\n\n (130, 1)\n\n\n\nWrite the code for computing the following\n- compute the regression weights using $\\mathbf{X}$ and $\\mathbf{y}$\n- compute the prediction\n- compute the $R^2$ value\n- plot the regression graph (use appropriate values for the parameters of function plot_points_regression())\n\n\n```python\n# START OF YOUR CODE:\nw = normal_equation_weights(X, y)\nprediction = normal_equation_prediction(X, w)\nr_2 = r2_score(y, prediction)\nprint(\"Erro quadr\u00e1tico m\u00e9dio:\", r_2)\n\nplot_points_regression(X,\n y,\n title='Predi\u00e7\u00e3o de peso pela altura',\n xlabel=\"altura(cm)\",\n ylabel='peso(kg)',\n prediction=prediction,\n legend=True,\n r_squared=r_2)\n\n# END OF YOUR CODE\n\n```\n\n## 2.2 - Two input features ($d=2$)\n\nNow repeat the exercise with using as input the features 'Height' and 'Shoe number'\n\n- compute the regression weights using $\\mathbf{X}$ and $\\mathbf{y}$\n- compute the prediction\n- compute and print the $R^2$ value\n\nNote that our plotting function can not be used. 
There is no need to do plotting here.\n\n\n```python\n# START OF YOUR CODE:\nfeature_cols = ['Height', 'Shoe number']\nX = df.loc[:, feature_cols]\nw = normal_equation_weights(X, y)\nprediction = normal_equation_prediction(X, w)\nr_2 = r2_score(y, prediction)\nprint(\"Erro quadr\u00e1tico m\u00e9dio:\", r_2)\n# END OF YOUR CODE\n\n```\n\n Erro quadr\u00e1tico m\u00e9dio: 0.45381183096658595\n\n\n## 2.3 - Three input features ($d=3$)\n\nNow try with three features. There is no need to do plotting here.\n- compute the regression weights using $\\mathbf{X}$ and $\\mathbf{y}$\n- compute the prediction\n- compute and print the $R^2$ value\n\n\n```python\n# START OF YOUR CODE:\nfeature_cols = ['Height', 'Shoe number', 'Age' ]\nX = df.loc[:, feature_cols]\nw = normal_equation_weights(X, y)\nprediction = normal_equation_prediction(X, w)\nr_2 = r2_score(y, prediction)\nprint(\"Erro quadr\u00e1tico m\u00e9dio:\", r_2)\n# END OF YOUR CODE\n\n```\n\n Erro quadr\u00e1tico m\u00e9dio: 0.4776499498669615\n\n\n## 2.4 - Your comments\n\nDid you observe anything interesting with varying values of $d$ ? Comment about it.\n\nYOUR COMMENT BELOW:\n\n===> O errro quadr\u00e1tico aumentou com o aumento das hisp\u00f3teses. 
O que evidencia uma discuss\u00e3o desenvolvida em aula de como, mesmo com um conjunto mais diversificado de hip\u00f3teses n\u00e3o necessariamente o algorimto ficaria melhor, principalmente na regress\u00e3o linear.\n\n\n\n", "meta": {"hexsha": "1129f84658f75a7dcc6a59f0b84efbb9081f7b21", "size": 124294, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ep02_linreg_analytic.ipynb", "max_stars_repo_name": "dslcosta1/Machine_Learning", "max_stars_repo_head_hexsha": "6d7eef5ccd061af1d96a1baf9c533ebadf097711", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ep02_linreg_analytic.ipynb", "max_issues_repo_name": "dslcosta1/Machine_Learning", "max_issues_repo_head_hexsha": "6d7eef5ccd061af1d96a1baf9c533ebadf097711", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ep02_linreg_analytic.ipynb", "max_forks_repo_name": "dslcosta1/Machine_Learning", "max_forks_repo_head_hexsha": "6d7eef5ccd061af1d96a1baf9c533ebadf097711", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 105.2447078747, "max_line_length": 34482, "alphanum_fraction": 0.7806973788, "converted": true, "num_tokens": 6408, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.41111085480195975, "lm_q2_score": 0.2598256494239272, "lm_q1q2_score": 0.10681714483414503}} {"text": "\\title{Shift Registers in myHDL}\n\\author{Steven K Armour}\n\\maketitle\n\nShift registers are common structures that move around data in primitive memory by rearranging the bit order. While shift registers are built as needed they typically fall into five categories. 
The four conversion types: PIPO, PISO, SISO, SIPO; and cyclic shift registers such as the ring and Johnson counters. Wich can be used as counters but are in reality shift registers since they reorder the bits in the modules internal memory\n\n

Table of Contents

\n\n\n# Refrances\n@misc{myhdl_2017,\ntitle={Johnson Counter},\nurl={http://www.myhdl.org/docs/examples/jc2.html},\njournal={Myhdl.org},\nauthor={myHDL},\nyear={2017}\n}\n\n@misc{the shift register,\nurl={https://www.electronics-tutorials.ws/sequential/seq_5.html},\njournal={Electronics Tutorials}\n},\n\n@misc{petrescu,\ntitle={Shift Registers},\nurl={http://www.csit-sun.pub.ro/courses/Masterat/Xilinx%20Synthesis%20Technology/toolbox.xilinx.com/docsan/xilinx4/data/docs/xst/hdlcode8.html},\njournal={Csit-sun.pub.ro},\nauthor={Petrescu, Adrian}\n},\n\n@misc{reddy_2014,\ntitle={verilog code for ALU,SISO,PIPO,SIPO,PISO},\nurl={http://thrinadhreddy.blogspot.com/2014/01/verilog-code-for-alusisopiposipopiso.html},\njournal={Thrinadhreddy.blogspot.com},\nauthor={Reddy, Trinadh},\nyear={2014}\n}\n\n# Libraries and Helper functions\n\n\n```python\n#This notebook also uses the `(some) LaTeX environments for Jupyter`\n#https://github.com/ProfFan/latex_envs wich is part of the\n#jupyter_contrib_nbextensions package\n\nfrom myhdl import *\nfrom myhdlpeek import Peeker\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sympy import *\ninit_printing()\n\nimport random\n\n#https://github.com/jrjohansson/version_information\n%load_ext version_information\n%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random\n```\n\n\n\n\n
SoftwareVersion
Python3.6.2 64bit [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]
IPython6.2.1
OSLinux 4.15.0 30 generic x86_64 with debian stretch sid
myhdl0.10
myhdlpeek0.0.6
numpy1.13.3
pandas0.23.3
matplotlib2.1.0
sympy1.1.2.dev
randomThe 'random' distribution was not found and is required by the application
Wed Sep 05 07:50:11 2018 MDT
\n\n\n\n\n```python\n#helper functions to read in the .v and .vhd generated files into python\ndef VerilogTextReader(loc, printresult=True):\n with open(f'{loc}.v', 'r') as vText:\n VerilogText=vText.read()\n if printresult:\n print(f'***Verilog modual from {loc}.v***\\n\\n', VerilogText)\n return VerilogText\n\ndef VHDLTextReader(loc, printresult=True):\n with open(f'{loc}.vhd', 'r') as vText:\n VerilogText=vText.read()\n if printresult:\n print(f'***VHDL modual from {loc}.vhd***\\n\\n', VerilogText)\n return VerilogText\n\ndef ConstraintXDCTextReader(loc, printresult=True):\n with open(f'{loc}.xdc', 'r') as xdcText:\n ConstraintText=xdcText.read()\n if printresult:\n print(f'***Constraint file from {loc}.xdc***\\n\\n', ConstraintText)\n return ConstraintText\n```\n\n\n```python\nCountVal=17\nBitSize=int(np.log2(CountVal))+1; BitSize\n```\n\n# Parallel-In Parallel-Out (PIPO) Shift Register\n\nA PIPO shift Register is one of the most redundant of the four classic shift registers when used as a One Bus In, *One* Bus Out, thus the PIPO here is implemented as a One Bus In, *Two* Bus Out. Further, this opportunity is taken here to talk about HDL algorithms vs HDL Implementation by presenting the same One In, Two Out PIPO but implemented as an asynchronous case and two synchronous cases. While there are obvious differences algorithmically due to the asynchronous vs synchronous. The hardware implementation is even more strikingly different and serves as case in point that HDL programming is neither hardware or software but an intermedte between the worlds. But with grave consequences when translated into hardware that the HDL must keep in perspective when writing Hardware Descriptive Language code. 
\n\n\n```python\nTestData=np.random.randint(0, 2**4, 15); TestData\n```\n\n\n\n\n array([ 2, 7, 8, 11, 0, 14, 1, 5, 12, 1, 13, 8, 2, 11, 6])\n\n\n\n## Asynchronous\n\n\n```python\n@block\ndef PIPO_AS1(DataIn, DataOut1, DataOut2):\n \"\"\"\n 1:2 PIPO shift regestor with no clock (Asynchronous) \n \n Input:\n DataIn(bitvec)\n Output:\n DataOut1(bitVec): ouput one bitvec len should be same as \n `DataIn`\n DataOut2(bitVec):ouput two bitvec len should be same as \n `DataIn`\n \"\"\"\n @always_comb\n def logic():\n DataOut1.next=DataIn\n DataOut2.next=DataIn\n \n return instances()\n\n```\n\n### myHDL Testing\n\n\n```python\nPeeker.clear()\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nrst=Signal(bool(0)); Peeker(rst, 'rst')\nDataIn=Signal(intbv(0)[4:]); Peeker(DataIn, 'DataIn')\nDataOut1=Signal(intbv(0)[4:]); Peeker(DataOut1, 'DataOut1')\nDataOut2=Signal(intbv(0)[4:]); Peeker(DataOut2, 'DataOut2')\n\nDUT=PIPO_AS1(DataIn, DataOut1, DataOut2)\n\ndef PIPO_TB():\n \"\"\"\n myHDL only Testbench for `PIPO_*` module\n \"\"\"\n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n i=0\n while True:\n \n DataIn.next=int(TestData[i])\n \n if i==14:\n raise StopSimulation()\n \n \n i+=1\n yield clk.posedge\n \n return instances()\n\nsim=Simulation(DUT, PIPO_TB(), *Peeker.instances()).run()\n```\n\n\n```python\nPeeker.to_wavedrom()\n```\n\n\n
\n\n\n\n\n\n```python\nPIPO_AS1Data=Peeker.to_dataframe(); \nPIPO_AS1Data=PIPO_AS1Data[PIPO_AS1Data['clk']==1]\nPIPO_AS1Data.drop(['clk', 'rst'], axis=1, inplace=True)\nPIPO_AS1Data.reset_index(drop=True, inplace=True)\nPIPO_AS1Data\n```\n\n\n\n\n
| | DataIn | DataOut1 | DataOut2 |
|---|---|---|---|
| 0 | 7 | 7 | 7 |
| 1 | 8 | 8 | 8 |
| 2 | 11 | 11 | 11 |
| 3 | 0 | 0 | 0 |
| 4 | 14 | 14 | 14 |
| 5 | 1 | 1 | 1 |
| 6 | 5 | 5 | 5 |
| 7 | 12 | 12 | 12 |
| 8 | 1 | 1 | 1 |
| 9 | 13 | 13 | 13 |
| 10 | 8 | 8 | 8 |
| 11 | 2 | 2 | 2 |
| 12 | 11 | 11 | 11 |
\n\n\n\n### Verilog Code\n\n\n```python\nDUT.convert()\nVerilogTextReader('PIPO_AS1');\n```\n\n ***Verilog modual from PIPO_AS1.v***\n \n // File: PIPO_AS1.v\n // Generated by MyHDL 0.10\n // Date: Wed Sep 5 07:50:17 2018\n \n \n `timescale 1ns/10ps\n \n module PIPO_AS1 (\n DataIn,\n DataOut1,\n DataOut2\n );\n // 1:2 PIPO shift regestor with no clock (Asynchronous) \n // \n // Input:\n // DataIn(bitvec)\n // Output:\n // DataOut1(bitVec): ouput one bitvec len should be same as \n // `DataIn`\n // DataOut2(bitVec):ouput two bitvec len should be same as \n // `DataIn`\n \n input [3:0] DataIn;\n output [3:0] DataOut1;\n wire [3:0] DataOut1;\n output [3:0] DataOut2;\n wire [3:0] DataOut2;\n \n \n \n \n \n assign DataOut1 = DataIn;\n assign DataOut2 = DataIn;\n \n endmodule\n \n\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{PIPO_AS1_RTL.png}}\n\\caption{\\label{fig:APIPORTL} PIPO_AS1 Shift Register RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{PIPO_AS1_SYN.png}}\n\\caption{\\label{fig:APIPOSYN} PIPO_AS1 Shift Register Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n### Verilog Testbench (!ToDo)\n\n## Synchronous 1\n\n### myHDL Module\n\n\n```python\n@block\ndef PIPO_S1(DataIn, DataOut1, DataOut2, clk, rst):\n \"\"\"\n one-in two-out PIPO typicaly found in the lititure\n lacking buffering\n \n Inputs:\n DataIn(bitVec): one-in Parallel data int\n clk(bool): clock\n rst(bool): reset\n \n Ouputs:\n DataOut1(bitVec): Parallel out 1\n DataOut2(bitVec): Parallel out 1\n \n \"\"\"\n\n \n @always(clk.posedge, rst.negedge)\n def logic():\n if rst:\n DataOut1.next=0\n DataOut2.next=0\n else:\n DataOut1.next=DataIn\n DataOut2.next=DataIn\n \n return instances()\n\n```\n\n### myHDL Testing\n\n\n```python\nPeeker.clear()\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nrst=Signal(bool(0)); Peeker(rst, 'rst')\nDataIn=Signal(intbv(0)[4:]); Peeker(DataIn, 
'DataIn')\nDataOut1=Signal(intbv(0)[4:]); Peeker(DataOut1, 'DataOut1')\nDataOut2=Signal(intbv(0)[4:]); Peeker(DataOut2, 'DataOut2')\n\nDUT=PIPO_S1(DataIn, DataOut1, DataOut2, clk, rst)\n\ndef PIPO_TB():\n \"\"\"\n myHDL only Testbench for `RingCounter` module\n \"\"\"\n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n i=0\n while True:\n \n DataIn.next=int(TestData[i])\n \n if i==14:\n raise StopSimulation()\n \n \n i+=1\n yield clk.posedge\n \n return instances()\n\nsim=Simulation(DUT, PIPO_TB(), *Peeker.instances()).run()\n```\n\n\n```python\nPeeker.to_wavedrom()\n```\n\n\n
\n\n\n\n\n\n```python\nPIPO_S1Data=Peeker.to_dataframe(); \nPIPO_S1Data=PIPO_S1Data[PIPO_S1Data['clk']==1]\nPIPO_S1Data.drop(['clk', 'rst'], axis=1, inplace=True)\nPIPO_S1Data.reset_index(drop=True, inplace=True)\nPIPO_S1Data\n```\n\n\n\n\n
| | DataIn | DataOut1 | DataOut2 |
|---|---|---|---|
| 0 | 7 | 2 | 2 |
| 1 | 8 | 7 | 7 |
| 2 | 11 | 8 | 8 |
| 3 | 0 | 11 | 11 |
| 4 | 14 | 0 | 0 |
| 5 | 1 | 14 | 14 |
| 6 | 5 | 1 | 1 |
| 7 | 12 | 5 | 5 |
| 8 | 1 | 12 | 12 |
| 9 | 13 | 1 | 1 |
| 10 | 8 | 13 | 13 |
| 11 | 2 | 8 | 8 |
| 12 | 11 | 2 | 2 |
\n\n\n\n### Verilog Code\n\n\n```python\nDUT.convert()\nVerilogTextReader('PIPO_S1');\n```\n\n ***Verilog modual from PIPO_S1.v***\n \n // File: PIPO_S1.v\n // Generated by MyHDL 0.10\n // Date: Wed Sep 5 07:50:34 2018\n \n \n `timescale 1ns/10ps\n \n module PIPO_S1 (\n DataIn,\n DataOut1,\n DataOut2,\n clk,\n rst\n );\n // one-in two-out PIPO typicaly found in the lititure\n // lacking buffering\n // \n // Inputs:\n // DataIn(bitVec): one-in Parallel data int\n // clk(bool): clock\n // rst(bool): reset\n // \n // Ouputs:\n // DataOut1(bitVec): Parallel out 1\n // DataOut2(bitVec): Parallel out 1\n // \n \n input [3:0] DataIn;\n output [3:0] DataOut1;\n reg [3:0] DataOut1;\n output [3:0] DataOut2;\n reg [3:0] DataOut2;\n input clk;\n input rst;\n \n \n \n \n always @(posedge clk, negedge rst) begin: PIPO_S1_LOGIC\n if (rst) begin\n DataOut1 <= 0;\n DataOut2 <= 0;\n end\n else begin\n DataOut1 <= DataIn;\n DataOut2 <= DataIn;\n end\n end\n \n endmodule\n \n\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{PIPO_S1_RTL.png}}\n\\caption{\\label{fig:S1PIPORTL} PIPO_S1 Shift Register RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{PIPO_S1_SYN.png}}\n\\caption{\\label{fig:S1PIPOSYN} PIPO_S1 Shift Register Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n## Synchronous 2\n\n### myHDL Module\n\n\n```python\n@block\ndef PIPO_S2(DataIn, DataOut1, DataOut2, clk, rst):\n \"\"\"\n one-in two-out PIPO with buffering\n \n Inputs:\n DataIn(bitVec): one-in Parallel data int\n clk(bool): clock\n rst(bool): reset\n \n Ouputs:\n DataOut1(bitVec): Parallel out 1\n DataOut2(bitVec): Parallel out 1\n \n \"\"\"\n \n Buffer=Signal(modbv(0)[len(DataIn):])\n @always(clk.posedge, rst.negedge)\n def logic():\n if rst:\n Buffer.next=0\n else:\n Buffer.next=DataIn\n \n \n #not normaly found in PIPO; but is better practice since buffers help \n #with isolation for ASIC desighn\n @always_comb\n def 
OuputBuffer():\n DataOut1.next=Buffer\n DataOut2.next=Buffer\n \n return instances()\n\n```\n\n### myHDL Testing\n\n\n```python\nPeeker.clear()\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nrst=Signal(bool(0)); Peeker(rst, 'rst')\nDataIn=Signal(intbv(0)[4:]); Peeker(DataIn, 'DataIn')\nDataOut1=Signal(intbv(0)[4:]); Peeker(DataOut1, 'DataOut1')\nDataOut2=Signal(intbv(0)[4:]); Peeker(DataOut2, 'DataOut2')\n\nDUT=PIPO_S2(DataIn, DataOut1, DataOut2, clk, rst)\n\ndef PIPO_TB():\n \"\"\"\n myHDL only Testbench for `PIPO_*` module\n \"\"\"\n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n i=0\n while True:\n \n DataIn.next=int(TestData[i])\n \n if i==14:\n raise StopSimulation()\n \n \n i+=1\n yield clk.posedge\n \n return instances()\n\nsim=Simulation(DUT, PIPO_TB(), *Peeker.instances()).run()\n```\n\n\n```python\nPeeker.to_wavedrom()\n```\n\n\n
\n\n\n\n\n\n```python\nPIPO_S2Data=Peeker.to_dataframe(); \nPIPO_S2Data=PIPO_S2Data[PIPO_S2Data['clk']==1]\nPIPO_S2Data.drop(['clk', 'rst'], axis=1, inplace=True)\nPIPO_S2Data.reset_index(drop=True, inplace=True)\nPIPO_S2Data\n```\n\n\n\n\n
| | DataIn | DataOut1 | DataOut2 |
|---|---|---|---|
| 0 | 7 | 2 | 2 |
| 1 | 8 | 7 | 7 |
| 2 | 11 | 8 | 8 |
| 3 | 0 | 11 | 11 |
| 4 | 14 | 0 | 0 |
| 5 | 1 | 14 | 14 |
| 6 | 5 | 1 | 1 |
| 7 | 12 | 5 | 5 |
| 8 | 1 | 12 | 12 |
| 9 | 13 | 1 | 1 |
| 10 | 8 | 13 | 13 |
| 11 | 2 | 8 | 8 |
| 12 | 11 | 2 | 2 |
\n\n\n\n### Verilog Code\n\n\n```python\nDUT.convert()\nVerilogTextReader('PIPO_S2');\n```\n\n ***Verilog modual from PIPO_S2.v***\n \n // File: PIPO_S2.v\n // Generated by MyHDL 0.10\n // Date: Wed Sep 5 07:50:41 2018\n \n \n `timescale 1ns/10ps\n \n module PIPO_S2 (\n DataIn,\n DataOut1,\n DataOut2,\n clk,\n rst\n );\n // one-in two-out PIPO with buffering\n // \n // Inputs:\n // DataIn(bitVec): one-in Parallel data int\n // clk(bool): clock\n // rst(bool): reset\n // \n // Ouputs:\n // DataOut1(bitVec): Parallel out 1\n // DataOut2(bitVec): Parallel out 1\n // \n \n input [3:0] DataIn;\n output [3:0] DataOut1;\n wire [3:0] DataOut1;\n output [3:0] DataOut2;\n wire [3:0] DataOut2;\n input clk;\n input rst;\n \n reg [3:0] Buffer;\n \n \n \n always @(posedge clk, negedge rst) begin: PIPO_S2_LOGIC\n if (rst) begin\n Buffer <= 0;\n end\n else begin\n Buffer <= DataIn;\n end\n end\n \n \n \n assign DataOut1 = Buffer;\n assign DataOut2 = Buffer;\n \n endmodule\n \n\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{PIPO_S2_RTL.png}}\n\\caption{\\label{fig:S2PIPORTL} PIPO_S2 Shift Register RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{PIPO_S2_SYN.png}}\n\\caption{\\label{fig:S2PIPOSYN} PIPO_S2 Shift Register Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n## Comparison of the three designs\n\nIn the asynchronous case, it can be seen in the RTL that incoming Bus is passed through a buffer and then the Bus is then junctioned to two outputs. This , therefore, does not have any synchronicity to a clock and is the HDL equivalent of taping the wires of the incoming bus to create a copy of the signal. 
What this design lacks in clock support it gains in resource savings and in passing the input signal to the output signals instantaneously.\n\nFor the two synchronous cases the incoming signal is buffered by a register set, so any signal on the input bus will not appear on the output buses for at least one clock cycle. For clocked designs this is a good thing, but for signals that need instantaneous transmission it is a failing in the tradeoff. And while Vivado (and most other FPGA synthesis tools) recognizes that the two synchronous designs are equivalent, we should not take this for granted, since other tools may not: at the RTL level the implementations are clearly different. \n\nIn the first case, the incoming signal is recorded by two registers that each feed one of the respective output buses. This gains us redundancy in the parallel registers, but the parallel memories could fall out of sync for any number of reasons, and it is a large amount of resources if synthesized this way. In comparison, the second design uses a single register that feeds both outputs; it has better resource allocation and cannot suffer asynchronicity between parallel registers, but it lacks redundancy, since any issue in the one register affects both outputs. \n\nThe lesson here is that HDL should never be thought of as programming. Instead, it is a *sophisticated abstract description* of **Hardware**, and therefore HDL should always be written to satisfy the hardware constraints. 
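The tradeoff between the two synchronous designs can be sketched behaviorally in plain Python (no myHDL; the function names are illustrative, not from the notebook): both produce identical cycle-by-cycle outputs even though one doubles the register count.

```python
# Behavioral sketch of the two synchronous PIPO designs discussed above.
def pipo_two_regs(stream):
    """PIPO_S1-style: two parallel registers, one per output bus."""
    reg1 = reg2 = 0
    out = []
    for data in stream:           # one iteration per clock posedge
        out.append((reg1, reg2))  # outputs as seen before the edge
        reg1, reg2 = data, data   # both registers capture DataIn
    return out

def pipo_one_reg(stream):
    """PIPO_S2-style: one register fanned out to both output buses."""
    buf = 0
    out = []
    for data in stream:
        out.append((buf, buf))    # both outputs read the single buffer
        buf = data
    return out

stream = [2, 7, 8, 11, 0, 14]
assert pipo_two_regs(stream) == pipo_one_reg(stream)
```

Synthesis tools may merge the duplicated registers of the first design, but as noted above that should not be taken for granted.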
\n\n# Parallel-In Serial-Out (PISO)\nUsed to translate bus (parallel) data to a single output wire carrying serial data; a common example is the transmit line of a UART\n\n## myHDL Module\n\n\n```python\n@block\ndef PISO(ReadBus, BusIn, SerialOut, clk, rst):\n """\n Parallel In Serial Out right shift register\n \n Input:\n ReadBus(bool): read bus flag\n BusIn(bitVec): parallel bus input\n clk(bool): clock\n rst(bool): reset\n \n Output:\n SerialOut(bool): Serial(wire) output data from `BusIn`\n Note:\n Does not have a finished-serial-write indicator \n """\n \n Buffer=Signal(intbv(0)[len(BusIn):])\n @always(clk.posedge, rst.negedge)\n def logic():\n if rst:\n Buffer.next=0\n elif ReadBus:\n Buffer.next=BusIn\n else:\n Buffer.next=Buffer>>1\n \n #A more robust PISO would have a counter to trigger\n #a finish serial write flag here\n \n @always_comb\n def SerialWriteOut():\n SerialOut.next=Buffer[0]\n \n return instances()\n \n```\n\n## myHDL Testing\n\n\n```python\nTestData=np.random.randint(0, 2**4, 3)\nprint(TestData)\n#reverse bit order since right shift\nTestDataBin="".join([bin(i, 4)[::-1] for i in TestData])\nTestDataBin=[int(i) for i in TestDataBin]\nTestDataBin\n```\n\n\n```python\nPeeker.clear()\nReadBus=Signal(bool(0)); Peeker(ReadBus, 'ReadBus') \nBusIn=Signal(intbv(0)[4:]); Peeker(BusIn, 'BusIn')\nSerialOut=Signal(bool(0)); Peeker(SerialOut, 'SerialOut')\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nrst=Signal(bool(0)); Peeker(rst, 'rst')\n\nDUT=PISO(ReadBus, BusIn, SerialOut, clk, rst)\n\ndef PISO_TB():\n """\n myHDL only Testbench for `PISO` module\n """\n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n yield clk.posedge\n \n for i in TestData:\n BusIn.next=int(i)\n ReadBus.next=True\n yield clk.posedge\n \n ReadBus.next=False\n for j in range(len(bin(i, 4))):\n yield clk.posedge\n \n raise StopSimulation()\n \n return instances()\n\nsim=Simulation(DUT, PISO_TB(), 
*Peeker.instances()).run()\nprint(TestData)\nprint(TestDataBin)\n```\n\n [11 15 14]\n [1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1]\n\n\n\n```python\nPeeker.to_wavedrom()\n```\n\n\n
\n\n\n\n\n\n```python\nPISOData=Peeker.to_dataframe(); \nPISOData=PISOData[PISOData['clk']==1]\nPISOData.drop(['clk', 'rst'], axis=1, inplace=True)\nPISOData['BusBits']=PISOData['BusIn'].apply(lambda x:bin(x, 4))\nPISOData.reset_index(drop=True, inplace=True)\nPISOData\n```\n\n\n\n\n
| | BusIn | ReadBus | SerialOut | BusBits |
|---|---|---|---|---|
| 0 | 11 | 1 | 0 | 1011 |
| 1 | 11 | 0 | 1 | 1011 |
| 2 | 11 | 0 | 1 | 1011 |
| 3 | 11 | 0 | 0 | 1011 |
| 4 | 11 | 0 | 1 | 1011 |
| 5 | 15 | 1 | 0 | 1111 |
| 6 | 15 | 0 | 1 | 1111 |
| 7 | 15 | 0 | 1 | 1111 |
| 8 | 15 | 0 | 1 | 1111 |
| 9 | 15 | 0 | 1 | 1111 |
| 10 | 14 | 1 | 0 | 1110 |
| 11 | 14 | 0 | 0 | 1110 |
| 12 | 14 | 0 | 1 | 1110 |
| 13 | 14 | 0 | 1 | 1110 |
| 14 | 14 | 0 | 1 | 1110 |
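The table above can be cross-checked with a plain-Python sketch of the right-shift serialization (not myHDL; `piso_serialize` is an illustrative name): each loaded word leaves on `SerialOut` LSB-first over the following clock cycles.

```python
# Plain-Python sketch of the right-shift PISO behavior shown in the table:
# load a 4-bit word, then shift it out LSB-first, one bit per clock.
def piso_serialize(word, width=4):
    buf = word
    bits = []
    for _ in range(width):
        bits.append(buf & 1)   # SerialOut reads the LSB of the buffer
        buf >>= 1              # right shift, as in `Buffer>>1`
    return bits

assert piso_serialize(11) == [1, 1, 0, 1]   # 11 = 0b1011, LSB first
assert piso_serialize(15) == [1, 1, 1, 1]
assert piso_serialize(14) == [0, 1, 1, 1]
```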
\n\n\n\n## Verilog Code\n\n\n```python\nDUT.convert()\nVerilogTextReader('PISO');\n```\n\n ***Verilog modual from PISO.v***\n \n // File: PISO.v\n // Generated by MyHDL 0.10\n // Date: Wed Sep 5 07:50:56 2018\n \n \n `timescale 1ns/10ps\n \n module PISO (\n ReadBus,\n BusIn,\n SerialOut,\n clk,\n rst\n );\n // Parallel In Serial Out right shift regestor\n // \n // Input:\n // ReadBus(bool): read bus flag\n // BusIn(bool): Serial wire input\n // clk(bool): clock\n // rst(bool): reset\n // \n // Output:\n // SerialOut(bool): Serial(wire) output data from `BusIn`\n // Note:\n // Does not have a finsh serial write indicator \n \n input ReadBus;\n input [3:0] BusIn;\n output SerialOut;\n wire SerialOut;\n input clk;\n input rst;\n \n reg [3:0] Buffer;\n \n \n \n always @(posedge clk, negedge rst) begin: PISO_LOGIC\n if (rst) begin\n Buffer <= 0;\n end\n else if (ReadBus) begin\n Buffer <= BusIn;\n end\n else begin\n Buffer <= (Buffer >>> 1);\n end\n end\n \n \n \n assign SerialOut = Buffer[0];\n \n endmodule\n \n\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{PISO_RTL.png}}\n\\caption{\\label{fig:PISORTL} PISO Shift Register RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{PISO_SYN.png}}\n\\caption{\\label{fig:PISOSYN} PISO Shift Register Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n## Verilog Testbench (!ToDo)\n\n# Serial-In Serial-Out (SISO)\nSISO are used to buffer input data like a primitive serial version of the First In First Out (FIFO) Memory. 
It can also be used to sample a wire input that is not clock-synced into an output that is clock-synced\n\n## myHDL Module\n\n\n```python\n@block\ndef SISO(SerialIn, SerialOut, clk, rst, BufferSize):\n """\n SISO Left Shift register\n \n Input:\n SerialIn(bool): serial input feed\n clk(bool): clock signal\n rst(bool): reset signal\n \n Output:\n SerialOut(bool): serial out delayed by BufferSize\n \n Parameter:\n BufferSize(int): size of SISO buffer, aka delay amount\n """\n Buffer=Signal(modbv(0)[BufferSize:])\n @always(clk.posedge, rst.negedge)\n def logic():\n if rst:\n Buffer.next=0\n else:\n Buffer.next=concat(Buffer[BufferSize-1:0], SerialIn)\n \n @always_comb\n def ReadLeftMostToSer():\n SerialOut.next=Buffer[BufferSize-1]\n \n return instances()\n \n```\n\n## myHDL Testing\n\n\n```python\nSerialInTVLen=20\nnp.random.seed(71)\nSerialInTV=np.random.randint(0,2,20).astype(int)\nSerialInTV\n```\n\n\n\n\n array([1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0])\n\n\n\n\n```python\nPeeker.clear()\nSerialIn=Signal(bool(0)); Peeker(SerialIn, 'SerialIn')\nSerialOut=Signal(bool(0)); Peeker(SerialOut, 'SerialOut')\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nrst=Signal(bool(0)); Peeker(rst, 'rst')\nBufferSize=4\n\nDUT=SISO(SerialIn, SerialOut, clk, rst, BufferSize)\n\ndef SISO_TB():\n """\n myHDL only Testbench for `SISO` module\n """\n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n for i in range(SerialInTVLen):\n SerialIn.next=int(SerialInTV[i])\n yield clk.posedge\n \n for i in range(2):\n if i==0:\n SerialIn.next=0\n rst.next=1\n else:\n rst.next=0\n yield clk.posedge\n \n for i in range(BufferSize+1):\n SerialIn.next=1\n yield clk.posedge\n \n raise StopSimulation()\n \n return instances()\n \nsim=Simulation(DUT, SISO_TB(), *Peeker.instances()).run()\n```\n\n\n```python\nPeeker.to_wavedrom()\n```\n\n\n
\n\n\n\n\n\n```python\nSISOData=Peeker.to_dataframe()\nSISOData\n```\n\n\n\n\n
| | SerialIn | SerialOut | clk | rst |
|---|---|---|---|---|
| 0 | 1 | 0 | 0 | 0 |
| 1 | 1 | 0 | 1 | 0 |
| 2 | 1 | 0 | 0 | 0 |
| 3 | 1 | 0 | 1 | 0 |
| 4 | 1 | 0 | 0 | 0 |
| 5 | 1 | 0 | 1 | 0 |
| 6 | 1 | 0 | 0 | 0 |
| 7 | 0 | 1 | 1 | 0 |
| 8 | 0 | 1 | 0 | 0 |
| 9 | 0 | 1 | 1 | 0 |
| 10 | 0 | 1 | 0 | 0 |
| 11 | 0 | 1 | 1 | 0 |
| 12 | 0 | 1 | 0 | 0 |
| 13 | 1 | 1 | 1 | 0 |
| 14 | 1 | 1 | 0 | 0 |
| 15 | 1 | 0 | 1 | 0 |
| 16 | 1 | 0 | 0 | 0 |
| 17 | 0 | 0 | 1 | 0 |
| 18 | 0 | 0 | 0 | 0 |
| 19 | 0 | 0 | 1 | 0 |
| 20 | 0 | 0 | 0 | 0 |
| 21 | 0 | 1 | 1 | 0 |
| 22 | 0 | 1 | 0 | 0 |
| 23 | 1 | 1 | 1 | 0 |
| 24 | 1 | 1 | 0 | 0 |
| 25 | 0 | 0 | 1 | 0 |
| 26 | 0 | 0 | 0 | 0 |
| 27 | 0 | 0 | 1 | 0 |
| 28 | 0 | 0 | 0 | 0 |
| 29 | 1 | 0 | 1 | 0 |
| 30 | 1 | 0 | 0 | 0 |
| 31 | 1 | 1 | 1 | 0 |
| 32 | 1 | 1 | 0 | 0 |
| 33 | 0 | 0 | 1 | 0 |
| 34 | 0 | 0 | 0 | 0 |
| 35 | 1 | 0 | 1 | 0 |
| 36 | 1 | 0 | 0 | 0 |
| 37 | 0 | 1 | 1 | 0 |
| 38 | 0 | 1 | 0 | 0 |
| 39 | 0 | 1 | 1 | 1 |
| 40 | 0 | 1 | 0 | 1 |
| 41 | 0 | 0 | 1 | 0 |
| 42 | 0 | 0 | 0 | 0 |
| 43 | 1 | 0 | 1 | 0 |
| 44 | 1 | 0 | 0 | 0 |
| 45 | 1 | 0 | 1 | 0 |
| 46 | 1 | 0 | 0 | 0 |
| 47 | 1 | 0 | 1 | 0 |
| 48 | 1 | 0 | 0 | 0 |
| 49 | 1 | 0 | 1 | 0 |
| 50 | 1 | 0 | 0 | 0 |
| 51 | 1 | 1 | 1 | 0 |
| 52 | 1 | 1 | 0 | 0 |
\n\n\n\n\n```python\nSISOData=SISOData[SISOData['clk']==1]\nSISOData.drop('clk', inplace=True, axis=1)\nSISOData.reset_index(drop=True, inplace=True)\nSISOData\n```\n\n /home/iridium/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py:3697: SettingWithCopyWarning: \n A value is trying to be set on a copy of a slice from a DataFrame\n \n See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n errors=errors)\n\n\n\n\n\n
| | SerialIn | SerialOut | rst |
|---|---|---|---|
| 0 | 1 | 0 | 0 |
| 1 | 1 | 0 | 0 |
| 2 | 1 | 0 | 0 |
| 3 | 0 | 1 | 0 |
| 4 | 0 | 1 | 0 |
| 5 | 0 | 1 | 0 |
| 6 | 1 | 1 | 0 |
| 7 | 1 | 0 | 0 |
| 8 | 0 | 0 | 0 |
| 9 | 0 | 0 | 0 |
| 10 | 0 | 1 | 0 |
| 11 | 1 | 1 | 0 |
| 12 | 0 | 0 | 0 |
| 13 | 0 | 0 | 0 |
| 14 | 1 | 0 | 0 |
| 15 | 1 | 1 | 0 |
| 16 | 0 | 0 | 0 |
| 17 | 1 | 0 | 0 |
| 18 | 0 | 1 | 0 |
| 19 | 0 | 1 | 1 |
| 20 | 0 | 0 | 0 |
| 21 | 1 | 0 | 0 |
| 22 | 1 | 0 | 0 |
| 23 | 1 | 0 | 0 |
| 24 | 1 | 0 | 0 |
| 25 | 1 | 1 | 0 |
\n\n\n\n\n```python\nSISOData['SerialOutS4']=SISOData['SerialOut'].shift(-4)\nSISOData\n```\n\n /home/iridium/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: \n A value is trying to be set on a copy of a slice from a DataFrame.\n Try using .loc[row_indexer,col_indexer] = value instead\n \n See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n \"\"\"Entry point for launching an IPython kernel.\n\n\n\n\n\n
| | SerialIn | SerialOut | rst | SerialOutS4 |
|---|---|---|---|---|
| 0 | 1 | 0 | 0 | 1.0 |
| 1 | 1 | 0 | 0 | 1.0 |
| 2 | 1 | 0 | 0 | 1.0 |
| 3 | 0 | 1 | 0 | 0.0 |
| 4 | 0 | 1 | 0 | 0.0 |
| 5 | 0 | 1 | 0 | 0.0 |
| 6 | 1 | 1 | 0 | 1.0 |
| 7 | 1 | 0 | 0 | 1.0 |
| 8 | 0 | 0 | 0 | 0.0 |
| 9 | 0 | 0 | 0 | 0.0 |
| 10 | 0 | 1 | 0 | 0.0 |
| 11 | 1 | 1 | 0 | 1.0 |
| 12 | 0 | 0 | 0 | 0.0 |
| 13 | 0 | 0 | 0 | 0.0 |
| 14 | 1 | 0 | 0 | 1.0 |
| 15 | 1 | 1 | 0 | 1.0 |
| 16 | 0 | 0 | 0 | 0.0 |
| 17 | 1 | 0 | 0 | 0.0 |
| 18 | 0 | 1 | 0 | 0.0 |
| 19 | 0 | 1 | 1 | 0.0 |
| 20 | 0 | 0 | 0 | 0.0 |
| 21 | 1 | 0 | 0 | 1.0 |
| 22 | 1 | 0 | 0 | NaN |
| 23 | 1 | 0 | 0 | NaN |
| 24 | 1 | 0 | 0 | NaN |
| 25 | 1 | 1 | 0 | NaN |
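The four-cycle offset captured in the `SerialOutS4` column can also be modeled as a plain-Python delay line (a sketch assuming `BufferSize=4`; `siso_delay` is an illustrative name, not from the notebook):

```python
from collections import deque

# Plain-Python model of the SISO as a BufferSize-deep delay line: the
# output is the input delayed by BufferSize clock cycles.
def siso_delay(bits, buffer_size=4):
    buf = deque([0] * buffer_size, maxlen=buffer_size)
    out = []
    for b in bits:
        out.append(buf[0])   # oldest bit, i.e. Buffer[BufferSize-1]
        buf.append(b)        # shift the new bit in at the other end
    return out

bits = [1, 1, 1, 1, 0, 0, 0, 1, 1, 0]
assert siso_delay(bits) == [0, 0, 0, 0] + bits[:6]
```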
\n\n\n\n\n```python\nSISOCheck=(SISOData[:16]['SerialIn'] == SISOData[:16]['SerialOutS4'].astype(int)).all()\nprint(f'SISO Check:{SISOCheck}')\n```\n\n SISO Check:True\n\n\n## Verilog Code\n\n\n```python\nDUT.convert()\nVerilogTextReader('SISO');\n```\n\n ***Verilog modual from SISO.v***\n \n // File: SISO.v\n // Generated by MyHDL 0.10\n // Date: Wed Sep 5 07:51:08 2018\n \n \n `timescale 1ns/10ps\n \n module SISO (\n SerialIn,\n SerialOut,\n clk,\n rst\n );\n // SISO Left Shift registor\n // \n // Input:\n // SerialIn(bool): serial input feed\n // clk(bool): clock signal\n // rst(bool):reset signal\n // \n // Output:\n // SerialOut(bool): serial out delayed by BufferSize\n // \n // Paramter:\n // BufferSize(int): size of SISO buffer, aka delay amount\n \n input SerialIn;\n output SerialOut;\n wire SerialOut;\n input clk;\n input rst;\n \n reg [3:0] Buffer;\n \n \n \n always @(posedge clk, negedge rst) begin: SISO_LOGIC\n if (rst) begin\n Buffer <= 0;\n end\n else begin\n Buffer <= {Buffer[(4 - 1)-1:0], SerialIn};\n end\n end\n \n \n \n assign SerialOut = Buffer[(4 - 1)];\n \n endmodule\n \n\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{SISO_RTL.png}}\n\\caption{\\label{fig:SISORTL} SISO Shift Register RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{SISO_SYN.png}}\n\\caption{\\label{fig:SISOSYN} SISO Shift Register Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n## Verilog Testbench\n\n\n```python\n#create BitVector for BufferGate_TBV\nSerialInTVs=intbv(int(''.join(SerialInTV.astype(str)), 2))[SerialInTVLen:]\nSerialInTVs, bin(SerialInTVs)\n```\n\n\n\n\n (intbv(989338), '11110001100010011010')\n\n\n\n\n```python\nBufferSize=4\n\n@block\ndef SISO_TBV():\n \"\"\"\n myHDL -> Verilog Testbench for `SISO` module\n \"\"\"\n \n SerialIn=Signal(bool(0))\n SerialOut=Signal(bool(0))\n clk=Signal(bool(0))\n rst=Signal(bool(0))\n \n @always_comb\n def print_data():\n 
print(SerialIn, SerialOut, clk, rst)\n \n SerialInTV=Signal(SerialInTVs)\n\n DUT=SISO(SerialIn, SerialOut, clk, rst, BufferSize)\n\n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(1)\n \n @instance\n def stimules():\n for i in range(SerialInTVLen):\n SerialIn.next=int(SerialInTV[i])\n yield clk.posedge\n \n for i in range(2):\n if i==0:\n SerialIn.next=0\n rst.next=1\n else:\n rst.next=0\n yield clk.posedge\n \n for i in range(BufferSize+1):\n SerialIn.next=1\n yield clk.posedge\n \n raise StopSimulation()\n \n return instances()\n \nTB=SISO_TBV()\nTB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('SISO_TBV');\n```\n\n \n \n \n \n ***Verilog modual from SISO_TBV.v***\n \n // File: SISO_TBV.v\n // Generated by MyHDL 0.10\n // Date: Wed Sep 5 07:52:33 2018\n \n \n `timescale 1ns/10ps\n \n module SISO_TBV (\n \n );\n // myHDL -> Verilog Testbench for `SISO` module\n \n \n reg clk = 0;\n reg rst = 0;\n wire SerialOut;\n wire [19:0] SerialInTV;\n reg SerialIn = 0;\n reg [3:0] SISO0_0_Buffer = 0;\n \n assign SerialInTV = 20'd989338;\n \n \n always @(rst, SerialOut, SerialIn, clk) begin: SISO_TBV_PRINT_DATA\n $write(\"%h\", SerialIn);\n $write(\" \");\n $write(\"%h\", SerialOut);\n $write(\" \");\n $write(\"%h\", clk);\n $write(\" \");\n $write(\"%h\", rst);\n $write(\"\\n\");\n end\n \n \n always @(posedge clk, negedge rst) begin: SISO_TBV_SISO0_0_LOGIC\n if (rst) begin\n SISO0_0_Buffer <= 0;\n end\n else begin\n SISO0_0_Buffer <= {SISO0_0_Buffer[(4 - 1)-1:0], SerialIn};\n end\n end\n \n \n \n assign SerialOut = SISO0_0_Buffer[(4 - 1)];\n \n \n initial begin: SISO_TBV_CLK_SIGNAL\n while (1'b1) begin\n clk <= (!clk);\n # 1;\n end\n end\n \n \n initial begin: SISO_TBV_STIMULES\n integer i;\n for (i=0; i<20; i=i+1) begin\n SerialIn <= SerialInTV[i];\n @(posedge clk);\n end\n for (i=0; i<2; i=i+1) begin\n if ((i == 0)) begin\n SerialIn <= 0;\n rst <= 1;\n end\n else begin\n rst <= 0;\n end\n @(posedge clk);\n end\n for (i=0; 
i<(4 + 1); i=i+1) begin\n SerialIn <= 1;\n @(posedge clk);\n end\n $finish;\n end\n \n endmodule\n \n\n\n /home/iridium/anaconda3/lib/python3.6/site-packages/myhdl/conversion/_toVerilog.py:349: ToVerilogWarning: Signal is not driven: SerialInTV\n category=ToVerilogWarning\n\n\n# Serial-In Parallel-Out (SIPO)\nUsed to translate single-wire (serial) data to bus (parallel) data; a common example is the receive line of a UART\n\n## myHDL Module\n\n\n```python\n@block\ndef SIPO(SerialIn, BusOut, clk, rst):\n """\n Serial In Parallel Out left shift register\n \n Input:\n SerialIn(bool): Serial wire input\n clk(bool): clock\n rst(bool): reset\n \n Output:\n BusOut(bitVec): Parallel(Bus) output data from `SerialIn`\n """\n \n Buffer=Signal(modbv(0)[len(BusOut):])\n @always(clk.posedge, rst.negedge)\n def logic():\n if rst:\n Buffer.next=0\n else:\n Buffer.next=concat(Buffer[len(Buffer):0],SerialIn)\n \n @always_comb\n def OuputBuffer():\n BusOut.next=Buffer\n \n return instances()\n```\n\n## myHDL Testing\n\n\n```python\nTestData=np.random.randint(0, 2**4, 3)\nprint(TestData)\nTestDataBin="".join([bin(i, 4) for i in TestData])\nTestDataBin=[int(i) for i in TestDataBin]\nTestDataBin\n```\n\n\n```python\nPeeker.clear()\nSerialIn=Signal(bool(0)); Peeker(SerialIn, 'SerialIn')\nBusOut=Signal(intbv(0)[4:]); Peeker(BusOut, 'BusOut')\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nrst=Signal(bool(0)); Peeker(rst, 'rst')\n\nDUT=SIPO(SerialIn, BusOut, clk, rst)\n\ndef SIPO_TB():\n """\n myHDL only Testbench for `SIPO` module\n """\n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n i=0\n while True:\n if i<len(TestDataBin):\n SerialIn.next=TestDataBin[i]\n elif i>len(TestDataBin):\n raise StopSimulation()\n i+=1\n yield clk.posedge\n \n return instances()\n\nsim=Simulation(DUT, SIPO_TB(), *Peeker.instances()).run()\n```\n\n\n```python\nPeeker.to_wavedrom()\n```\n\n\n
\n\n\n\n\n\n```python\nSIPOData=Peeker.to_dataframe(); \nSIPOData=SIPOData[SIPOData['clk']==1]\nSIPOData.drop(['clk', 'rst'], axis=1, inplace=True)\nSIPOData['BusBits']=SIPOData['BusOut'].apply(lambda x:bin(x, 4))\nSIPOData.reset_index(drop=True, inplace=True)\nSIPOData\n```\n\n\n\n\n
| | BusOut | SerialIn | BusBits |
|---|---|---|---|
| 0 | 1 | 0 | 0001 |
| 1 | 2 | 1 | 0010 |
| 2 | 5 | 1 | 0101 |
| 3 | 11 | 1 | 1011 |
| 4 | 7 | 1 | 0111 |
| 5 | 15 | 0 | 1111 |
| 6 | 14 | 1 | 1110 |
| 7 | 13 | 0 | 1101 |
| 8 | 10 | 0 | 1010 |
| 9 | 4 | 0 | 0100 |
| 10 | 8 | 0 | 1000 |
| 11 | 0 | 0 | 0000 |
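The `BusOut` column can be reproduced with a plain-Python sketch of the MSB-first shift-in (not myHDL; `sipo_shift_in` is an illustrative name). The masking mimics the `modbv` wraparound of the 4-bit buffer.

```python
# Plain-Python sketch of the SIPO shift-in behavior from the table: bits
# arrive MSB-first and the 4-bit bus shows the accumulating window.
def sipo_shift_in(bits, width=4):
    buf = 0
    out = []
    for b in bits:
        buf = ((buf << 1) | b) & (2**width - 1)  # modbv-style wraparound
        out.append(buf)
    return out

bits = [1, 0, 1, 1, 1, 1, 0, 1]          # 11 then 13, MSB first
assert sipo_shift_in(bits)[3] == 11      # 0b1011 after four shifts
assert sipo_shift_in(bits)[7] == 13      # 0b1101 after eight shifts
```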
\n\n\n\n\n```python\nTestData, TestDataBin\n```\n\n\n\n\n (array([11, 13, 0]), [1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0])\n\n\n\n## Verilog Code\n\n\n```python\nDUT.convert()\nVerilogTextReader('SIPO');\n```\n\n ***Verilog modual from SIPO.v***\n \n // File: SIPO.v\n // Generated by MyHDL 0.10\n // Date: Wed Sep 5 07:52:43 2018\n \n \n `timescale 1ns/10ps\n \n module SIPO (\n SerialIn,\n BusOut,\n clk,\n rst\n );\n // Serial In Parallel Out right shift regestor\n // \n // Input:\n // SerialIn(bool): Serial wire input\n // clk(bool): clock\n // rst(bool): reset\n // \n // Output:\n // BusOut(bitVec): Parallel(Bus) output data from `SerialWire`\n \n input SerialIn;\n output [3:0] BusOut;\n wire [3:0] BusOut;\n input clk;\n input rst;\n \n reg [3:0] Buffer = 0;\n \n \n \n always @(posedge clk, negedge rst) begin: SIPO_LOGIC\n if (rst) begin\n Buffer <= 0;\n end\n else begin\n Buffer <= {Buffer[4-1:0], SerialIn};\n end\n end\n \n \n \n assign BusOut = Buffer;\n \n endmodule\n \n\n\n /home/iridium/anaconda3/lib/python3.6/site-packages/myhdl/conversion/_toVerilog.py:309: ToVerilogWarning: Port is not used: SerialIn\n category=ToVerilogWarning\n\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{SIPO_RTL.png}}\n\\caption{\\label{fig:SIPORTL} SIPO Shift Register RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{SIPO_SYN.png}}\n\\caption{\\label{fig:SIPOSYN} SIPO Shift Register Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n## Verilog Testbench\n\n\n```python\nV=int(''.join([str(i) for i in TestDataBin]), 2)\nSerialVal=intbv(V)[len(TestDataBin):]\nSerialVal\n```\n\n\n\n\n intbv(3024)\n\n\n\n\n```python\n@block\ndef SIPO_TBV():\n \"\"\"\n myHDL -> Verilog Testbench for `SIPO` module\n \"\"\"\n SerialVals=Signal(SerialVal)\n SerialIn=Signal(bool(0))\n BusOut=Signal(intbv(0)[4:])\n clk=Signal(bool(0))\n rst=Signal(bool(0))\n \n @always_comb\n def print_data():\n print(SerialIn, BusOut, clk, 
rst)\n\n\n \n DUT=SIPO(SerialIn, BusOut, clk, rst)\n\n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(1)\n \n @instance\n def stimules():\n i=0\n while True:\n if i<len(TestDataBin):\n SerialIn.next=SerialVals[i]\n elif i==len(TestDataBin):\n pass\n elif i>len(TestDataBin):\n raise StopSimulation()\n i+=1\n yield clk.posedge\n \n raise StopSimulation()\n \n return instances()\n\nTB=SIPO_TBV()\nTB.convert(hdl="Verilog", initial_values=True)\nVerilogTextReader('SIPO_TBV');\n```\n\n \n \n \n \n ***Verilog modual from SIPO_TBV.v***\n \n // File: SIPO_TBV.v\n // Generated by MyHDL 0.10\n // Date: Wed Sep 5 07:52:49 2018\n \n \n `timescale 1ns/10ps\n \n module SIPO_TBV (\n \n );\n // myHDL -> Verilog Testbench for `SIPO` module\n \n \n reg clk = 0;\n wire rst;\n reg SerialIn = 0;\n wire [3:0] BusOut;\n wire [11:0] SerialVals;\n reg [3:0] SIPO0_0_Buffer = 0;\n \n assign rst = 1'd0;\n assign SerialVals = 12'd3024;\n \n \n always @(rst, BusOut, SerialIn, clk) begin: SIPO_TBV_PRINT_DATA\n $write("%h", SerialIn);\n $write(" ");\n $write("%h", BusOut);\n $write(" ");\n $write("%h", clk);\n $write(" ");\n $write("%h", rst);\n $write("\\n");\n end\n \n \n always @(posedge clk, negedge rst) begin: SIPO_TBV_SIPO0_0_LOGIC\n if (rst) begin\n SIPO0_0_Buffer <= 0;\n end\n else begin\n SIPO0_0_Buffer <= {SIPO0_0_Buffer[4-1:0], SerialIn};\n end\n end\n \n \n \n assign BusOut = SIPO0_0_Buffer;\n \n \n initial begin: SIPO_TBV_CLK_SIGNAL\n while (1'b1) begin\n clk <= (!clk);\n # 1;\n end\n end\n \n \n initial begin: SIPO_TBV_STIMULES\n integer i;\n i = 0;\n while (1'b1) begin\n if ((i < 12)) begin\n SerialIn <= SerialVals[i];\n end\n else if ((i == 12)) begin\n // pass\n end\n else if ((i > 12)) begin\n $finish;\n end\n i = i + 1;\n @(posedge clk);\n end\n $finish;\n end\n \n endmodule\n \n\n\n /home/iridium/anaconda3/lib/python3.6/site-packages/myhdl/conversion/_toVerilog.py:349: ToVerilogWarning: Signal is not driven: rst\n category=ToVerilogWarning\n 
/home/iridium/anaconda3/lib/python3.6/site-packages/myhdl/conversion/_toVerilog.py:349: ToVerilogWarning: Signal is not driven: SerialVals\n category=ToVerilogWarning\n\n\n# Cyclic Shift Register: The Johnson Counter\nThe Johnson Counter is implemented in this notebook about shift registers because, in reality, a Johnson Counter is a cyclic shift register with a single-bit inversion; for this reason the Johnson Counter also goes by the name Mobius Ring Counter. The following aspects of the Johnson Counter are summarized from Sougata Bhattacharjee [https://www.quora.com/What-is-the-difference-between-a-Johnson-counter-and-a-ring-counter]\n\n\begin{itemize}\n\item In a Johnson Counter the output bar, or Q(bar), of the last flip-flop is connected to the input of the first flip-flop\n\n\item If $n$ is the number of flip-flops used, then the total number of states used is $2n$.\n\n\item The Johnson Counter is also known as a walking counter or switch-tail counter and is mostly used in phase-shift or function generators.\n\n\item The decoding circuit is complex compared to that of a ring counter.\n\n\item If the input frequency is $f$, then a Johnson Counter's output frequency is $\dfrac{f}{2n}$.\n\n\item The total number of unused states in a Johnson Counter is $2^n - 2n$\n\n\item The main problem with a Johnson counter is that once it enters an unused state it is in a lockout state\n\n\end{itemize}\n\n\nFor a four-bit Johnson Counter, the next-state diagram is given by the following table from [wikipedia](https://en.wikipedia.org/wiki/Ring_counter)\n\n\begin{figure}\n\centerline{\includegraphics[width=1cm]{JohnsonCounterTable.png}}\n\caption{\label{fig:JohnsonTable} 4-bit Johnson Counter next-state table from [wikipedia](https://en.wikipedia.org/wiki/Ring_counter#Four-bit_ring-counter_sequences)}\n\end{figure}\n\nUpon examining the next-state table for the Johnson counter, we can see why a Johnson counter is also called a Mobius counter\n\n\n\n## myHDL Module\n\n\n```python\n#Create the 
Direction States for Johnson Counter\nDirStates=enum('Left','Halt','Right')\nprint(f\"`Left` state representation is {bin(DirStates.Left)}\")\nprint(f\"`Halt` state representation is {bin(DirStates.Halt)}\")\nprint(f\"`Right` state representation is {bin(DirStates.Right)}\")\n```\n\n `Left` state representation is 0\n `Halt` state representation is 1\n `Right` state representation is 10\n\n\n\n```python\n@block\ndef JohnsonCount3(Dir, q, clk, rst):\n \"\"\"\n Based on the `jc2` example from the myHDL website \n http://www.myhdl.org/docs/examples/jc2.html\n \n Input:\n Dir(state): Left, Right, Halt Direction States\n clk(bool): input clock\n rst(bool): reset signal\n \n Output:\n q(bitVec): the values in the D flip-flops (aka the counter)\n \"\"\"\n \n q_i=Signal(intbv(0)[len(q):])\n @always(clk.posedge, rst.negedge)\n def JCStateMachine():\n #Moore state machine\n if rst:\n q_i.next=0\n \n elif Dir==DirStates.Left:\n #set bit slice from left most to one from the right\n #from bit slice from one to the left to the right most\n q_i.next[len(q_i):1]=q_i[len(q_i)-1:]\n #set next right most bit to negated one to the left bit\n q_i.next[0]=not q_i[len(q_i)-1]\n \n elif Dir==DirStates.Halt:\n #create circular stop\n q_i.next=q_i\n \n elif Dir==DirStates.Right:\n #set bit slice from one from the left to right most\n #from bit slice left most bit to one from the right\n q_i.next[len(q_i)-1:]=q_i[len(q_i):1]\n #set next one bit from the right to be negated left most bit \n q_i.next[len(q_i)-1]=not q_i[0]\n \n \n \n @always_comb\n def OutputBuffer():\n q.next=q_i\n \n return instances()\n \n```\n\n## myHDL Testing\n\n\n```python\nBitSize=4\nPeeker.clear()\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nrst=Signal(bool(0)); Peeker(rst, 'rst')\nq=Signal(intbv(0)[BitSize:]); Peeker(q, 'q')\nDir=Signal(DirStates.Right); Peeker(Dir, 'Dir')\n\nDUT=JohnsonCount3(Dir, q, clk, rst)\n\ndef JohnsonCount3_TB():\n \"\"\"\n myHDL only Testbench for `JohnsonCount3` module\n \"\"\"\n @always(delay(1))\n 
def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n i=0\n while True:\n if i==2*2*BitSize:\n Dir.next=DirStates.Left\n elif i==4*2*BitSize:\n rst.next=1\n elif i==4*2*BitSize+1:\n rst.next=0\n elif i==4*2*BitSize+2:\n Dir.next=DirStates.Halt\n \n \n if i==5*2*BitSize:\n raise StopSimulation()\n \n i+=1\n yield clk.posedge\n \n return instances()\n\nsim=Simulation(DUT, JohnsonCount3_TB(), *Peeker.instances()).run()\n```\n\n\n```python\nPeeker.to_wavedrom()\n```\n\n\n
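Before looking at the simulation outputs, the expected behavior of `JohnsonCount3` can be cross-checked against a plain-Python model. This is a small sketch that assumes only the shift/invert update rule described above; the helper name `johnson_step_right` is made up for illustration:

```python
# Software model of the 4-bit Johnson ("Mobius") counter state machine:
# shift right, with the new MSB being the inverted old LSB.

def johnson_step_right(q, nbits=4):
    """One clock of the Johnson counter in the `Right` direction."""
    return ((q >> 1) | ((~q & 1) << (nbits - 1))) & ((1 << nbits) - 1)

q = 0
states = []
for _ in range(2 * 4):        # a Johnson counter cycles through 2n states
    states.append(q)
    q = johnson_step_right(q)

print(states)                 # [0, 8, 12, 14, 15, 7, 3, 1]
```

The eight distinct values match the `Right`-direction sequence produced by the myHDL simulation below, and confirm the $2n$-state cycle claimed earlier.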
\n\n\n\n\n\n```python\nJohnsonCount3Data=Peeker.to_dataframe()\nJohnsonCount3Data=JohnsonCount3Data[JohnsonCount3Data['clk']==1]\nJohnsonCount3Data.drop('clk', axis=1, inplace=True)\nJohnsonCount3Data.reset_index(drop=True, inplace=True)\nJohnsonCount3Data\n```\n\n\n\n\n
|    | Dir   | q  | rst |
|----|-------|----|-----|
| 0  | Right | 8  | 0 |
| 1  | Right | 12 | 0 |
| 2  | Right | 14 | 0 |
| 3  | Right | 15 | 0 |
| 4  | Right | 7  | 0 |
| 5  | Right | 3  | 0 |
| 6  | Right | 1  | 0 |
| 7  | Right | 0  | 0 |
| 8  | Right | 8  | 0 |
| 9  | Right | 12 | 0 |
| 10 | Right | 14 | 0 |
| 11 | Right | 15 | 0 |
| 12 | Right | 7  | 0 |
| 13 | Right | 3  | 0 |
| 14 | Right | 1  | 0 |
| 15 | Left  | 0  | 0 |
| 16 | Left  | 1  | 0 |
| 17 | Left  | 3  | 0 |
| 18 | Left  | 7  | 0 |
| 19 | Left  | 15 | 0 |
| 20 | Left  | 14 | 0 |
| 21 | Left  | 12 | 0 |
| 22 | Left  | 8  | 0 |
| 23 | Left  | 0  | 0 |
| 24 | Left  | 1  | 0 |
| 25 | Left  | 3  | 0 |
| 26 | Left  | 7  | 0 |
| 27 | Left  | 15 | 0 |
| 28 | Left  | 14 | 0 |
| 29 | Left  | 12 | 0 |
| 30 | Left  | 8  | 0 |
| 31 | Left  | 0  | 1 |
| 32 | Left  | 1  | 0 |
| 33 | Halt  | 3  | 0 |
| 34 | Halt  | 3  | 0 |
| 35 | Halt  | 3  | 0 |
| 36 | Halt  | 3  | 0 |
| 37 | Halt  | 3  | 0 |
| 38 | Halt  | 3  | 0 |
\n\n\n\n\n```python\nJohnsonCount3Data['q']=JohnsonCount3Data['q'].apply(lambda x:bin(x, BitSize))\nJohnsonCount3Data\n```\n\n\n\n\n
|    | Dir   | q    | rst |
|----|-------|------|-----|
| 0  | Right | 1000 | 0 |
| 1  | Right | 1100 | 0 |
| 2  | Right | 1110 | 0 |
| 3  | Right | 1111 | 0 |
| 4  | Right | 0111 | 0 |
| 5  | Right | 0011 | 0 |
| 6  | Right | 0001 | 0 |
| 7  | Right | 0000 | 0 |
| 8  | Right | 1000 | 0 |
| 9  | Right | 1100 | 0 |
| 10 | Right | 1110 | 0 |
| 11 | Right | 1111 | 0 |
| 12 | Right | 0111 | 0 |
| 13 | Right | 0011 | 0 |
| 14 | Right | 0001 | 0 |
| 15 | Left  | 0000 | 0 |
| 16 | Left  | 0001 | 0 |
| 17 | Left  | 0011 | 0 |
| 18 | Left  | 0111 | 0 |
| 19 | Left  | 1111 | 0 |
| 20 | Left  | 1110 | 0 |
| 21 | Left  | 1100 | 0 |
| 22 | Left  | 1000 | 0 |
| 23 | Left  | 0000 | 0 |
| 24 | Left  | 0001 | 0 |
| 25 | Left  | 0011 | 0 |
| 26 | Left  | 0111 | 0 |
| 27 | Left  | 1111 | 0 |
| 28 | Left  | 1110 | 0 |
| 29 | Left  | 1100 | 0 |
| 30 | Left  | 1000 | 0 |
| 31 | Left  | 0000 | 1 |
| 32 | Left  | 0001 | 0 |
| 33 | Halt  | 0011 | 0 |
| 34 | Halt  | 0011 | 0 |
| 35 | Halt  | 0011 | 0 |
| 36 | Halt  | 0011 | 0 |
| 37 | Halt  | 0011 | 0 |
| 38 | Halt  | 0011 | 0 |
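The binary view of `q` makes a useful property visible: a Johnson counter is a unit-distance code, meaning consecutive states differ in exactly one bit. A quick check over one full $2n$-state cycle (state order taken from the `Left`-direction rows):

```python
# Verify the unit-distance (Gray-code-like) property of the 4-bit Johnson cycle:
# every transition, including the wrap-around, flips exactly one bit.

cycle = [0b0000, 0b0001, 0b0011, 0b0111, 0b1111, 0b1110, 0b1100, 0b1000]
for a, b in zip(cycle, cycle[1:] + cycle[:1]):
    assert bin(a ^ b).count('1') == 1

print("every transition flips exactly one bit")
```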
\n\n\n\n\n```python\nJohnsonCount3Data[JohnsonCount3Data['Dir']==DirStates.Right]\n```\n\n\n\n\n
|    | Dir   | q    | rst |
|----|-------|------|-----|
| 0  | Right | 1000 | 0 |
| 1  | Right | 1100 | 0 |
| 2  | Right | 1110 | 0 |
| 3  | Right | 1111 | 0 |
| 4  | Right | 0111 | 0 |
| 5  | Right | 0011 | 0 |
| 6  | Right | 0001 | 0 |
| 7  | Right | 0000 | 0 |
| 8  | Right | 1000 | 0 |
| 9  | Right | 1100 | 0 |
| 10 | Right | 1110 | 0 |
| 11 | Right | 1111 | 0 |
| 12 | Right | 0111 | 0 |
| 13 | Right | 0011 | 0 |
| 14 | Right | 0001 | 0 |
\n\n\n\n\n```python\nJohnsonCount3Data[JohnsonCount3Data['Dir']==DirStates.Left]\n```\n\n\n\n\n
|    | Dir  | q    | rst |
|----|------|------|-----|
| 15 | Left | 0000 | 0 |
| 16 | Left | 0001 | 0 |
| 17 | Left | 0011 | 0 |
| 18 | Left | 0111 | 0 |
| 19 | Left | 1111 | 0 |
| 20 | Left | 1110 | 0 |
| 21 | Left | 1100 | 0 |
| 22 | Left | 1000 | 0 |
| 23 | Left | 0000 | 0 |
| 24 | Left | 0001 | 0 |
| 25 | Left | 0011 | 0 |
| 26 | Left | 0111 | 0 |
| 27 | Left | 1111 | 0 |
| 28 | Left | 1110 | 0 |
| 29 | Left | 1100 | 0 |
| 30 | Left | 1000 | 0 |
| 31 | Left | 0000 | 1 |
| 32 | Left | 0001 | 0 |
\n\n\n\n\n```python\nJohnsonCount3Data[JohnsonCount3Data['Dir']==DirStates.Halt]\n```\n\n\n\n\n
|    | Dir  | q    | rst |
|----|------|------|-----|
| 33 | Halt | 0011 | 0 |
| 34 | Halt | 0011 | 0 |
| 35 | Halt | 0011 | 0 |
| 36 | Halt | 0011 | 0 |
| 37 | Halt | 0011 | 0 |
| 38 | Halt | 0011 | 0 |
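Note that the simulation only ever visits the $2n = 8$ Johnson states; the other $2^n - 2n = 8$ patterns are the unused states mentioned earlier. A small sketch (reusing the same right-shift update rule as `JohnsonCount3`; the helper names are illustrative) shows that a counter perturbed into an unused state never rejoins the main cycle, which is the lockout problem:

```python
# Enumerate the main Johnson cycle for n = 4, then show that a state outside
# it stays outside it forever ("lockout").

n = 4
def step(q):
    """Right-direction Johnson update: shift right, new MSB = inverted old LSB."""
    return ((q >> 1) | ((~q & 1) << (n - 1))) & (2**n - 1)

main_cycle = set()
q = 0
while q not in main_cycle:
    main_cycle.add(q)
    q = step(q)

unused = set(range(2**n)) - main_cycle
print(len(main_cycle), len(unused))   # 8 8  ->  2n and 2**n - 2n

q = 2                                  # a pattern outside the main cycle
for _ in range(16):
    q = step(q)
    assert q not in main_cycle         # it never re-joins the main cycle
```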
\n\n\n\n## Verilog Code\n\n\n```python\nDUT.convert()\nVerilogTextReader('JohnsonCount3');\n```\n\n ***Verilog modual from JohnsonCount3.v***\n \n // File: JohnsonCount3.v\n // Generated by MyHDL 0.10\n // Date: Wed Sep 5 07:53:01 2018\n \n \n `timescale 1ns/10ps\n \n module JohnsonCount3 (\n Dir,\n q,\n clk,\n rst\n );\n // Based of the `jc2` exsample from the myHDL website \n // http://www.myhdl.org/docs/examples/jc2.html\n // \n // Input:\n // Dir(state): Left,Right, Halt Direction States\n // clk(bool): input clock\n // rst(bool): reset signal\n // \n // Ouput:\n // q(bitVec): the values in the D flip flops(aka counter)\n \n input [1:0] Dir;\n output [3:0] q;\n wire [3:0] q;\n input clk;\n input rst;\n \n reg [3:0] q_i = 0;\n \n \n \n always @(posedge clk, negedge rst) begin: JOHNSONCOUNT3_JCSTATEMACHINE\n if (rst) begin\n q_i <= 0;\n end\n else if ((Dir == 2'b00)) begin\n q_i[4-1:1] <= q_i[(4 - 1)-1:0];\n q_i[0] <= (!q_i[(4 - 1)]);\n end\n else if ((Dir == 2'b01)) begin\n q_i <= q_i;\n end\n else if ((Dir == 2'b10)) begin\n q_i[(4 - 1)-1:0] <= q_i[4-1:1];\n q_i[(4 - 1)] <= (!q_i[0]);\n end\n end\n \n \n \n assign q = q_i;\n \n endmodule\n \n\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{JohnsonCount3_RTL.png}}\n\\caption{\\label{fig:JC3RTL} JohnsonCount3 RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{JohnsonCount3_SYN.png}}\n\\caption{\\label{fig:JC3SYN} JohnsonCount3 Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{JohnsonCount3_IMP.png}}\n\\caption{\\label{fig:BGIMP} JohnsonCount3 Implementated Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n## Verilog Testbench\n\n\n```python\n@block\ndef JohnsonCount3_TBV():\n \"\"\"\n myHDL -> Verilog Testbench for `UpDown_Counter` module\n \"\"\"\n \n clk=Signal(bool(0))\n rst=Signal(bool(0))\n q=Signal(intbv(0)[BitSize:])\n Dir=Signal(DirStates.Right)\n \n 
@always_comb\n def print_data():\n print(clk, rst, q, Dir)\n\n DUT=JohnsonCount3(Dir, q, clk, rst)\n\n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(1)\n \n @instance\n def stimules():\n i=0\n while True:\n if i==2*2*BitSize:\n Dir.next=DirStates.Left\n elif i==4*2*BitSize:\n rst.next=1\n elif i==4*2*BitSize+1:\n rst.next=0\n elif i==4*2*BitSize+2:\n Dir.next=DirStates.Halt\n else:\n pass\n \n \n if i==5*2*BitSize:\n raise StopSimulation()\n \n i+=1\n yield clk.posedge\n \n return instances()\n\nTB=JohnsonCount3_TBV()\nTB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('JohnsonCount3_TBV');\n```\n\n \n \n \n \n ***Verilog modual from JohnsonCount3_TBV.v***\n \n // File: JohnsonCount3_TBV.v\n // Generated by MyHDL 0.10\n // Date: Wed Sep 5 07:53:06 2018\n \n \n `timescale 1ns/10ps\n \n module JohnsonCount3_TBV (\n \n );\n // myHDL -> Verilog Testbench for `UpDown_Counter` module\n \n \n reg clk = 0;\n reg rst = 0;\n wire [3:0] q;\n reg [1:0] Dir = 2'b10;\n reg [3:0] JohnsonCount30_0_q_i = 0;\n \n \n \n always @(rst, q, Dir, clk) begin: JOHNSONCOUNT3_TBV_PRINT_DATA\n $write(\"%h\", clk);\n $write(\" \");\n $write(\"%h\", rst);\n $write(\" \");\n $write(\"%h\", q);\n $write(\" \");\n $write(\"%h\", Dir);\n $write(\"\\n\");\n end\n \n \n always @(posedge clk, negedge rst) begin: JOHNSONCOUNT3_TBV_JOHNSONCOUNT30_0_JCSTATEMACHINE\n if (rst) begin\n JohnsonCount30_0_q_i <= 0;\n end\n else if ((Dir == 2'b00)) begin\n JohnsonCount30_0_q_i[4-1:1] <= JohnsonCount30_0_q_i[(4 - 1)-1:0];\n JohnsonCount30_0_q_i[0] <= (!JohnsonCount30_0_q_i[(4 - 1)]);\n end\n else if ((Dir == 2'b01)) begin\n JohnsonCount30_0_q_i <= JohnsonCount30_0_q_i;\n end\n else if ((Dir == 2'b10)) begin\n JohnsonCount30_0_q_i[(4 - 1)-1:0] <= JohnsonCount30_0_q_i[4-1:1];\n JohnsonCount30_0_q_i[(4 - 1)] <= (!JohnsonCount30_0_q_i[0]);\n end\n end\n \n \n \n assign q = JohnsonCount30_0_q_i;\n \n \n initial begin: JOHNSONCOUNT3_TBV_CLK_SIGNAL\n while (1'b1) begin\n 
clk <= (!clk);\n # 1;\n end\n end\n \n \n initial begin: JOHNSONCOUNT3_TBV_STIMULES\n integer i;\n i = 0;\n while (1'b1) begin\n if ((i == ((2 * 2) * 4))) begin\n Dir <= 2'b00;\n end\n else if ((i == ((4 * 2) * 4))) begin\n rst <= 1;\n end\n else if ((i == (((4 * 2) * 4) + 1))) begin\n rst <= 0;\n end\n else if ((i == (((4 * 2) * 4) + 2))) begin\n Dir <= 2'b01;\n end\n else begin\n // pass\n end\n if ((i == ((5 * 2) * 4))) begin\n $finish;\n end\n i = i + 1;\n @(posedge clk);\n end\n end\n \n endmodule\n \n\n\n## PYNQ-Z1 Deployment\n\n### Board Constraints\n\n\n```python\nConstraintXDCTextReader('PYNQ_Z1Constraints_JohnsonCount3');\n```\n\n ***Constraint file from PYNQ_Z1Constraints_JohnsonCount3.xdc***\n \n ## PYNQ-Z1 Constraint File for JohnsonCount3\n ## Based on https://github.com/Xilinx/PYNQ/blob/master/sdbuild/boot_configs/Pynq-Z1-defconfig/constraints.xdc\n \n \n ##Switches\n \n set_property -dict {PACKAGE_PIN M20 IOSTANDARD LVCMOS33} [get_ports {Dir[0]}]; ##SW0\n set_property -dict {PACKAGE_PIN M19 IOSTANDARD LVCMOS33} [get_ports {Dir[1]}]; ##SW1\n \n ##LEDs\n \n set_property -dict {PACKAGE_PIN R14 IOSTANDARD LVCMOS33} [get_ports {q[0]}]; ##LED0\n set_property -dict {PACKAGE_PIN P14 IOSTANDARD LVCMOS33} [get_ports {q[1]}]; ##LED1\n set_property -dict {PACKAGE_PIN N16 IOSTANDARD LVCMOS33} [get_ports {q[2]}]; ##LED2\n set_property -dict {PACKAGE_PIN M14 IOSTANDARD LVCMOS33} [get_ports {q[3]}]; ##LED3\n \n set_property -dict { PACKAGE_PIN D19 IOSTANDARD LVCMOS33 } [get_ports { rst }]; ##btn[0]\n set_property -dict { PACKAGE_PIN L19 IOSTANDARD LVCMOS33 } [get_ports { clk }]; ##btn[3]\n ## Needed since if constraints even thinks a clock port is going to be connected to a non clock driver it wont synthize without it\n set_property CLOCK_DEDICATED_ROUTE FALSE [get_nets {clk}];\n ##should only be done for teaching and realy only on LOW (nearly none) jitter (Bouncy) sources \n \n\n\nPay attention to line 22 that follows of setting the `clk` input signal to Button 
3\n```\nset_property CLOCK_DEDICATED_ROUTE FALSE [get_nets {clk}];\n\n```\nThis is needed in the constraint file in order for the Implementation and Bitstream to work. What this line says to Vivado is \"I know that a clock signal is hooked up to a nonstandard clock source; go ahead and make the connection anyway\". This is needed because the internal rule checking in Vivado will raise errors if one attempts to connect a non-clock source to what it perceives (in this case correctly) to be a clock input. Normally this should not be done. But because this is for teaching, and `JohnsonCount3` was to be implemented as a stand-alone module without a clock divider at a clock speed on the order of hertz (FPGAs typically have built-in clocks in the megahertz range), this had to be done. \n\n### Deployment Results\nYouTube:[Bi-Directional Johnson Counter from myHDL on the PYNQ-Z1\n](https://www.youtube.com/watch?v=UulvPq7Tk_Q)\n\n# Cyclic Shift Register Ring Counter\nLike the Johnson Counter, which is the Mobius-ring version of the Ring Counter, a Ring Counter is, in reality, a cyclic shift register, since it simply shifts the bits in its memory left or right. The following aspects of the Ring Counter are due to Sougata Bhattacharjee [https://www.quora.com/What-is-the-difference-between-a-Johnson-counter-and-a-ring-counter]\n\n\\begin{itemize}\n\\item In a ring counter, the output of the last flip-flop is connected to the input of the first flip-flop.\n\n\\item If $n$ is the number of flip-flops used in a ring counter, the number of possible states is also $n$. That means the number of states is equal to the number of flip-flops used.\n\n\\item A ring counter is mostly used in successive-approximation ADCs and stepper motor control. 
\n\n\\item Decoding is easy in a ring counter, as the number of states is equal to the number of flip-flops.\n\n\\item If the input frequency to a ring counter is $f$, then the output frequency is $\\dfrac{f}{n}$.\n\n\\item The total number of unused states in the ring counter is $(2^n - n)$.\n\n\\end{itemize}\n\nFor a four-bit ring counter, the next state diagram is given by the following table from [wikipedia](https://en.wikipedia.org/wiki/Ring_counter)\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=1cm]{RingCounterTable.png}}\n\\caption{\\label{fig:RingTable} 4-bit Ring Counter next state table from [wikipedia](https://en.wikipedia.org/wiki/Ring_counter#Four-bit_ring-counter_sequences)}\n\\end{figure}\n\n\n## myHDL Module\n\n\n```python\n#Create the Direction States for Ring Counter\nDirStates=enum('Left','Halt','Right')\nprint(f\"`Left` state representation is {bin(DirStates.Left)}\")\nprint(f\"`Halt` state representation is {bin(DirStates.Halt)}\")\nprint(f\"`Right` state representation is {bin(DirStates.Right)}\")\n```\n\n `Left` state representation is 0\n `Halt` state representation is 1\n `Right` state representation is 10\n\n\n\n```python\n@block\ndef RingCounter(seed, Dir, q, clk, rst):\n \"\"\"\n Seedable and direction-controllable ring counter in myHDL\n \n Input:\n seed(bitvec): initial value for ring counter\n Dir(enum): Direction control signal\n clk(bool): clock\n rst(bool): reset\n Output:\n q(bitvec): the current ring counter state\n \"\"\"\n q_i=Signal(intbv(int(seed))[len(q):])\n @always(clk.posedge, rst.negedge)\n def RCStateMachine():\n #Moore state machine\n if rst:\n q_i.next=seed\n elif Dir==DirStates.Left:\n q_i.next=concat(q_i[len(q_i)-1:0],q_i[len(q_i)-1])\n\n elif Dir==DirStates.Halt:\n #create circular stop\n q_i.next=q_i\n \n elif Dir==DirStates.Right:\n q_i.next=concat(q_i[0], q_i[len(q_i):1])\n \n @always_comb\n def OutputBuffer():\n q.next=q_i\n \n return instances()\n\n \n \n```\n\n## myHDL testing\n\n\n```python\nBitSize=4; 
seedval=3\nPeeker.clear()\nseed=Signal(intbv(seedval)[BitSize:]); Peeker(seed, 'seed')\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nrst=Signal(bool(0)); Peeker(rst, 'rst')\nq=Signal(intbv(0)[BitSize:]); Peeker(q, 'q')\nDir=Signal(DirStates.Right); Peeker(Dir, 'Dir')\n\nDUT=RingCounter(seed, Dir, q, clk, rst)\n\ndef RingCounter_TB():\n \"\"\"\n myHDL only Testbench for `RingCounter` module\n \"\"\"\n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n i=0\n while True:\n if i==2*BitSize:\n Dir.next=DirStates.Left\n elif i==3*BitSize:\n rst.next=1\n elif i==3*BitSize+1:\n rst.next=0\n elif i==3*BitSize+2:\n Dir.next=DirStates.Halt\n \n \n if i==5*BitSize:\n raise StopSimulation()\n \n i+=1\n yield clk.posedge\n \n return instances()\n\nsim=Simulation(DUT, RingCounter_TB(), *Peeker.instances()).run()\n```\n\n\n```python\nPeeker.to_wavedrom()\n```\n\n\n
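As with the Johnson counter, the ring counter's behavior can be sanity-checked with a plain-Python bit-rotation model before reading off the simulation tables. A sketch assuming only the `concat`-based update in `RingCounter` (`Right` rotates the old LSB up into the MSB; the helper name `ring_rotate_right` is illustrative):

```python
# Software model of the seeded ring counter rotating in the `Right` direction.

def ring_rotate_right(q, nbits=4):
    """One clock of the ring counter: pure rotation, no bits are lost."""
    return ((q >> 1) | ((q & 1) << (nbits - 1))) & ((1 << nbits) - 1)

q = 3                    # the seed value used in the testbench (0011)
seq = []
for _ in range(4):       # the period is n clocks for an n-bit ring counter
    q = ring_rotate_right(q)
    seq.append(q)

print(seq)               # [9, 12, 6, 3] -- back to the seed after n clocks
```

The sequence 9, 12, 6, 3 is exactly the `Right`-direction pattern that appears in the simulation results below.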
\n\n\n\n\n\n```python\nRingCountData=Peeker.to_dataframe()\nRingCountData=RingCountData[RingCountData['clk']==1]\nRingCountData.drop('clk', axis=1, inplace=True)\nRingCountData.reset_index(drop=True, inplace=True)\nRingCountData\n```\n\n\n\n\n
|    | Dir   | q  | rst | seed |
|----|-------|----|-----|------|
| 0  | Right | 9  | 0 | 3 |
| 1  | Right | 12 | 0 | 3 |
| 2  | Right | 6  | 0 | 3 |
| 3  | Right | 3  | 0 | 3 |
| 4  | Right | 9  | 0 | 3 |
| 5  | Right | 12 | 0 | 3 |
| 6  | Right | 6  | 0 | 3 |
| 7  | Left  | 3  | 0 | 3 |
| 8  | Left  | 6  | 0 | 3 |
| 9  | Left  | 12 | 0 | 3 |
| 10 | Left  | 9  | 0 | 3 |
| 11 | Left  | 3  | 1 | 3 |
| 12 | Left  | 6  | 0 | 3 |
| 13 | Halt  | 12 | 0 | 3 |
| 14 | Halt  | 12 | 0 | 3 |
| 15 | Halt  | 12 | 0 | 3 |
| 16 | Halt  | 12 | 0 | 3 |
| 17 | Halt  | 12 | 0 | 3 |
| 18 | Halt  | 12 | 0 | 3 |
\n\n\n\n\n```python\nRingCountData['q']=RingCountData['q'].apply(lambda x:bin(x, BitSize))\nRingCountData\n```\n\n\n\n\n
|    | Dir   | q    | rst | seed |
|----|-------|------|-----|------|
| 0  | Right | 1001 | 0 | 3 |
| 1  | Right | 1100 | 0 | 3 |
| 2  | Right | 0110 | 0 | 3 |
| 3  | Right | 0011 | 0 | 3 |
| 4  | Right | 1001 | 0 | 3 |
| 5  | Right | 1100 | 0 | 3 |
| 6  | Right | 0110 | 0 | 3 |
| 7  | Left  | 0011 | 0 | 3 |
| 8  | Left  | 0110 | 0 | 3 |
| 9  | Left  | 1100 | 0 | 3 |
| 10 | Left  | 1001 | 0 | 3 |
| 11 | Left  | 0011 | 1 | 3 |
| 12 | Left  | 0110 | 0 | 3 |
| 13 | Halt  | 1100 | 0 | 3 |
| 14 | Halt  | 1100 | 0 | 3 |
| 15 | Halt  | 1100 | 0 | 3 |
| 16 | Halt  | 1100 | 0 | 3 |
| 17 | Halt  | 1100 | 0 | 3 |
| 18 | Halt  | 1100 | 0 | 3 |
\n\n\n\n\n```python\nRingCountData[RingCountData['Dir']==DirStates.Right]\n```\n\n\n\n\n
|   | Dir   | q    | rst | seed |
|---|-------|------|-----|------|
| 0 | Right | 1001 | 0 | 3 |
| 1 | Right | 1100 | 0 | 3 |
| 2 | Right | 0110 | 0 | 3 |
| 3 | Right | 0011 | 0 | 3 |
| 4 | Right | 1001 | 0 | 3 |
| 5 | Right | 1100 | 0 | 3 |
| 6 | Right | 0110 | 0 | 3 |
\n\n\n\n\n```python\nRingCountData[RingCountData['Dir']==DirStates.Left]\n```\n\n\n\n\n
|    | Dir  | q    | rst | seed |
|----|------|------|-----|------|
| 7  | Left | 0011 | 0 | 3 |
| 8  | Left | 0110 | 0 | 3 |
| 9  | Left | 1100 | 0 | 3 |
| 10 | Left | 1001 | 0 | 3 |
| 11 | Left | 0011 | 1 | 3 |
| 12 | Left | 0110 | 0 | 3 |
\n\n\n\n\n```python\nRingCountData[RingCountData['Dir']==DirStates.Halt]\n```\n\n\n\n\n
|    | Dir  | q    | rst | seed |
|----|------|------|-----|------|
| 13 | Halt | 1100 | 0 | 3 |
| 14 | Halt | 1100 | 0 | 3 |
| 15 | Halt | 1100 | 0 | 3 |
| 16 | Halt | 1100 | 0 | 3 |
| 17 | Halt | 1100 | 0 | 3 |
| 18 | Halt | 1100 | 0 | 3 |
\n\n\n\n## Verilog Code\n\n\n```python\nDUT.convert()\nVerilogTextReader('RingCounter');\n```\n\n ***Verilog modual from RingCounter.v***\n \n // File: RingCounter.v\n // Generated by MyHDL 0.10\n // Date: Wed Sep 5 07:53:27 2018\n \n \n `timescale 1ns/10ps\n \n module RingCounter (\n seed,\n Dir,\n q,\n clk,\n rst\n );\n // Seedable and direction controlable ring counter in myHDL\n // \n // Input:\n // seed(bitvec): intial value for ring counter\n // Dir(enum): Direction contorl signal\n // clk(bool): clock\n // rst(bool): reset\n // Output\n \n input [3:0] seed;\n input [1:0] Dir;\n output [3:0] q;\n wire [3:0] q;\n input clk;\n input rst;\n \n reg [3:0] q_i = 3;\n \n \n \n always @(posedge clk, negedge rst) begin: RINGCOUNTER_RCSTATEMACHINE\n if (rst) begin\n q_i <= seed;\n end\n else if ((Dir == 2'b00)) begin\n q_i <= {q_i[(4 - 1)-1:0], q_i[(4 - 1)]};\n end\n else if ((Dir == 2'b01)) begin\n q_i <= q_i;\n end\n else if ((Dir == 2'b10)) begin\n q_i <= {q_i[0], q_i[4-1:1]};\n end\n end\n \n \n \n assign q = q_i;\n \n endmodule\n \n\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{RingCounter_RTL.png}}\n\\caption{\\label{fig:RCRTL} RingCounter RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{RingCounter_SYN.png}}\n\\caption{\\label{fig:RCSYN} RingCounter Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n## Verilog Testbench\n\n\n```python\nBitSize=4; seedval=3\n@block\ndef RingCounter_TBV():\n \"\"\"\n myHDL -> verilog Testbench for `RingCounter` module\n \"\"\"\n seed=Signal(intbv(seedval)[BitSize:])\n clk=Signal(bool(0))\n rst=Signal(bool(0))\n q=Signal(intbv(0)[BitSize:])\n Dir=Signal(DirStates.Right)\n\n @always_comb\n def print_data():\n print(seed, Dir, q, clk, rst)\n\n DUT=RingCounter(seed, Dir, q, clk, rst)\n\n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(1)\n \n @instance\n def stimules():\n i=0\n while True:\n if i==2*BitSize:\n 
Dir.next=DirStates.Left\n elif i==3*BitSize:\n rst.next=1\n elif i==3*BitSize+1:\n rst.next=0\n elif i==3*BitSize+2:\n Dir.next=DirStates.Halt\n \n \n if i==5*BitSize:\n raise StopSimulation()\n \n i+=1\n yield clk.posedge\n \n return instances()\n\n\n\nTB=RingCounter_TBV()\nTB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('RingCounter_TBV');\n```\n\n \n \n \n \n \n ***Verilog modual from RingCounter_TBV.v***\n \n // File: RingCounter_TBV.v\n // Generated by MyHDL 0.10\n // Date: Wed Sep 5 07:53:29 2018\n \n \n `timescale 1ns/10ps\n \n module RingCounter_TBV (\n \n );\n // myHDL -> verilog Testbench for `RingCounter` module\n \n \n reg clk = 0;\n reg rst = 0;\n wire [3:0] q;\n reg [1:0] Dir = 2'b10;\n wire [3:0] seed;\n reg [3:0] RingCounter0_0_q_i = 3;\n \n assign seed = 4'd3;\n \n \n always @(q, rst, Dir, clk, seed) begin: RINGCOUNTER_TBV_PRINT_DATA\n $write(\"%h\", seed);\n $write(\" \");\n $write(\"%h\", Dir);\n $write(\" \");\n $write(\"%h\", q);\n $write(\" \");\n $write(\"%h\", clk);\n $write(\" \");\n $write(\"%h\", rst);\n $write(\"\\n\");\n end\n \n \n always @(posedge clk, negedge rst) begin: RINGCOUNTER_TBV_RINGCOUNTER0_0_RCSTATEMACHINE\n if (rst) begin\n RingCounter0_0_q_i <= seed;\n end\n else if ((Dir == 2'b00)) begin\n RingCounter0_0_q_i <= {RingCounter0_0_q_i[(4 - 1)-1:0], RingCounter0_0_q_i[(4 - 1)]};\n end\n else if ((Dir == 2'b01)) begin\n RingCounter0_0_q_i <= RingCounter0_0_q_i;\n end\n else if ((Dir == 2'b10)) begin\n RingCounter0_0_q_i <= {RingCounter0_0_q_i[0], RingCounter0_0_q_i[4-1:1]};\n end\n end\n \n \n \n assign q = RingCounter0_0_q_i;\n \n \n initial begin: RINGCOUNTER_TBV_CLK_SIGNAL\n while (1'b1) begin\n clk <= (!clk);\n # 1;\n end\n end\n \n \n initial begin: RINGCOUNTER_TBV_STIMULES\n integer i;\n i = 0;\n while (1'b1) begin\n if ((i == (2 * 4))) begin\n Dir <= 2'b00;\n end\n else if ((i == (3 * 4))) begin\n rst <= 1;\n end\n else if ((i == ((3 * 4) + 1))) begin\n rst <= 0;\n end\n else if ((i == ((3 * 4) + 
2))) begin\n Dir <= 2'b01;\n end\n if ((i == (5 * 4))) begin\n $finish;\n end\n i = i + 1;\n @(posedge clk);\n end\n end\n \n endmodule\n \n\n\n /home/iridium/anaconda3/lib/python3.6/site-packages/myhdl/conversion/_toVerilog.py:349: ToVerilogWarning: Signal is not driven: seed\n category=ToVerilogWarning\n\n\n## PYNQ-Z1 Deployment\n\n### Block Design\nVideo on how Block Design was made found here: YouTube:[Seeded Ring Counter RTL IP Hookup in Vivado from myHDL to PYNQ-Z1\n](https://youtu.be/vnCKs0hQq3U)\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{RingBlockDesign.png}}\n\\caption{\\label{fig:RCBD} RingCounter Block IP Block Design; Xilinx Vivado 2017.4}\n\\end{figure}\n\nThe Constant IP in the top left corner of the Block Design has the following internal parameterizations, accessed by right clicking on the IP and selecting \"Customize Block\"\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{ConstIPSettings.png}}\n\\caption{\\label{fig:ConstSetting} Xilinx IP Constant 1.1 Internal Settings for Ring Counter Block design ; Xilinx Vivado 2017.4}\n\\end{figure}\n\n### Board Constraints\n\n\n```python\nConstraintXDCTextReader('PYNQ_Z1Constraints_RingCounterBlock');\n```\n\n ***Constraint file from PYNQ_Z1Constraints_RingCounterBlock.xdc***\n \n ## PYNQ-Z1 Constraint File for RingCounterBlock\n ## Based on https://github.com/Xilinx/PYNQ/blob/master/sdbuild/boot_configs/Pynq-Z1-defconfig/constraints.xdc\n \n \n ##Switches\n \n set_property -dict {PACKAGE_PIN M20 IOSTANDARD LVCMOS33} [get_ports {Dir[0]}]; ##SW0\n set_property -dict {PACKAGE_PIN M19 IOSTANDARD LVCMOS33} [get_ports {Dir[1]}]; ##SW1\n \n ##LEDs\n \n set_property -dict {PACKAGE_PIN R14 IOSTANDARD LVCMOS33} [get_ports {q[0]}]; ##LED0\n set_property -dict {PACKAGE_PIN P14 IOSTANDARD LVCMOS33} [get_ports {q[1]}]; ##LED1\n set_property -dict {PACKAGE_PIN N16 IOSTANDARD LVCMOS33} [get_ports {q[2]}]; ##LED2\n set_property -dict {PACKAGE_PIN M14 IOSTANDARD LVCMOS33} [get_ports {q[3]}]; 
##LED3\n \n set_property -dict { PACKAGE_PIN D19 IOSTANDARD LVCMOS33 } [get_ports { rst }]; ##btn[0]\n set_property -dict { PACKAGE_PIN L19 IOSTANDARD LVCMOS33 } [get_ports { clk }]; ##btn[3]\n ## Needed since if constraints even thinks a clock port is going to be connected to a non clock driver it wont synthize without it\n set_property CLOCK_DEDICATED_ROUTE FALSE [get_nets {clk}];\n ##should only be done for teaching and realy only on LOW (nearly none) jitter (Bouncy) sources \n \n\n\n### RTL, Synthesis, & Implementation \n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{RingCounterBlock_RTL1.png}}\n\\caption{\\label{fig:RCBRTL1} Ring Counter Block RTL schematic Level 1; Xilinx Vivado 2017.4}\n\\end{figure}\n\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{RingCounterBlock_RTL23.png}}\n\\caption{\\label{fig:RCBRTL23} Ring Counter Block RTL `RingImp_i` schematic internal; Xilinx Vivado 2017.4}\n\\end{figure}\n\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{RingCounterBlock_RTL4.png}}\n\\caption{\\label{fig:RCBRTL4} Ring Counter Block RTL `RingCounter_0` `inst` schematic internal; Xilinx Vivado 2017.4}\n\\end{figure}\n\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{RingCounterBlock_SYN13.png}}\n\\caption{\\label{fig:RCBSYN} Ring Counter Block Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{RingCounterBlock_IMP1.png}}\n\\caption{\\label{fig:RCBIMP1} Ring Counter Block Implemented Schematic Top Level; Xilinx Vivado 2017.4}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{RingCounterBlock_IMP3.png}}\n\\caption{\\label{fig:RCBIMP3} Ring Counter Block Implemented Schematic Expanded; Xilinx Vivado 2017.4}\n\\end{figure}\n\n### Deployment Results\nYouTube:[Seeded Ring Counter from myHDL on PYNQ-Z1\n](https://www.youtube.com/watch?v=7ZQ4qCvjokU)\n\n\n```\nimport torch, os, ipywidgets, json\nimport torch.nn as nn\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport torch.nn.functional as F\nimport torchvision as tv\nimport matplotlib.pyplot as plt\nfrom google.colab import drive\nfrom google.colab import widgets as clwidgets\nfrom tqdm import tqdm_notebook as tqdm\nfrom torchvision.datasets import MNIST\nfrom torch.utils.data import DataLoader\nPI = torch.tensor(np.pi, dtype=torch.float32)\nsns.set_style('whitegrid')\n%matplotlib inline\n```\n\n\n```\nsave_path = '/content/gdrive/My Drive/Tutorials/InfoGAN/MNIST/'\ndrive.mount('/content/gdrive', force_remount=True)\nos.makedirs(os.path.join(save_path, 'ckpt'), exist_ok=True)\n```\n\n# A quick tour on Information Maximising Generative Adversarial Networks (InfoGAN)\n*Yen Yu* $\\diamond$ *yen.yu@araya.org* $\\diamond$ *yen.yu.10@ucl.ac.uk*\n\n*17/Jun/2019*\n\n\n\n## Background\n* This Notebook traces my steps from knowing about GANs only on a conceptual level to actually learning the theory behind them by implementing it. 
\n* I intend this Notebook to be a tutorial accessible to those who want to learn more about generative modelling (there is a plan to cover other GANs and VAE+Normalising Flows).\n\n## Goal\n* The goal of this Notebook is to provide just enough theoretical detail of GANs to motivate an implementation.\n* The main focus will be on the implementation of InfoGAN.\n* To successfully train an InfoGAN on the MNIST dataset.\n\n## References\n[1] The original GAN paper by Goodfellow\n\n[2] The InfoGAN paper by Xi Chen\n\n[3] Ferenc Huszar's [blog post](https://www.inference.vc/infogan-variational-bound-on-mutual-information-twice/)\u2014very insightful\n\n[4] [Useful GAN hacks (multiple authors)](https://github.com/soumith/ganhacks#authors)\n\n## Classic GAN\n### Theory\n#### Layperson example\n\nYou are visiting a friend who is working in the packaging department of a factory. His/her job is to load the products onto a conveyor belt to initiate automatic packaging. You noticed, from the control room, that the conveyor belt has a layout resembling the shape of the letter Y. Products are loaded onto the branches on the top end and get merged into a single line at the junction in the middle. The packaging mechanism is located at the bottom end. An electronic gate at the junction controls which branch gets to pass. Whenever the indicator light on the gate shines red, the left branch feeds into the bottom branch. And when a green light is shone, the opposite is the case.\n\nYou know the factory has a long-term partnership with a particular supplier who provides the standard products for this line. When the supply is short, the factory seeks additional supply from another supplier, but their quality can be hit and miss.\n\nToday, both branches are operating but you have no idea whether there has been a supply shortage. 
If there is a shortage, one of the branches must be carrying products of \"somewhat questionable quality\".\n\nFrom the control room terminal, you see the gate camera view, along with its indicator light. Out of curiosity, you decide to challenge yourself to answer whether there *is* a shortage by finding out whether the different lights signal a discernible difference in product quality.\n\nThis time, you are positive there is one, because the non-standard products have a grey-ish tint.\n\nHopefully, on your next visit, knowing that there is still a shortage going on, you will find yourself having a hard time arriving at a definite conclusion. This would all but confirm that the additional supplier has boosted their quality to be on a par with the standard.\n\n#### Remarks\n\n* A Generative Adversarial Network (GAN) typically consists of a generator and a discriminator, both trainable neural networks. The generator tries to generate samples to fool the discriminator, making it believe the generated samples are as good as the real ones. And the discriminator tries to unfool itself.\n* The standard supplier provides the *real* data: $x_{real}$.\n* The second supplier, which is the generator, provides the *fake* data: $x_{fake}$.\n* You, the discriminator, serve to differentiate the real from the fake. But you also help the fake fake better.\n* The role of the indicator light $d$ will become clear in the following part.\n* The layperson example lends itself to an information-theoretic interpretation, as described below.\n\n#### An information-theoretic perspective\n\nLet us consider two random variables $x_{real}$ and $x_{fake}$, one representing data coming from a *real* distribution and another trying to mimic it. 
And $d \sim \mathrm{Bern}(0.5)$ being a mixing factor allows the following interaction: \n\n$$\n\begin{align} \nx &= \left\{ \n    \begin{array}{l}\n    x_{real},\; \hbox{ if } d=1 \\\n    x_{fake},\; \hbox{ if } d = 0\n    \end{array}\n  \right.\n\end{align}\n$$\n\nThe task for a classic GAN is to make $x_{fake}$ as real as possible such that knowing $d$ provides minimal, if any, insight into the actual identity of $x$. In the language of information theory, this is equivalent to saying\n\n$$\min I(x : d)$$\n\nThe learning rule of the classic GAN can be derived from here by considering:\n\n$$\n\begin{align}\nI(x : d) &= H(d) - H(d | x) \\\n &= H(d) + \mathbb E_x \mathbb E_{d|x} \log p(d|x) \\\n &= H(d) + \mathbb E_{d,x} \log \left[ \frac {p(d|x)} {q(d|x)} q(d|x) \right] \\\n &\ge H(d) + \mathbb E_{d,x} \log q(d|x)\n\end{align}\n$$\n\nHere, an auxiliary density $q_{d|x}$ is introduced to create a lower bound on the mutual information. This bound is tight when $p(d|x) = q(d|x)$. Then the following holds:\n\n$$\n\begin{align}\nI(x:d) &= H(d) + \max_{q_{d|x}} \mathbb E_{d,x} \log q(d|x) \\\n &\ge H(d) + \max_\varphi \mathbb E_{d,x} \log q(d|x; \varphi)\n\end{align}\n$$\n\nwhere the last line turns $q_{d|x}$ into a parametric family by assumption.\n\nThen, one can expand the expectation to rewrite the expression, since we know $d$ is a coin-flip:\n\n$$\n\begin{align}\nI(x:d) &\ge H(d) + \max_\varphi \left( \n    \mathbb E_{x_{real}} \log q(1 | x_{real};\varphi) +\n    \mathbb E_{x_{fake}} \log q(0 | x_{fake}; \varphi)\n  \right)\n\end{align}\n$$\n\n\n\n### Implementation\nFollowing the derivation immediately above, it becomes clear that we can treat the approximate density $q(d=\{0, 1\} | x; \varphi)$ as a deep neural network that works as a discriminator ($\mathcal D(x; \varphi)$). Specifically, we want this D network to take as input, say, an image tensor and output a scalar between 0 and 1. 
This D network should be able to tell $x$ apart by learning to assign 1 to $x_{real}$ and 0 to $x_{fake}$:\n$$\n\begin{align}\n&= H(d) + \max_\varphi \left( \n    \mathbb E_{x_{real}} \log \mathcal D(x_{real};\varphi) + \n    \mathbb E_{x_{fake}} \log [1 - \mathcal D(x_{fake}; \varphi)]\n  \right) \n\end{align}\n$$\nOn the other hand, $x_{fake}$ is something we have to create from nothing (well, from noise, to be exact). And we can assign this task to another neural network that we'd like to call the generator $\mathcal G(z; \vartheta)$, where $z$ is the noise vector. By replacing $x_{fake}$ with $\mathcal G(z; \vartheta)$ we sidestep the need to actually evaluate the expectation $\mathbb E_{x_{fake}}$. Instead, we are now dealing with a much simpler $\mathbb E_z$, which can be Monte Carlo approximated:\n\n$$\n\begin{align}\n&=H(d) + \max_\varphi \left( \n    \mathbb E_{x_{real}} \log \mathcal D(x_{real};\varphi) + \n    \mathbb E_{z} \log [1 - \mathcal D(\mathcal G(z; \vartheta); \varphi)]\n  \right)\n\end{align}\n$$\n\nwhere $\vartheta$ parametrises the generator network $\mathcal G$ and $z \sim N(0,1)$ is some Gaussian noise conventionally used by GANs.\n\nThe G network has to learn to fool the D network as the D network tries to tell the real and fake apart. This brings us back to our initial objective of minimising the mutual information. 
We can now write down the classic GAN objective:\n\n$$\n\begin{align}\n\min_\vartheta \max_\varphi \left( \n    \mathbb E_{x_{real}} \log \mathcal D(x_{real};\varphi) + \n    \mathbb E_{z} \log [1 - \mathcal D(\mathcal G(z; \vartheta); \varphi)]\n  \right)\n\end{align}\n$$\n\n\n#### Task\nTo implement a classic GAN, we need to program the following:\n* A discriminator network $\mathcal D$ that maps an input to a real number between 0 and 1 (representing a probability)\n* A generator $\mathcal G$ that takes some noise vector of arbitrary dimension and maps it to a fake sample\n* The objective function\n\n\n```\n%%script false\n# This is only a demonstration cell\n\n# Task 1: Program a discriminator network\nclass Discriminator(nn.Module):\n    \"\"\"A Discriminator network represents a function \n    that maps an input to a scalar between (0, 1).\"\"\"\n    def __init__(self):\n        super(Discriminator, self).__init__()\n        \n        # The following contains the main component of the\n        # desired function. It can take any valid architecture.\n        # But for the sake of demonstration, I will only include \n        # minimal numbers of network layers, assuming the inputs\n        # are mini-batches of MNIST images which are monochromatic,\n        # 28x28 images.\n        self.layers = nn.Sequential(\n            # NB. setting bias=False only because BN is used. 
The beta parameter\n            # in a BN works just like a bias.\n            nn.Conv2d(1, 64, (3, 3), stride=(2, 2), padding=1, bias=False),\n            nn.BatchNorm2d(64),\n            nn.LeakyReLU(0.1),\n            nn.Conv2d(64, 128, (3, 3), stride=(2, 2), padding=1, bias=False),\n            nn.BatchNorm2d(128),\n            nn.LeakyReLU(0.1),\n            nn.Conv2d(128, 1024, (7, 7), stride=(1, 1), padding=0, bias=False),\n            nn.BatchNorm2d(1024),\n            nn.LeakyReLU(0.1),\n            nn.Conv2d(1024, 1, (1, 1), stride=(1, 1), padding=0, bias=True),\n            nn.Sigmoid()\n        )\n        \n    def forward(self, x):\n        return self.layers(x)\n\n# Task 2: Program a generator network\nclass Generator(nn.Module):\n    \"\"\"A Generator is a function that maps a latent code (usually a noise vector) of\n    arbitrary dimension (here 74 is used) to a tensor that mimics the real dataset.\"\"\"\n    def __init__(self):\n        super(Generator, self).__init__()\n        \n        self.layers = nn.Sequential(\n            nn.Conv2d(74, 1024, (1, 1), (1, 1), bias=False, padding=0),\n            nn.LeakyReLU(0.1),\n            nn.BatchNorm2d(1024),\n            nn.ConvTranspose2d(1024, 128, (7, 7), (1, 1), bias=False, padding=0),\n            nn.BatchNorm2d(128),\n            nn.LeakyReLU(0.1),\n            nn.ConvTranspose2d(128, 64, (4, 4), (2, 2), bias=False, padding=1),\n            nn.BatchNorm2d(64),\n            nn.ConvTranspose2d(64, 1, (4, 4), (2, 2), bias=True, padding=1),\n            nn.Sigmoid())\n\n        self.apply(initialise_weights)\n        \n    def forward(self, x):\n        return self.layers(x)\n\n# Task 3: Write down the GAN loss\nclass ClassicGANCriterion:\n    def __call__(self, prob, label):\n        prob = prob.view(-1)\n        label = torch.zeros_like(prob).fill_(label)\n        return F.binary_cross_entropy(prob, label)\n```\n\n## InfoGAN\n\n\nThe classic GAN generator places no restrictions on how the noise vector should be used to generate data. It can inadvertently come to ignore a big part of that noise vector by essentially mapping a range of values to the same output. As a result, the generator will fail to capture the true data distribution and quite often end up with one of very low entropy. 
Naturally, a generator behaving like this will do a very poor job of fooling the discriminator. This failure mode is marked by a high generator loss and a discriminator loss of 0, otherwise known as \"mode collapse\" of a GAN.\n\nThe authors of InfoGAN attempted to prevent mode collapse by introducing a regulariser as a restriction. This restriction is such that any generator output, given a noise vector, has to retain maximum information about that vector. This is the same as saying:\n\n$$\n\max I(z, x_{fake})\n$$\n\nInfoGAN does this by allowing a partition in the noise vector. In a classic GAN, a noise vector can be as simple as $z \sim N(\boldsymbol{0}, Id)$, where $\boldsymbol{0}$ is a zero vector and $Id$ is an identity matrix of arbitrary dimension. InfoGAN chooses to have $z = \{ c, z'\}$, where $c$ is a noise vector from a known distribution whose sufficient statistics must be recovered after the generation process (in the form of posterior estimates, but more on that later!). Whereas $z'$ is the same old noise vector ($z$) used by the classic GAN, meaning there will be no restrictions placed on $z'$.\n\nWe can then rewrite the mutual information above to only include $c$, i.e., $I(c, x_{fake})$. For InfoGAN, the following is to be minimised\n\n$$\nI(d, x) - \lambda I(c, x_{fake})\n$$\nwhere $\lambda$ serves to adjust scale (entropy and mutual information are not scale-invariant).\n\n### Implementation\nTo see how this can be implemented using neural networks, we trace our steps in the previous section and attempt similar work on the newly introduced regulariser. We have\n\n$$\n\begin{align}\nI(c, x_{fake}) &= H(c) - H(c | x_{fake}) \\\n&= H(c) + \mathbb E_{z\sim p(z)} \mathbb E_{c \sim p(c)} \log p(c | x_{fake}) \\\n&= H(c) + \mathbb E_{z, c} \log p(c | \mathcal G(c, z)) \n\end{align}\n$$\n\nIf we create a variational lower bound just like we did earlier, we would have a difficult time evaluating the log-probability in the second term. 
However, by the law of total expectation, we can now see this expression in a different light:\n\n$$\n\begin{align}\n\mathbb E_{z, c} \log p(c |\mathcal G(c, z)) &= \n  \mathbb E_{z, c} \mathbb E_{c' \sim p(c'|x_{fake})} \log p(c' | \mathcal G (c, z)) \\\n&= \mathbb E_{z, c} \mathbb E_{c' \sim p(c'|x_{fake})} \log \left[ \frac {p(c'|x_{fake})}{q(c'|x_{fake})} q(c'|x_{fake}) \right] \\\n&\ge \mathbb E_{z, c} \mathbb E_{c' \sim p(c'|x_{fake})} \log q(c'|x_{fake})\n\end{align}\n$$\n\nThis means we will use $c$ and $z$ to get our $x_{fake}$ through the generator $\mathcal G$. We then program a neural network as $q(c'|x_{fake})$\u2014assuming a known family of probability distributions\u2014that takes as input $x_{fake}$ and outputs the sufficient statistics of the approximate density $q_{c'|x_{fake}}$ (which is called a Recognition Model in the paper). There! We can work out the log-probability with much less effort.\n\n#### Generator and utility objects\nFirst, let us take care of the usual generator, and we will get to the discriminator and recognition model after that. 
We have done this bit in the classic GAN section and there is nothing new in particular, except that we have introduced a different normalisation module (which I might want to compare against BatchNorm later).\n\n\n```\ndef initialise_weights(module):\n    if isinstance(module, nn.Conv2d) or isinstance(module, nn.ConvTranspose2d):\n        nn.init.xavier_normal_(module.weight.data)\n        if isinstance(module.bias, torch.Tensor):\n            module.bias.data.fill_(0.)\n\n\nclass GroupNorm2d(nn.Module):\n    \"\"\"Group Normalisation Layer.\"\"\"\n    def __init__(self, channels, groups, eps=1e-5):\n        super(GroupNorm2d, self).__init__()\n        \n        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))\n        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))\n        self.num_groups = groups\n        self.eps = eps\n\n    def forward(self, x):\n        N, C, H, W = x.size()\n        G = self.num_groups\n\n        x = x.view(N, G, -1)\n        mean = x.mean(dim=2, keepdim=True)\n        var = (x - mean).pow(2).sum(2, keepdim=True) / x.size(2)\n\n        x = (x - mean) / (var + self.eps).sqrt()\n        x = x.view(N, C, H, W)\n\n        return x * self.gamma + self.beta\n\n\nclass Generator(nn.Module):\n    \"\"\"The Generator Network.\"\"\"\n    def __init__(self):\n        super(Generator, self).__init__()\n        # Network architecture follows the original InfoGAN paper (Chen et al.)\n        self.layer = nn.Sequential(\n            nn.Conv2d(74, 1024, (1, 1), (1, 1), bias=False, padding=0),\n            nn.LeakyReLU(0.1),\n            GroupNorm2d(1024, 32),\n            nn.ConvTranspose2d(1024, 128, (7, 7), (1, 1), bias=False, padding=0),\n            GroupNorm2d(128, 32),\n            nn.LeakyReLU(0.1),\n            nn.ConvTranspose2d(128, 64, (4, 4), (2, 2), bias=False, padding=1),\n            GroupNorm2d(64, 32),\n            nn.ConvTranspose2d(64, 1, (4, 4), (2, 2), bias=True, padding=1),\n            nn.Tanh())\n\n        self.apply(initialise_weights)\n    \n    def sample_latent(self, batch_size, device):\n        # all dimension information is mentioned in the original InfoGAN paper\n        # incompressible noise (z)\n        z = torch.randn(batch_size, 62, device=device)\n\n        # categorical latent code (cd)\n        # c ~ Categ(K=10, p=0.1)\n        
index = torch.randint(0, 10, (batch_size,), device=device)\n        cd = torch.zeros(batch_size, 10, device=device)\n        cd[torch.arange(batch_size), index] = 1\n\n        # continuous latent codes (cc)\n        # c ~ Unif(-1., 1.)\n        cc = torch.rand((batch_size, 2), device=device) * 2. - 1.\n\n        noise = torch.cat([z, cd, cc], dim=1).view(-1, 74, 1, 1)\n        return noise, index\n    \n    def forward(self, x):\n        return self.layer(x)\n```\n\n#### Discriminator and Recognition model\nHere, we have split the discriminator network into two parts (the \"SharedNetwork\" and \"DiscriminatorEnd\") and have one part shared with the Recognition model. This can be interpreted as using the Recognition model (which infers the sufficient statistics of the seed noise vector) as an auxiliary task for the discriminator network. Auxiliary tasks tend to help obtain good representations. The same technique is also seen in reinforcement learning. We also attached another network to the discriminator to classify images from the true dataset, which seems to help stabilise training.\n\n\n```\nclass SharedNetwork(nn.Module):\n    \"\"\"Network shared between the Discriminator and Recognition model.\"\"\"\n    def __init__(self):\n        super(SharedNetwork, self).__init__()\n        \n        # Average pooling is used here in place of the usual max pooling or strides\n        # Empirically, this avoids sparse gradients and tends to prevent mode collapse.\n        self.net_base = nn.Sequential(\n            nn.Conv2d(1, 64, (4, 4), (1, 1), bias=False, padding=0),\n            nn.LeakyReLU(0.1),\n            nn.AvgPool2d((2, 2)),\n            GroupNorm2d(64, 32),\n            nn.Conv2d(64, 128, (4, 4), (1, 1), bias=False, padding=0),\n            nn.LeakyReLU(0.1),\n            nn.AvgPool2d((2, 2)),\n            GroupNorm2d(128, 32),\n            nn.Conv2d(128, 1024, (4, 4), (1, 1), bias=False, padding=0),\n            nn.LeakyReLU(0.1),\n            GroupNorm2d(1024, 32))\n        \n        self.apply(initialise_weights)\n    \n    def forward(self, x):\n        x = self.net_base(x)\n        return x\n\n\nclass DiscriminatorEnd(nn.Module):\n    \"\"\"The Discriminator.\"\"\"\n    def __init__(self):\n        \n        
super(DiscriminatorEnd, self).__init__()\n        \n        self.net_d = nn.Sequential(\n            nn.Conv2d(1024, 1, (1, 1), (1, 1), bias=True, padding=0),\n            nn.Sigmoid())\n\n        self.apply(initialise_weights)\n\n    def forward(self, x):\n        return self.net_d(x).squeeze(3).squeeze(2)\n\n\nclass RecognitionEnd(nn.Module):\n    \"\"\"The Recognition model.\"\"\"\n    def __init__(self):\n        super(RecognitionEnd, self).__init__()\n        \n        self.net_r = nn.Sequential(\n            nn.Conv2d(1024, 128, (1, 1), (1, 1), bias=True, padding=0),\n            nn.LeakyReLU(0.1),\n            GroupNorm2d(128, 32))\n        \n        self.cat_r = nn.Conv2d(128, 10, (1, 1), (1, 1), bias=True, padding=0)\n        self.gau_r = nn.Conv2d(128, 4, (1, 1), (1, 1), bias=True, padding=0)\n        \n        self.apply(initialise_weights)\n    \n    def forward(self, x):\n        r = self.net_r(x)\n        cat = self.cat_r(r).squeeze(3).squeeze(2)\n        gau = self.gau_r(r).squeeze(3).squeeze(2)\n        mean, logv = torch.chunk(gau, 2, dim=1)\n        return cat, mean, logv\n\n    \nclass ClassifierEnd(nn.Module):\n    \"\"\"Auxiliary Classification for the D Network.\"\"\"\n    # Introducing an auxiliary task whenever possible tends to help stabilise training.\n    def __init__(self):\n        super(ClassifierEnd, self).__init__()\n        self.net_c = nn.Sequential(\n            nn.Conv2d(1024, 10, (1, 1), (1, 1), bias=True, padding=0))\n    \n    def forward(self, x):\n        logits = self.net_c(x).view(-1, 10)\n        return F.softmax(logits, dim=1)\n```\n\n## Training InfoGAN to generate MNIST digits\n### Loss functions\n\n\n```\nclass NaNError(Exception):\n    pass\n\n\nclass CriterionDiscriminator:\n    \"\"\"Binary Cross Entropy Loss.\"\"\"\n    def __call__(self, prob, label):\n        prob = prob.view(-1)\n        label = torch.zeros_like(prob).fill_(label)\n        return F.binary_cross_entropy(prob, label)\n    \n    \nclass CriterionRecognitionCategorial:\n    \"\"\"Categorical Cross Entropy Loss.\"\"\"\n    def __call__(self, logits, index):\n        return F.cross_entropy(logits, index)\n\n\nclass CriterionRecognitionNLL:\n    \"\"\"Negative log-likelihood for factorised Normal distribution.\"\"\"\n    def __call__(self, sample, 
mean, logv):\n sample = sample.squeeze()\n scale = (0.5 * logv).exp()\n normal = torch.distributions.Normal(mean, scale)\n # variance = logv.exp()\n # nll = (0.5 * torch.log(2 * PI).add(logv) + \n # (sample - mean).pow(2).div(2 * variance + 1e-5))\n\n return - normal.log_prob(sample).sum(dim=1).mean()\n```\n\n### Setting up training\n- Download the MNIST dataset and prepare data sampler\n- Create new instances of network\n- Define optimisers (for G and D networks, respectively)\n\n\n```\n%%capture\n# Download the MNIST dataset\ntransforms = tv.transforms.Compose([\n tv.transforms.ToTensor(),\n tv.transforms.Normalize((0.5,), (0.5,))\n])\nmnist_data = {\n 'train': MNIST('.', train=True, transform=transforms, download=True),\n 'test': MNIST('.', train=False, transform=transforms, download=True)}\n```\n\n\n```\n# Training settings and save names\ncheckpoint = 0\nsettings = {\n 'batch_size': 128,\n 'num_epochs': 300,\n 'lr_g': 2e-4,\n 'lr_d': 2e-4,\n 'beta1': 0.5,\n 'beta2': 0.999,\n 'save_every': 50,\n 'trained_d': 'trained_d.pth',\n 'trained_g': 'trained_g.pth',\n 'trained_r': 'trained_r.pth',\n 'trained_c': 'trained_c.pth',\n 'trained_shared': 'trained_shared.pth',\n 'training_log': 'training_log.csv'}\n\n# Pytorch DataLoader\ndata_loader = {\n 'train': DataLoader(mnist_data['train'], batch_size=settings['batch_size'], shuffle=True, drop_last=True),\n 'test': DataLoader(mnist_data['test'], batch_size=settings['batch_size'], shuffle=True, drop_last=True)}\n\n# --- Create networks:\n# One can potentially introduce an auxiliary classifier network as part of the D network.\n# An auxiliary task, empirically, tends to help the stability of GAN training.\nnets = {'G': Generator().cuda(), 'D': DiscriminatorEnd().cuda(), 'R': RecognitionEnd().cuda(), \n 'Shared': SharedNetwork().cuda(), 'C': ClassifierEnd().cuda()}\n\n# --- Optimisers:\n# ADAM was chosen for G network and its hyperparameters (betas) follow that of DC-GAN\n# SGD may be chosen for D network (empirically, this 
prevents discriminator mode collapse, i.e., D loss becomes zero)\noptimiser = {\n 'G': torch.optim.Adam([{'params': nets['G'].parameters()}, \n {'params': nets['R'].parameters()}], \n lr=settings['lr_g'],\n betas=[settings['beta1'], settings['beta2']]),\n 'D': torch.optim.SGD([{'params': nets['D'].parameters()}, \n {'params': nets['C'].parameters()},\n {'params': nets['Shared'].parameters()}], \n lr=settings['lr_d'])}\n\n# --- Loss functions:\n# For D network, use binary cross entropy\n# For Recognition (R) network, use categorical cross entropy for discrete latent codes;\n# use negative Gaussian log-likelihood for continuous latent codes.\ncrit_d = CriterionDiscriminator()\ncrit_r_ce = CriterionRecognitionCategorial()\ncrit_r_nll = CriterionRecognitionNLL()\n\n# Save the settings for later reference.\nwith open(os.path.join(save_path, 'notebook_settings.json'), 'w') as file:\n json.dump(settings, file, sort_keys=True)\n```\n\n\n```\n%%script false # Comment out this line to enable this cell\n\n# Recover from previous checkpoint \ncheckpoint = 250\nckpt_path = os.path.join(save_path, 'ckpt')\nnets['D'].load_state_dict(torch.load(os.path.join(ckpt_path, 'ckpt_d_{:03d}.pth'.format(checkpoint-1))))\nnets['G'].load_state_dict(torch.load(os.path.join(ckpt_path, 'ckpt_g_{:03d}.pth'.format(checkpoint-1))))\nnets['R'].load_state_dict(torch.load(os.path.join(ckpt_path, 'ckpt_r_{:03d}.pth'.format(checkpoint-1))))\nnets['C'].load_state_dict(torch.load(os.path.join(ckpt_path, 'ckpt_c_{:03d}.pth'.format(checkpoint-1))))\nnets['Shared'].load_state_dict(torch.load(os.path.join(ckpt_path, 'ckpt_shared_{:03d}.pth'.format(checkpoint-1))))\ntraining_log = pd.read_csv(os.path.join(ckpt_path, 'ckpt_training_log_{:03d}.csv').format(checkpoint-1))\nprint('Recovered checkpoint: {}'.format(checkpoint))\n```\n\n### Main training script\n\n\n\n```\n%%script false\n\ntraining_log = pd.DataFrame(columns=['epoch', 'phase', 'd_loss', 'g_loss', 'r_loss_cat', 'r_loss_gaus', 'c_loss'])\npgrid = 
clwidgets.Grid(1, 6)\n\n# Main training script\nfor epoch in tqdm(range(checkpoint, settings['num_epochs'])):\n for phase in ['train']:\n for img_real, label in data_loader[phase]:\n i = len(training_log)\n img_real = img_real.to('cuda')\n label = label.to('cuda')\n noise, cat_index = nets['G'].sample_latent(settings['batch_size'], 'cuda')\n \n # Improve the Discriminator\n with torch.set_grad_enabled(phase == 'train'):\n optimiser['D'].zero_grad()\n shared_real = nets['Shared'](img_real)\n pr_real = nets['D'](shared_real)\n pr_auxc = nets['C'](shared_real)\n img_fake = nets['G'](noise)\n pr_fake = nets['D'](nets['Shared'](img_fake.detach()))\n\n loss_d = crit_d(pr_real, 1) + crit_d(pr_fake, 0)\n loss_c = F.cross_entropy(pr_auxc, label)\n \n if phase == 'train':\n (loss_d + loss_c).backward()\n optimiser['D'].step()\n \n # Improve the Generator and Recognition model\n if phase == 'train':\n optimiser['G'].zero_grad()\n shared_fake = nets['Shared'](img_fake)\n pr_fake = nets['D'](shared_fake)\n r_logit, r_mean, r_logv = nets['R'](shared_fake)\n\n loss_g = crit_d(pr_fake, 1) \n loss_cat = crit_r_ce(r_logit, cat_index) \n loss_gaus = crit_r_nll(noise[:, -2:], r_mean, r_logv)\n \n (loss_g + loss_cat + 0.1 * loss_gaus).backward()\n optimiser['G'].step()\n \n training_log.loc[i] = [epoch, phase, loss_d.item(), loss_g.item(), loss_cat.item(), loss_gaus.item(), loss_c.item()]\n \n # some sanity check for training\n if training_log.loc[i].isnull().any():\n raise NaNError\n else:\n # if not in training phase, treat g-losses as missing\n training_log.loc[i] = [epoch, phase, loss_d.item(), np.nan, np.nan, np.nan, loss_c.item()]\n \n # keep a training checkpoint\n if (epoch + 1) % settings['save_every'] == 0:\n torch.save(nets['D'].state_dict(), os.path.join(save_path, 'ckpt', 'ckpt_d_{:03d}.pth'.format(epoch)))\n torch.save(nets['G'].state_dict(), os.path.join(save_path, 'ckpt', 'ckpt_g_{:03d}.pth'.format(epoch)))\n torch.save(nets['R'].state_dict(), os.path.join(save_path, 
'ckpt', 'ckpt_r_{:03d}.pth'.format(epoch)))\n torch.save(nets['C'].state_dict(), os.path.join(save_path, 'ckpt', 'ckpt_c_{:03d}.pth'.format(epoch)))\n torch.save(nets['Shared'].state_dict(), os.path.join(save_path, 'ckpt', 'ckpt_shared_{:03d}.pth'.format(epoch)))\n training_log.to_csv(os.path.join(save_path, 'ckpt', 'ckpt_training_log_{:03d}.csv'.format(epoch)))\n \n # plot progress at the end of each epoch\n with pgrid.output_to(0, 0):\n pgrid.clear_cell()\n plt.figure(figsize=(3, 3))\n plt.imshow(img_real[0, 0].detach().cpu().numpy())\n plt.gca().set_title('Real Example ({})'.format(epoch))\n plt.gca().grid(False)\n plt.gca().set_axis_off()\n with pgrid.output_to(0, 1):\n pgrid.clear_cell()\n plt.figure(figsize=(3, 3))\n plt.imshow(img_fake[0, 0].detach().cpu().numpy())\n plt.gca().set_title('Fake Example ({})'.format(epoch))\n plt.gca().grid(False)\n plt.gca().set_axis_off()\n with pgrid.output_to(0, 2):\n pgrid.clear_cell()\n plt.figure(figsize=(3, 3))\n sns.lineplot(x='epoch', y='d_loss', data=training_log.tail(20000))\n with pgrid.output_to(0, 3):\n pgrid.clear_cell()\n plt.figure(figsize=(3, 3))\n sns.lineplot(x='epoch', y='g_loss', data=training_log.tail(20000))\n with pgrid.output_to(0, 4):\n pgrid.clear_cell()\n plt.figure(figsize=(3, 3))\n sns.lineplot(x='epoch', y='r_loss_cat', data=training_log.tail(20000))\n with pgrid.output_to(0, 5):\n pgrid.clear_cell()\n plt.figure(figsize=(3, 3))\n sns.lineplot(x='epoch', y='r_loss_gaus', data=training_log.tail(20000))\n\n# save model and training log for later use\ntorch.save(nets['D'].state_dict(), os.path.join(save_path, settings['trained_d']))\ntorch.save(nets['G'].state_dict(), os.path.join(save_path, settings['trained_g']))\ntorch.save(nets['R'].state_dict(), os.path.join(save_path, settings['trained_r']))\ntorch.save(nets['C'].state_dict(), os.path.join(save_path, settings['trained_c']))\ntorch.save(nets['Shared'].state_dict(), os.path.join(save_path, 
settings['trained_shared']))\ntraining_log.to_csv(os.path.join(save_path, settings['training_log']))\n```\n\n\n```\n# %%script false\n# load trained model \nwith open(os.path.join(save_path, 'notebook_settings.json'), 'r') as file:\n settings = json.load(file)\n\nnets['D'].load_state_dict(torch.load(os.path.join(save_path, settings['trained_d'])))\nnets['G'].load_state_dict(torch.load(os.path.join(save_path, settings['trained_g'])))\nnets['R'].load_state_dict(torch.load(os.path.join(save_path, settings['trained_r'])))\nnets['C'].load_state_dict(torch.load(os.path.join(save_path, settings['trained_c'])))\nnets['Shared'].load_state_dict(torch.load(os.path.join(save_path, settings['trained_shared'])))\ntraining_log = pd.read_csv(os.path.join(save_path, settings['training_log']))\n\n# Download the MNIST dataset\ntransforms = tv.transforms.Compose([\n tv.transforms.ToTensor(),\n tv.transforms.Normalize((0.5,), (0.5,))\n])\nmnist_data = {\n 'train': MNIST('.', train=True, transform=transforms, download=True),\n 'test': MNIST('.', train=False, transform=transforms, download=True)}\n```\n\n\n```\nnoise, _ = nets['G'].sample_latent(1, 'cuda')\nimg_fake = nets['G'](noise)\nimg_real = mnist_data['test'][0][0].view(1, 1, 28, 28)\npgrid = clwidgets.Grid(1, 6)\nwith pgrid.output_to(0, 0):\n pgrid.clear_cell()\n plt.figure(figsize=(3, 3))\n plt.imshow(img_real[0, 0].detach().cpu().numpy())\n plt.gca().set_title('Real Example')\n plt.gca().grid(False)\n plt.gca().set_axis_off()\nwith pgrid.output_to(0, 1):\n pgrid.clear_cell()\n plt.figure(figsize=(3, 3))\n plt.imshow(img_fake[0, 0].detach().cpu().numpy())\n plt.gca().set_title('Generated Example')\n plt.gca().grid(False)\n plt.gca().set_axis_off()\nwith pgrid.output_to(0, 2):\n pgrid.clear_cell()\n plt.figure(figsize=(3, 3))\n sns.lineplot(x='epoch', y='d_loss', data=training_log)\nwith pgrid.output_to(0, 3):\n pgrid.clear_cell()\n plt.figure(figsize=(3, 3))\n sns.lineplot(x='epoch', y='g_loss', data=training_log)\nwith 
pgrid.output_to(0, 4):\n    pgrid.clear_cell()\n    plt.figure(figsize=(3, 3))\n    sns.lineplot(x='epoch', y='r_loss_cat', data=training_log)\nwith pgrid.output_to(0, 5):\n    pgrid.clear_cell()\n    plt.figure(figsize=(3, 3))\n    sns.lineplot(x='epoch', y='r_loss_gaus', data=training_log)\n```\n\n### Plot a number of random samples\nBelow are 100 images, arranged into a 10x10 grid, generated by the GAN we just trained. \n\n\n\n```\nz = torch.randn(100, 62, 1, 1, device='cuda')\nk = torch.zeros(100, 10, 1, 1, device='cuda')\nk[torch.arange(100), torch.randint(0, 10, (100,))] = 1.\nc = torch.stack(torch.meshgrid(torch.linspace(-1, 1, 10), torch.linspace(-1, 1, 10)), dim=2).view(100, 2, 1, 1).cuda()\n\nwith torch.no_grad():\n    latc = torch.cat([z, k, c], dim=1)\n    img_fake = nets['G'](latc)\n    img_fake = img_fake.view(10, 10, 28, 28).permute(0, 2, 1, 3).contiguous().view(280, 280)\n    img_fake = img_fake.cpu().numpy()\n    plt.figure(figsize=(8, 8))\n    plt.imshow(img_fake)\n    plt.gca().grid(False)\n    plt.gca().set_axis_off()\n```\n\n### Examining the categorical latent code\nThe 10x10 grid below shows how the continuous latent code affects the generated images. The continuous latent code has 2 dimensions, one varying from -1 to 1 along the horizontal axis and the other along the vertical axis. The noise vector $z$ is kept constant for each generation. 
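As an aside on the plotting code in this section: the cells above and below tile a batch of 100 generated digits into a 10x10 mosaic with a `view` → `permute(0, 2, 1, 3)` → `contiguous` → `view` chain. Here is a minimal, CPU-only sketch of that trick; the `tile_images` helper and the toy 2x2 example are mine, not cells from the notebook:

```python
# Sketch (not from the notebook) of the view/permute tiling used in the
# plotting cells; `tile_images` is a hypothetical helper name.
import torch

def tile_images(batch, n, size):
    """Tile an (n*n, size, size) batch into an (n*size, n*size) mosaic, row-major."""
    return (batch.view(n, n, size, size)    # (grid_row, grid_col, pix_row, pix_col)
                 .permute(0, 2, 1, 3)       # (grid_row, pix_row, grid_col, pix_col)
                 .contiguous()              # make memory layout match the new order
                 .view(n * size, n * size)) # flatten into one big image

# Toy check: four distinct 2x2 "images"
imgs = torch.arange(16, dtype=torch.float32).view(4, 2, 2)
mosaic = tile_images(imgs, 2, 2)  # top row of the mosaic is [0, 1, 4, 5]
```

The `permute` interleaves pixel rows with grid columns so that the final `view` reads out complete mosaic rows; `contiguous()` is needed because `view` cannot be applied to the non-contiguous tensor a `permute` produces.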
\n\nThe slider on top of the plot controls the 1 position in the categorical latent code.\n\n\n```\n@ipywidgets.interact(index=(0, 9, 1))\ndef generate(index):\n k.fill_(0.)\n k[:, index] = 1\n\n with torch.no_grad():\n latc = torch.cat([z, k, c], dim=1)\n img_fake = nets['G'](latc)\n img_fake = img_fake.view(10, 10, 28, 28).permute(0, 2, 1, 3).contiguous().view(280, 280)\n img_fake = img_fake.cpu().numpy()\n plt.figure(figsize=(8, 8))\n plt.imshow(img_fake)\n plt.gca().grid(False)\n plt.gca().set_axis_off()\n```\n\n\n interactive(children=(IntSlider(value=4, description='index', max=9), Output()), _dom_classes=('widget-interac\u2026\n\n\n\n```\n\n```\n", "meta": {"hexsha": "718c92e0821088090bb153389f7d973b84afdac3", "size": 404889, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "information-maximising-gan.ipynb", "max_stars_repo_name": "arayabrain/Tutorials", "max_stars_repo_head_hexsha": "1447f9af188932a7761616ca9d1ea9ca565cdbc2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "information-maximising-gan.ipynb", "max_issues_repo_name": "arayabrain/Tutorials", "max_issues_repo_head_hexsha": "1447f9af188932a7761616ca9d1ea9ca565cdbc2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "information-maximising-gan.ipynb", "max_forks_repo_name": "arayabrain/Tutorials", "max_forks_repo_head_hexsha": "1447f9af188932a7761616ca9d1ea9ca565cdbc2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 241.4364937388, "max_line_length": 167157, "alphanum_fraction": 0.8939832892, "converted": true, "num_tokens": 9096, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.4649015713733885, "lm_q2_score": 0.22815650216092537, "lm_q1q2_score": 0.1060703163736701}} {"text": "+ This notebook is part of *Final exam review* in the OCW MIT course 18.06 by Prof Gilbert Strang [1]\n+ Created by me, Dr Juan H Klopper\n + Head of Acute Care Surgery\n + Groote Schuur Hospital\n + University Cape Town\n + Email me with your thoughts, comments, suggestions and corrections \n
Linear Algebra OCW MIT18.06 IPython notebook [2] study notes by Dr Juan H Klopper is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.\n\n+ [1] OCW MIT 18.06\n+ [2] Fernando P\u00e9rez, Brian E. Granger, IPython: A System for Interactive Scientific Computing, Computing in Science and Engineering, vol. 9, no. 3, pp. 21-29, May/June 2007, doi:10.1109/MCSE.2007.53. URL: http://ipython.org\n\n\n```python\nfrom IPython.core.display import HTML, Image\ncss_file = 'style.css'\nHTML(open(css_file, 'r').read())\n```\n\n\n\n\n```python\nfrom sympy import init_printing, Matrix, symbols, sqrt, Rational\nfrom sympy.solvers import solve\nfrom numpy import matrix, transpose, sqrt, eye\nfrom numpy.linalg import pinv, inv, det, svd, norm, eig\nfrom scipy.linalg import pinv2\nfrom warnings import filterwarnings\n```\n\n\n```python\ninit_printing(use_latex = 'mathjax')\nfilterwarnings('ignore')\n```\n\n# Final examination review\n\n## Previous examination questions\n\n### Question 1\n\n+ If A is an *m* × *n* matrix of rank *r* and the following holds\n    + No solution\n    $$ Ax=\begin{bmatrix}1\\0\\0\end{bmatrix} $$\n    + One solution\n    $$ Ax=\begin{bmatrix}0\\1\\0\end{bmatrix} $$\n\n+ How many rows in this matrix?\n    + *m* = 3\n\n+ What is the rank?\n    + If there are no solutions then *r* < *m*\n    + If there is only a single solution then the nullspace has only the zero vector and so *r* = *n*\n\n+ How many columns?\n    + For one solution (as above) *r* = *n* and with *m* = 3 and *r* < *m* we have *r* = *n* < 3\n\n+ Write down a matrix that fits the description above\n$$ A=\begin{bmatrix}0&0\\1&0\\0&1\end{bmatrix} $$\n\n+ True or False for the above\n    + The determinant of ${A}^{T}A$ is the same as the determinant of $A{A}^{T}$\n        + False\n    + ${A}^{T}A$ is invertible\n        + If *r* = *n* (independent columns of A) then TRUE\n    + $A{A}^{T}$ is positive definite\n        + False (it is going to be 3 × 3, but still with only rank 2)\n\n\n```python\nA = Matrix([[0, 0], 
[1, 0], [0, 1]])
A
```




$$\left[\begin{matrix}0 & 0\\1 & 0\\0 & 1\end{matrix}\right]$$




```python
(A.transpose() * A).inv()
```




$$\left[\begin{matrix}1 & 0\\0 & 1\end{matrix}\right]$$




```python
(A.transpose() * A).det() == (A * A.transpose()).det()
```




    False




```python
A * A.transpose()
```




$$\left[\begin{matrix}0 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right]$$



+ Prove that AT*y* = *c* has at least one solution for every *c*, and in fact infinitely many solutions for every *c*
 + It has at least one solution because the number of rows of AT (*n*) is equal to *r*
 + The dimension of the nullspace of AT is *m* - *r*, which in our example here would be > 0, thus infinitely many solutions

### Question 2

+ Suppose we have a matrix A with columns containing vectors *v*1, *v*2, and *v*3

+ Solve A**x** = *v*1 - *v*2 + *v*3
 + This is simple multiplication by columns
 $$ x=\begin{bmatrix}1\\-1\\1\end{bmatrix} $$

+ Suppose *v*1 - *v*2 + *v*3 = 0
 + Is the solution unique, or are there more?
 + Uniqueness means nothing in the nullspace except the zero vector, so in this case the solutions are not unique
+ Suppose the columns are orthonormal (they would be called *q*1, *q*2, *q*3)
 + What combination of *v*1 and *v*2 is closest to *v*3?
 + Zero for each of *v*1 and *v*2

### Question 3

+ Consider the Markov matrix
$$ \begin{bmatrix}0.2&0.4&0.3\\0.4&0.2&0.3\\0.4&0.4&0.4\end{bmatrix} $$

+ Calculate the eigenvalues
 + The matrix is singular (note how ½ of column 1 plus ½ of column 2 equals column 3) so one eigenvalue will be zero
 + Another must be 1
 + The trace adds to 0.8 and so must the sum of the eigenvalues, thus the last eigenvalue is -0.2

+ If for the following the *u*(0) vector is as indicated, what would the solution be after *k* steps?
$$ {u}_{k}={A}^{k}u\left(0\right);\quad u\left(0\right)=\begin{bmatrix}0\\10\\0\end{bmatrix} $$
 + The 
following will hold
 $$ {u}_{k}={A}^{k}u\left(0\right)={c}_{1}{\lambda}_{1}^{k}{x}_{1}+{c}_{2}{\lambda}_{2}^{k}{x}_{2}+{c}_{3}{\lambda}_{3}^{k}{x}_{3} \\ {u}_{k}={0}+{c}_{2}\left({1}\right)^{k}{x}_{2}+{c}_{3}\left({-0.2}\right)^{k}{x}_{3} $$
 + So at ∞ the only term that survives is *c*2*x*2
 + Indeed, the key eigenvalue in any Markov matrix is 1

+ Consider the eigenvector and calculate *u* at ∞
 + We already know that we have to use the λ = 1 eigenvalue
 + The distribution at ∞ will be as follows (see the python code below)
 $$ u\left({\infty}\right)=\begin{bmatrix}3\\3\\4\end{bmatrix} $$


```python
A = Matrix([[0.2, 0.4, 0.3], [0.4, 0.2, 0.3], [0.4, 0.4, 0.4]])
A
```




$$\left[\begin{matrix}0.2 & 0.4 & 0.3\\0.4 & 0.2 & 0.3\\0.4 & 0.4 & 0.4\end{matrix}\right]$$




```python
A.eigenvects() # Looking for the eigenvector of eigenvalue 1
# Have to distribute the totals into 10 (there were 10 in total initially)
```




$$\begin{bmatrix}\begin{pmatrix}-0.2, & 1, & \begin{bmatrix}\left[\begin{matrix}-1.0\\1.0\\0\end{matrix}\right]\end{bmatrix}\end{pmatrix}, & \begin{pmatrix}0, & 1, & \begin{bmatrix}\left[\begin{matrix}-0.5\\-0.5\\1.0\end{matrix}\right]\end{bmatrix}\end{pmatrix}, & \begin{pmatrix}1.0, & 1, & \begin{bmatrix}\left[\begin{matrix}0.75\\0.75\\1.0\end{matrix}\right]\end{bmatrix}\end{pmatrix}\end{bmatrix}$$



### Question 4

+ Calculate the projection onto the following line
$$ a=\begin{bmatrix}4\\-3\end{bmatrix} $$
 + The projection matrix is
 $$ P=\frac{{a}{a}^{T}}{{a}^{T}{a}} $$


```python
a = matrix([[4], [-3]]) # Using numpy
(a * transpose(a)) / (transpose(a) * a)
```




    matrix([[ 0.64, -0.48],
            [-0.48,  0.36]])



+ Consider the matrix with eigenvalues 0 and 3 and the following eigenvectors
$$ 0,\begin{bmatrix}1\\2\end{bmatrix}\quad 3,\begin{bmatrix}2\\1\end{bmatrix} $$
 + We use the 
following decomposition
 $$ A={S}{\Lambda}{S}^{-1} $$


```python
S = matrix([[1, 2], [2, 1]])
L = matrix([[0, 0], [0, 3]])
S_inv = inv(S)
```


```python
A = S * L * S_inv
A
```




    matrix([[ 4., -2.],
            [ 2., -1.]])



+ Give a 2 × 2 matrix A such that A ≠ BTB for any B
 + BTB is always symmetric, so A can be any non-symmetric matrix

+ A matrix that has orthogonal eigenvectors, but is not symmetric
 + Any skew-symmetric matrix (transpose = negative of matrix)
 $$ \begin{bmatrix}0&1\\-1&0\end{bmatrix} $$
 + Any orthogonal matrix
 $$ \begin{bmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{bmatrix} $$

### Question 5

+ Consider the following system A**x**=**b** with the least squares solution shown, and calculate the projection of **b** onto the columnspace of A
$$ \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} c \\ d \end{bmatrix}=\begin{bmatrix} 3 \\ 4 \\ 1 \end{bmatrix}, \quad \begin{bmatrix} \hat { c } \\ \hat { d } \end{bmatrix}=\begin{bmatrix} \frac { 11 }{ 3 } \\ -1 \end{bmatrix} $$
 + The least squares solution is given, so simply multiply each entry by its column
 $$ \frac{ 11 }{ 3 } \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}-1\begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix} $$

+ Calculate a different vector **b** such that the least squares solution is all zeros
 + This requires **b** to be orthogonal to those columns, such as the following
 $$ \begin{bmatrix}1\\-2\\1\end{bmatrix} $$

### Question 6 (from recitation)

+ Consider the 3 × 3 matrix A, with λ1=1 and λ2=2 and the first two pivots *d*1=*d*2=1
$$ A=\begin{bmatrix}1&0&1\\0&1&1\\1&1&0\end{bmatrix} $$

+ Find λ3 and *d*3
 + The sum of the eigenvalues must equal the trace, thus λ3=-1
 + Constant multiples of a row subtracted from another won't change the determinant, leaving *d*1×*d*2×*d*3=|A| (just watching out for singular matrices, which will have a zero on the main diagonal; here 
though we have three non-zero eigenvalues, so the matrix is non-singular), leaving *d*3=-2 (the product of the eigenvalues is also the determinant of A)

+ Calculate the smallest *a*33 entry that will make A positive semi-definite
 + For positive semi-definiteness the eigenvalues must all be ≥ zero
 + The determinant must also be ≥ 0


```python
a33 = symbols('a33')
A = Matrix([[1, 0, 1], [0, 1, 1], [1, 1, a33]])
A
```




$$\left[\begin{matrix}1 & 0 & 1\\0 & 1 & 1\\1 & 1 & a_{33}\end{matrix}\right]$$




```python
A.det() # Thus a33 must be greater than or equal to 2
```




$$a_{33} - 2$$



+ Calculate the smallest value of *c* such that the following is positive semi-definite
$$ A+cI $$
 + We can work from the determinant of A - cI using sympy (see below), or we can make use of the fact that adding a constant multiple of the identity matrix will only add that constant to each eigenvalue, leaving the eigenvectors intact
 $$ 1+c,\quad 2+c,\quad -1+c $$
 + Each must be ≥ 0, so the smallest value of *c* is 1


```python
c = symbols('c')
A = Matrix([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
(A - c * eye(3))
```




$$\left[\begin{matrix}- 1.0 c + 1 & 0 & 1\\0 & - 1.0 c + 1 & 1\\1 & 1 & - 1.0 c\end{matrix}\right]$$




```python
(A - c * eye(3)).det() # The roots of this characteristic polynomial
# are the eigenvalues of A
```




$$- 1.0 c^{3} + 2.0 c^{2} + 1.0 c - 2$$




```python
f = -c ** 3 + 2 * c ** 2 + c - 2
f
```




$$- c^{3} + 2 c^{2} + c - 2$$




```python
solve(f, c) # solve f = 0 for c: the eigenvalues of A are -1, 1 and 2
```




$$\begin{bmatrix}-1, & 1, & 2\end{bmatrix}$$



+ Consider now one of the starting vectors *u*0 below and, with *u*k+1 = ½A*u*k, calculate the limiting behavior of *u*k as *k* approaches ∞
$$ 
{u}_{0}=\begin{bmatrix}3\\0\\0\end{bmatrix},\quad\begin{bmatrix}0\\3\\0\end{bmatrix},\quad\begin{bmatrix}0\\0\\3\end{bmatrix} $$
 + Notice that ½A is a Markov matrix
 + We cannot be sure that there will be a steady state as there are zero entries in ½A
 + Multiplying a matrix by a constant scalar will not change the eigenvectors, but will change the eigenvalues by the same scalar multiple, and we will have λ1=½, λ2=1 and λ3=-½
 + We do have an eigenvalue of 1, so we will reach a steady state
 + The eigenvector of λ2=1 is the following (see below)
 $$ \begin{bmatrix}1\\1\\1\end{bmatrix} $$
 + This already sums to 3, so it will be *u* at ∞


```python

```

# What do word vectors represent?

On Monday we saw the result of running a word embedding algorithm on two collections. We "embed" words in a vector space such that words that are substitutable ("ship" and "boat") or that frequently occur together ("Stop" and "thief!") end up close to one another.

In today's work we will look at the intuition for how the properties of embedding vectors relate to properties we can directly observe in texts. They are really just representations of the words that occur near a given word.

What can we tell about a word from the words that occur near it?


```python
import numpy, sys, math
from IPython.display import display, clear_output, Markdown, Latex

from collections import Counter
```


```python
## Helper functions to nicely display numeric word scores

def show(sorted_words, n=20):
    markdown_table = "|Score | Word|\n|---:|:---|\n"
    for score, word in sorted_words[:n]:
        markdown_table += "|{:.3f}|{}|\n".format(score, word)
    display(Markdown(markdown_table))
    
def show_counter(counter, n=20):
    markdown_table = "|Count | Word|\n|---:|:---|\n"
    for word, count in counter.most_common(n):
        markdown_table += "|{}|{}|\n".format(count, word)
    display(Markdown(markdown_table))
```

First we'll read the texts. 
I've already split the tokens with Spacy and written the output to a file with one sentence per line, so punctuation will be included as distinct tokens.\n\nWhile we read this, we'll also count the frequency of each word type in `all_counter`.\n\n\n```python\ntext_filename = \"../data/Sagas/sagas_en_split.txt\"\n\nsentences = []\nall_counter = Counter()\n\nwith open(text_filename, encoding=\"utf-8\") as reader:\n for line in reader:\n ## The file has already been tokenized, so we can split on whitespace\n tokens = line.strip().split()\n all_counter.update(tokens)\n \n sentences.append(tokens)\n```\n\nLet's start by looking at the context that words appear in. The next block defines a *key word in context* (KWIC) view.\n\n\n```python\nwindow_size = 5\n```\n\n\n```python\ndef keyword_in_context(query):\n table_markdown = \"|left context|word|right context|\\n|--:|--|:--|\\n\"\n for sentence in sentences:\n \n if not query in sentence:\n continue\n \n for i, word in enumerate(sentence):\n if word == query:\n start = max(i-window_size, 0)\n left_context = sentence[start:i]\n right_context = sentence[(i+1):(i+window_size+1)]\n table_markdown += \"|{}|{}|{}|\\n\".format(\" \".join(left_context), word, \" \".join(right_context))\n \n display(Markdown(table_markdown))\n```\n\n### Part 1\n\nI've given you an example, for *Shetland*, a chain of islands north of Scotland near the Orkney islands. \n\nAdd 10 additional cells, each with one call to the `keyword_in_context` function. Choose five pairs of words that you think might be similar (e.g. *Shetland* and *Orkneys*). 
Select a variety of parts of speech, such as nouns, verbs, adjectives, prepositions, and proper names.\n\nDiscuss what you notice about the similarities and differences between the contexts of these words.\n\n**Answer here**\n\n\n```python\nkeyword_in_context(\"Shetland\")\n```\n\n\n```python\n# add more `keyword_in_context` cells here\n```\n\nNow let's look at the distribution of words immediately preceding (*left* or *previous* context) and immediately following (*right* or *next* context) a word. This block creates two dictionaries, which map a string to the `Counter` of the words that follow that word and precede it, respectively.\n\n\n```python\nprevious_context_counters = {} # count words that precede the key\nnext_context_counters = {} # count words that follow the key\n\nfor sentence in sentences:\n for i in range(len(sentence) - 1): # stop at the next-to-last token\n word = sentence[i]\n next_word = sentence[i+1]\n \n if not word in next_context_counters:\n next_context_counters[word] = Counter()\n if not next_word in previous_context_counters:\n previous_context_counters[next_word] = Counter()\n \n next_context_counters[word][next_word] += 1\n previous_context_counters[next_word][word] += 1\n```\n\n### Part 2\n\nIn the next code cell I'm demonstrating how to get the most frequent following words for a query word.\n\nUse this function like a \"predictive text\" feature. Generate two Viking sentences of 10-20 words.\n* In the first sentence, start with \"Then\" and pick the most frequent following word. Record your sentence, and comment on why always picking the most common word might not be a good idea.\n* In the second sentence, start with \"Then\" but choose the next word based on both the frequency distribution and your artistic sensibilities.\n\n**First sentence here**\n\n\n**Comment on first sentence**\n\n\n**Second sentence here**\n\n\n\nAdd cells to show previous *and* next context words for at least 10 more words. 
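The greedy "predictive text" procedure described in this part — always append the single most frequent follower of the current word — can be sketched as a short loop. The snippet below is self-contained: the bigram table and the name `greedy_sentence` are made up for illustration, standing in for the `next_context_counters` dictionary built from the saga corpus above.

```python
from collections import Counter

# Toy bigram table standing in for next_context_counters (made-up counts)
followers = {
    "then": Counter({"the": 3, "he": 2}),
    "the": Counter({"king": 4, "lamb": 1}),
    "king": Counter({"said": 2}),
    "he": Counter({"went": 1}),
}

def greedy_sentence(start, max_words=10):
    words = [start]
    while len(words) < max_words:
        counts = followers.get(words[-1])
        if not counts:
            break  # no known follower: stop
        # Greedy choice: always the single most frequent next word
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(greedy_sentence("then"))  # then the king said
```

Because the choice is deterministic, any loop in the bigram graph would repeat until `max_words` is hit — one reason always picking the most common word tends to produce dull or cyclic sentences.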
Use a selection of nouns, verbs, adjectives, prepositions, and names. These may be the same words you looked at before, but you may also want to add additional examples.\n\nDiscuss whether the words to the right or left of a word indicate its part of speech. Cite examples to support your argument. Are the two contexts equally informative for a given part of speech, and is that consistent across different parts of speech?\n\n**Answer here**\n\n\n```python\nshow_counter(next_context_counters[\"she\"])\n```\n\n\n```python\nshow_counter(previous_context_counters[\"she\"])\n```\n\n\n```python\n## add cells here\n```\n\nNext we'll look at sums over the full five-word context window. This code creates one `Counter` for each word type, which adds up all the words that appear within the window around the word.\n\n\n```python\nword_context_counters = {}\n\nfor sentence in sentences:\n \n for i, word in enumerate(sentence):\n start = max(i-window_size, 0)\n left_context = sentence[start:i]\n right_context = sentence[(i+1):(i+window_size+1)]\n \n if not word in word_context_counters:\n word_context_counters[word] = Counter()\n \n word_context_counters[word].update(left_context)\n word_context_counters[word].update(right_context)\n```\n\n### Part 3\n\nThis next cell is an example showing output for the full context counts of a word, essentially adding up all the words you saw in the KWIC view earlier.\n\nShow output for at least 10 words, from a mix of parts of speech.\n\nDiscuss how this view of a word's context differs from the single-previous-word and single-next-word context views we saw in Part 2.\n\n**Answer here**\n\n\n```python\nshow_counter(word_context_counters[\"Shetland\"], 15)\n```\n\n\n```python\n## add cells here\n```\n\nFinally, let's look at a way of comparing the word frequencies we actually observed to the word frequencies in the collection as a whole. We'll use a method called *pointwise mutual information*.\n\nPMI is closely related to KL divergence. 
In this case, the two distributions we want to compare are the probability of context word $c$ *near* word $w$ and the probability of $c$ anywhere. The word *the* is common throughout the collection, so we expect to see it. This metric measures the ratio between the frequency with which we actually saw it in the context and our expectation for any random context.

Notation: 
* $N(c|w)$ is `word_context_counters[w][c]`
* $N(w)$ is `sum(word_context_counters[w].values())`
* $N(c)$ is `all_counter[c]`
* $N$ is `all_sum`

$$
\begin{align}
PMI(c, w) & = P(c, w) \log \frac{P(c,w)}{P(c)P(w)} \\
& = P(c, w) \log \frac{P(c|w)P(w)}{P(c)P(w)} \\
& = P(c, w) \log \frac{P(c|w)}{P(c)} \\
& \propto N(c|w) \log \frac{\frac{N(c|w)}{N(w)} }{ \frac{N(c)}{N} } \\
& = N(c|w) \log \frac{N(c|w)N}{N(w)N(c)}
\end{align}$$



```python
def log_ratio(word):
    counter = word_context_counters[word]
    
    all_sum = sum(all_counter.values()) ## N
    word_sum = sum(counter.values()) ## N(w)
    
    comparisons = []
    for c in counter.keys():
        score = counter[c] * math.log((counter[c] * all_sum) / (word_sum * all_counter[c]))
        comparisons.append((score, c))
    
    return sorted(comparisons, reverse=True)
```

### Part 4

Compare results using this `log_ratio` function to the output of the `nearest` function used in Monday's notebook.

Provide some examples, and describe how they are similar to or different from the output of the word embedding. If there are "missing" words in the output here that are close in the embedding space, show the `log_ratio` output for those words. Do the two words have similar context words? Describe whether this is true and mention examples.

**Answer here**


```python
## 'spae' is a Scots word for prophecy. Gunnhilda was the wife of Eric Bloodaxe.
## She was ordered to be drowned in a bog by King Harald Bluetooth, the namesake
## of the wireless standard.
## Think about that next time you put on some headphones.

show(log_ratio("queen"))
```


```python
## add cells with examples here
```

**Extra bonus for those interested** The embedding algorithm adds an additional step: subsampling the most frequent words. Here's code that generates this subsampling probability.


```python
sampling_probs = {}
all_sum = sum(all_counter.values())
for word in all_counter.keys():
    p_word = all_counter[word] / all_sum
    score = 1.0 / (10000 * p_word)
    sampling_probs[word] = math.sqrt(score) + score
```
\n
\n
\n

Natural Language Processing For Everyone

\n

Text Representation

\n

Bruno Gon\u00e7alves
\n www.data4sci.com
\n @bgoncalves, @data4sci

\n
\n\nIn this lesson we will see in some details how we can best represent text in our application. Let's start by importing the modules we will be using:\n\n\n```python\nimport string\nfrom collections import Counter\nfrom pprint import pprint\nimport gzip\nimport matplotlib.pyplot as plt \nimport numpy as np\n\nimport watermark\n\n%matplotlib inline\n%load_ext watermark\n```\n\nList out the versions of all loaded libraries\n\n\n```python\n%watermark -n -v -m -g -iv\n```\n\n autopep8 1.5\n numpy 1.18.1\n json 2.0.9\n Mon May 04 2020 \n \n CPython 3.7.3\n IPython 6.2.1\n \n compiler : Clang 4.0.1 (tags/RELEASE_401/final)\n system : Darwin\n release : 19.4.0\n machine : x86_64\n processor : i386\n CPU cores : 8\n interpreter: 64bit\n Git hash : 8c3b24b7cfb0371a17c86c11dece1a06155ce164\n\n\nSet the default style\n\n\n```python\nplt.style.use('./d4sci.mplstyle')\n```\n\nWe choose a well known nursery rhyme, that has the added distinction of having been the first audio ever recorded, to be the short snippet of text that we will use in our examples:\n\n\n```python\ntext = \"\"\"Mary had a little lamb, little lamb,\n little lamb. Mary had a little lamb\n whose fleece was white as snow.\n And everywhere that Mary went\n Mary went, Mary went. Everywhere\n that Mary went,\n The lamb was sure to go\"\"\"\n```\n\n## Tokenization\n\nThe first step in any analysis is to tokenize the text. What this means is that we will extract all the individual words in the text. 
For the sake of simplicity, we will assume that our text is well formed and that our words are delimited either by white space or by punctuation characters.


```python
print(string.punctuation)
```

    !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~



```python
def extract_words(text):
    temp = text.split() # Split the text on whitespace
    text_words = []

    for word in temp:
        # Remove any punctuation characters present at the beginning of the word
        # (the `word and` guard avoids an IndexError on tokens that are all punctuation)
        while word and word[0] in string.punctuation:
            word = word[1:]

        # Remove any punctuation characters present at the end of the word
        while word and word[-1] in string.punctuation:
            word = word[:-1]

        # Append this word to our list of words (skip tokens that were all punctuation)
        if word:
            text_words.append(word.lower())
    
    return text_words
```

After this step we now have our text represented as an array of individual, lowercase words:


```python
text_words = extract_words(text)
print(text_words)
```

    ['mary', 'had', 'a', 'little', 'lamb', 'little', 'lamb', 'little', 'lamb', 'mary', 'had', 'a', 'little', 'lamb', 'whose', 'fleece', 'was', 'white', 'as', 'snow', 'and', 'everywhere', 'that', 'mary', 'went', 'mary', 'went', 'mary', 'went', 'everywhere', 'that', 'mary', 'went', 'the', 'lamb', 'was', 'sure', 'to', 'go']


As we saw during the video, this is a wasteful way to represent text. 
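The two character-stripping loops in `extract_words` can also be collapsed into a single `str.strip(string.punctuation)` call, which removes punctuation from both ends at once and naturally handles tokens that are *all* punctuation. A compact variant — a sketch, with `extract_words_compact` being my own name rather than part of the lesson:

```python
import string

def extract_words_compact(text):
    """Tokenize on whitespace, strip punctuation from both ends, lowercase."""
    words = []
    for token in text.split():
        word = token.strip(string.punctuation).lower()
        if word:  # drop tokens that were pure punctuation, e.g. "--"
            words.append(word)
    return words

print(extract_words_compact("Mary had a little lamb -- little lamb!"))
# ['mary', 'had', 'a', 'little', 'lamb', 'little', 'lamb']
```

`str.strip` treats its argument as a *set* of characters to remove, so a single call replaces both `while` loops.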
We can be much more efficient by representing each word by a number\n\n\n```python\nword_dict = {}\nword_list = []\nvocabulary_size = 0\ntext_tokens = []\n\nfor word in text_words:\n # If we are seeing this word for the first time, create an id for it and added it to our word dictionary\n if word not in word_dict:\n word_dict[word] = vocabulary_size\n word_list.append(word)\n vocabulary_size += 1\n \n # add the token corresponding to the current word to the tokenized text.\n text_tokens.append(word_dict[word])\n```\n\nWhen we were tokenizing our text, we also generated a dictionary **word_dict** that maps words to integers and a **word_list** that maps each integer to the corresponding word.\n\n\n```python\nprint(\"Word list:\", word_list, \"\\n\\n Word dictionary:\")\npprint(word_dict)\n```\n\n Word list: ['mary', 'had', 'a', 'little', 'lamb', 'whose', 'fleece', 'was', 'white', 'as', 'snow', 'and', 'everywhere', 'that', 'went', 'the', 'sure', 'to', 'go'] \n \n Word dictionary:\n {'a': 2,\n 'and': 11,\n 'as': 9,\n 'everywhere': 12,\n 'fleece': 6,\n 'go': 18,\n 'had': 1,\n 'lamb': 4,\n 'little': 3,\n 'mary': 0,\n 'snow': 10,\n 'sure': 16,\n 'that': 13,\n 'the': 15,\n 'to': 17,\n 'was': 7,\n 'went': 14,\n 'white': 8,\n 'whose': 5}\n\n\nThese two datastructures already proved their usefulness when we converted our text to a list of tokens.\n\n\n```python\nprint(text_tokens)\n```\n\n [0, 1, 2, 3, 4, 3, 4, 3, 4, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 0, 14, 0, 14, 0, 14, 12, 13, 0, 14, 15, 4, 7, 16, 17, 18]\n\n\nUnfortunately, while this representation is convenient for memory reasons it has some severe limitations. 
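Since `word_list` is the inverse of `word_dict`, the tokenized text is fully reversible: mapping every id back through `word_list` recovers the (lowercased) word sequence. A self-contained sketch of the same encode/decode round trip on a tiny input, rebuilt locally so it does not depend on the notebook state:

```python
words = ["mary", "had", "a", "little", "lamb", "mary"]

# Encode: assign each previously unseen word the next free integer id
# (same scheme as the cell above)
word_dict, word_list, tokens = {}, [], []
for w in words:
    if w not in word_dict:
        word_dict[w] = len(word_list)
        word_list.append(w)
    tokens.append(word_dict[w])

print(tokens)  # [0, 1, 2, 3, 4, 0]

# Decode: word_list inverts word_dict, so the round trip is exact
decoded = [word_list[t] for t in tokens]
print(decoded == words)  # True
```

The repeated "mary" maps to the same id both times, which is exactly what makes the representation compact.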
Perhaps the most important of which is the fact that computers naturally assume that numbers can be operated on mathematically (by addition, subtraction, etc) in a way that doesn't match our understanding of words.\n\n## One-hot encoding\n\nOne typical way of overcoming this difficulty is to represent each word by a one-hot encoded vector where every element is zero except the one corresponding to a specific word.\n\n\n```python\ndef one_hot(word, word_dict):\n \"\"\"\n Generate a one-hot encoded vector corresponding to *word*\n \"\"\"\n \n vector = np.zeros(len(word_dict))\n vector[word_dict[word]] = 1\n \n return vector\n```\n\nSo, for example, the word \"fleece\" would be represented by:\n\n\n```python\nfleece_hot = one_hot(\"fleece\", word_dict)\nprint(fleece_hot)\n```\n\n [0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n\n\nThis vector has every element set to zero, except element 6, since:\n\n\n```python\nprint(word_dict[\"fleece\"])\nfleece_hot[6] == 1\n```\n\n 6\n\n\n\n\n\n True\n\n\n\n\n```python\nprint(fleece_hot.sum())\n```\n\n 1.0\n\n\n## Bag of words\n\nWe can now use the one-hot encoded vector for each word to produce a vector representation of our original text, by simply adding up all the one-hot encoded vectors:\n\n\n```python\ntext_vector1 = np.zeros(vocabulary_size)\n\nfor word in text_words:\n hot_word = one_hot(word, word_dict)\n text_vector1 += hot_word\n \nprint(text_vector1)\n```\n\n [6. 2. 2. 4. 5. 1. 1. 2. 1. 1. 1. 1. 2. 2. 4. 1. 1. 1. 1.]\n\n\nIn practice, we can also easily skip the encoding step at the word level by using the *word_dict* defined above:\n\n\n```python\ntext_vector = np.zeros(vocabulary_size)\n\nfor word in text_words:\n text_vector[word_dict[word]] += 1\n \nprint(text_vector)\n```\n\n [6. 2. 2. 4. 5. 1. 1. 2. 1. 1. 1. 1. 2. 2. 4. 1. 1. 1. 
1.]\n\n\nNaturally, this approach is completely equivalent to the previous one and has the added advantage of being more efficient in terms of both speed and memory requirements.\n\nThis is known as the __bag of words__ representation of the text. It should be noted that these vectors simply contains the number of times each word appears in our document, so we can easily tell that the word *mary* appears exactly 6 times in our little nursery rhyme.\n\n\n```python\ntext_vector[word_dict[\"mary\"]]\n```\n\n\n\n\n 6.0\n\n\n\nA more pythonic (and efficient) way of producing the same result is to use the standard __Counter__ module:\n\n\n```python\ntext_words\n```\n\n\n\n\n ['mary',\n 'had',\n 'a',\n 'little',\n 'lamb',\n 'little',\n 'lamb',\n 'little',\n 'lamb',\n 'mary',\n 'had',\n 'a',\n 'little',\n 'lamb',\n 'whose',\n 'fleece',\n 'was',\n 'white',\n 'as',\n 'snow',\n 'and',\n 'everywhere',\n 'that',\n 'mary',\n 'went',\n 'mary',\n 'went',\n 'mary',\n 'went',\n 'everywhere',\n 'that',\n 'mary',\n 'went',\n 'the',\n 'lamb',\n 'was',\n 'sure',\n 'to',\n 'go']\n\n\n\n\n```python\nword_counts = Counter(text_words)\npprint(word_counts)\n```\n\n Counter({'mary': 6,\n 'lamb': 5,\n 'little': 4,\n 'went': 4,\n 'had': 2,\n 'a': 2,\n 'was': 2,\n 'everywhere': 2,\n 'that': 2,\n 'whose': 1,\n 'fleece': 1,\n 'white': 1,\n 'as': 1,\n 'snow': 1,\n 'and': 1,\n 'the': 1,\n 'sure': 1,\n 'to': 1,\n 'go': 1})\n\n\nFrom which we can easily generate the __text_vector__ and __word_dict__ data structures:\n\n\n```python\nitems = list(word_counts.items())\n\n# Extract word dictionary and vector representation\nword_dict2 = dict([[items[i][0], i] for i in range(len(items))])\ntext_vector2 = [items[i][1] for i in range(len(items))]\n```\n\n\n```python\nword_counts['mary']\n```\n\n\n\n\n 6\n\n\n\nAnd let's take a look at them:\n\n\n```python\ntext_vector\n```\n\n\n\n\n array([6., 2., 2., 4., 5., 1., 1., 2., 1., 1., 1., 1., 2., 2., 4., 1., 1.,\n 1., 1.])\n\n\n\n\n```python\nprint(\"Text 
vector:\", text_vector2, \"\\n\\nWord dictionary:\")\npprint(word_dict2)\n```\n\n Text vector: [6, 2, 2, 4, 5, 1, 1, 2, 1, 1, 1, 1, 2, 2, 4, 1, 1, 1, 1] \n \n Word dictionary:\n {'a': 2,\n 'and': 11,\n 'as': 9,\n 'everywhere': 12,\n 'fleece': 6,\n 'go': 18,\n 'had': 1,\n 'lamb': 4,\n 'little': 3,\n 'mary': 0,\n 'snow': 10,\n 'sure': 16,\n 'that': 13,\n 'the': 15,\n 'to': 17,\n 'was': 7,\n 'went': 14,\n 'white': 8,\n 'whose': 5}\n\n\nThe results using this approach are slightly different than the previous ones, because the words are mapped to different integer ids but the corresponding values are the same:\n\n\n```python\nfor word in word_dict.keys():\n if text_vector[word_dict[word]] != text_vector2[word_dict2[word]]:\n print(\"Error!\")\n```\n\nAs expected, there are no differences!\n\n## Term Frequency\n\nThe bag of words vector representation introduced above relies simply on the frequency of occurence of each word. Following a long tradition of giving fancy names to simple ideas, this is known as __Term Frequency__.\n\nIntuitively, we expect the the frequency with which a given word is mentioned should correspond to the relevance of that word for the piece of text we are considering. For example, **Mary** is a pretty important word in our little nursery rhyme and indeed it is the one that occurs the most often:\n\n\n```python\nsorted(items, key=lambda x:x[1], reverse=True)\n```\n\n\n\n\n [('mary', 6),\n ('lamb', 5),\n ('little', 4),\n ('went', 4),\n ('had', 2),\n ('a', 2),\n ('was', 2),\n ('everywhere', 2),\n ('that', 2),\n ('whose', 1),\n ('fleece', 1),\n ('white', 1),\n ('as', 1),\n ('snow', 1),\n ('and', 1),\n ('the', 1),\n ('sure', 1),\n ('to', 1),\n ('go', 1)]\n\n\n\nHowever, it's hard to draw conclusions from such a small piece of text. Let us consider a significantly larger piece of text, the first 100 MB of the english Wikipedia from: http://mattmahoney.net/dc/textdata. 
For the sake of convenience, text8.gz has been included in this repository in the **data/** directory. We start by loading it's contents into memory as an array of words:\n\n\n```python\ndata = []\n\nfor line in gzip.open(\"data/text8.gz\", 'rt'):\n data.extend(line.strip().split())\n```\n\nNow let's take a look at the first 50 words in this large corpus:\n\n\n```python\ndata[:50]\n```\n\n\n\n\n ['anarchism',\n 'originated',\n 'as',\n 'a',\n 'term',\n 'of',\n 'abuse',\n 'first',\n 'used',\n 'against',\n 'early',\n 'working',\n 'class',\n 'radicals',\n 'including',\n 'the',\n 'diggers',\n 'of',\n 'the',\n 'english',\n 'revolution',\n 'and',\n 'the',\n 'sans',\n 'culottes',\n 'of',\n 'the',\n 'french',\n 'revolution',\n 'whilst',\n 'the',\n 'term',\n 'is',\n 'still',\n 'used',\n 'in',\n 'a',\n 'pejorative',\n 'way',\n 'to',\n 'describe',\n 'any',\n 'act',\n 'that',\n 'used',\n 'violent',\n 'means',\n 'to',\n 'destroy',\n 'the']\n\n\n\nAnd the top 10 most common words\n\n\n```python\ncounts = Counter(data)\n\nsorted_counts = sorted(list(counts.items()), key=lambda x: x[1], reverse=True)\n\nfor word, count in sorted_counts[:10]:\n print(word, count)\n```\n\n the 1061396\n of 593677\n and 416629\n one 411764\n in 372201\n a 325873\n to 316376\n zero 264975\n nine 250430\n two 192644\n\n\nSurprisingly, we find that the most common words are not particularly meaningful. Indeed, this is a common occurence in Natural Language Processing. 
The most frequent words are typically auxiliaries required by grammatical rules.\n\nOn the other hand, there is also a large number of words that occur very infrequently, as can easily be seen by glancing at the word frequency distribution.\n\n\n```python\ndist = Counter(counts.values())\ndist = list(dist.items())\ndist.sort(key=lambda x:x[0])\ndist = np.array(dist)\n\nnorm = np.dot(dist.T[0], dist.T[1])\n\nplt.loglog(dist.T[0], dist.T[1]/norm)\nplt.xlabel(\"count\")\nplt.ylabel(\"P(count)\")\nplt.title(\"Word frequency distribution\")\nplt.gcf().set_size_inches(11, 8)\n```\n\n## Stopwords\n\nOne common technique to simplify NLP tasks is to remove what are known as Stopwords, words that are very frequent but not meaningful. If we simply remove the most common 100 words, we significantly reduce the amount of data we have to consider while losing little information.\n\n\n```python\nstopwords = set([word for word, count in sorted_counts[:100]])\n\nclean_data = []\n\nfor word in data:\n    if word not in stopwords:\n        clean_data.append(word)\n\nprint(\"Original size:\", len(data))\nprint(\"Clean size:\", len(clean_data))\nprint(\"Reduction:\", 1-len(clean_data)/len(data))\n```\n\n Original size: 17005207\n Clean size: 9006229\n Reduction: 0.470384041782026\n\n\n\n```python\nclean_data[:50]\n```\n\n\n\n\n ['anarchism',\n 'originated',\n 'term',\n 'abuse',\n 'against',\n 'early',\n 'working',\n 'class',\n 'radicals',\n 'including',\n 'diggers',\n 'english',\n 'revolution',\n 'sans',\n 'culottes',\n 'french',\n 'revolution',\n 'whilst',\n 'term',\n 'still',\n 'pejorative',\n 'way',\n 'describe',\n 'any',\n 'act',\n 'violent',\n 'means',\n 'destroy',\n 'organization',\n 'society',\n 'taken',\n 'positive',\n 'label',\n 'self',\n 'defined',\n 'anarchists',\n 'word',\n 'anarchism',\n 'derived',\n 'greek',\n 'without',\n 'archons',\n 'ruler',\n 'chief',\n 'king',\n 'anarchism',\n 'political',\n 'philosophy',\n 'belief',\n 'rulers']\n\n\n\nWow, our dataset size was reduced almost 
in half!\n\nIn practice, we don't simply remove the most common words in our corpus, but rather use a manually curated list of stopwords. Lists for dozens of languages and applications can easily be found online.\n\n## Term Frequency/Inverse Document Frequency\n\nOne way of determining the relative importance of a word is to see how often it appears across multiple documents. Words that are relevant to a specific topic are more likely to appear in documents about that topic and much less in documents about other topics. On the other hand, less meaningful words (like **the**) will be common across documents about any subject.\n\nTo measure the document frequency of a word we will need to have multiple documents. For the sake of simplicity, we will treat each sentence of our nursery rhyme as an individual document:\n\n\n```python\nprint(text)\n```\n\n Mary had a little lamb, little lamb,\n little lamb. Mary had a little lamb\n whose fleece was white as snow.\n And everywhere that Mary went\n Mary went, Mary went. 
Everywhere\n that Mary went,\n The lamb was sure to go\n\n\n```python\ncorpus_text = text.split('.')\ncorpus_words = []\n\nfor document in corpus_text:\n    doc_words = extract_words(document)\n    corpus_words.append(doc_words)\n```\n\nNow our corpus is represented as a list of word lists, where each list is just the word representation of the corresponding sentence:\n\n\n```python\nprint(len(corpus_words))\n```\n\n 4\n\n\n\n```python\npprint(corpus_words)\n```\n\n [['mary', 'had', 'a', 'little', 'lamb', 'little', 'lamb', 'little', 'lamb'],\n ['mary',\n 'had',\n 'a',\n 'little',\n 'lamb',\n 'whose',\n 'fleece',\n 'was',\n 'white',\n 'as',\n 'snow'],\n ['and', 'everywhere', 'that', 'mary', 'went', 'mary', 'went', 'mary', 'went'],\n ['everywhere',\n 'that',\n 'mary',\n 'went',\n 'the',\n 'lamb',\n 'was',\n 'sure',\n 'to',\n 'go']]\n\n\nLet us now calculate the number of documents in which each word appears:\n\n\n```python\ndocument_count = {}\n\nfor document in corpus_words:\n    word_set = set(document)\n    \n    for word in word_set:\n        document_count[word] = document_count.get(word, 0) + 1\n\npprint(document_count)\n```\n\n {'a': 2,\n 'and': 1,\n 'as': 1,\n 'everywhere': 2,\n 'fleece': 1,\n 'go': 1,\n 'had': 2,\n 'lamb': 3,\n 'little': 2,\n 'mary': 4,\n 'snow': 1,\n 'sure': 1,\n 'that': 2,\n 'the': 1,\n 'to': 1,\n 'was': 2,\n 'went': 2,\n 'white': 1,\n 'whose': 1}\n\n\nAs we can see, the word __Mary__ appears in all 4 of our documents, making it useless when it comes to distinguishing between the different sentences. On the other hand, words like __white__, which appear in only one document, are very discriminative. 
Using this approach we can define a new quantity, the __Inverse Document Frequency__, which tells us how frequent a word is across the documents in a specific corpus:\n\n\n```python\ndef inv_doc_freq(corpus_words):\n    number_docs = len(corpus_words)\n    \n    document_count = {}\n\n    for document in corpus_words:\n        word_set = set(document)\n\n        for word in word_set:\n            document_count[word] = document_count.get(word, 0) + 1\n    \n    IDF = {}\n    \n    for word in document_count:\n        IDF[word] = np.log(number_docs/document_count[word])\n    \n    return IDF\n```\n\nHere we followed the convention of taking the logarithm of the inverse document frequency. This has the numerical advantage of avoiding having to handle small fractional numbers. \n\nWe can easily see that the IDF gives a smaller weight to the most common words and a higher weight to the less frequent ones:\n\n\n```python\ncorpus_words\n```\n\n\n\n\n [['mary', 'had', 'a', 'little', 'lamb', 'little', 'lamb', 'little', 'lamb'],\n ['mary',\n 'had',\n 'a',\n 'little',\n 'lamb',\n 'whose',\n 'fleece',\n 'was',\n 'white',\n 'as',\n 'snow'],\n ['and', 'everywhere', 'that', 'mary', 'went', 'mary', 'went', 'mary', 'went'],\n ['everywhere',\n 'that',\n 'mary',\n 'went',\n 'the',\n 'lamb',\n 'was',\n 'sure',\n 'to',\n 'go']]\n\n\n\n\n```python\nIDF = inv_doc_freq(corpus_words)\n\npprint(IDF)\n```\n\n {'a': 0.6931471805599453,\n 'and': 1.3862943611198906,\n 'as': 1.3862943611198906,\n 'everywhere': 0.6931471805599453,\n 'fleece': 1.3862943611198906,\n 'go': 1.3862943611198906,\n 'had': 0.6931471805599453,\n 'lamb': 0.28768207245178085,\n 'little': 0.6931471805599453,\n 'mary': 0.0,\n 'snow': 1.3862943611198906,\n 'sure': 1.3862943611198906,\n 'that': 0.6931471805599453,\n 'the': 1.3862943611198906,\n 'to': 1.3862943611198906,\n 'was': 0.6931471805599453,\n 'went': 0.6931471805599453,\n 'white': 1.3862943611198906,\n 'whose': 1.3862943611198906}\n\n\nAs expected, **Mary** has the smallest weight of all words, 0, meaning that it is effectively removed 
from the dataset. You can consider this a way of implicitly identifying and removing stopwords. In case you do want to keep even the words that appear in every document, you can just add 1 to the argument of the logarithm above:\n\n\\begin{equation}\n\\log\\left[1+\\frac{N_d}{N_d\\left(w\\right)}\\right]\n\\end{equation}\n\nWhen we multiply the term frequency of each word by its inverse document frequency, we have a good way of quantifying how relevant a word is to understanding the meaning of a specific document.\n\n\n```python\ndef tf_idf(corpus_words):\n    IDF = inv_doc_freq(corpus_words)\n    \n    TFIDF = []\n    \n    for document in corpus_words:\n        TFIDF.append(Counter(document))\n    \n    for document in TFIDF:\n        for word in document:\n            document[word] = document[word]*IDF[word]\n    \n    return TFIDF\n```\n\n\n```python\ntf_idf(corpus_words)\n```\n\n\n\n\n [Counter({'a': 0.6931471805599453,\n 'had': 0.6931471805599453,\n 'lamb': 0.8630462173553426,\n 'little': 2.0794415416798357,\n 'mary': 0.0}),\n Counter({'a': 0.6931471805599453,\n 'as': 1.3862943611198906,\n 'fleece': 1.3862943611198906,\n 'had': 0.6931471805599453,\n 'lamb': 0.28768207245178085,\n 'little': 0.6931471805599453,\n 'mary': 0.0,\n 'snow': 1.3862943611198906,\n 'was': 0.6931471805599453,\n 'white': 1.3862943611198906,\n 'whose': 1.3862943611198906}),\n Counter({'and': 1.3862943611198906,\n 'everywhere': 0.6931471805599453,\n 'mary': 0.0,\n 'that': 0.6931471805599453,\n 'went': 2.0794415416798357}),\n Counter({'everywhere': 0.6931471805599453,\n 'go': 1.3862943611198906,\n 'lamb': 0.28768207245178085,\n 'mary': 0.0,\n 'sure': 1.3862943611198906,\n 'that': 0.6931471805599453,\n 'the': 1.3862943611198906,\n 'to': 1.3862943611198906,\n 'was': 0.6931471805599453,\n 'went': 0.6931471805599453})]\n\n\n\nNow we finally have a vector representation of each of our documents that takes the informational contribution of each word into account. 
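A natural next use of these representations is comparing documents. The following is a small self-contained sketch (it re-implements the IDF computation above over a shared vocabulary, and the three toy documents are shortened variants of the rhyme's sentences) of cosine similarity between TF-IDF document vectors:

```python
import numpy as np
from collections import Counter

# Three toy "documents" (shortened variants of the rhyme's sentences)
corpus = [["mary", "had", "a", "little", "lamb"],
          ["mary", "had", "a", "little", "lamb", "whose", "fleece", "was", "white"],
          ["everywhere", "that", "mary", "went", "the", "lamb", "was", "sure", "to", "go"]]

# Inverse document frequency, computed as above
n_docs = len(corpus)
doc_count = Counter(w for doc in corpus for w in set(doc))
idf = {w: np.log(n_docs / c) for w, c in doc_count.items()}

# Fix a shared vocabulary so every document maps to a vector of equal length
vocab = sorted(idf)

def tfidf_vector(doc):
    tf = Counter(doc)
    return np.array([tf[w] * idf[w] for w in vocab])

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

v0, v1, v2 = (tfidf_vector(doc) for doc in corpus)
print(cosine(v0, v1), cosine(v0, v2))  # document 0 is closer to 1 than to 2
```

Note that words shared by all documents (like **mary**) have IDF 0 and contribute nothing to the similarity, exactly as discussed above.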
Each of these vectors provides us with a unique representation of each document, in the context (corpus) in which it occurs, making it possible to define the similarity of two documents, etc.\n\n## Porter Stemmer\n\nThere is still, however, one issue with our approach to representing text. Since we treat each word as a unique token, completely independent from all others, for large documents we will end up with many variations of the same word, such as verb conjugations, the corresponding adverbs and nouns, etc. \n\nOne way around this difficulty is to use a stemming algorithm to reduce words to their root (or stem) version. The most famous stemming algorithm is the **Porter Stemmer**, introduced by Martin Porter in 1980 [Program 14, 130 (1980)](https://dl.acm.org/citation.cfm?id=275705).\n\nThe algorithm starts by defining consonants (C) and vowels (V):\n\n\n```python\nV = set('aeiouy')\nC = set('bcdfghjklmnpqrstvwxz')\n```\n\nThe stem of a word is what is left of that word after a specific ending has been removed. A function to do this is easy to implement:\n\n\n```python\ndef get_stem(suffix, word):\n    \"\"\"\n    Extract the stem of a word\n    \"\"\"\n    \n    if word.lower().endswith(suffix.lower()): # Case insensitive comparison\n        return word[:-len(suffix)]\n\n    return None\n```\n\nThe algorithm also defines words (or stems) to be sequences of vowels and consonants of the form:\n\n\\begin{equation}\n[C](VC)^m[V]\n\\end{equation}\n\nwhere $m$ is called the **measure** of the word and [] represent optional sections. 
\n\n\n```python\ndef measure(orig_word):\n    \"\"\"\n    Calculate the \"measure\" m of a word or stem, according to the Porter Stemmer algorithm\n    \"\"\"\n    \n    word = orig_word.lower()\n\n    optV = False\n    optC = False\n    VC = False\n    m = 0\n\n    pos = 0\n\n    # We can think of this implementation as a simple finite state machine that\n    # looks for sequences of vowels or consonants depending on the state\n    # it's in, while keeping track of how many VC sequences it\n    # has encountered.\n    # The presence of the optional V and C portions is recorded in the\n    # optV and optC booleans.\n    \n    # We're at the initial state.\n    # Gobble up all the optional consonants at the beginning of the word\n    while pos < len(word) and word[pos] in C:\n        pos += 1\n        optC = True\n\n    while pos < len(word):\n        # Now we know that the next state must be a vowel\n        while pos < len(word) and word[pos] in V:\n            pos += 1\n            optV = True\n\n        # Followed by a consonant\n        while pos < len(word) and word[pos] in C:\n            pos += 1\n            optV = False\n        \n        # If a consonant was found, then we matched VC,\n        # so we should increment m by one. Otherwise, \n        # optV remained true and we simply had a dangling\n        # V sequence.\n        if not optV:\n            m += 1\n\n    return m\n```\n\nLet's consider a simple example. The word __crepusculars__ should have measure 4:\n\n[cr] (ep) (usc) (ul) (ars)\n\nand indeed it does.\n\n\n```python\nword = \"crepusculars\"\nprint(measure(word))\n```\n\n 4\n\n\nSimilarly, (agr) = (VC) has measure 1:\n\n\n```python\nword = \"agr\"\nprint(measure(word))\n```\n\n 1\n\n\nThe Porter algorithm sequentially applies a series of transformation rules over 5 steps (step 1 is divided into 3 substeps and step 5 into 2). The rules are only applied if a certain condition is true. 
\n\nIn addition to possibly specifying a requirement on the measure of a word, conditions can make use of different boolean functions as well: \n\n\n```python\ndef ends_with(char, stem):\n    \"\"\"\n    Checks the ending of the word\n    \"\"\"\n    return stem[-1] == char\n\ndef double_consonant(stem):\n    \"\"\"\n    Checks the ending of a word for a double consonant\n    \"\"\"\n    if len(stem) < 2:\n        return False\n\n    if stem[-1] in C and stem[-2] == stem[-1]:\n        return True\n\n    return False\n\ndef contains_vowel(stem):\n    \"\"\"\n    Checks if a word contains a vowel or not\n    \"\"\"\n    return len(set(stem) & V) > 0 \n```\n\nFinally, we define a function to apply a specific rule to a word or stem:\n\n\n```python\ndef apply_rule(condition, suffix, replacement, word):\n    \"\"\"\n    Apply a Porter Stemmer rule:\n    if \"condition\" is True, replace \"suffix\" by \"replacement\" in \"word\"\n    \"\"\"\n    \n    stem = get_stem(suffix, word)\n\n    if stem is not None and condition is True:\n        # Remove the suffix\n        word = stem\n\n        # Add the replacement suffix, if any\n        if replacement is not None:\n            word += replacement\n\n    return word\n```\n\nNow we can see how rules can be applied. 
For example, this rule from step 1b is successfully applied to __plastered__:\n\n\n```python\nword = \"plastered\"\nsuffix = \"ed\"\nstem = get_stem(suffix, word)\napply_rule(contains_vowel(stem), suffix, None, word)\n```\n\n\n\n\n 'plaster'\n\n\n\n\n```python\nstem\n```\n\n\n\n\n 'plaster'\n\n\n\n\n```python\ncontains_vowel(stem)\n```\n\n\n\n\n True\n\n\n\nTrying to apply the same rule to **bled**, on the other hand, fails to pass the condition, resulting in no change:\n\n\n```python\nword = \"bled\"\nsuffix = \"ed\"\nstem = get_stem(suffix, word)\napply_rule(contains_vowel(stem), suffix, None, word)\n```\n\n\n\n\n 'bled'\n\n\n\n\n```python\nstem\n```\n\n\n\n\n 'bl'\n\n\n\n\n```python\ncontains_vowel(stem)\n```\n\n\n\n\n False\n\n\n\nFor a more complex example, we have, in Step 4:\n\n\n```python\nword = \"adoption\"\nsuffix = \"ion\"\nstem = get_stem(suffix, word)\napply_rule(measure(stem) > 1 and (ends_with(\"s\", stem) or ends_with(\"t\", stem)), suffix, None, word)\n```\n\n\n\n\n 'adopt'\n\n\n\n\n```python\nends_with(\"t\", stem)\n```\n\n\n\n\n True\n\n\n\n\n```python\nends_with(\"s\", stem)\n```\n\n\n\n\n False\n\n\n\n\n```python\nmeasure(stem)\n```\n\n\n\n\n 2\n\n\n\nIn total, the Porter Stemmer algorithm (for the English language) applies several dozen rules (see https://tartarus.org/martin/PorterStemmer/def.txt for a complete list). Implementing all of them is both tedious and error prone, so we abstain from providing a full implementation of the algorithm here. High quality implementations can be found in all major NLP libraries such as [NLTK](http://www.nltk.org/howto/stem.html).\n\nThe difficulties of defining matching rules for arbitrary text cannot be fully resolved without the use of Regular Expressions (typically implemented as Finite State Machines, like our __measure__ implementation above), a more advanced topic that is beyond the scope of this course.\n\n
\n \n
\n\n```python\nfrom IPython.display import Image\n\n# Show the section's illustration\nImage(filename=\"i_could_care_less.png\", width=500, height=500)\n```\n\n
\n\n\n\n# Word Representation With One-Hot Vectors\n\nOne-hot encoding is the most common, most basic way to turn a token into a vector. It consists of associating a unique integer index with every word, then turning this integer index $i$ into a binary vector of size $V$ (the size of our vocabulary) that is all zeros except for the $i$-th entry, which is 1.\n\n\n```python\nimport numpy as np\nsamples = ['The cat sat on the mat.', 'The dog ate my homework.']\ntoken_index = {}\nfor sample in samples:\n    for word in sample.split():\n        if word not in token_index:\n            token_index[word] = len(token_index) + 1\n\n# Words missing from the vocabulary (here \"god\") are left as all-zero vectors\nsentence = \"the cat ate the god\"\nseq = []\nfor word in sentence.split():\n    one_hot = np.zeros((1, len(token_index)))\n    if word not in token_index:\n        seq.append(one_hot.squeeze())\n    else:\n        one_hot[0, token_index[word] - 1] = 1\n        seq.append(one_hot.squeeze())\n```\n\n\n```python\nseq\n```\n\n\n\n\n [array([0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]),\n array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0.]),\n array([0., 0., 0., 0., 0., 0., 0., 1., 0., 0.]),\n array([0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]),\n array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])]\n\n\n\n```python\nimport numpy as np\nwith open('cats100.test.txt') as file_handler:\n    data = file_handler.read()\n\ntoken_idx = {}\nfor word in data.split():\n    if word not in token_idx:\n        token_idx[word] = len(token_idx) + 1\n\n# A sample sentence from the (Turkish) corpus\nsentence = \"Geçtiğimiz zamanlarda kötü olaylar yaşandı\"\none_hot_sentence = np.zeros((len(sentence.split()), len(token_idx)), dtype=np.int32)\nfor word_idx, word in enumerate(sentence.split()):\n    if word in token_idx:  # guard against out-of-vocabulary words\n        one_hot_sentence[word_idx, token_idx[word] - 1] = 1\n```\n\n<center>
\n \n
Figure is taken from Stanford cs-230 cheatsheet
\n
\n\n There are two major issues with this approach. The first issue is the curse of dimensionality, which refers to all sorts of problems that arise with data in high dimensions. One-hot encodings require an exponentially large amount of memory, and most of the matrix is taken up by zeros, so the useful data becomes sparse. Imagine we have a vocabulary of 50,000. (There are roughly a million words in the English language.) Each word is represented with 49,999 zeros and a single one, and we need 50,000 squared = 2.5 billion units of memory space. Not computationally efficient.\n \n\nThe second issue is that the one-hot vectors are mutually orthogonal. You cannot measure the\nsimilarity (like cosine similarity) of words with these vectors.\n\n\nThose vectors can be visualized in 3- or 2-dimensional space as an example. In $\mathbb{R}^3$ (in other words, a vocabulary size of 3, $\mid V \mid=3$), the word vectors become our span set, which is \n$\left\{\begin{bmatrix} 1\\0\\0\end{bmatrix},\n\begin{bmatrix} 0\\1\\0\end{bmatrix},\n\begin{bmatrix} 0\\0\\1\end{bmatrix}\right\}$. The span set is an orthonormal, linearly independent set, so similarity metrics carry no information: the vectors $\vec{u_1}, \vec{u_2}, \vec{u_3}$ are orthogonal, and every similarity metric gives zero. For example, the cosine similarity:\n</center>
\n
\n \n\n$$ \\text{cos-sim}(\\vec{u_1},\\vec{u_2}) = \\frac{\\langle \\vec{u_1}, \\vec{u_2} \\rangle}{\\lVert \\vec{u_1} \\lVert_2 \\times \\lVert \\vec{u_2} \\lVert_2 } = \\frac{\\sum_{i=0}^n u_{1_i} \\times u_{2_i}}{\\sqrt{\\sum_{i=0}^n u_{1_i}^2} \\times \\sqrt{\\sum_{i=0}^n u_{2_i}^2}} = \\frac{1 \\times 0 + 0 \\times 1 + 0 \\times 0}{1 + 1} = 0$$\n
\n \n\n\nSo, how do we deal with those problems?\n\n# Lexical Semantics And Distributional Linguistics\n\n\nWords that occur in similar contexts tend to have similar meanings. The link between similarity in how words are distributed and similarity in what they mean is called the distributional hypothesis, or distributional semantics, in the field of Computational Linguistics. So what counts as a similar context? For example, the words on the Wikipedia page of linguistics are all somehow related to each other in the context of linguistics. This was first formulated by Martin Joos (Description of Language Design, 1950), Zellig Harris (Distributional Structure, 1954), and John Rupert Firth (Applications of General Linguistics, 1957).\n\n\nSome words have similar meanings; for example, the words *cat* and *dog* are similar. Words can also be antonyms, for example *hot* and *cold*. And words have connotations (TR: çağrışım), for example happy->positive connotation and sad->negative connotation. Can you feel the similarity of the words [study, exam, night, FF]?\n\n\nAlso, each word can have multiple meanings. The word *mouse* can refer to the rodent or the cursor control device. We call each of these aspects of the meaning of mouse a word sense. In other words, words can be polysemous (have multiple senses), which can make word interpretation difficult! \n<br>

\n\n- Word Sense Disambiguation: \"Mouse info\" (is the person who types this into a web search engine looking for pet info or for a tool?) (determining which sense of a word is being used in a particular context)\n\nThe word **similarity** is very useful in larger semantic tasks. Knowing how similar two words are can help in computing how similar the meanings of two phrases or sentences are, a very important component of natural language understanding tasks like **question answering**, **summarization** etc.\n\n| Word1 | Word2 | Similarity (0-10) |\n| ----------- | ----------- | ----------- |\n| Vanish | Disappear | 9.8 |\n| Behave | Obey | 7.3 |\n| Belief | Impression | 5.95 |\n| Muscle | Bone | 3.65 |\n| Modest | Flexible | 0.98 |\n| Hole | Agreement | 0.3 |\n\nWe should also look at **word relatedness**: the meaning of two words can be related in ways other than similarity. This class of connections is called word **relatedness**, traditionally also called word **association** in psychology.\n\n- The words *cup* and *coffee*.\n- The words *inzva* and *deep learning*.\n\nAlso, words can have affective meanings. Osgood et al. 
(1957) proposed that words varied along three important dimensions of affective meaning: *valence, arousal, dominance*.\n- **valence**: the pleasantness of the stimulus.\n- **arousal**: the intensity of emotion provoked by the stimulus.\n- **dominance**: the degree of control exerted by the stimulus.\n\nExamples: \n- happy(1) $\uparrow$, satisfied(1) $\uparrow$; annoyed(1) $\downarrow$, unhappy(1)$\downarrow$\n- excited(2) $\uparrow$, frenzied(2) $\uparrow$; relaxed(2) $\downarrow$, calm(2) $\downarrow$\n- important(3)$\uparrow$, controlling(3)$\uparrow$; awed(3)$\downarrow$, influenced(3)$\downarrow$\n\n*Question: do word embeddings have these dimensions?*\n\n# Word Embeddings\n\nHow can we build a computational model that successfully deals with the different aspects of word meaning we saw above (word senses, word similarity, word relatedness, connotation etc.)?\n\n**Instead of representing words with sparse one-hot vectors, word embeddings represent words with dense vectors**. \n\n**The idea of vector semantics is thus to represent a word as a point in some multidimensional semantic space.** Vectors for representing words are generally called **embeddings**. We **LEARN** these embeddings from an arbitrary context.\n\n<center>
\n
\n \n
Figure is taken from this medium post
\n
\n
\n\nThe two main advantages of word embeddings are:\n\n- Now we represent words with dense vectors, which leads to low memory requirements.\n\n<center>
\n \n
Figure is taken from Stanford cs-230 cheatsheet
\n
\n\n- We can now calculate similarity metrics on these vectors!\n\n## Visualizing Word Embeddings\n\nWord embeddings can be visualized with various dimensionality reduction/matrix factorization algorithms.\n\n
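As a minimal sketch of how such 2-D views are produced, the code below runs PCA (via SVD) on a made-up random embedding matrix; published figures like the ones that follow use pre-trained vectors and often t-SNE instead of plain PCA:

```python
import numpy as np

rng = np.random.default_rng(0)
words = ["king", "queen", "man", "woman", "apple", "orange"]
E = rng.normal(size=(len(words), 5))   # one made-up 5-d embedding per word

X = E - E.mean(axis=0)                 # center the data before PCA
U, S, Vt = np.linalg.svd(X, full_matrices=False)
coords = X @ Vt[:2].T                  # project onto the top-2 principal axes

for word, (x, y) in zip(words, coords):
    print(f"{word:>8s} {x:+.2f} {y:+.2f}")
```

`coords` can be handed straight to `plt.scatter` to get a plot like the figures below.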

\n\n \n \n \n

\n\n\n- left-figure source: [Zero-Shot Learning Through Cross-Modal Transfer (Socher et al., 2013, NeurIPS)](https://nlp.stanford.edu/~socherr/SocherGanjooManningNg_NIPS2013.pdf)\n- right-figure source: [Word representations: A simple and general method for semi-supervised learning (Turian et al., 2010)](http://metaoptimize.s3.amazonaws.com/cw-embeddings-ACL2010/embeddings-mostcommon.EMBEDDING_SIZE=50.png)\n\n

\n \n \n \n

\n\n\n- left-figure source: [Natural Language Processing (almost) from Scratch (Collobert et al., 2011)](https://arxiv.org/abs/1103.0398v1.pdf)\n- right-figure source: [Bilingual Word Embeddings for Phrase-Based Machine Translation (Socher et al., 2013, EMNLP)](https://ai.stanford.edu/~wzou/emnlp2013_ZouSocherCerManning.pdf)\n\n## Using Word Embeddings\n\n### Named Entity Recognition (NER)\nIn Natural Language Processing, Named Entity Recognition (NER) is the process of parsing a sentence or a chunk of text to find entities that can be put under categories like names, organizations, locations, quantities, monetary values, percentages, etc. Traditional NER algorithms included only names, places, and organizations.\n\nSince embeddings can capture word senses etc., it is practical and beneficial to use word embeddings in the NER task: embeddings can capture entity information while also capturing word relations.\n\n<center>

\n \n \n

\n\n\nfigure sources: [link](https://towardsdatascience.com/named-entity-recognition-ner-meeting-industrys-requirement-by-applying-state-of-the-art-deep-698d2b3b4ede)\n\n### Transfer Learning\n\n- To learn word embeddings, a huge amount of training data is always useful. For example, GloVe was trained on 5 separate corpora:\n * 2010 Wikipedia dump with 1 billion tokens\n * 2014 Wikipedia dump with 1.6 billion tokens\n * Gigaword 5, which has 4.3 billion tokens\n * the combination Gigaword5 + Wikipedia2014, which has 6 billion tokens\n * 42 billion tokens of web data, from Common Crawl\n<br>

\n\n- You can download pre-trained word embeddings online.\n * [GloVe pre-trained vectors](https://nlp.stanford.edu/projects/glove/)\n * Common Crawl (42B tokens, 1.9M vocab, uncased, 300d vectors, 1.75 GB download)\n * Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors, 2.03 GB download)\n * Twitter (2B tweets, 27B tokens, 1.2M vocab, uncased, 25d, 50d, 100d, & 200d vectors, 1.42 GB download)\n * [Hellinger PCA vectors](http://lebret.ch/words/)\n * [word2vec pre-trained vectors](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/)\n * etc.\n
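The GloVe downloads above are plain text files in which each line holds a word followed by its vector components. A small parsing sketch (the file name in the comment is just an example of the download's naming scheme, and the two-line `fake_file` stands in for a real download so the snippet runs on its own):

```python
import numpy as np

def load_vectors(lines):
    """Parse GloVe-format lines ("word 0.418 0.24968 ...") into a {word: vector} dict."""
    embeddings = {}
    for line in lines:
        word, *values = line.strip().split()
        embeddings[word] = np.array(values, dtype=np.float64)
    return embeddings

# With a real download:
#     with open("glove.6B.50d.txt", encoding="utf-8") as f:
#         vectors = load_vectors(f)
fake_file = ["hello 0.1 -0.2 0.3", "world 0.4 0.5 -0.6"]
vectors = load_vectors(fake_file)
print(vectors["hello"].shape)  # (3,)
```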

\n- Transfer embedding to new task with smaller training set.\n * Language Modelling\n * Predictive Typing\n * Spelling/Grammar Correction\n * Summarization\n * NMT\n * etc.\n

\n\n- Finetune the word embeddings with new data (if your training data is relatively big).\n\n\n### Dependency Parsing\nA dependency parser analyzes the grammatical structure of a sentence, establishing relationships between \"head\" words and the words which modify those heads. The figure below shows a dependency parse of a short sentence.\n\nSyntactic parsing or dependency parsing is the task of recognizing a sentence and assigning a syntactic structure to it. The most widely used syntactic structure is the parse tree, which can be generated using various parsing algorithms. These parse trees are useful in applications like grammar checking and, more importantly, play a critical role in the semantic analysis stage.\n\nDependency parsing is the task of analyzing the syntactic dependency structure of a given input sentence $S$. The output of a dependency parser is a dependency tree where the words of the input sentence are connected by typed dependency relations. Formally, the dependency parsing problem asks to create a mapping from the input sentence with words $S = w_0w_1...w_n$ (where $w_0$ is the ROOT) to its dependency tree graph $G$.\n\n<center>

\n \n \n

\n\n[A Fast and Accurate Dependency Parser using Neural Networks (Chen et al., 2014)](https://www.aclweb.org/anthology/D14-1082/)\n\nright-figure source [CS224n Part IV](https://web.stanford.edu/class/cs224n/readings/cs224n-2019-notes04-dependencyparsing.pdf)\n\n### Representation Learning\n\nVarious tasks can be modeled with learned representations. Just as words can be represented by embeddings, images (or even signals/audio) can be represented by a latent space $z$. Autoencoders are one way to learn latent representations of data.\n<br>
\n\n
\n

\n \n \n \n

\n\n## Semantic Properties Of Embeddings\n\nWord embeddings have a very important property: analogies. Analogy is a semantic property of embeddings that captures relational meanings. Simply put, an analogy asks us to find **X**:\n\n- **A is to B as C is to X**\n\nFor example, **“woman is to queen as man is to X”**. In this example **X** should be the word *king*.\n\nInterestingly, such embeddings exhibit seemingly linear behaviour in analogies. This linear behaviour can be formulated as\n\n- $w_a$ is to $w_a'$ as $w_b$ is to $w_b'$ $\rightarrow \rightarrow \rightarrow$ $w_a' - w_a + w_b \approx w_b'$\n\n- vec('*queen*') - vec('*woman*') + vec('*man*') $\approx$ vec('*king*')\n\nor\n\n- vec('*queen*') - vec('*woman*') $\approx$ vec('*king*') - vec('*man*')\n\n\nAnother example:\n\n- vec('*Paris*') - vec('*France*') $\approx$ vec('*Rome*') - vec('*Italy*')\n\n\n\n<center>

\n \n \n \n

\n\n- left-figure source: [Linguistic Regularities in Continuous Space Word Representations (Mikolov et al., 2013)](https://www.aclweb.org/anthology/N13-1090.pdf)\n- mid-figure source: [Speech and Language Processing, Daniel Jurafsky, Third Edition](https://web.stanford.edu/~jurafsky/slp3/)\n- right-figure source: [Efficient Estimation of Word Representations in Vector Space (Mikolov et al., 2013)](https://arxiv.org/pdf/1301.3781.pdf)\n\nDo vector embeddings capture syntactic relationships? Yes. Do they capture them with the same linear behaviour? Yes.\n\n\n<center>
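The analogy arithmetic can be illustrated with hand-made toy vectors (these are not learned embeddings; the two dimensions are simply meant to stand for "royalty" and "gender"):

```python
import numpy as np

emb = {
    "king":  np.array([1.0, 1.0]),
    "queen": np.array([1.0, 0.0]),
    "man":   np.array([0.0, 1.0]),
    "woman": np.array([0.0, 0.0]),
    "apple": np.array([0.0, 0.5]),  # a distractor word
}

def analogy(a, a_prime, b):
    """Return the word closest to vec(a') - vec(a) + vec(b), excluding the inputs."""
    target = emb[a_prime] - emb[a] + emb[b]
    candidates = [w for w in emb if w not in (a, a_prime, b)]
    return min(candidates, key=lambda w: np.linalg.norm(emb[w] - target))

print(analogy("woman", "queen", "man"))  # king
```

With real embeddings the nearest neighbour is usually found with cosine similarity over the full vocabulary, as in gensim's `most_similar(positive=..., negative=...)`.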

\n \n \n

\n\n- left-figure source: [Linguistic Regularities in Continuous Space Word Representations (Mikolov et al., 2013)](https://www.aclweb.org/anthology/N13-1090.pdf)\n- right-figure source: [Speech and Language Processing, Daniel Jurafsky, Third Edition](https://web.stanford.edu/~jurafsky/slp3/)\n\n\nLooking for a more formal definition of analogy? Check [Analogies Explained: Towards Understanding Word Embeddings (Allen et al., 2019)](https://arxiv.org/pdf/1901.09813.pdf)\n\n## Evaluating Word Vectors\n\n- Extrinsic Evaluation\n * This is evaluation on a real task.\n * Can be slow to compute performance.\n * Unclear whether the subsystem or the rest of the system is the problem.\n * If replacing the subsystem improves performance, the change is likely good.\n * NER, Question Answering etc.\n\n\n- Intrinsic Evaluation\n * Fast to compute\n * Helps to understand the subsystem\n * Needs positive correlation with a real task to determine usefulness\n \n \nThe SimLex-999 dataset (Hill et al., 2015) gives values on a scale from 0 to 10 by asking humans to judge how similar one word is to another. Other datasets for evaluating word vectors: \n- WordSim 353 \n- TOEFL Dataset\n- SCWS Dataset\n- Word-in-Context (WiC)\n- Miller & Charles Dataset\n- Rubenstein & Goodenough Dataset\n- Stanford Rare Word (RW)\n\n<center>
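Intrinsic evaluation on such datasets usually reports the Spearman rank correlation between human scores and the model's similarities. A self-contained sketch (the scores below are made up for illustration, and a tie-free rank correlation is implemented directly so no extra library is needed):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation (no tie handling): Pearson correlation of the ranks."""
    ranks_a = np.argsort(np.argsort(a)).astype(float)
    ranks_b = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ranks_a, ranks_b)[0, 1]

human_scores = [9.8, 7.3, 5.95, 3.65, 0.98, 0.3]      # e.g. SimLex-style judgments
model_scores = [0.91, 0.66, 0.52, 0.40, 0.15, 0.02]   # hypothetical cosine similarities

rho = spearman(human_scores, model_scores)
print(f"Spearman rho = {rho:.2f}")  # 1.00 here: the two rankings agree perfectly
```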

\n \n \n

\n\nSome evaluation metrics from [GloVe: Global Vectors for Word Representation (Pennington et al., 2014)](https://www.aclweb.org/anthology/D14-1162.pdf)\n\n# Learning Word Embeddings\n\n## Embedding Matrix\nThe embedding matrix can be represented as a single matrix $E \in \mathbb{R}^{d \times \mid V \mid}$ (or, equivalently, $E \in \mathbb{R}^{ \mid V \mid \times d }$; the orientation is just a convention), where $d$ is the embedding size and $\mid V \mid$ is the size of the vocabulary $V$.\n\n<br>
\n\n$$E = \\begin{bmatrix}\n\\text{hello} & \\text{i} & \\text{love} & \\text{inzva} & \\cdots & \\text{sanctuary}\\\\\n0.1 & 0.137 & -0.03 & -0.44 & \\cdots & 0.36\\\\\n-0.78 & -0.25 & 2.09 & 0.19 & \\cdots & 0.32\\\\\n-3.1 & 1.54 & -2.52 & 0.2 & \\cdots & -1.51\\\\\n1.13 & -0.78 & -0.56 & 0.95 & \\cdots & 0.2112\\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n- 0.6 & 0.13 & 3.89 & -0.071 & -0.27 & 0.27 \n\\end{bmatrix} \\in \\mathbb{R}^{d \\times \\mid V \\mid}$$\n\nBut how to learn them?\n\n## A Neural Probabilistic Language Model (Bengio et al., 2003)\n\n*In neural language models, the prior context is represented by embeddings of the previous words*. Representing the prior context as embeddings, rather than by exact words as used in [n-gram](https://web.stanford.edu/~jurafsky/slp3/3.pdf) language models, allows neural language models to generalize to unseen data much better than n-gram language models.\n\nFor example in our training set we see this sentence,\n\n Today, after school, I am planning to go to cinema.\n \nbut we have never seen the word \"concert\" after the words \"go to\". In our test set we are trying to predict what comes after the prefix \"Today, after school, I am planning to go to\". An n-gram language model will predict \"cinema\" but not \"concert\". But a neural language model, which can make use of the fact that \"cinema\" and \"concert\" have similar embeddings, will be able to assign a reasonably high probability to \"concert\" as well as \"cinema\", merely because they have similar vectors.\n\nIn 2003 Bengio proposed a neural language model that can learns and uses embeddings to predict the next word in a sentence. 
Formally, for a sequence of words\n\n$$x^{(1)}, x^{(2)}, ..., x^{(t)}$$\n\nthe probability distribution of the next word (output) is\n\n$$p(x^{(t+1)} \mid x^{(1)}, x^{(2)}, ..., x^{(t)})$$\n\nBengio et al. proposed a fixed-window neural language model, which is structurally analogous to the n-gram approach.\n\n
\n\" as the proctor started the clock the students opened their ____.\"\n
\n
\nWe have a moving window at time $t$, with an embedding vector representing each of the previous words in the window. For a window of size 3, these are $w_{t-1}, w_{t-2}, w_{t-3}$. The three vectors are concatenated together to produce the input $x$, and the task is to predict $w_t$.\n\n


\n\n- left-figure source: [CS224n lecture 5](http://web.stanford.edu/class/cs224n/slides/cs224n-2021-lecture05-rnnlm.pdf)\n- right-figure source: [Speech and Language Processing, Daniel Jurafsky, Third Edition](https://web.stanford.edu/~jurafsky/slp3/)\n\nFor this task, we represent each of the $N$ previous words as a one-hot vector of length $\mid V \mid$.\n\nThe forward equations for the neural language model:\n\n- Input $x_i \in \mathbb{R}^{1 \times \mid V \mid}$\n\n- Learning word embeddings: $e = concat(x_1 E^T, x_2 E^T, x_3 E^T)$\n * $E \in \mathbb{R}^{d \times \mid V \mid}$\n * $e \in \mathbb{R}^{1 \times 3d}$\n \n \n- $h = \sigma(e W^T + b_1)$\n * $W \in \mathbb{R}^{d_h \times 3d}$\n * $h \in \mathbb{R}^{1 \times d_h}$\n \n \n- $z = h U^T + b_2$\n * $U \in \mathbb{R}^{\mid V \mid \times d_h}$\n * $z \in \mathbb{R}^{1 \times \mid V \mid}$\n \n \n- $\hat{y} = softmax(z) \in \mathbb{R}^{1 \times \mid V \mid}$\n\n\nThe model is then trained: at each position, the negative log-likelihood loss for the true next word $w_t$ is\n\n$$ L = - \log p(w_t \mid w_{(t-1)}, w_{(t-2)}, ..., w_{(t-n+1)}) = - \log \hat{y}_{w_t} $$\n\n
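The forward pass and loss above can be sketched in numpy. All sizes and word ids below are arbitrary toy values, and the weights are random, so this is an untrained sketch of the shapes and operations rather than a full implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, d_h, n = 10, 4, 8, 3   # vocab size, embedding dim, hidden dim, window length

E  = rng.normal(size=(d, V))        # embedding matrix E (one column per word)
W  = rng.normal(size=(d_h, n * d))  # hidden-layer weights
b1 = np.zeros(d_h)
U  = rng.normal(size=(V, d_h))      # output-layer weights
b2 = np.zeros(V)

def one_hot(i):
    x = np.zeros(V)
    x[i] = 1.0
    return x

def forward(context_ids):
    # e = concat(x_1 E^T, ..., x_n E^T): look up and concatenate context embeddings
    e = np.concatenate([one_hot(i) @ E.T for i in context_ids])
    h = 1.0 / (1.0 + np.exp(-(e @ W.T + b1)))   # sigma = logistic sigmoid
    z = h @ U.T + b2                            # logits over the vocabulary
    z = z - z.max()                             # for numerical stability
    y_hat = np.exp(z) / np.exp(z).sum()         # softmax
    return y_hat

y_hat = forward([1, 5, 7])   # hypothetical ids of w_{t-3}, w_{t-2}, w_{t-1}
loss = -np.log(y_hat[2])     # negative log-likelihood if the true next word has id 2
```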
\n$$ \\theta_{t+1} = \\theta_t - \\eta \\frac{\\partial - \\log p(w_t \\mid w_{(t-1)}, w_{(t-2)}, ..., w_{(t-n+1)})}{\\partial \\theta}$$\n\n# word2vec\n\nword2vec is proposed in the paper called [Distributed Representations of Words and Phrases and their Compositionality (Mikolov et al., 2013)](https://arxiv.org/abs/1310.4546). It allows you to learn the high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. Word2vec algorithm uses Skip-gram [Efficient Estimation of Word Representations in Vector Space\n (Mikolov et al., 2013)](https://arxiv.org/abs/1301.3781) model to learn efficient vector representations. Those learned word vectors has interesting property, words with semantic and syntactic affinities give the necessary result in mathematical similarity operations.\n \nSuppose that you have a sliding window of a fixed size moving along a sentence: the word in the middle is the \u201ctarget\u201d and those on its left and right within the sliding window are the context words.\n\nThe skip-gram model is trained to predict the probabilities of a word being a context word for the given target.\n\n


\n\n\nFor example, consider this sentence,\n\n \"A change in Quantity also entails a change in Quality\"\nOur target and context pairs for a window size of 5:\n\n\n| Sliding window (size = 5) | Target word | Context |\n| ----------- | ----------- | ----------- |\n| \[A change in\] | a | change, in |\n| \[A change in Quantity \] | change | a, in, quantity |\n| \[A change in Quantity also\] | in | a, change, quantity, also |\n| ... | ... | ... |\n| \[entails a change in Quality\] | change | entails, a, in, quality |\n| \[a change in Quality\] | in | a, change, quality |\n| \[change in Quality\] | quality | change, in |\n\nEach context-target pair is treated as a new observation in the data. \n\nFor each position $t=1,..,T$, predict the context words within a window of fixed size $m$, given the center word $w_t$. In skip-gram we have an objective to maximize, the likelihood (equivalently, minimize the negative log-likelihood):\n\n
\n$$\\max \\limits_{\\theta} \\prod_{\\text{center}} \\prod_{\\text{context}} p(\\text{context}|\\text{center} ;\\theta)$$\n\n
\n$$= \\max \\limits_{\\theta} \\prod_{t=1}^T \\prod_{-c \\leq j \\leq c, j \\neq c} p(w_{t+j}|w_t; \\theta)$$\n\n
\n$$= \\min \\limits_{\\theta} -\\frac{1}{T} \\prod_{t=1}^T \\prod_{-c \\leq j \\leq c, j \\neq c} p(w_{t+j}|w_t; \\theta)$$\n\n
\n$$= \\min \\limits_{\\theta} -\\frac{1}{T} \\sum_{t=1}^T \\sum_{-c \\leq j \\leq c, j \\neq c} \\log p(w_{t+j}|w_t; \\theta)$$\n\nSo, How can we calculate those probabilities? Softmax gives the normalized probabilities.\n\n## Parameterization Of Skip-Gram Model\n\nLet's say $w_t$ is our target word and $w_c$ is current context word. The softmax is defined as\n\n
\n$$p(w_c \\mid w_t) = \\frac{\\exp(v_{w_c}^T v_{w_t})}{\\sum_{i=0}^{\\mid V \\mid} \\exp(v_{w_i}^T v_{w_t})}$$\n\nmaximizing this log-likelihood function under $v_{w_t}$ gives you the most likely value of the $v_{w_t}$ given the data.\n\n
\n$$ \\frac{\\partial}{\\partial v_{w_t}}\\cdot \\log \\frac{\\exp(v_{w_c}^T v_{w_t})}{\\sum_{i=0}^{\\mid V \\mid} \\exp(v_{w_i}^T v_{w_t})}$$\n\n
\n\n$$ = \\frac{\\partial}{\\partial v_{w_t}}\\cdot \\log \\underbrace{\\exp(v_{w_c}^T v_{w_t})}_{\\text{numerator}} - \\frac{\\partial}{\\partial v_{w_t}}\\cdot \\log \\underbrace{\\sum_{i=0}^{\\mid V \\mid} \\exp(v_{w_i}^T v_{w_t})}_{\\text{denominator}}$$\n\n
\n\n$$ \\frac{\\partial}{\\partial v_{w_t}} \\cdot v_{w_c}^T v_{w_t} = v_{w_c} \\; \\; (\\text{numerator})$$\n\nNow, it is time to derive denominator.\n
\n\n$$\\frac{\\partial}{\\partial v_{w_t}}\\cdot \\log \\sum_{i=0}^{\\mid V \\mid} \\exp(v_{w_i}^T v_{w_t}) = \\frac{1}{\\sum_{i=0}^{\\mid V \\mid} \\exp(v_{w_i}^T v_{w_t})} \\cdot \\frac{\\partial}{\\partial v_{w_t}} \\sum_{i=0}^{\\mid V \\mid} \\exp(v_{w_i}^T v_{w_t})$$\n\n
\n\n$$ = \\frac{1}{\\sum_{i=0}^{\\mid V \\mid} \\exp(v_{w_i}^T v_{w_t})} \\cdot \\sum_{i=0}^{\\mid V \\mid} \\frac{\\partial}{\\partial v_{w_t}} \\cdot \\exp(v_{w_i}^T v_{w_t})$$\n\n
\n\n$$ = \\frac{1}{\\sum_{i=0}^{\\mid V \\mid} \\exp(v_{w_i}^T v_{w_t})} \\cdot \\sum_{i=0}^{\\mid V \\mid} \\exp(v_{w_i}^T v_{w_t}) \\frac{\\partial}{\\partial v_{w_t}} v_{w_i}^T v_{w_t}$$\n\n
\n\n$$ = \\frac{\\sum_{i=0}^{\\mid V \\mid} \\exp(v_{w_i}^T v_{w_t}) \\cdot v_{w_i}}{\\sum_{i=0}^{\\mid V \\mid} \\exp(v_{w_i}^T v_{w_t})} \\;\\; (\\text{denominator}) $$\n\n\nTo sum up,\n\n$$\\frac{\\partial}{\\partial w_t} \\log p(w_c \\mid w_t) = v_{w_c} - \\frac{\\sum_{j=0}^{\\mid V \\mid} \\exp(v_{w_j}^T v_{w_t}) \\cdot v_{w_j}}{\\sum_{i=0}^{\\mid V \\mid} \\exp(v_{w_i}^T v_{w_t})}$$\n\n
\n\n$$ = v_{w_c} - \\sum_{j=0}^{\\mid V \\mid} \\frac{\\exp(v_{w_j}^T v_{w_t})}{\\sum_{i=0}^{\\mid V \\mid} \\exp(v_{w_i}^T v_{w_t})} \\cdot v_{w_j}$$\n\n
\n\n$$ \\underbrace{= v_{w_c} - \\sum_{j=0}^{\\mid V \\mid} p(w_j \\mid w_t) \\cdot v_{w_j}}_{\\nabla_{w_t}\\log p(w_c \\mid w_t)}$$\n\nThis is the observed representation subtract $\\mathop{\\mathbb{E}}[w_j \\mid w_t]$.\n\n## Negative Sampling (Noise Contrastive Estimation (NCE))\n\nThe Noise Contrastive Estimation (NCE) metric intends to differentiate the target word from noise samples using a logistic regression classifier [(Noise-contrastive estimation: A new estimation principle for unnormalized statistical models, Gutmann et al., 2010)](http://proceedings.mlr.press/v9/gutmann10a/gutmann10a.pdf). \n\nIn softmax computation, look at the denominator. The summation over $\\mid V\\mid$ is computationally expensive. The training or evaluation takes asymptotically $O(\\mid V \\mid)$. In a very large corpora, the most frequent words can easily occur hundreds or millions of times (\"in\", \"and\", \"the\", \"a\" etc.). Such words provides less information value than the rare words. For example, while the skip-gram model benefits from observing co-occurences of \"inzva\" and \"deep learning\", it benefits much less from observing the frequent co-occurences of \"inzva\" and \"the\".In a very large corpora, the most frequent words can easily occur hundreds or millions of times (\"in\", \"and\", \"the\", \"a\" etc.). Such words provides less information value than the rare words. For example, while the skip-gram model benefits from observing co-occurences of \"inzva\" and \"deep learning\", it benefits much less from observing the frequent co-occurences of \"inzva\" and \"the\". \n\nFor every training step, instead of looping over the entire vocabulary, we can just sample several negative examples! We \"sample\" from\na noise distribution $P_n(w)$ whose probabilities match the ordering of the frequency of the vocabulary.\n\nConsider a pair $(w_t, w_c)$ of word and context. Did this pair come from the training data? 
Let\u2019s denote by $p(D=1 \\mid w_t,w_c)$ the probability that $(w_t, w_c)$ came from the corpus data. Correspondingly $p(D=0 \\mid w_t,w_c)$ will be the probability that $(w_t, w_c)$ didn't come from the corpus data. First, let\u2019s model $p(D=1 \\mid w_t,w_c)$ with sigmoid:\n
\n\n$$p(D=1 \\mid w_t,w_c) = \\sigma(v_{w_c}^T v_{w_t}) = \\frac{1}{1 + \\exp(- v_{w_c}^T v_{w_t})}$$\n\nNow, we build a new objective function that tries to maximize the probability of a word and context being in the corpus data if it indeed is, and maximize the probability of a word and context not being in the corpus data if it indeed is not. Maximum likelihood says:\n\n$$ \\max \\prod_{(w_t, w_c) \\in D} p(D=1 \\mid w_t,w_c) \\times \\prod_{(w_t, w_c) \\in D'} p(D=0 \\mid w_t,w_c)$$\n\n
\n\n$$ = \\max \\prod_{(w_t, w_c) \\in D} p(D=1 \\mid w_t,w_c) \\times \\prod_{(w_t, w_c) \\in D'} 1 - p(D=1 \\mid w_t,w_c)$$\n\n
\n\n$$ = \\max \\sum_{(w_t, w_c) \\in D} \\log p(D=1 \\mid w_t,w_c) + \\sum_{(w_t, w_c) \\in D'} \\log (1 - p(D=1 \\mid w_t,w_c))$$\n\n
\n\n$$ = \\max \\sum_{(w_t, w_c) \\in D} \\log \\frac{1}{1 + \\exp(- v_{w_c}^T v_{w_t})} + \\sum_{(w_t, w_c) \\in D'} \\log \\left(1 - \\frac{1}{1 + \\exp(- v_{w_c}^T v_{w_t})}\\right)$$\n\n
\n\nNote that $\\frac{\\exp(-x)}{(1 + \\exp(-x))} \\times \\frac{\\exp(x)}{\\exp(x)} = \\frac{1}{(1 + \\exp(x))}$\n\n\n$$ = \\max \\sum_{(w_t, w_c) \\in D} \\log \\frac{1}{1 + \\exp(- v_{w_c}^T v_{w_t})} + \\sum_{(w_t, w_c) \\in D'} \\log \\frac{1}{1 + \\exp(v_{w_c}^T v_{w_t})}$$\n\nMaximizing the likelihood is the same as minimizing the negative log likelihood:\n\n
\n\n$$L = - \\sum_{(w_t, w_c) \\in D} \\log \\frac{1}{1 + \\exp(- v_{w_c}^T v_{w_t})} - \\sum_{(w_t, w_c) \\in D'} \\log \\frac{1}{1 + \\exp(v_{w_c}^T v_{w_t})}$$\n\nNote that $D'$ is a \"false\" or \"negative\" corpus. Where we would have sentences like \"the school is eaten by pilgrims\". Unnatural sentences that should get a low probability of ever occurring. We can generate $D'$ on the fly by randomly sampling this negative from the word bank.\n\nThe Negative Sampling (NEG) proposed in the original word2vec paper. NEG approximates the binary classifier\u2019s output with sigmoid functions as follows:\n\n$$\\begin{align}\np(d=1 \\vert v_{w_c}, v_{w_t}) &= \\sigma(v_{w_c}^T v_{w_t}) \\\\\np(d=0 \\vert v_{w_c}, v_{w_t}) &= 1 - \\sigma(v_{w_c}^T v_{w_t}) = \\sigma(-v_{w_c}^T v_{w_t})\n\\end{align}$$\n\nSo the objective is\n\n$$L = - [ \\log \\sigma(v_{w_c}^T v_{w_t}) + \\sum_{\\substack{i=1 \\\\ \\tilde{w}_i \\sim Q}}^K \\log \\sigma(v_{\\tilde{w}_i}^T v_{w_t})]$$\n\nIn the above formulation, ${v_{\\tilde{w}_i} \\mid i = 1 . . . K}$ are sampled from $P_n(w)$. How to define $P_n(w)$? In the word2vec paper $P_n(w)$ defined as \n\n$$P_n(w_i) = 1 - \\sqrt{\\frac{t}{freq(w_i)}} \\;\\; t \\approx 10^{-5}$$\n\nThis distribution assigns lower probability for lower frequency words, higher probability for higher frequency words.\n\nHence, this distribution is sampled form a unigram distribution $U(w)$ raised to the $\\frac{3}{4}$rd power.\n\nThe unigram distribuiton is defined as\n\n$$P_n(w)= \\left(\\frac{U(w)}{Z}\\right)^\\alpha$$\n\nOr just by Andrew NG's definition:\n\n$$P_n(w_i) = \\frac{freq(w_i)^\\frac{3}{4}}{\\sum_{j=0}^M freq(w_j)^\\frac{3}{4}}$$\n\nRaising the unigram distribution $U(w)$ to the power of $\\alpha$ has an effect of smoothing out the distribution. It attempts to combat the imbalance between common words and rare words by decreasing the probability of drawing common words, and increasing the probability drawing rare words.\n\n
\n\n\n```python\nimport numpy as np\nunig_dist = {'inzva': 0.023, 'deep': 0.12, 'learning': 0.34, 'the': 0.517}\nsum(unig_dist.values())\n```\n\n\n\n\n 1.0\n\n\n\n\n```python\nalpha = 3 / 4\nnoise_dist = {key: val ** alpha for key, val in unig_dist.items()}\nZ = sum(noise_dist.values())\nnoise_dist_normalized = {key: val / Z for key, val in noise_dist.items()}\nnoise_dist_normalized\n```\n\n\n\n\n {'inzva': 0.044813853132981724,\n 'deep': 0.15470428538870049,\n 'learning': 0.33785130228003507,\n 'the': 0.4626305591982827}\n\n\n\n\n```python\nsum(noise_dist_normalized.values())\n```\n\n\n\n\n 1.0\n\n\n\n\n```python\nK = 10\nnp.random.choice(list(noise_dist_normalized.keys()), size=K, p=list(noise_dist_normalized.values()))\n```\n\n\n\n\n array(['the', 'the', 'deep', 'learning', 'the', 'learning', 'the',\n 'inzva', 'the', 'learning'], dtype='<U8')\n\n\n\n_______\n\nYou are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. 
\n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. \n\nThe Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nFor this to be clearer, we consider an alternative interpretation of probability: *Frequentists*, who practice the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying across all these realities, the frequency of occurrences defines the probability. \n\nBayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. 
This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probability is natural to humans. 
We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. \n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). 
\n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\nIf frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. 
For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. 
Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\" )\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to } )\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. 
Bayesian inference merely uses it to connect prior probabilities $P(A)$ with an updated posterior probabilities $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example, I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data? More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data. \n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).\n\n\n```\n\"\"\"\nThe book uses a custom matplotlibrc file, which provides the unique styles for\nmatplotlib plots. If executing this book, and you wish to use the book's\nstyling, provided are two options:\n 1. Overwrite your own matplotlibrc file with the rc-file provided in the\n book's styles/ dir. See http://matplotlib.org/users/customizing.html\n 2. Also in the styles is bmh_matplotlibrc.json file. This can be used to\n update the styles in only this notebook. Try running the following code:\n\n import json\n s = json.load( open(\"../styles/bmh_matplotlibrc.json\") )\n matplotlib.rcParams.update(s)\n\n\"\"\"\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. 
LOOK AT PICTURE, MICHAEL!\n%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj. prior.\nfor k, N in enumerate(n_trials):\n sx = plt.subplot(len(n_trials)//2, 2, k+1) # integer division: subplot counts must be ints\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials)-1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head). 
As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
\n\n\n```\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\")\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. 
\n\n\n\n```\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n##Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. 
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e., they are a combination of the above two categories. \n\n###Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots $$\n\n$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\\lambda$ can be any positive number. By increasing $\\lambda$, we add more probability to larger values, and conversely by decreasing $\\lambda$ we add more probability to smaller values. One can describe $\\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. 
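To make the mass function concrete, here is a small verification (added as an illustration, not part of the original text) that the closed-form expression matches `scipy.stats.poisson`, and that the probabilities over the non-negative integers sum to 1 even though every non-negative integer receives positive probability:

```python
from math import exp, factorial

import scipy.stats as stats

lam = 4.25  # an arbitrary positive intensity (one of the values plotted below)

# Closed-form Poisson mass function: P(Z = k) = lam**k * exp(-lam) / k!
def poisson_pmf(k, lam):
    return lam**k * exp(-lam) / factorial(k)

# Agreement with scipy's implementation on the first few integers.
for k in range(20):
    assert abs(poisson_pmf(k, lam) - stats.poisson.pmf(k, lam)) < 1e-12

# The probabilities sum to 1; the tail beyond k = 100 is negligibly small
# for this lambda, so a truncated sum suffices.
total = sum(poisson_pmf(k, lam) for k in range(100))
assert abs(total - 1.0) < 1e-9
```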
\n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\")\n```\n\n###Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. 
But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n###But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. 
Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. 
Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. 
Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC\n-----\n\nPyMC is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. 
One of this book's main goals is to solve that problem, and also to demonstrate why PyMC is so cool.\n\nWe will model the problem above using PyMC. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC code is easy to read. The only novel thing should be the syntax, and I will interrupt the code to explain individual sections. 
Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables:\n\n\n```\nimport pymc as pm\n\nalpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\nlambda_1 = pm.Exponential(\"lambda_1\", alpha)\nlambda_2 = pm.Exponential(\"lambda_2\", alpha)\n\ntau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data)\n```\n\nIn the code above, we create the PyMC variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC's *stochastic variables*, so-called because they are treated by the back end as random number generators. We can demonstrate this fact by calling their built-in `random()` methods.\n\n\n```\nprint \"Random output:\", tau.random(), tau.random(), tau.random()\n```\n\n Random output: 39 10 32\n\n\n\n```\n@pm.deterministic\ndef lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):\n out = np.zeros(n_count_data)\n out[:tau] = lambda_1 # lambda before tau is lambda1\n out[tau:] = lambda_2 # lambda after (and including) tau is lambda2\n return out\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n`@pm.deterministic` is a decorator that tells PyMC this is a deterministic function. That is, if the arguments were deterministic (which they are not), the output would be deterministic as well. \n\n\n```\nobservation = pm.Poisson(\"obs\", lambda_, value=count_data, observed=True)\n\nmodel = pm.Model([observation, lambda_1, lambda_2, tau])\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `value` keyword. We also set `observed = True` to tell PyMC that this should stay fixed in our analysis. 
Finally, PyMC wants us to collect all the variables of interest and create a `Model` instance out of them. This makes our life easier when we retrieve the results.\n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```\n### Mysterious code to be explained in Chapter 3.\nmcmc = pm.MCMC(model)\nmcmc.sample(40000, 10000, 1)\n```\n\n [****************100%******************] 40000 of 40000 complete\n\n\n\n```\nlambda_1_samples = mcmc.trace('lambda_1')[:]\nlambda_2_samples = mcmc.trace('lambda_2')[:]\ntau_samples = mcmc.trace('tau')[:]\n```\n\n\n```\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n 
color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. 
Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. \n\n###Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. 
\n\n\n```\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. 
(In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```\n#type your code here.\n```\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```\n#type your code here.\n```\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45. That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC part. Just consider all instances where `tau_samples < 45`.)\n\n\n```\n#type your code here.\n```\n\n### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. [N is never large enough](http://andrewgelman.com/2005/07/n_is_never_large).\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Patil, A., D. Huard and C.J. Fonnesbeck. 2010. \nPyMC: Bayesian Stochastic Modelling in Python. Journal of Statistical \nSoftware, 35(4), pp. 1-81. \n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. 
\n\n```\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n## Data-driven Design and Analyses of Structures and Materials (3dasm)\n\n## Lecture 13\n\n### Miguel A. Bessa | M.A.Bessa@tudelft.nl | Associate Professor\n\n**What:** A lecture of the \"3dasm\" course\n\n**Where:** This notebook comes from this [repository](https://github.com/bessagroup/3dasm_course)\n\n**Reference for entire course:** Murphy, Kevin P. *Probabilistic machine learning: an introduction*. MIT press, 2022. Available online [here](https://probml.github.io/pml-book/book1.html)\n\n**How:** We try to follow Murphy's book closely, but the sequence of Chapters and Sections is different. The intention is to use notebooks as an introduction to the topic and Murphy's book as a resource.\n* If working offline: Go through this notebook and read the book.\n* If attending class in person: listen to me (!) but also go through the notebook in your laptop at the same time. Read the book.\n* If attending lectures remotely: listen to me (!) via Zoom and (ideally) use two screens where you have the notebook open in 1 screen and you see the lectures on the other. Read the book.\n\n**Optional reference (the \"bible\" by the \"bishop\"... pun intended 😆):** Bishop, Christopher M. *Pattern recognition and machine learning*. Springer Verlag, 2006.\n\n**References/resources to create this notebook:**\n* Chapter 11 of Murphy's book.\n\nApologies in advance if I missed some reference used in this notebook. Please contact me if that is the case, and I will gladly include it here.\n\n## **OPTION 1**. Run this notebook **locally in your computer**:\n1. Confirm that you have the 3dasm conda environment (see Lecture 1).\n\n2. Go to the 3dasm_course folder in your computer and pull the last updates of the [repository](https://github.com/bessagroup/3dasm_course):\n```\ngit pull\n```\n3. 
Open command window and load jupyter notebook (it will open in your internet browser):\n```\nconda activate 3dasm\njupyter notebook\n```\n4. Open notebook of this Lecture.\n\n## **OPTION 2**. Use **Google's Colab** (no installation required, but times out if idle):\n\n1. go to https://colab.research.google.com\n2. login\n3. File > Open notebook\n4. click on Github (no need to login or authorize anything)\n5. paste the git link: https://github.com/bessagroup/3dasm_course\n6. click search and then click on the notebook for this Lecture.\n\n\n```python\n# Basic plotting tools needed in Python.\n\nimport matplotlib.pyplot as plt # import plotting tools to create figures\nimport numpy as np # import numpy to handle a lot of things!\nfrom IPython.display import display, Math # to print with Latex math\n\n%config InlineBackend.figure_format = \"retina\" # render higher resolution images in the notebook\n#plt.style.use(\"seaborn\") # style for plotting that comes from seaborn\nplt.rcParams[\"figure.figsize\"] = (8,4) # rescale figure size appropriately for slides\n```\n\n## Outline for today\n\n* Derivation of different Linear Regression models\n - Picking up where we left off in Lecture 8.\n\n**Reading material**: This notebook + Chapter 11 of the book.\n\n## Recap of Lectures 8 and 9\n\nRecall our view of Linear regression models from a Bayesian perspective: it's all about the choice of **likelihood** and **prior**!\n\n| Likelihood | Prior (on the weights) | Posterior | Name of the model | Book section |\n|--- |--- |--- |--- |--- |\n| Gaussian | Uniform | Point estimate | Least Squares regression | 11.2.2 |\n| Gaussian | Gaussian | Point estimate | Ridge regression | 11.3 |\n| Gaussian | Laplace | Point estimate | Lasso regression | 11.4 |\n| Student-$t$ | Uniform | Point estimate | Robust regression | 11.6.1 |\n| Laplace | Uniform | Point estimate | Robust regression | 11.6.2 |\n| Gaussian | Gaussian | Gaussian | Bayesian linear regression | 11.7 |\n\nLet's continue 
along the lines of the Homework of Lecture 8, and derive a few of these models for the multidimensional case.\n\nWe are now totally prepared to derive any ML model in any dimension!\n\nIn Lecture 8 and its Homework we derived linear regression models using 1D input $x$, 1D output $y$, and a polynomial basis function $\\boldsymbol{\\phi}(x)$.\n\nWe will quickly recap what we did then, and then show how this generalizes to multidimensional inputs $\\mathbf{x}$ and for any kind of basis function $\\boldsymbol{\\phi}(\\mathbf{x})$.\n\n* Note: without loss of generality, we will keep considering a single output $y$.\n\n## Linear Least Squares: Linear regression with Gaussian likelihood, Uniform prior and posterior via Point estimate\n\n| Likelihood | Prior (on the weights) | Posterior | Name of the model | Book section |\n|--- |--- |--- |--- |--- |\n| Gaussian | Uniform | Point estimate | Least Squares regression | 11.2.2 |\n\nThis model assumes a Gaussian observation distribution with constant variance and \"linear\" mean (recall: linear in the unknowns $\\mathbf{z}$). If considering 1D input $x$ and 1D output $y$ the model is written as:\n1. Gaussian observation distribution: $p(y|x, \\mathbf{z}) = \\mathcal{N}(y| \\mu_{y|z} = \\mathbf{w}^T \\boldsymbol{\\phi}(x), \\sigma_{y|z}^2 = \\sigma^2)$\n\nwhere $\\mathbf{z} = (\\mathbf{w}, \\sigma)$ are all the unknown model parameters (hidden rv's).\n\n2. Uniform prior distribution for each hidden rv in $\\mathbf{z}$: $p(\\mathbf{z}) \\propto 1$\n\n3. 
MLE point estimate for posterior: $\\hat{\\mathbf{z}}_{\\text{mle}} = \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{i=1}^{N}\\log{ p(y=y_i|x=x_i, \\mathbf{z})}\\right]$\n\nFinal prediction is given by the PPD: $\\require{color}\n{\\color{orange}p(y|x, \\mathcal{D})} = \\int p(y|x,\\mathbf{z}) \\delta(\\mathbf{z}-\\hat{\\mathbf{z}}) dz = p(y|x, \\mathbf{z}=\\hat{\\mathbf{z}})$\n\n#### Notes\n\nCompared to the previous lectures, pay attention to the following updates in the notation of our 1D linear regression model:\n\n1. We are explicitly including the input $x$ in the probability densities, as we will no longer fix $x$ to a particular value like we did up to now.\n\n2. We are now considering more than one unknown rv and grouping them in the vector $\\mathbf{z}$.\n\n### Recall the car stopping distance problem\n\n\n\nLet's focus (again) on our favorite problem, but now we will not keep the velocity of the car $x$ fixed.\n\nIf we knew the \"ground truth\" of this problem, then it would be given by:\n\n$\\require{color}y = {\\color{red}z_1}\\cdot x + {\\color{red}z_2}\\cdot x^2$\n\n- $y$ is the **output**: the car stopping distance (in meters)\n- ${\\color{red}z_1}$ is a hidden variable: an rv representing the driver's reaction time (in seconds)\n- ${\\color{red}z_2}$ is another hidden variable: an rv that depends on the coefficient of friction, the inclination of the road, the weather, etc. (in m$^{-1}$s$^{2}$, so that $z_2 \\cdot x^2$ has units of meters).\n- $x$ is the **input**: constant car velocity (in m/s).\n\nwhere $z_1 \\sim \\mathcal{N}(\\mu_{z_1}=1.5,\\sigma_{z_1}^2=0.5^2)$, and $z_2 \\sim \\mathcal{N}(\\mu_{z_2}=0.1,\\sigma_{z_2}^2=0.01^2)$.\n\nUnsurprisingly, in Exercise 1 of Lecture 9 we saw that a linear model with a **quadratic polynomial basis function** predicts the stopping distance for this problem very well:\n\n1. 
Gaussian observation distribution: $p(y|x, \\mathbf{z}) = \\mathcal{N}(y| \\mu_{y|z} = \\mathbf{w}^T \\boldsymbol{\\phi}(x), \\sigma_{y|z}^2 = \\sigma^2)$\n\nwhere $\\mathbf{z} = (\\mathbf{w}, \\sigma)$ are all the hidden rv's of the model, i.e. the model parameters.\n* the vector $\\mathbf{w} = [w_0, w_1, w_2 ..., w_{M-1}]^T$ includes the **bias** term $w_0$ and the remaining **weights** $w_m$ with $m=0,..., M-1$.\n* the vector $\\boldsymbol{\\phi}(x) = [1, x, x^2, ..., x^{M-1}]^T$ includes the **basis functions**, which now correspond to a polynomial of degree $M-1$. When $M=3$ we have a quadratic polynomial basis (3 unknowns).\n\n2. Uniform prior distribution for each hidden rv in $\\mathbf{z}$: $p(\\mathbf{z}) \\propto 1$\n\n3. MLE point estimate for posterior: $\\hat{\\mathbf{z}}_{\\text{mle}} = \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{n=1}^{N}\\log{ p(y=y_n|x_n, \\mathbf{z})}\\right]$\n\nFor other problems, the polynomial degree $M-1$ of the basis functions may need to be different.\n\n* For example, also in Lecture 9 we saw that for a problem whose ground truth is $x\\sin{x}$ then the polynomial basis function needs to have a higher degree. However, even then the approximation is not brilliant because the ground truth is not really a polynomial!\n\nThere are other basis functions that can be adopted. For example, spline basis functions (Section 11.5 in the book), among many other possibilities (kernels!).\n\nAs we also mentioned, as long as the basis functions $\\boldsymbol{\\phi}(x)$ do not depend on any rv $\\mathbf{z}$ and the mean of observation distribution is defined linearly as a function of the rv's, then we still have a linear regression model.\n\nBut now let's consider problems that still have only one output $y$ but that can have multiple inputs $\\mathbf{x} = [x_1, x_2, ..., x_D]^T$ where $x_d$ is feature $d$ and where $d=1, ..., D$.\n\nIn this case, we can write the multidimensional linear regression model as:\n\n1. 
Gaussian observation distribution: $p(y|\\mathbf{x}, \\mathbf{z}) = \\mathcal{N}(y| \\mu_{y|z} = \\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}), \\sigma_{y|z}^2 = \\sigma^2)$\n\nwhere $\\mathbf{z} = (\\mathbf{w}, \\sigma)$ are all the hidden rv's of the model, i.e. the model parameters.\n* the vector $\\mathbf{w} = [w_0, w_1, w_2 ..., w_{M-1}]^T$ includes the **bias** term $w_0$ and the **weights** $w_m$ with $m=1,..., M-1$.\n* and the basis functions remain a vector, but each element now acts on the full input vector $\\mathbf{x}$ with $D$ features: $\\boldsymbol{\\phi}(\\mathbf{x}) = [\\phi_0(\\mathbf{x}), \\phi_1(\\mathbf{x}), \\phi_2(\\mathbf{x}) ..., \\phi_{M-1}(\\mathbf{x})]^T$\n\nand where the remaining choices for the linear regression model remain the same:\n\n2. Uniform prior distribution for each hidden rv in $\\mathbf{z}$: $p(\\mathbf{z}) \\propto 1$\n\n3. MLE point estimate for posterior: $\\hat{\\mathbf{z}}_{\\text{mle}} = \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{n=1}^{N}\\log{ p(y=y_n|\\mathbf{x} = \\mathbf{x}_n, \\mathbf{z})}\\right]$\n\nFinal prediction is given by the PPD: \n\n$$\\require{color}\n{\\color{orange}p(y|\\mathbf{x}, \\mathcal{D})} = \\int p(y|\\mathbf{x},\\mathbf{z}) \\delta(\\mathbf{z}-\\hat{\\mathbf{z}}) d\\mathbf{z} = p(y|\\mathbf{x}, \\mathbf{z}=\\hat{\\mathbf{z}})$$\n\nTherefore, we are capable of predicting the PPD by discovering the unknowns $\\mathbf{z}$ via the point estimate of the posterior, which requires solving the $\\mathrm{argmin}$ of the negative log likelihood.\n\nNow, let's focus on estimating the unknowns $\\mathbf{z}$ via the MLE point estimate of the posterior (maximum likelihood estimation).\n\nAs we saw in Lecture 8, finding the MLE is the same as finding the location of the minimum of the negative log likelihood.\n\nSince our observation distribution is Gaussian,\n\n$p(y|\\mathbf{x}, \\mathbf{z}) = \\mathcal{N}(y| \\mu_{y|z} = \\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}), \\sigma_{y|z}^2 = \\sigma^2)$\n\nthen the 
likelihood is given by (Lecture 5 but now with vectors):\n\n$$\n\\begin{align}\np(y=\\mathcal{D}_y | \\mathbf{x}=\\mathcal{D}_x, \\mathbf{z}) &= \\prod_{n=1}^{N} p(y=y_n|\\mathbf{x}=\\mathbf{x}_n, \\mathbf{z}) \\\\\n&= p(y=y_1|\\mathbf{x}=\\mathbf{x}_1, \\mathbf{z})p(y=y_2|\\mathbf{x}=\\mathbf{x}_2, \\mathbf{z}) \\cdots p(y=y_N|\\mathbf{x}=\\mathbf{x}_N, \\mathbf{z})\n\\end{align}\n$$\n\nwhich we already know that is also a multivariate Gaussian (unnormalized).\n\nBut, since we are not going fully Bayesian, the only thing we need to estimate is the location of the maximum of the likelihood (point estimate!):\n\n$$\\begin{align}\n\\hat{\\mathbf{z}}_{\\text{mle}} &= \\underset{z}{\\mathrm{argmin}}\\left[\\text{NLL}(\\mathbf{z})\\right]\n\\\\\n&= \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{n=1}^{N}\\log{ p(y=y_n|\\mathbf{x}=\\mathbf{x}_n, \\mathbf{z})}\\right]\n\\end{align}\n$$\n\nIn Lecture 9 we allowed scikit-learn to find the minimum for us! But today we will actually determine this minimum...\n\nYou already did this in the Homework of Lecture 8 for the 1D case with a linear polynomial basis and fixing $x$. The multivariate case for a general basis function and for different $\\mathbf{x}$ is just as easy! 
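As a small numerical preview of where this minimization leads, here is a hedged sketch (the data are synthetic, generated from the car-stopping ground truth assumed earlier; this is illustrative code, not course material): for fixed $\sigma$, minimizing the NLL reduces to a least-squares problem that numpy can solve directly via the pseudo-inverse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from the assumed ground truth: y = 1.5*x + 0.1*x^2 + noise
N = 200
x = rng.uniform(1.0, 30.0, size=N)  # car velocity (m/s)
y = 1.5 * x + 0.1 * x**2 + rng.normal(0.0, 1.0, size=N)

# Design matrix for a quadratic polynomial basis: phi(x) = [1, x, x^2]
Phi = np.column_stack([np.ones(N), x, x**2])

# MLE for the weights = least-squares solution via the pseudo-inverse
w_mle = np.linalg.pinv(Phi) @ y

# MLE for the noise variance = mean squared residual
sigma2_mle = np.mean((y - Phi @ w_mle) ** 2)

print(w_mle)       # should be close to [0, 1.5, 0.1]
print(sigma2_mle)  # should be close to the true noise variance (1.0)
```

Minimizing the NLL by hand, as done in the derivation, recovers exactly this pseudo-inverse solution.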
Especially when considering the variance of the observation distribution to be the same everywhere!\n\n$$\n\\begin{align}\n\\hat{\\mathbf{z}}_{\\text{mle}} &= \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{n=1}^{N}\\log{ p(y=y_n|\\mathbf{x}=\\mathbf{x}_n, \\mathbf{z})}\\right] \\\\\n&= \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{n=1}^{N}\\log{\\left( \\frac{1}{\\sqrt{2\\pi \\sigma^2}} \\exp\\left\\{ -\\frac{1}{2\\sigma^2}\\left[y_n-\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}_n)\\right]^2\\right\\}\\right)}\\right]\\\\\n&= \\underset{z}{\\mathrm{argmin}}\\left[\\frac{N}{2}\\log{\\left(2\\pi \\sigma^2\\right)}+\\frac{1}{2 \\sigma^2}\\sum_{n=1}^{N}\\left[y_n-\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}_n)\\right]^2 \\right]\\\\\n\\end{align}\n$$\n\nwhere we recall that the unknowns are $\\mathbf{z} = (\\mathbf{w}, \\sigma)$.\n\nTo find the minimum location we need to take the gradient of the $\\text{NLL}(\\mathbf{z})$ wrt $\\mathbf{z}$ and set it equal to zero:\n\n$$\n\\nabla_{\\mathbf{z}} \\text{NLL}(\\mathbf{z}) = \\mathbf{0}\n$$\n\nwhich can be written as,\n\n$$\n\\begin{bmatrix}\n\\frac{\\partial \\text{NLL}(\\mathbf{z})}{\\partial w_0}\\\\\n\\frac{\\partial \\text{NLL}(\\mathbf{z})}{\\partial w_1}\\\\\n\\vdots \\\\\n\\frac{\\partial \\text{NLL}(\\mathbf{z})}{\\partial w_{M-1}}\\\\\n\\frac{\\partial \\text{NLL}(\\mathbf{z})}{\\partial \\sigma^2}\\\\\n\\end{bmatrix} =\n\\begin{bmatrix}0\\\\\n0\\\\\n\\vdots \\\\\n0\\\\\n0\\\\\n\\end{bmatrix}\n$$\n\nWe can first solve this system of equations wrt $\\mathbf{w}$, and then solve wrt $\\sigma$.\n\nThen, solving first for the weights $\\mathbf{w}$:\n\n$$\n\\nabla_{\\mathbf{w}} \\text{NLL}(\\mathbf{w}, \\sigma^2) = \\mathbf{0}\n$$\n\nwe note that,\n\n$$\\begin{align}\n\\nabla_{\\mathbf{w}} \\text{NLL}(\\mathbf{w}, \\sigma^2) = \\mathbf{0} \\\\\n\\nabla_{\\mathbf{w}} \\left[\\frac{N}{2}\\log{\\left(2\\pi \\sigma^2\\right)}+\\frac{1}{2 \\sigma^2}\\sum_{n=1}^{N}\\left[y_n-\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}_n)\\right]^2 
\\right] = \\mathbf{0} \\\\\n\\nabla_{\\mathbf{w}} \\left[\\underbrace{\\frac{1}{2}\\sum_{n=1}^{N}\\left[y_n-\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}_n)\\right]^2}_{\\text{RSS}(\\mathbf{w})} \\right] = \\mathbf{0}\n\\end{align}\n$$\n\nNote: in Statistics the term in the argument is called **residual sum of squares**.\n\nWe can rewrite the above expression in a simpler form:\n\n$$\\begin{align}\n\\nabla_{\\mathbf{w}} \\left[\\frac{1}{2}\\sum_{n=1}^{N}\\left[y_n-\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}_n)\\right]^2 \\right] = 0 \\\\\n\\nabla_{\\mathbf{w}} \\left[\\frac{1}{2}\\left(\\boldsymbol{\\Phi}\\mathbf{w} - \\mathbf{y}\\right)^T \\left(\\boldsymbol{\\Phi}\\mathbf{w} - \\mathbf{y}\\right) \\right] = 0\n\\end{align}\n$$\n\nwhere we group all output measurements $y_n$ into a $N\\times 1$ vector $\\mathbf{y}$ and where we group all $N$ evaluations of the basis functions into the $N\\times M$ matrix:\n\n$$\n\\boldsymbol{\\Phi} = \\begin{bmatrix} \\phi_0(\\mathbf{x}_1) & \\phi_1(\\mathbf{x}_1) & \\cdots & \\phi_{M-1}(\\mathbf{x}_1) \\\\\n\\phi_0(\\mathbf{x}_2) & \\phi_1(\\mathbf{x}_2) & \\cdots & \\phi_{M-1}(\\mathbf{x}_2) \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\phi_0(\\mathbf{x}_N) & \\phi_1(\\mathbf{x}_N) & \\cdots & \\phi_{M-1}(\\mathbf{x}_N) \\\\\n\\end{bmatrix}\n$$\n\nSetting the gradient wrt all $\\mathbf{w}$ to zero gives,\n\n$$\\begin{align}\n\\nabla_{\\mathbf{w}} \\left[\\frac{1}{2}\\left(\\boldsymbol{\\Phi}\\mathbf{w} - \\mathbf{y}\\right)^T \\left(\\boldsymbol{\\Phi}\\mathbf{w} - \\mathbf{y}\\right) \\right] = \\mathbf{0}\\\\\n\\frac{1}{2} \\left[ \\left( \\boldsymbol{\\Phi}^T\\boldsymbol{\\Phi}+\\boldsymbol{\\Phi}^T\\boldsymbol{\\Phi} \\right)\\mathbf{w} -\\boldsymbol{\\Phi}^T\\mathbf{y}-\\boldsymbol{\\Phi}^T\\mathbf{y} \\right] = \\mathbf{0}\n\\end{align}\n$$\n\nwhere we used the identity $ \\frac{\\partial \\mathbf{x}^T\\mathbf{A}\\mathbf{x}}{\\partial \\mathbf{x}} = \\left( \\mathbf{A}+\\mathbf{A}^T\\right)\\mathbf{x}$. 
(See Section 7.8 of Murphy's book if you need to revise matrix calculus).\n\nFrom which we reach the MLE prediction for the weights:\n\n$$\n\\hat{\\mathbf{w}}_{\\text{mle}} = \\left(\\boldsymbol{\\Phi}^T \\boldsymbol{\\Phi} \\right)^{-1} \\boldsymbol{\\Phi}^T \\mathbf{y}\n$$\n\nWe conclude that our point estimate for the posterior (MLE) is: $\\hat{\\mathbf{w}}_{\\text{mle}} = \\left(\\boldsymbol{\\Phi}^T \\boldsymbol{\\Phi} \\right)^{-1} \\boldsymbol{\\Phi}^T \\mathbf{y}$\n\nwhere the quantity\n\n$$\\boldsymbol{\\Phi}^{\\dagger} = \\left(\\boldsymbol{\\Phi}^T \\boldsymbol{\\Phi} \\right)^{-1} \\boldsymbol{\\Phi}^T $$\n\nis known as the Moore-Penrose pseudo-inverse of the matrix $\\boldsymbol{\\Phi}$. It can be regarded as a generalization of the notion of matrix inverse to **nonsquare matrices**. In the special case of $\\boldsymbol{\\Phi}$ being square and invertible, using the property $\\left( \\mathbf{A}\\mathbf{B}\\right)^{-1}=\\mathbf{B}^{-1}\\mathbf{A}^{-1}$ we see that $\\boldsymbol{\\Phi}^{\\dagger}=\\boldsymbol{\\Phi}^{-1}$.\n\nAlso note that we could have calculated the bias term $w_0$ separately (which is convenient because for other models the bias usually has a uniform prior, unlike the remaining weights). 
If we do that we obtain:\n\n$$\n\\hat{w}_0 = \\bar{y}-\\sum_{m=1}^{M-1} w_m \\bar{\\phi}_m\n$$\n\nwhere we defined $\\bar{y} = \\frac{1}{N}\\sum_{n=1}^N y_n$ and $\\bar{\\phi}_m = \\frac{1}{N}\\sum_{n=1}^{N} \\phi_m(\\mathbf{x}_n)$.\n\nHaving found the solution for all $\\mathbf{w}$, we just need to find one last unknown from the point estimate of the posterior:\n\n$$\n\\nabla_{\\sigma^2} \\text{NLL}(\\mathbf{w}, \\sigma^2) = 0\n$$\n\nwhich is particularly simple:\n\n$$\n\\hat{\\sigma}^2_{\\text{mle}} = \\frac{1}{N}\\sum_{n=1}^{N} \\left[y_n -\\hat{\\mathbf{w}}^T_{\\text{mle}}\\boldsymbol{\\phi}(\\mathbf{x}_n)\\right]^2\n$$\n\nIn summary, the MLE point estimate of the posterior leads to the following estimation of parameters:\n\n$$\n\\hat{\\mathbf{w}}_{\\text{mle}} = \\left(\\boldsymbol{\\Phi}^T \\boldsymbol{\\Phi} \\right)^{-1} \\boldsymbol{\\Phi}^T \\mathbf{y}\n$$\n\n$$\n\\hat{\\sigma}^2_{\\text{mle}} = \\frac{1}{N}\\sum_{n=1}^{N} \\left[y_n -\\hat{\\mathbf{w}}^T_{\\text{mle}}\\boldsymbol{\\phi}(\\mathbf{x}_n)\\right]^2\n$$\n\nwhere the Moore-Penrose pseudo-inverse $\\boldsymbol{\\Phi}^{\\dagger} = \\left(\\boldsymbol{\\Phi}^T \\boldsymbol{\\Phi} \\right)^{-1} \\boldsymbol{\\Phi}^T $ needs to be calculated.\n\nThis calculation can be done efficiently by many libraries, including Numpy.\n\n* For example, scikit-learn uses a solver based on SVD (Singular Value Decomposition: the most common dimensionality reduction method) which is efficient when $N > M$ (overdetermined system). Book section 7.5 has an excellent summary of SVD, if you are curious.\n\n* Of course, if $N = M$ then there is a unique solution (you know that from the Midterm!) 
and the error on the training set becomes zero (linear regression becomes fully interpolatory).\n\n## Ridge regression: Linear regression with Gaussian likelihood, Gaussian prior and posterior via Point estimate\n\n| Likelihood | Prior (on the weights) | Posterior | Name of the model | Book section |\n|--- |--- |--- |--- |--- |\n| Gaussian | Gaussian | Point estimate | Ridge regression | 11.3 |\n\n1. Gaussian observation distribution: $p(y|\\mathbf{x}, \\mathbf{z}) = \\mathcal{N}(y| \\mu_{y|z} = \\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}), \\sigma_{y|z}^2 = \\sigma^2)$\n\nwhere $\\mathbf{z} = (\\mathbf{w}, \\sigma)$ are all the unknown model parameters (hidden rv's).\n\n2. But using a Gaussian prior for the weights $\\mathbf{w}$: $p(\\mathbf{w}) = \\mathcal{N}(\\mathbf{w}| \\mathbf{0}, \\overset{\\scriptscriptstyle <}{\\sigma}_w^2 \\mathbf{I})$\n\n3. MAP point estimate for posterior: $\\hat{\\mathbf{z}}_{\\text{map}} = \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{i=1}^{N}\\log{ p(y=y_i|\\mathbf{x}=\\mathbf{x}_i, \\mathbf{z})} - \\log{p(\\mathbf{w})}\\right]$\n\nFinal prediction is given by the PPD: $\\require{color}\n{\\color{orange}p(y|\\mathbf{x}, \\mathcal{D})} = \\int p(y|\\mathbf{x},\\mathbf{z}) \\delta(\\mathbf{z}-\\hat{\\mathbf{z}}_{\\text{map}}) d\\mathbf{z} = p(y|\\mathbf{x}, \\mathbf{z}=\\hat{\\mathbf{z}}_{\\text{map}})$\n\n#### Note on the choice of prior for linear regression\n\nThe Gaussian prior is usually imposed only on the weights. 
The bias and variance terms in the observation distribution still have Uniform priors because they do not contribute to overfitting.\n\nYou can see this from the expressions for $\\hat{w}_{0, \\text{mle}}$ and $\\sigma^2_{\\text{mle}}$, as they act on the global mean and MSE (mean squared error) of the residuals, respectively.\n\nComputing the MAP estimate is very similar to what we did for the MLE:\n\n$$\n\\begin{align}\n\\hat{\\mathbf{w}}_{\\text{map}} &= \\underset{w}{\\mathrm{argmin}}\\left[\\frac{1}{2\\sigma^2}\\left(\\boldsymbol{\\Phi}\\mathbf{w} - \\mathbf{y}\\right)^T \\left(\\boldsymbol{\\Phi}\\mathbf{w} - \\mathbf{y}\\right) + \\frac{1}{2\\overset{\\scriptscriptstyle <}{\\sigma}_w^2}\\mathbf{w}^T\\mathbf{w}\\right] \\\\\n&= \\underset{w}{\\mathrm{argmin}}\\left[\\text{RSS}(\\mathbf{w}) + \\lambda ||\\mathbf{w}||_2^2\\right]\n\\end{align}\n$$\n\nwhere $\\lambda = \\frac{\\sigma^2}{\\overset{\\scriptscriptstyle <}{\\sigma}_w^2}$ is proportional to the strength of the prior, and\n\n$||\\mathbf{w}||_2^2 = \\sum_{m=1}^{M-1} |w_m|^2 = \\mathbf{w}^T\\mathbf{w}$\n\nis the square of the $l_2$ norm $||\\mathbf{w}||_2 = \\sqrt{\\mathbf{w}^T\\mathbf{w}}$ of the vector $\\mathbf{w}$. Thus, we are penalizing weights that become too large in magnitude. 
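The shrinking effect of this penalty can be seen numerically. The sketch below is an illustrative example with invented data (and, for simplicity, it penalizes the bias weight as well): it evaluates the Ridge closed-form solution for increasing values of $\lambda$ and shows that the weight vector shrinks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented regression data with a cubic polynomial basis (M = 4)
N = 50
x = rng.uniform(-1.0, 1.0, size=N)
y = np.sin(3 * x) + rng.normal(0.0, 0.1, size=N)
Phi = np.column_stack([x**m for m in range(4)])

def ridge_map(Phi, y, lam):
    """MAP weights: (Phi^T Phi + lam*I)^{-1} Phi^T y."""
    M = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(M), Phi.T @ y)

# Stronger prior (larger lambda) => smaller ||w||_2
norms = [np.linalg.norm(ridge_map(Phi, y, lam)) for lam in (0.0, 1.0, 100.0)]
print(norms)  # decreasing sequence of weight norms
```

With `lam = 0` this reduces to the Least Squares solution of the previous section.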
In ML literature this is usually called $l_2$ **regularization** or **weight decay**, and is very widely used.\n\n\nIn the midterm, you will experience the difference between Linear Least Squares and Ridge regression, reporting on the influence of the prior strength for the latter.\n\nThen, solving the MAP first for the weights $\\mathbf{w}$, as we did for the MLE:\n\n$$\\begin{align}\n\\nabla_{\\mathbf{w}} \\left[\\frac{1}{2\\sigma^2}\\left(\\boldsymbol{\\Phi}\\mathbf{w} - \\mathbf{y}\\right)^T \\left(\\boldsymbol{\\Phi}\\mathbf{w} - \\mathbf{y}\\right) + \\frac{1}{2\\overset{\\scriptscriptstyle <}{\\sigma}_w^2}\\mathbf{w}^T\\mathbf{w}\\right] &= \\mathbf{0} \\\\\n\\nabla_{\\mathbf{w}} \\left[\\left(\\boldsymbol{\\Phi}\\mathbf{w} - \\mathbf{y}\\right)^T \\left(\\boldsymbol{\\Phi}\\mathbf{w} - \\mathbf{y}\\right) + \\lambda\\mathbf{w}^T\\mathbf{w}\\right] &= \\mathbf{0}\n\\end{align}\n$$\n\nfrom which we determine the MAP estimate for the weights $\\mathbf{w}$ as:\n\n$$\n\\hat{\\mathbf{w}}_{\\text{map}} = \\left( \\boldsymbol{\\Phi}^T\\boldsymbol{\\Phi} + \\lambda \\mathbf{I}_M \\right)^{-1} \\boldsymbol{\\Phi}^T \\mathbf{y} = \\left( \\sum_{n=1}^{N} \\boldsymbol{\\phi}(\\mathbf{x}_n)\\boldsymbol{\\phi}(\\mathbf{x}_n)^T + \\lambda \\mathbf{I}_M \\right)^{-1} \\left( \\sum_{n=1}^{N} \\boldsymbol{\\phi}(\\mathbf{x}_n) y_n \\right)\n$$\n\nOnce again, this can be solved using SVD or other methods to ensure that the Moore-Penrose pseudo-inverse is calculated properly.\n\n## Lasso regression: Linear regression with Gaussian likelihood, Laplace prior and posterior via Point estimate\n\n| Likelihood | Prior (on the weights) | Posterior | Name of the model | Book section |\n|--- |--- |--- |--- |--- |\n| Gaussian | Laplace | Point estimate | Lasso regression | 11.4 |\n\n1. 
Gaussian observation distribution: $p(y|\\mathbf{x}, \\mathbf{z}) = \\mathcal{N}(y| \\mu_{y|z} = \\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}), \\sigma_{y|z}^2 = \\sigma^2)$\n\nwhere $\\mathbf{z} = (\\mathbf{w}, \\sigma)$ are all the unknown model parameters (hidden rv's).\n\n2. But using a **Laplace** prior for the weights $\\mathbf{w}$: $p(\\mathbf{w}) = \\prod_{m=1}^{M-1}\\text{Lap}\\left(w_m| 0, 1/\\overset{\\scriptscriptstyle <}{\\lambda}_w\\right) \\propto \\prod_{m=1}^{M-1} \\exp{\\left[ -\\overset{\\scriptscriptstyle <}{\\lambda}_w |w_m|\\right]}$\n\n3. MAP point estimate for posterior: $\\hat{\\mathbf{z}}_{\\text{map}} = \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{i=1}^{N}\\log{ p(y=y_i|\\mathbf{x}=\\mathbf{x}_i, \\mathbf{z})} - \\log{p(\\mathbf{w})}\\right]$\n\nFinal prediction is given by the PPD: $\\require{color}\n{\\color{orange}p(y|\\mathbf{x}, \\mathcal{D})} = \\int p(y|\\mathbf{x},\\mathbf{z}) \\delta(\\mathbf{z}-\\hat{\\mathbf{z}}_{\\text{map}}) d\\mathbf{z} = p(y|\\mathbf{x}, \\mathbf{z}=\\hat{\\mathbf{z}}_{\\text{map}})$\n\n#### Note about the sparsity parameter $\\overset{\\scriptscriptstyle <}{\\lambda}_w$\n\nPlease note that the $\\overset{\\scriptscriptstyle <}{\\lambda}_w$ parameter defining the strength of the Laplace prior is different from the $\\lambda$ parameter defined in Ridge regression.\n\n#### Note about number of parameters $M$ and number of input dimensions $D$\n\nIf $M=D$ the method is called Lasso, but if there are more parameters than input variables $M>D$ then it is called Group Lasso (Section 11.4.7).\n\n* The next cells describe Lasso, which introduces sparsity by driving the weight associated with a particular variable to zero.\n\n* Group Lasso induces sparsity over groups of parameters associated with a given variable (which is an interesting way to induce sparsity in overparameterized models such as Artificial Neural Networks). 
It is derived in a very similar manner.\n\nComputing the MAP estimate:\n\n$$\n\\begin{align}\n\\hat{\\mathbf{w}}_{\\text{map}} &= \\underset{w}{\\mathrm{argmin}}\\left[\\left(\\boldsymbol{\\Phi}\\mathbf{w} - \\mathbf{y}\\right)^T \\left(\\boldsymbol{\\Phi}\\mathbf{w} - \\mathbf{y}\\right) + \\overset{\\scriptscriptstyle <}{\\lambda}_w||\\mathbf{w}||_1\\right]\n\\end{align}\n$$\n\nwhere $||\\mathbf{w}||_1 = \\sum_{m=1}^{M-1}|w_m|$ is called the $l_1$ norm of $\\mathbf{w}$. In ML literature this is called $l_1$ regularization. \n\n* Calculating the MAP for Lasso is not done the same way as for Ridge because the term $||\\mathbf{w}||_1$ is not differentiable whenever $w_m = 0$. In this case, the solution is found using hard- or soft-thresholding (See Section 11.4.3 in the book) because the gradient becomes a branch function.\n\n* A more important point is to see that **different types of prior distributions** introduce **different regularizations** on the weights, alleviating overfitting in a different manner.\n - For example, in the case of Lasso, since the Laplace prior puts more density around the mean (which is zero here) than the Gaussian prior, it tends to drive the weights to zero, i.e. it introduces **sparsity** when estimating the weights via MAP. The book has a beautiful discussion about this.\n\n## Bayesian linear regression: Linear regression with Gaussian likelihood, Gaussian prior and Gaussian posterior (Bayesian solution)\n\nAs we saw in the beginning of the Lecture, there are many more models we can define! The book covers quite a few!\n\n| Likelihood | Prior (on the weights) | Posterior | Name of the model | Book section |\n|--- |--- |--- |--- |--- |\n| Gaussian | Gaussian | Gaussian | Bayesian linear regression | 11.7 |\n\n1. 
Gaussian observation distribution (with known variance): $p(y|x, \\mathbf{z}) = \\mathcal{N}(y| \\mu_{y|z} = \\mathbf{w}^T \\boldsymbol{\\phi}(x), \\sigma_{y|z}^2 = \\sigma^2)$\n\nwhere $\\mathbf{z} = \\mathbf{w}$ are the unknown model parameters (hidden rv's); the noise variance $\\sigma^2$ is assumed known here.\n\n2. Gaussian prior for the weights $\\mathbf{w}$: $p(\\mathbf{w}) = \\mathcal{N}(\\mathbf{w}| \\overset{\\scriptscriptstyle <}{\\boldsymbol{\\mu}}_w, \\overset{\\scriptscriptstyle <}{\\boldsymbol{\\Sigma}}_w)$\n\n3. Gaussian posterior (obtained from Bayes rule)\n\nFinal prediction is given by the PPD: $\\require{color}\n{\\color{orange}p(y|x, \\mathcal{D})} = \\int p(y|x,\\mathbf{z}) p(\\mathbf{z}|\\mathcal{D}) dz$\n\nAt this point you may notice that we have already derived this model in Lecture 7.\n\nThe only differences are that now we have multiple weights $\\mathbf{w}$, multidimensional inputs $\\mathbf{x}$ and that we allow them to have different values.\n\nYet, the derivation is the same! We just need to bold the letters. 
Let's do it:\n\nThe likelihood is a product of univariate Gaussians over the data points, which combines into a multivariate Gaussian over $\\mathbf{y}$:\n\n$$\np(\\mathcal{D}|\\mathbf{w}, \\sigma^2) = \\prod_{n=1}^N p(y_n | \\mathbf{w}^T\\boldsymbol{\\phi}(\\mathbf{x}_n), \\sigma^2) = \\mathcal{N}(\\mathbf{y} | \\boldsymbol{\\Phi}\\mathbf{w}, \\sigma^2 \\mathbf{I}_N)\n$$\n\nwhere $\\mathbf{I}_N$ is the $N\\times N$ identity matrix, as defined previously.\n\nTo calculate the posterior, we also use the product of Gaussians rule (Lecture 5 in the cell after the Homework we also defined this rule for multivariate Gaussians!):\n\n$$\np(\\mathbf{w}| \\boldsymbol{\\Phi}, \\mathbf{y}, \\sigma^2) \\propto \\mathcal{N}(\\mathbf{y} | \\boldsymbol{\\Phi}\\mathbf{w}, \\sigma^2 \\mathbf{I}_N) \\mathcal{N}(\\mathbf{w}| \\overset{\\scriptscriptstyle <}{\\boldsymbol{\\mu}}_w, \\overset{\\scriptscriptstyle <}{\\boldsymbol{\\Sigma}}_w) = \\mathcal{N}(\\mathbf{w}| \\overset{\\scriptscriptstyle >}{\\boldsymbol{\\mu}}_w, \\overset{\\scriptscriptstyle >}{\\boldsymbol{\\Sigma}}_w)\n$$\n\nwhere the mean and covariance of the posterior are given by:\n\n$$\n\\overset{\\scriptscriptstyle >}{\\boldsymbol{\\mu}}_w = \\overset{\\scriptscriptstyle >}{\\boldsymbol{\\Sigma}}_w \\left( \\overset{\\scriptscriptstyle <}{\\boldsymbol{\\Sigma}}_w^{-1} \\overset{\\scriptscriptstyle <}{\\boldsymbol{\\mu}}_w + \\frac{1}{\\sigma^2}\\boldsymbol{\\Phi}^T\\mathbf{y}\\right)\n$$\n\n$$\n\\overset{\\scriptscriptstyle >}{\\boldsymbol{\\Sigma}}_w = \\left( \\overset{\\scriptscriptstyle <}{\\boldsymbol{\\Sigma}}_w^{-1} + \\frac{1}{\\sigma^2}\\boldsymbol{\\Phi}^T\\boldsymbol{\\Phi}\\right)^{-1}\n$$\n\nOften, we use a prior with zero mean $\\overset{\\scriptscriptstyle <}{\\boldsymbol{\\mu}}_w =\\mathbf{0}$ and diagonal covariance $\\overset{\\scriptscriptstyle <}{\\boldsymbol{\\Sigma}}_w = \\overset{\\scriptscriptstyle <}{\\sigma}_w^2 \\mathbf{I}_M$ like the prior we used in Ridge regression. 
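A compact numerical sketch of this posterior update (an illustration with invented 1D data and a quadratic basis; it uses the standard Gaussian-posterior formulas with the inverse prior covariance, and assumes the noise variance is known) also shows the key property of the PPD derived below: uncertainty grows away from the data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented 1D data, quadratic polynomial basis, known noise variance
sigma2 = 0.25
x = rng.uniform(0.0, 5.0, size=30)
y = 1.0 + 0.5 * x + rng.normal(0.0, np.sqrt(sigma2), size=30)

def basis(x):
    return np.column_stack([np.ones_like(x), x, x**2])

Phi = basis(x)
M = Phi.shape[1]

# Zero-mean isotropic Gaussian prior on the weights
prior_cov = 10.0 * np.eye(M)

# Posterior: Sigma_post = (Sigma_prior^{-1} + Phi^T Phi / sigma^2)^{-1}
Sigma_post = np.linalg.inv(np.linalg.inv(prior_cov) + Phi.T @ Phi / sigma2)
mu_post = Sigma_post @ (Phi.T @ y / sigma2)  # prior mean is zero

# PPD variance at a query point: sigma^2 + phi(x)^T Sigma_post phi(x)
def ppd_var(xq):
    phi = basis(np.atleast_1d(xq))[0]
    return sigma2 + phi @ Sigma_post @ phi

print(ppd_var(2.5), ppd_var(20.0))  # much larger far from the data
```

The query point 2.5 lies inside the training range [0, 5], while 20.0 lies far outside it, so the second PPD variance is much larger.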
In this case, the posterior mean becomes the same as the MAP estimate obtained from Ridge regression: $\\overset{\\scriptscriptstyle >}{\\boldsymbol{\\mu}}_w =\\left( \\lambda \\mathbf{I}_M + \\boldsymbol{\\Phi}^T\\boldsymbol{\\Phi}\\right)^{-1} \\boldsymbol{\\Phi}^T \\mathbf{y}$.\n\n#### Note on the reduction of the posterior mean for Bayesian linear regression to the MAP estimate of Ridge regression \n\nIf we use a prior with zero mean $\\overset{\\scriptscriptstyle <}{\\boldsymbol{\\mu}}_w =\\mathbf{0}$ and diagonal covariance $\\overset{\\scriptscriptstyle <}{\\boldsymbol{\\Sigma}}_w = \\overset{\\scriptscriptstyle <}{\\sigma}_w^2 \\mathbf{I}_M$ then the posterior mean becomes\n\n$$\\overset{\\scriptscriptstyle >}{\\boldsymbol{\\mu}}_w = \\frac{1}{\\sigma^2} \\overset{\\scriptscriptstyle >}{\\boldsymbol{\\Sigma}}_w \\boldsymbol{\\Phi}^T\\mathbf{y}\n$$\n\nwhich is the same as the Ridge regression estimate when we define $\\lambda = \\frac{\\sigma^2}{\\overset{\\scriptscriptstyle <}{\\sigma}_w^2}$,\n\n$\\overset{\\scriptscriptstyle >}{\\boldsymbol{\\mu}}_w =\\left( \\lambda \\mathbf{I}_M + \\boldsymbol{\\Phi}^T\\boldsymbol{\\Phi}\\right)^{-1} \\boldsymbol{\\Phi}^T \\mathbf{y}$\n\nHaving determined the posterior, we can determine what we really want: the PPD.\n\n$$\n\\begin{align}\n{\\color{orange}p(y|x, \\mathcal{D})} &= \\int p(y|x,\\mathbf{z}) p(\\mathbf{z}|\\mathcal{D}) d\\mathbf{z} \\\\\np(y|x, \\mathcal{D}, \\sigma^2) &= \\int p(y|x,\\mathbf{w}, \\sigma^2) p(\\mathbf{w}|\\mathcal{D}) d\\mathbf{w} \\\\\n&= \\int \\mathcal{N}(y | \\boldsymbol{\\phi}(\\mathbf{x})^T\\mathbf{w}, \\sigma^2) \\mathcal{N}(\\mathbf{w}| \\overset{\\scriptscriptstyle >}{\\boldsymbol{\\mu}}_w, \\overset{\\scriptscriptstyle >}{\\boldsymbol{\\Sigma}}_w) d\\mathbf{w} \\\\\n&= \\mathcal{N}\\left(y \\mid \\overset{\\scriptscriptstyle >}{\\boldsymbol{\\mu}}^T_w \\boldsymbol{\\phi}(\\mathbf{x}) \\,,\\, \\sigma^2 + \\boldsymbol{\\phi}(\\mathbf{x})^T 
\\overset{\\scriptscriptstyle >}{\\boldsymbol{\\Sigma}}_w \\boldsymbol{\\phi}(\\mathbf{x})\\right)\n\\end{align}\n$$\n\nwhere we recall the meaning of each term:\n\n* ($\\mathbf{x}$, $y$) is the point where we want to make a prediction\n* $\\boldsymbol{\\phi}(\\mathbf{x})$ is an $M\\times 1$ vector of basis functions\n* $\\overset{\\scriptscriptstyle >}{\\boldsymbol{\\mu}}_w$ is the $M\\times 1$ vector with the mean of the posterior for the weights\n* and $\\overset{\\scriptscriptstyle >}{\\boldsymbol{\\Sigma}}_w$ is the $M \\times M$ covariance matrix of the posterior for the weights.\n\n#### Note about integration of the PPD\n\nThe book uses the following notation:\n\n$$\\begin{align}\np(y|x, \\mathcal{D}, \\sigma^2) &= \\int p(y|x,\\mathbf{w}, \\sigma^2) p(\\mathbf{w}|\\mathcal{D}) d\\mathbf{w}\n\\end{align}\n$$\n\nWe could be more explicit and use the following notation:\n\n$$\\begin{align}\np(y|x, \\mathcal{D}, \\sigma^2) &= \\int p(y|x,\\mathbf{w}, \\sigma^2) p(\\mathbf{w}|\\mathcal{D}) d^{M}\\mathbf{w} \\\\\n&= \\int\\int\\cdots \\int p(y|x,\\mathbf{w}, \\sigma^2) p(\\mathbf{w}|\\mathcal{D}) dw_0 dw_1 \\cdots dw_{M-1}\n\\end{align}\n$$\n\nwhere it is clear that we are integrating over all weight variables (not integrating to get a vector).\n\nHowever, despite this notation being more precise, it is also less appealing. Just make sure you realize the type of integral that we are calculating when we are finding the PPD.\n\nObserving the obtained PPD,\n\n$$\n\\begin{align}\np(y|x, \\mathcal{D}, \\sigma^2) &= \\mathcal{N}\\left(y \\mid \\overset{\\scriptscriptstyle >}{\\boldsymbol{\\mu}}^T_w \\boldsymbol{\\phi}(\\mathbf{x}) \\,,\\, \\sigma^2 + \\boldsymbol{\\phi}(\\mathbf{x})^T \\overset{\\scriptscriptstyle >}{\\boldsymbol{\\Sigma}}_w \\boldsymbol{\\phi}(\\mathbf{x})\\right)\n\\end{align}\n$$\n\nwe see something very interesting: the variance of the PPD at a point $\\mathbf{x}$ after seeing $N$ data points depends on two terms:\n\n1. 
the variance of the observation noise, $\\sigma^2$ that we defined to be constant\n\n2. and the variance in the parameters obtained by the posterior $\\overset{\\scriptscriptstyle >}{\\boldsymbol{\\Sigma}}_w$\n\nThis means that the predicted uncertainty increases when $\\mathbf{x}$ is located far from the training data $\\mathcal{D}$, just like we want it to be! We are less certain about points **away** from our observations (training data).\n\nWe have seen this happening in Gaussian processes too... You will see that Gaussian processes are not too different from Bayesian linear regression...\n\n## Other linear regression models\n\nAs we saw in the beginning of the Lecture, there are many more models we can define! The book covers quite a few!\n\n| Likelihood | Prior (on the weights) | Posterior | Name of the model | Book section |\n|--- |--- |--- |--- |--- |\n| Gaussian | Uniform | Point estimate | Least Squares regression | 11.2.2 |\n| Gaussian | Gaussian | Point estimate | Ridge regression | 11.3 |\n| Gaussian | Laplace | Point estimate | Lasso regression | 11.4 |\n| Gaussian | Gaussian$\\times$Laplace | Point estimate | Elastic net | 11.4.8 |\n| Student-$t$ | Uniform | Point estimate | Robust regression | 11.6.1 |\n| Laplace | Uniform | Point estimate | Robust regression | 11.6.2 |\n| Gaussian | Gaussian | Gaussian | Bayesian linear regression | 11.7 |\n\nFor example, the \"Elastic net\" is literally the combination of Ridge regression and Lasso by defining a prior that depends on both a Gaussian and a Laplace distribution. This model was proposed in 2005. Five years later the \"Bayesian Elastic Net\" was also proposed, where the posterior is calculated in a Bayesian way (just like what we did for Bayesian linear regression).\n\n## In summary\n\nAlmost every ML model is derived following 4 steps:\n\n1. Define the Observation distribution and compute the likelihood.\n\n2. Define the prior and its parameters.\n\n3. 
Compute the posterior to estimate the unknown parameters (whether via a Bayesian approach or via a Point estimate).\n\n4. Compute the PPD.\n\nDone.\n\nIf you only care about the mean of the PPD (the mean of your prediction), then you can compute it directly without even thinking about uncertainty... But then, make sure you characterize the quality of your predictions using an appropriate **error metric** and use strategies like cross-validation, like you did for your Midterm Project!\n\n### See you next class\n\nHave fun!\n", "meta": {"hexsha": "68276fbcc256800b6ab2a62ec23c477a4df19705", "size": 48639, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lectures/Lecture13/3dasm_Lecture13.ipynb", "max_stars_repo_name": "shushu-qin/3dasm_course", "max_stars_repo_head_hexsha": "a53ce9f8d7c692a9b1356946ec11e60b35b7bbcd", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2022-02-07T18:45:48.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-23T21:45:27.000Z", "max_issues_repo_path": "Lectures/Lecture13/3dasm_Lecture13.ipynb", "max_issues_repo_name": "shushu-qin/3dasm_course", "max_issues_repo_head_hexsha": "a53ce9f8d7c692a9b1356946ec11e60b35b7bbcd", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lectures/Lecture13/3dasm_Lecture13.ipynb", "max_forks_repo_name": "shushu-qin/3dasm_course", "max_forks_repo_head_hexsha": "a53ce9f8d7c692a9b1356946ec11e60b35b7bbcd", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2022-02-07T18:45:49.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-25T19:30:17.000Z", "avg_line_length": 46.1033175355, "max_line_length": 585, "alphanum_fraction": 0.578753675, "converted": true, "num_tokens": 11209, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
NO\n2. NO", "lm_q1_score": 0.26284183737131667, "lm_q2_score": 0.3998116407397951, "lm_q1q2_score": 0.1050872262544885}} {"text": "In order to successfully complete this assignment you need to participate both individually and in groups during class on **Wednesday February 19**.\n\n# In-Class Assignment: Instructor template\n\n\n\n\nFigure From: https://www.researchgate.net/publication/221561415_Towards_energy-aware_scheduling_in_data_centers_using_machine_learning\n\n### Agenda for today's class (80 minutes)\n\n

\n\n\n\n1. [(20 minutes) Anthill Review](#anthill)\n2. [(20 minutes) Pre-class Assignment Review](#class_Assignment_Review)\n3. [(20 minutes) Machine Learning Rules of thumb](#Machine_Learning_Rules_of_thumb)\n4. [(20 minutes) Example Application: The Skin Cancer data set](#The_cancer_data_set)\n\n---\n\n# 1. Anthill Review\n\ngit clone https://gitlab.msu.edu/colbrydi/anthill.git\n\n\n\n----\n\n# 2. Pre-class Assignment Review\n\n\n* [0218--ML-pre-class-assignment](0218--ML-pre-class-assignment.ipynb)\n\n\n---\n\n\n\n# 3. Machine Learning Rules of thumb\n\n- [Ugly Duckling Theorem](https://en.wikipedia.org/wiki/Ugly_duckling_theorem)\n- [Curse of Dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality)\n\n\n✅ **DO THIS:** The above two properties can dominate how machine learning can be used. Briefly review both and discuss with your group how you think they relate to Machine Learning. Be prepared to discuss with the rest of the class. \n\n---\n\n\n# 4. Example Application: The Skin Cancer data set\n\nIn this example we will do the same calculation steps but using a different dataset provided by scikit-learn, called the \"cancer\" dataset. \n\n\nThe following commands load a dataset of measurements computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. 
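For reference, here is one possible end-to-end sketch of the pipeline that Steps B–D below ask for. This is an illustrative example, not the official solution: the split ratio and the `SVC` parameters are assumptions, and the variable names simply mirror those used in the prompts.

```python
# Illustrative sketch only (not the official solution): split the cancer
# data, train a Support Vector Machine, and score it on the held-out set.
# The test_size and SVC parameters below are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

cancer = load_breast_cancer()
train_vectors, test_vectors, train_labels, test_labels = train_test_split(
    cancer.data, cancer.target, test_size=0.25, random_state=42)

clf = SVC(kernel='linear')           # a simple, illustrative classifier
clf.fit(train_vectors, train_labels)
pred_labels = clf.predict(test_vectors)
print("test accuracy:", accuracy_score(test_labels, pred_labels))
```

Try writing your own version in the cells below before comparing with this sketch.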
\n\n\n```python\n%matplotlib inline \nimport matplotlib.pylab as plt\nimport numpy as np\nimport sympy as sym\nsym.init_printing()\n```\n\n\n```python\nfrom sklearn.datasets import load_breast_cancer\ncancer = load_breast_cancer()\ndata = cancer.data\ntarget = cancer.target\ndata.shape\n\n# Variables used for plotting\nlabels=cancer.target\ncdict={0:'red',1:'green'}\nlabl={0:'Malignant',1:'Benign'}\nmarker={0:'*',1:'o'}\nalpha={0:.5, 1:.5}\n```\n\n\n```python\nprint(cancer.DESCR)\n```\n\n## Step A: Feature Extraction\n\nThe following is a plot of just the first two features:\n\n\n```python\nplt.scatter(data[:,0],data[:,1], c=labels, s=30, cmap=plt.cm.rainbow);\n```\n\n## Step B: Splitting the dataset into training and testing sets\n\n✅ **DO THIS:** Split the cancer data into a training and testing set like we did in the previous example:\n\n\n```python\n##Put your code here\n```\n\n## Step C: Select and train a Classifier using the training dataset\n\n✅ **DO THIS:** Use the train_vectors set and train_labels to train a Support Vector Machine. *Hint:* You should be able to use the same code and parameters we used in the previous example: 
\n\n\n```python\n##Put your code here\n```\n\n-----\n### Congratulations, we're done!\n\n### Course Resources:\n\n- [Syllabus](https://docs.google.com/document/d/e/2PACX-1vTW4OzeUNhsuG_zvh06MT4r1tguxLFXGFCiMVN49XJJRYfekb7E6LyfGLP5tyLcHqcUNJjH2Vk-Isd8/pub)\n- [Preliminary Schedule](https://docs.google.com/spreadsheets/d/e/2PACX-1vRsQcyH1nlbSD4x7zvHWAbAcLrGWRo_RqeFyt2loQPgt3MxirrI5ADVFW9IoeLGSBSu_Uo6e8BE4IQc/pubhtml?gid=2142090757&single=true)\n- [D2L Page](https://d2l.msu.edu/d2l/home/912152)\n- [Git Repository](https://gitlab.msu.edu/colbrydi/cmse802-s20)\n\n© Copyright 2020, Michigan State University Board of Trustees\n", "meta": {"hexsha": "21b67ce070bad42178cb4e07e9f8f0d2a8d344e6", "size": 7429, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "cmse802-s20/0219_ML_in-class-assignment.ipynb", "max_stars_repo_name": "Diane1306/cmse802_git_CompModelling", "max_stars_repo_head_hexsha": "44e529c07ab2f7cdf801cc0df5d67c9992f1823d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "cmse802-s20/0219_ML_in-class-assignment.ipynb", "max_issues_repo_name": "Diane1306/cmse802_git_CompModelling", "max_issues_repo_head_hexsha": "44e529c07ab2f7cdf801cc0df5d67c9992f1823d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cmse802-s20/0219_ML_in-class-assignment.ipynb", "max_forks_repo_name": "Diane1306/cmse802_git_CompModelling", "max_forks_repo_head_hexsha": "44e529c07ab2f7cdf801cc0df5d67c9992f1823d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-11T07:41:07.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-11T07:41:07.000Z", "avg_line_length": 28.247148289, "max_line_length": 270, "alphanum_fraction": 0.5695248351, 
"converted": true, "num_tokens": 1083, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3486451488696663, "lm_q2_score": 0.3007455726738824, "lm_q1q2_score": 0.10485348495677878}} {"text": "\n \n \n
\n Run in Google Colab\n \n View source on GitHub\n
\n\n# Explainable Recommendations\n\n\nWhile the main objective of a recommender system is to identify the items to be recommended to a user, providing explanations to accompany the recommendations would be more persuasive as well as engender trust and transparency. There are different types of explanations. In this tutorial, we explore explainable recommendation approaches that rely on user product aspect-level sentiment for modeling explanations.\n\n## 1. Setup\n\n\n```\n!pip install --quiet cornac==1.6.1\n```\n\n\n```\nimport os\nimport sys\nfrom collections import defaultdict\n\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport cornac\nfrom cornac.utils import cache\nfrom cornac.datasets import amazon_toy\nfrom cornac.eval_methods import RatioSplit\nfrom cornac.data import Reader, SentimentModality\nfrom cornac.models import EFM, MTER, NMF, BPR\n\nprint(f\"System version: {sys.version}\")\nprint(f\"Cornac version: {cornac.__version__}\")\n\nSEED = 42\nVERBOSE = False\n```\n\n    System version: 3.6.9 (default, Apr 18 2020, 01:56:04) \n    [GCC 8.4.0]\n    Cornac version: 1.6.1\n\n\n## 2. Aspect-Level Sentiments\n\nTo model fine-grained product aspect ratings, several works rely on sentiment analysis to extract aspect sentiment from product reviews. In other words, each review is now a list of aspect sentiments. Along with product rating, we also have aspect sentiments expressed in users' reviews. 
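Concretely, each review reduces to a list of `(aspect, opinion, polarity)` triples. As a toy sketch (the users, items, and triples below are made up, not taken from the dataset), such triples aggregate into per-user mention counts and sentiment sums, which are the raw ingredients of the models covered later:

```
# Toy illustration with made-up data: aggregate (aspect, opinion, polarity)
# triples into per-(user, aspect) mention counts and sentiment sums.
from collections import defaultdict

reviews = [
    ("u1", "i1", [("paint", "great", 1)]),
    ("u1", "i2", [("game", "great", 1), ("money", "worth", 1)]),
    ("u1", "i3", [("game", "boring", -1)]),
]

mention_count = defaultdict(int)  # t: how often a user mentions an aspect
sentiment_sum = defaultdict(int)  # s: summed polarity per (user, aspect)
for user, item, triples in reviews:
    for aspect, opinion, polarity in triples:
        mention_count[(user, aspect)] += 1
        sentiment_sum[(user, aspect)] += polarity

print(mention_count[("u1", "game")])  # 2: mentioned in two reviews
print(sentiment_sum[("u1", "game")])  # 0: opposite polarities cancel
```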
Here, we work with Toys and Games dataset, a sub-category of [Amazon reviews](http://jmcauley.ucsd.edu/data/amazon/).\n\nBelow are some examples of aspect-level sentiments that have been extracted from users' reviews of items.\n\n\n```\nsentiment = amazon_toy.load_sentiment()\nsamples = sentiment[:10]\npd.DataFrame.from_dict({\n \"user\": [tup[0] for tup in samples],\n \"item\": [tup[1] for tup in samples],\n \"aspect-level sentiment\": [tup[2] for tup in samples]\n})\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
useritemaspect-level sentiment
0A012468118FTQAINEI0OQB00005BZM6[(paint, great, 1)]
1A012468118FTQAINEI0OQB001HA9JOA[(game, great, 1), (money, worth, 1)]
2A012468118FTQAINEI0OQB002BY2BVE[(paint, fun, 1), (item, well, 0)]
3A012468118FTQAINEI0OQB007U7M0LI[(price, sturdy, 1)]
4A012468118FTQAINEI0OQB00804BCO6[(gift, great, 1)]
5A0182108CPDLPRCXQUZQB002IUNLLK[(toy, best, 1), (heavy, cool, 1)]
6A0182108CPDLPRCXQUZQB007WYU7R8[(toy, great, 1)]
7A0182108CPDLPRCXQUZQB00ABY8WVO[(toy, love, 1)]
8A0182108CPDLPRCXQUZQB00AFP86KG[(toy, love, 1)]
9A0182108CPDLPRCXQUZQB00BJT861Q[(figure, well, 1), (toy, well, 1)]
\n
\n\n\n\n\n```\n# Load rating and sentiment information\nreader = Reader(min_item_freq=20)\nrating = amazon_toy.load_feedback(reader)\n\n# Use Sentiment Modality for aspect-level sentiment data\nsentiment_modality = SentimentModality(data=sentiment)\n\nrs = RatioSplit(\n data=rating,\n test_size=0.2,\n exclude_unknowns=True,\n sentiment=sentiment_modality,\n verbose=VERBOSE,\n seed=SEED,\n)\nprint(\"Total number of aspects:\", rs.sentiment.num_aspects)\nprint(\"Total number of opinions:\", rs.sentiment.num_opinions)\n\nid_aspect_map = {v:k for k, v in rs.sentiment.aspect_id_map.items()}\nid_opinion_map = {v:k for k, v in rs.sentiment.opinion_id_map.items()}\n```\n\n rating_threshold = 1.0\n exclude_unknowns = True\n ---\n Training data:\n Number of users = 17433\n Number of items = 2180\n Number of ratings = 66758\n Max rating = 5.0\n Min rating = 1.0\n Global mean = 4.3\n ---\n Test data:\n Number of users = 8955\n Number of items = 2167\n Number of ratings = 15777\n Number of unknown users = 0\n Number of unknown items = 0\n ---\n Total users = 17433\n Total items = 2180\n Total number of aspects: 429\n Total number of opinions: 2604\n\n\n## 3. Explicit Factor Model (EFM)\n\nEFM model extends Non-negative Matrix Factorization (NMF) with the additional information from **aspect-level sentiments**. The objective is to learn user, item, and aspect latent factors to explain user-item ratings, users' interest in certain aspects of the items, as well as the quality of items according to those aspects. In a nutshell, EFM factorizes three matrices: *rating matrix*, *user-aspect attention matrix*, and *item-aspect quality matrix*. Let's take a look at what the later two matrices are.\n\n\n```\nefm = EFM()\nefm.train_set = rs.train_set\n_, X, Y = efm._build_matrices(rs.train_set)\n```\n\n### User-Aspect Attention Matrix\n\nLet $\\mathcal{F} = \\{f_1, f_2, \\dots, f_F\\}$ be the set of aspects (e.g., screen, earphone). 
\n\nLet $\\mathbf{X} \\in \\mathbb{R}^{N \\times F}$ be a sparse aspect matrix for $N$ users and $F$ aspects, whereby each element $x_{if} \\in \\mathbf{X}$ indicates the degree of **attention** by user $i$ on aspect $f$, defined as follows:\n\n\\begin{equation}\nx_{if} = \\\n\\begin{cases}\n0, & \\text{if user $i$ never mentions aspect $f$} \\\\\n1 + (N-1)\\left(\\frac{2}{1+\\exp(-t_{if})}-1\\right), & \\text{otherwise}\n\\end{cases}\n\\end{equation}\n\nwhere $N=5$ is the highest rating score (note that $N$ is overloaded here: in this formula it denotes the rating scale, not the number of users), and $t_{if}$ is the frequency with which user $i$ mentions aspect $f$ across all her reviews.\n\nFor illustration purposes, we show a small matrix $\\mathbf{X}$ of 5 users and 5 aspects below.\n\n\n```\nn_users = 5\nn_aspects = 5\npd.DataFrame(\n    data=X[:n_users, :n_aspects].A,\n    index=[f\"User {u + 1}\" for u in np.arange(n_users)],\n    columns=[f\"{id_aspect_map[i]}\" for i in np.arange(n_aspects)]\n)\n```\n\n\n\n\n<div>
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
paintgamemoneyitemprice
User 10.00.0000000.00.0000002.848469
User 20.00.0000000.00.0000000.000000
User 30.04.6205930.02.8484694.620593
User 40.04.8561100.00.0000000.000000
User 50.00.0000000.00.0000000.000000
\n
\n\n\n\nIn the example above, we can see that *User 4* finds the aspect *game* important, whereas *User 3* is concerned with *game* as well as *price*.\n\n### Item-Aspect Quality Matrix\n\n\nLet $\\mathbf{Y} \\in \\mathbb{R}^{M \\times F}$ be a sparse aspect matrix for $M$ items and $F$ aspects, whereby $y_{jf} \\in \\mathbf{Y}$ indicates the **quality** of item $j$ on aspect $f$, defined as follows:\n\n\\begin{equation}\ny_{jf} = \\\n\\begin{cases}\n0, & \\text{if item $j$ was never reviewed on aspect $f$} \\\\\n1 + (N - 1) \\left( \\frac{1}{1+\\exp(-s_{jf})} \\right), & \\text{otherwise}\n\\end{cases}\n\\end{equation}\n\nwhere $s_{jf}$ is the sum of sentiment values with which item $j$ has been mentioned with regards to aspect $f$ across all its reviews.\n\nWe show a small matrix $\\mathbf{Y}$ of 5 items and 5 aspects below:\n\n\n```\nn_items = 5\nn_aspects = 5\npd.DataFrame(\n    data=Y[:n_items, :n_aspects].A,\n    index=[f\"Item {u + 1}\" for u in np.arange(n_items)],\n    columns=[f\"{id_aspect_map[i]}\" for i in np.arange(n_aspects)]\n)\n```\n\n\n\n\n<div>
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
paintgamemoneyitemprice
Item 13.9242340.0000003.0000000.0000003.924234
Item 20.0000000.0000000.0000004.5231884.523188
Item 30.0000004.5231880.0000000.0000000.000000
Item 40.0000000.0000000.0000000.0000003.924234
Item 53.9242340.0000003.9242344.5231884.523188
\n
\n\n\n\nWe see from the example above that *Item 3* has a positive quality in the aspect *game*, whereas *Item 5* has positive quality on the other 4 aspects.\n\n### Optimization\n\nAs these matrices are sparse, for prediction, EFM jointly factorizes $\\mathbf{X}$ and $\\mathbf{Y}$ along with the rating matrix $\\mathbf{R}$. Learning the latent factors can be done via minimizing the following loss function:\n\n\\begin{align}\n&\\mathcal{L}(\\mathbf{U_1, U_2, V, H_1, H_2} | \\lambda_x, \\lambda_y, \\lambda_u, \\lambda_h, \\lambda_v) = ||\\mathbf{U_1} \\mathbf{U_2}^T + \\mathbf{H_1} \\mathbf{H_2}^T - \\mathbf{R}||_F^2 + \\lambda_x ||\\mathbf{U_1} \\mathbf{V}^T - \\mathbf{X}||_F^2 + \\lambda_y ||\\mathbf{U_2} \\mathbf{V}^T - \\mathbf{Y}||_F^2 + \\lambda_u(||\\mathbf{U_1}||_F^2+||\\mathbf{U_2}||_F^2) + \\lambda_h(||\\mathbf{H_1}||_F^2+||\\mathbf{H_2}||_F^2) + \\lambda_v ||\\mathbf{V}||_F^2 \\\\\n&\\text{such that: } \\forall_{i, k} u_{ik} \\ge 0, \\forall_{j, k} v_{jk} \\ge 0\n\\end{align}\n\nThis can be solved as a constrained optimization problem. 
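Before moving on, the attention and quality mappings defined in the two sections above can be sanity-checked numerically. This small sketch (assuming the rating scale $N = 5$) reproduces the nonzero entries seen in the example matrices:

```
import math

N = 5  # highest rating score

def attention(t):
    # x_if = 1 + (N-1) * (2 / (1 + exp(-t)) - 1) for mention frequency t
    return 1 + (N - 1) * (2 / (1 + math.exp(-t)) - 1)

def quality(s):
    # y_jf = 1 + (N-1) / (1 + exp(-s)) for summed sentiment s
    return 1 + (N - 1) / (1 + math.exp(-s))

print(round(attention(1), 6))  # 2.848469, as in the attention matrix
print(round(quality(1), 6))    # 3.924234, as in the quality matrix
print(round(quality(-2), 6))   # negative sentiment maps below the midpoint
```

Note that both mappings squash unbounded counts and sentiment sums into the rating range $[1, N]$, which is what makes the joint factorization with $\mathbf{R}$ sensible.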
\n\n\nLet's conduct an experiment with the EFM model and compare with NMF as a baseline.\n\n\n```\nefm = EFM(\n    num_explicit_factors=40,\n    num_latent_factors=60,\n    num_most_cared_aspects=15,\n    rating_scale=5.0,\n    alpha=0.85,\n    lambda_x=1,\n    lambda_y=1,\n    lambda_u=0.01,\n    lambda_h=0.01,\n    lambda_v=0.01,\n    max_iter=100,\n    verbose=VERBOSE,\n    seed=SEED,\n)\n\n# compare to baseline NMF\nnmf = NMF(k=100, max_iter=100, verbose=VERBOSE, seed=SEED)\n\neval_metrics = [\n    cornac.metrics.RMSE(),\n    cornac.metrics.NDCG(k=50),\n    cornac.metrics.AUC()\n]\n\ncornac.Experiment(\n    eval_method=rs, models=[nmf, efm], metrics=eval_metrics\n).run()\n```\n\n    \n    TEST:\n    ...\n        |   RMSE |    AUC | NDCG@50 | Train (s) | Test (s)\n    --- + ------ + ------ + ------- + --------- + --------\n    NMF | 0.8027 | 0.5418 |  0.0093 |    4.6980 |   8.4513\n    EFM | 0.7315 | 0.5536 |  0.0105 |   11.2728 |  12.2117\n    \n\n\n### Refining Ranking Prediction\n\nWith the EFM model, you can refine the recommendation after training by experimenting with different values of: \n* `num_most_cared_aspects` ($k$): integer, value range $\\in[0, 429]$ as we have $429$ aspects in total\n* `alpha` $\\in [0,1]$\n\nThese parameters will affect the ranking performance of the EFM model, as the ranking score is predicted as follows:\n\n$$\nranking\\_score = \\alpha \\cdot \\frac{\\sum_{f \\in C_i}{\\hat{x}_{if}\\cdot\\hat{y}_{jf}}}{k \\cdot N} + (1-\\alpha)\\cdot\\hat{r}_{ij}\n$$\n\nwhere $C_i$ is the set of the $k$ aspects that user $i$ cares about most.\n\n\n```\nalpha = 0.9 # alpha value in range [0,1]\nnum_most_cared_aspects = 100\n\neval_metrics = [\n    cornac.metrics.NDCG(k=50),\n    cornac.metrics.AUC()\n]\n\ncornac.Experiment(\n    eval_method=rs,\n    models=[\n        EFM(\n            alpha=alpha,\n            num_most_cared_aspects=num_most_cared_aspects,\n            init_params={'U1': efm.U1, 'U2': efm.U2, 'H1': efm.H1, 'H2': efm.H2, 'V': efm.V},\n            trainable=False,\n            verbose=VERBOSE,\n            seed=SEED\n        )\n    ],\n    metrics=eval_metrics\n).run()\n```\n\n    \n    TEST:\n    ...\n        |    AUC | NDCG@50 | Train (s) | Test (s)\n    --- + ------ + ------- + --------- + --------\n    EFM | 0.5549 |  0.0107 |    
0.0008 |  15.3882\n    \n\n\n### Recommendation Explanation with EFM\n\nGiven a user and an item, the EFM model is capable of predicting the **user's attention scores** as well as the **item's quality scores** regarding the aspects. Those scores, with their corresponding aspects, form the explanation of why a user *likes* or *dislikes* an item.\n\nLet's take a look at an example below. Feel free to explore other users and items!\n\n\n```\nUIDX = 1\nIIDX = 4\nnum_top_cared_aspects = 10\n\nid_aspect_map = {v:k for k, v in rs.sentiment.aspect_id_map.items()}\n\npredicted_user_aspect_scores = np.dot(efm.U1[UIDX], efm.V.T)\npredicted_item_aspect_scores = np.dot(efm.U2[IIDX], efm.V.T)\n\ntop_cared_aspect_ids = (-predicted_user_aspect_scores).argsort()[:num_top_cared_aspects]\ntop_cared_aspects = [id_aspect_map[aid] for aid in top_cared_aspect_ids]\npd.DataFrame.from_dict({\n    \"aspect\": top_cared_aspects,\n    \"user_aspect_attention_score\": predicted_user_aspect_scores[top_cared_aspect_ids],\n    \"item_aspect_quality_score\": predicted_item_aspect_scores[top_cared_aspect_ids]\n})\n\n```\n\n\n\n\n<div>
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
aspectuser_aspect_attention_scoreitem_aspect_quality_score
0toy4.0615834.679122
1pieces3.6106683.844794
2game3.5714184.211304
3furby3.5710044.787230
4doll3.4803514.326976
5quality3.4689483.952711
6really3.3683404.586975
7gift3.3642894.410737
8also3.3608654.076249
9puzzle3.2942654.726530
\n
\n\n\n\nEFM takes an aspect with the **highest score** in `item_aspect_quality_score` as the well-performing aspect, and an aspect with the **lowest score** in `item_aspect_quality_score` as the poorly-performing aspect. See example explanations in their templates below.\n\n\n```\nperform_well_aspect = top_cared_aspects[predicted_item_aspect_scores[top_cared_aspect_ids].argmax()]\nperform_poorly_aspect = top_cared_aspects[predicted_item_aspect_scores[top_cared_aspect_ids].argmin()]\n\nexplanation = \\\nf\"You might be interested in [{perform_well_aspect}], on which this product performs well. \\n\\\nYou might be interested in [{perform_poorly_aspect}], on which this product performs poorly.\"\nprint(\"EFM explanation:\")\nprint(explanation)\n```\n\n    EFM explanation:\n    You might be interested in [furby], on which this product performs well. \n    You might be interested in [pieces], on which this product performs poorly.\n\n\n## 4. Multi-Task Explainable Recommendation (MTER)\n\nThe MTER model extends the concept of exploiting information from *Aspect-Level Sentiments* with tensor factorization (using Tucker Decomposition). The model takes three tensors as input. 
Let's go through each of them and see how they are constructed.\n\n\n\n### Tensor\\#1: User by Item by Aspect ($\\mathbf{X}$)\n\nLet $\\mathbf{R} \\in \\mathbb{R}^{N \\times M}$ be a sparse rating matrix of $N$ users and $M$ items.\n\nLet $\\mathbf{X} \\in \\mathbb{R}_{+}^{N \\times M \\times F}$ be a 3-dimensional tensor, where each element $x_{ijf}$ indicates a relationship between user $i$, item $j$, and aspect $f$:\n\n\\begin{equation}\nx_{ijf} = \\\n\\begin{cases}\n0, & \\text{if aspect $f$ has not been mentioned by user $i$ about item $j$} \\\\\n1 + (N-1)\\left(\\frac{1}{1+\\exp(-s_{ijf})}\\right), & \\text{otherwise}\n\\end{cases}\n\\end{equation}\n\nwhere $s_{ijf}$ is the sum of sentiment values with which item $j$ has been mentioned by user $i$ with regards to aspect $f$.\n\nWe can extend $\\mathbf{X}$ into $\\mathbf{\\tilde{X}}$ with the rating matrix $\\mathbf{R}$ as the last slice or the $(F + 1)^{\\mathrm{th}}$ aspect (i.e., $\\tilde{x}_{ij(F+1)} = r_{ij}$).\n\n### Tensor\\#2: User by Aspect by Opinion ($\\mathbf{Y}^{U}$)\n\nLet $\\mathbf{Y}^{U} \\in \\mathbb{R}_{+}^{N \\times F \\times O}$ be a 3-dimensional tensor, where each element $y^U_{ifo}$ indicates a relationship between user $i$, aspect $f$, and opinion $o$:\n\n\\begin{equation}\ny^U_{ifo} = \\\n\\begin{cases}\n0, & \\text{if user $i$ has not used opinion $o$ to describe aspect $f$ positively} \\\\\n1 + (N-1)\\left(\\frac{1}{1+\\exp(-t_{ifo})}\\right), & \\text{otherwise}\n\\end{cases}\n\\end{equation}\n\nwhere $t_{ifo}$ is the frequency with which user $i$ employs opinion $o$ to describe aspect $f$ positively across all her reviews.\n\n\n### Tensor\\#3: Item by Aspect by Opinion ($\\mathbf{Y}^{I}$)\n\nLet $\\mathbf{Y}^{I} \\in \\mathbb{R}_{+}^{M \\times F \\times O}$ be a 3-dimensional tensor, where each element $y^I_{jfo}$ indicates a relationship between item $j$, aspect $f$, and opinion $o$:\n\n\\begin{equation}\ny^I_{jfo} = \\\n\\begin{cases}\n0, & \\text{if item $j$ has not been described positively 
with opinion $o$ on aspect $f$} \\\\\n1 + (N-1)\\left(\\frac{1}{1+\\exp(-t_{jfo})}\\right), & \\text{otherwise}\n\\end{cases}\n\\end{equation}\n\nwhere $t_{jfo}$ is the frequency with which item $j$ has been described positively with opinion $o$ on aspect $f$ across all its reviews.\n\n### Optimization\n\nMTER employs Tucker Decomposition to jointly factorize three tensors $\\mathbf{\\tilde{X}}$, $\\mathbf{Y}^U$, and $\\mathbf{Y}^I$. In addition, MTER also optimizes for a ranking objective akin to BPR where:\n* Positive triples $\\mathbf{T} = \\{ j >_{i} j' | x_{ij(F+1)} \\in \\mathbf{R}^+ \\land x_{ij'(F+1)} \\in \\mathbf{R}^- \\}$\n* For aspect (F + 1), which is the overall rating, user $i$ prefers item $j$ to item $j'$\n\nLearning the latent factors can be done via minimizing the following loss function:\n\n\\begin{align}\n&\\mathcal{L}(\\mathbf{U, V, Z, W, C_1, C_2, C_3} | \\lambda_B, \\lambda) = ||\\mathbf{\\tilde{X}} - \\mathbf{\\hat{X}}||_F^2 + ||\\mathbf{Y}^U - \\hat{\\mathbf{Y}}^U||_F^2 + ||\\mathbf{Y}^I - \\hat{\\mathbf{Y}}^I||_F^2 + \\lambda_B \\sum_{j >_i j'} \\ln(1 + \\exp{(-(\\hat{x}_{ij(F+1)} - \\hat{x}_{ij'(F+1)}))}) + \\lambda(||\\mathbf{U}||_F^2+||\\mathbf{V}||_F^2+||\\mathbf{Z}||_F^2+||\\mathbf{W}||_F^2 +||\\mathbf{C_1}||_F^2 +||\\mathbf{C_2}||_F^2 +||\\mathbf{C_3}||_F^2) \\\\\n&\\text{such that: } \\mathbf{U} \\ge 0, \\mathbf{V} \\ge 0, \\mathbf{Z} \\ge 0, \\mathbf{W} \\ge 0, \\mathbf{C_1} \\ge 0, \\mathbf{C_2} \\ge 0, \\mathbf{C_3} \\ge 0\n\\end{align}\n\n\nThis can be solved as a constrained optimization problem. 
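The pairwise piece of this objective is the standard BPR penalty: minimizing $-\\ln \\sigma(\\Delta) = \\ln(1 + e^{-\\Delta})$ pushes the margin $\\Delta = \\hat{x}_{ij(F+1)} - \\hat{x}_{ij'(F+1)}$ of each preference pair to be large and positive. A quick numerical sketch:

```
import math

def bpr_pairwise_loss(delta):
    # -ln(sigmoid(delta)) == ln(1 + exp(-delta)): the BPR penalty for a
    # preference pair whose predicted margin is delta
    return math.log(1 + math.exp(-delta))

# The penalty shrinks as the preferred item j is ranked further above j':
for delta in (-1.0, 0.0, 1.0, 3.0):
    print(f"margin {delta:+.1f} -> loss {bpr_pairwise_loss(delta):.4f}")
```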
\n\n\nLet's conduct an experiment with the MTER model and compare with the BPR baseline.\n\n\n```\nmter = MTER(\n    n_user_factors=10,\n    n_item_factors=10,\n    n_aspect_factors=10,\n    n_opinion_factors=10,\n    n_bpr_samples=1000,\n    n_element_samples=50,\n    lambda_reg=0.1,\n    lambda_bpr=10,\n    max_iter=3000,\n    lr=0.5,\n    verbose=VERBOSE,\n    seed=SEED,\n)\n\n# compare to baseline BPR\nbpr = BPR(k=10, verbose=VERBOSE, seed=SEED)\n\neval_metrics = [\n    cornac.metrics.NDCG(k=50),\n    cornac.metrics.AUC()\n]\n\n# Instantiate and run an experiment\ncornac.Experiment(\n    eval_method=rs, models=[bpr, mter], metrics=eval_metrics,\n).run()\n```\n\n    \n    TEST:\n    ...\n         |    AUC | NDCG@50 | Train (s) | Test (s)\n    ---- + ------ + ------- + --------- + --------\n    BPR  | 0.6271 |  0.0314 |    1.9081 |   5.7188\n    MTER | 0.7185 |  0.0357 |   51.1022 |   6.8672\n    \n\n\n### Recommendation Explanation with MTER\n\n* To provide recommendations to user $i$, we rank items $j$ in terms of the predicted rating scores: $\\hat{x}_{ij(F+1)}$\n\n* To determine which aspect $f$ of product $j$ a user $i$ cares about, we rank aspects $f$ in terms of: $\\hat{x}_{ijf}$\n\n* To determine which opinion phrases $o$ to use when describing aspect $f$ while recommending item $j$ to user $i$, we rank phrases in terms of: $\\hat{y}^U_{ifo} \\times \\hat{y}^I_{jfo}$\n\nLet's explore an example below of how we can generate explanations for recommendations with the MTER model.\n\n\n```\nUIDX = 10\nIIDX = 10\nnum_top_aspects = 2\nnum_top_opinions = 3\n\nitem_aspect_ids = np.array(list(set([\n    tup[0]\n    for idx in rs.sentiment.item_sentiment[IIDX].values()\n    for tup in rs.sentiment.sentiment[idx]\n])))\n\nitem_opinion_ids = np.array(list(set([\n    tup[1]\n    for idx in rs.sentiment.item_sentiment[IIDX].values()\n    for tup in rs.sentiment.sentiment[idx]\n])))\n\nitem_aspects = [id_aspect_map[idx] for idx in item_aspect_ids]\n\nts1 = np.einsum(\"abc,a->bc\", mter.G1, mter.U[UIDX])\nts2 = np.einsum(\"bc,b->c\", ts1, mter.I[IIDX])\npredicted_aspect_scores = 
np.einsum(\"c,Mc->M\", ts2, mter.A)\n\ntop_aspect_ids = item_aspect_ids[(-predicted_aspect_scores[item_aspect_ids]).argsort()[:num_top_aspects]]\ntop_aspects = [id_aspect_map[idx] for idx in top_aspect_ids]\n\ntop_aspect_opinions = []\nmter_explanations = []\nfor top_aspect_id, top_aspect in zip(top_aspect_ids, top_aspects):\n ts1_G2 = np.einsum(\"abc,a->bc\", mter.G2, mter.U[UIDX])\n ts2_G2 = np.einsum(\"bc,b->c\", ts1_G2, mter.A[top_aspect_id])\n predicted_user_aspect_opinion_scores = np.einsum(\"c,Mc->M\", ts2_G2, mter.O)\n\n ts1_G3 = np.einsum(\"abc,a->bc\", mter.G3, mter.I[IIDX])\n ts2_G3 = np.einsum(\"bc,b->c\", ts1_G3, mter.A[top_aspect_id])\n predicted_item_aspect_opinion_scores = np.einsum(\"c,Mc->M\", ts2_G3, mter.O)\n\n predicted_aspect_opinion_scores = np.multiply(predicted_user_aspect_opinion_scores, predicted_item_aspect_opinion_scores)\n top_opinion_ids = item_opinion_ids[(-predicted_aspect_opinion_scores[item_opinion_ids]).argsort()[:num_top_opinions]]\n top_opinions = [id_opinion_map[idx] for idx in top_opinion_ids]\n top_aspect_opinions.append(top_opinions)\n\n # Generate explanation for top-1 aspect\n mter_explanations.append(f\"Its {top_aspect} is [{'] ['.join(top_opinions)}].\")\n\npd.DataFrame.from_dict({\"aspect\": top_aspects, \"top_opinions\": top_aspect_opinions, \"explanation\": mter_explanations})\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
aspecttop_opinionsexplanation
0really[disappointed, great, like]Its really is [disappointed] [great] [like].
1addition[disappointed, great, fun]Its addition is [disappointed] [great] [fun].
\n
\n\n\n\n## References\n\n1. Zhang, Y., Lai, G., Zhang, M., Zhang, Y., Liu, Y., & Ma, S. (2014). Explicit factor models for explainable recommendation based on phrase-level sentiment analysis. In SIGIR (pp. 83-92).\n2. Wang, N., Wang, H., Jia, Y., & Yin, Y. (2018). Explainable recommendation via multi-task learning in opinionated text data. In SIGIR (pp. 165-174). \n3. Cornac - A Comparative Framework for Multimodal Recommender Systems (https://cornac.preferred.ai/)\n\n", "meta": {"hexsha": "a17606ccedb362658578eba56d1e458b745ee8d4", "size": 51354, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_source/raw/preferredai_07_explanations.ipynb", "max_stars_repo_name": "sparsh-ai/reco-tutorials", "max_stars_repo_head_hexsha": "7be837ca7105424aaf43148b334dc9d2e0e66368", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-08-29T13:18:25.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-08T19:48:32.000Z", "max_issues_repo_path": "code/preferredai_07_explanations.ipynb", "max_issues_repo_name": "sparsh-ai/recsys-colab", "max_issues_repo_head_hexsha": "c0aa0dceca5a4d8ecd42b61c4e906035fe1614f3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "code/preferredai_07_explanations.ipynb", "max_forks_repo_name": "sparsh-ai/recsys-colab", "max_forks_repo_head_hexsha": "c0aa0dceca5a4d8ecd42b61c4e906035fe1614f3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-06-16T03:07:10.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-26T04:22:04.000Z", "avg_line_length": 38.0964391691, "max_line_length": 522, "alphanum_fraction": 0.4285157923, "converted": true, "num_tokens": 8517, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
NO", "lm_q1_score": 0.4882833952958347, "lm_q2_score": 0.2146914090629578, "lm_q1q2_score": 0.10483025015810797}} {"text": "```python\nfrom IPython.core.display import HTML, Image\ncss_file = 'style.css'\nHTML(open(css_file, 'r').read())\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n### Library import\n\n\n```python\nfrom sympy import init_printing, Matrix, symbols, eye, Rational\ninit_printing()\n```\n\n# Matrix multiplication, inverse and transpose\n\n## Multiplying matrices\n\nIn this section we approach matrix multiplication on four ways. Each method provides its own unique insight into the process of multiplication. We also take a look at _block multiplication_, where a matrix can be _broken up_ into specifically sized blocks that make multiplication easier.\n\n### Method 1\n\nConsider multiplying matrices $A$ and $B$ to result in $C$. We have already seen that the column size of the first must equal the row size of the second, $A_{n \\times m} B_{m \\times p} = C_{n \\times p}$.\n\nEvery element in $C$ (with a row value of $i$ and a column value of $j$) is the dot product of the corresponding row vector in $A$ and the column vector in $B$. In (1) we take the dot product of row `2` in the first matrix and column `1` in the second matrix to get $C_{21}$.\n\n$$ { \\begin{bmatrix} \\cdots & \\cdots & \\cdots \\\\ 3 & 2 & -1 \\\\ \\cdots & \\cdots & \\cdots \\\\ \\cdots & \\cdots & \\cdots \\end{bmatrix} }_{ 4\\times 3 }{ \\begin{bmatrix} 1 & \\vdots \\\\ 2 & \\vdots \\\\ 1 & \\vdots \\end{bmatrix} }_{ 3\\times 2 }={ \\begin{bmatrix} { c }_{ 11 } & { c }_{ 12 } \\\\ \\left( 3\\times 1 \\right) +\\left( 2\\times 2 \\right) +\\left( -1\\times 1 \\right) & { c }_{ 22 } \\\\ { c }_{ 31 } & { c }_{ 32 } \\\\ { c }_{ 41 } & { c }_{ 42 } \\end{bmatrix} }_{ 4\\times 2 } \\tag{1}$$\n\n### Method 2\n\nHere we are concerned with the columns of $B$. We note that each column in $C$ is the result of the matrix $A$ times the corresponding column in $B$. 
In this example, it would be akin to a matrix multiplied by a vector, $A \underline{x} = \underline{b}$. In this view, $B$ is simply made up of column vectors, each of which is multiplied by $A$.

### Method 3

Here every row in $A$ produces the same numbered row in $C$ by multiplying it with the matrix $B$. The rows of $C$ are linear combinations of the rows of $B$.

### Method 4

In method 1 we looked at $\text{row}_A \times \text{col}_B$, which produced a single number in $C$. What if we did $\text{col} \times \text{row}$?

The size of a column in $A$ might be written as $r_A \times 1$ and a row in $B$ as $1 \times s_B$. The result in $C$ would then be $r_A \times s_B$. Let's use `sympy` to investigate a simple example. Here we have $A_{3 \times 1}$ and $B_{1 \times 2}$, with $C_{3 \times 2}$.

```python
A = Matrix([[2], [3], [4]])
B = Matrix([[1, 6]])
A, B
```

```python
C = A * B
C
```

So in method 4, for the example below, $C$ is the sum of *the columns of* $A$ times *the rows of* $B$.

$$ \begin{bmatrix} { a }_{ 11 } & { a }_{ 12 } \\ { a }_{ 21 } & { a }_{ 22 } \\ { a }_{ 31 } & { a }_{ 32 } \end{bmatrix}\begin{bmatrix} { b }_{ 11 } & { b }_{ 12 } \\ b_{ 21 } & { b }_{ 22 } \end{bmatrix}=\begin{bmatrix} { a }_{ 11 } \\ { a }_{ 21 } \\ { a }_{ 31 } \end{bmatrix}\begin{bmatrix} { b }_{ 11 } & { b }_{ 12 } \end{bmatrix}+\begin{bmatrix} { a }_{ 12 } \\ { a }_{ 22 } \\ { a }_{ 32 } \end{bmatrix}\begin{bmatrix} { b }_{ 21 } & { b }_{ 22 } \end{bmatrix} \tag{2}$$

### Block multiplication

In essence, this is a combination of the above. 
We do the following:
+ Both $A$ and $B$ are broken into blocks of sizes that allow for multiplication.

We see an example of this in (3).

$$ \begin{bmatrix} { A }_{ 1 } & { A }_{ 2 } \\ { A }_{ 3 } & { A }_{ 4 } \end{bmatrix}\begin{bmatrix} { B }_{ 1 } & { B }_{ 2 } \\ { B }_{ 3 } & { B }_{ 4 } \end{bmatrix}=\begin{bmatrix} { A }_{ 1 }{ B }_{ 1 }+{ A }_{ 2 }{ B }_{ 3 } & { A }_{ 1 }{ B }_{ 2 }+{ A }_{ 2 }{ B }_{ 4 } \\ { A }_{ 3 }{ B }_{ 1 }+{ A }_{ 4 }{ B }_{ 3 } & { A }_{ 3 }{ B }_{ 2 }+{ A }_{ 4 }{ B }_{ 4 } \end{bmatrix} \tag{3}$$

## Inverses

We know by now that **if** the inverse of a matrix $A$ exists then $A^{-1} A = I$, the identity matrix. This is a _left inverse_, but what about a _right inverse_, $A A^{-1}$? This is also equal to the identity matrix (given that $A$ is invertible). Invertible matrices are also called *non-singular* matrices.

This raises the question: "_What is a non-invertible matrix?_". Non-invertible matrices are called *singular* matrices. An example is shown in (4).

$$ \begin{bmatrix}1&3\\2&6\end{bmatrix} \tag{4}$$

Note how the elements of row `2` are just $2$ times the elements in row `1`, i.e. row `2` is a linear combination of row `1`. The same goes for the columns: the second is the first with each element multiplied by $3$. 
More profoundly, note that you could find a column vector $\underline{x}$ such that $A \underline{x}= \underline{0}$.

In (5) below we see that $3$ times column `1` in $A$ plus $-1$ times column `2` gives *nothing*, the zero vector.

$$ \begin{bmatrix}1&3\\2&6\end{bmatrix}\begin{bmatrix}3\\-1\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix} \tag{5}$$

Let's construct an example, shown in (6) below as $A A^{-1} = I$ (given that $A^{-1}$ exists).

$$ \begin{bmatrix} 1 & 3 \\ 2 & 7 \end{bmatrix}\begin{bmatrix} a & c \\ b & d \end{bmatrix}=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \tag{6}$$

In essence, we have to solve two systems: $A$ times column `1` of $A^{-1}$ is column `1` of $I$, and likewise for column `2`. This is the Gauss-Jordan idea of solving two systems at once.

$$ \begin{bmatrix} 1 & 3 \\ 2 & 7 \end{bmatrix}\begin{bmatrix} a \\ b \end{bmatrix}=\begin{bmatrix} 1 \\ 0 \end{bmatrix}\\ \begin{bmatrix} 1 & 3 \\ 2 & 7 \end{bmatrix}\begin{bmatrix} c \\ d \end{bmatrix}=\begin{bmatrix} 0 \\ 1 \end{bmatrix} \tag{7}$$

This will give us the two columns of $A^{-1}$. 
We proceed by creating an augmented matrix of the coefficients (note carefully!).

$$ \begin{bmatrix} 1 & 3 & 1 & 0 \\ 2 & 7 & 0 & 1 \end{bmatrix} \tag{8}$$

Now we use elementary row operations to reach reduced row-echelon form (leading $1$'s in the pivot positions, with $0$'s below and above each).

$$ \begin{bmatrix} 1 & 3 & 1 & 0 \\ 2 & 7 & 0 & 1 \end{bmatrix}\rightarrow \begin{bmatrix} 1 & 3 & 1 & 0 \\ 0 & 1 & -2 & 1 \end{bmatrix}\rightarrow \begin{bmatrix} 1 & 0 & 7 & -3 \\ 0 & 1 & -2 & 1 \end{bmatrix} \tag{9}$$

We now read off the two columns of $A^{-1}$.

$$ \begin{bmatrix}7&-3\\-2&1\end{bmatrix} \tag{10}$$

## Example problems

### Example problem 1

Find the conditions on $a$ and $b$ that make the matrix $A$ invertible, and find $A^{-1}$.

$$ A=\begin{bmatrix} a & b & b \\ a & a & b \\ a & a & a \end{bmatrix} \tag{11} $$

#### Solution

1. A matrix is singular (non-invertible) if it has a row or column of zeros, so $a \ne 0$
2. 
We can also not have similar columns, so $a \\ne b$\n\nUsing Gauss-Jordan elimination we will have the following.\n\n\n\n$$ \\begin{bmatrix} a & b & b & 1 & 0 & 0 \\\\ a & a & b & 0 & 1 & 0 \\\\ a & a & a & 0 & 0 & 1 \\end{bmatrix}\\rightarrow \\begin{bmatrix} a & b & b & 1 & 0 & 0 \\\\ 0 & a-b & 0 & -1 & 1 & 0 \\\\ 0 & a-b & a-b & -1 & 0 & 1 \\end{bmatrix}\\rightarrow \\begin{bmatrix} a & b & b & 1 & 0 & 0 \\\\ 0 & a-b & 0 & -1 & 1 & 0 \\\\ 0 & 0 & a-b & 0 & -1 & 1 \\end{bmatrix}\\\\ \\rightarrow \\begin{bmatrix} a & b & b & 1 & 0 & 0 \\\\ 0 & \\frac { a-b }{ a-b } & 0 & \\frac { -1 }{ a-b } & \\frac { 1 }{ a-b } & 0 \\\\ 0 & 0 & \\frac { a-b }{ a-b } & 0 & \\frac { -1 }{ a-b } & \\frac { 1 }{ a-b } \\end{bmatrix}\\rightarrow \\begin{bmatrix} a & b & b & 1 & 0 & 0 \\\\ 0 & 1 & 0 & \\frac { -1 }{ a-b } & \\frac { 1 }{ a-b } & 0 \\\\ 0 & 0 & 1 & 0 & \\frac { -1 }{ a-b } & \\frac { 1 }{ a-b } \\end{bmatrix}\\\\ \\rightarrow \\begin{bmatrix} a & b & 0 & 1 & \\frac { 1 }{ a-b } \\left( b \\right) & -\\frac { 1 }{ a-b } \\left( b \\right) \\\\ 0 & 1 & 0 & \\frac { -1 }{ a-b } & \\frac { 1 }{ a-b } & 0 \\\\ 0 & 0 & 1 & 0 & \\frac { -1 }{ a-b } & \\frac { 1 }{ a-b } \\end{bmatrix}\\rightarrow \\begin{bmatrix} a & 0 & 0 & 1+\\frac { b }{ a-b } & 0 & -\\frac { 1 }{ a-b } \\left( b \\right) \\\\ 0 & 1 & 0 & \\frac { -1 }{ a-b } & \\frac { 1 }{ a-b } & 0 \\\\ 0 & 0 & 1 & 0 & \\frac { -1 }{ a-b } & \\frac { 1 }{ a-b } \\end{bmatrix}\\\\ \\rightarrow \\begin{bmatrix} 1 & 0 & 0 & \\frac { 1 }{ a-b } & 0 & -\\frac { 1 }{ a\\left( a-b \\right) } \\left( b \\right) \\\\ 0 & 1 & 0 & \\frac { -1 }{ a-b } & \\frac { 1 }{ a-b } & 0 \\\\ 0 & 0 & 1 & 0 & \\frac { -1 }{ a-b } & \\frac { 1 }{ a-b } \\end{bmatrix}\\\\ { A }^{ -1 }=\\frac { 1 }{ a-b } \\begin{bmatrix} 1 & 0 & \\frac { -b }{ a } \\\\ -1 & 1 & 0 \\\\ 0 & -1 & 1 \\end{bmatrix} \\tag{12}$$\n\nIf we were to construct this matrix using `sympy` and invert it using the `.inv()` method, we can deduce our findings through simple 
algebra, i.e. $a \ne b$, $a \ne 0$.

```python
a, b = symbols('a b')
```

```python
A = Matrix([[a, b, b], [a, a, b], [a, a, a]])
A
```

```python
A.inv()
```
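As a numerical check of the inverse obtained in (12) (an added sketch, not part of the original notebook), one can plug in sample values $a=3$, $b=1$ and verify $A A^{-1} = I$ with exact rational arithmetic:

```python
from fractions import Fraction

def matmul(X, Y):
    """Exact 3x3 matrix product using rationals."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

a, b = Fraction(3), Fraction(1)          # any sample values with a != 0, a != b
A = [[a, b, b], [a, a, b], [a, a, a]]

# Candidate inverse from (12): (1/(a-b)) * [[1, 0, -b/a], [-1, 1, 0], [0, -1, 1]]
c = 1 / (a - b)
A_inv = [[c, 0, -c * b / a], [-c, c, 0], [0, -c, c]]

I = [[Fraction(int(i == j)) for j in range(3)] for i in range(3)]
assert matmul(A, A_inv) == I             # A * A^{-1} = I, exactly
print("inverse formula checked for a=3, b=1")
```

The same check can be repeated for other admissible values of $a$ and $b$; it fails, as expected, when $a = 0$ or $a = b$.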
# 8. Shor's algorithm

Shor's factoring algorithm [1] is the most well-known example of a quantum algorithm outperforming the best known classical algorithms. The algorithm allows one to factor a number $N_0$ which is the product of two prime numbers, $N_0 = p \cdot q$, in polynomial time. This is possible thanks to a theorem in number theory which turns the problem of finding factors into the problem of finding the period of a periodic function. Using the quantum Fourier transform (QFT), one can then find the period of this function with high probability and complete the factorization.

The problem of factoring numbers has been studied for centuries and no efficient (polynomial-time) classical algorithm has ever been found. The difficulty of factoring numbers is the basis on which the most widespread encryption standard, RSA, is founded. Therefore, an algorithm which is capable of factoring numbers efficiently could have a huge impact on the security of electronic interactions. The discovery of this algorithm by Peter Shor in 1994 led to an explosion of the field of quantum computation because of its important application.

However, experimental implementation of the algorithm still remains a challenge because of the errors introduced by the large number of physical qubits and gates required to execute the algorithm. Proof-of-principle demonstrations of Shor's factoring algorithm, factoring the smallest admissible number $N_0=15$, have been done in setups such as NMR [2], trapped ions [3], photons [4-6], photonic chips [7] and superconducting qubits [8,9]. These experiments show how complex it is to implement the actual algorithm, as even for the smallest number which allows the algorithm to be run ($N_0=15$), extreme simplifications must be made to run the algorithm [10]. 
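For contrast with the quantum approach developed below, here is a minimal sketch (our addition, not part of the original notebook) of the obvious classical strategy, trial division; its running time grows exponentially in the number of bits of $N_0$, which is why factoring is considered classically hard. The function name `smallest_factor` is our own.

```python
def smallest_factor(n):
    """Naive trial division: try every candidate divisor up to sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d          # first hit is the smallest prime factor
        d += 1
    return n                  # n itself is prime

print(smallest_factor(15), smallest_factor(21))  # -> 3 3
```

For an $n$-bit number this loop may run on the order of $2^{n/2}$ times, whereas Shor's algorithm scales polynomially in $n$.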
## 8.1 Mathematical preliminaries

### 8.1.1 Modular arithmetic
When division between integers $a$ (the dividend) and $b$ (the divisor) is taught for the first time, one is usually introduced to the ideas of quotient $q$ and remainder $r$. The quotient is the number of times the dividend $a$ contains the divisor $b$; the remainder is the left-over that, added to the product of the quotient and the divisor, returns the dividend: $r + q\times b = a$, or $r = a - q\times b$.

#### Example:
Consider the case $a=40$ and $b=30$, and let us find the quotient and the remainder of the division of $a$ by $b$: $40/30 \rightarrow q = 1, \, r = 10$.

Therefore, the quotient is $q=1$ and the remainder is $r = 10$.

By looking at the division between integers in this way, one can easily understand the fundamentals of modular arithmetic. Modular arithmetic is a set of rules for handling operations between integer numbers. In modular arithmetic, integers that differ by a multiple of a fixed number $N_0$ are considered equivalent. Thus, by selecting an integer $N_0$, the set of all integers is restricted to the integers in the interval $\left[ 0 , N_0-1 \right]$.
To visualize this, one can think about a twelve-hour clock. On a clock, the only integers allowed lie in the interval $\left[ 0 , 11 \right]$; once this interval is exceeded one goes back to the beginning of the interval. Thus, $9+5 = 2 \, \left( \text{mod} \, 12 \right)$.
More formally, two integers $a$ and $b$ are congruent modulo $N_0$ if their difference $a-b$ is an integer multiple of $N_0$. That is, if $a-b = k \cdot N_0$, where $k$ is an integer, then $a \equiv b \, \left( \text{mod} \, N_0 \right)$.
Equivalently, if $a \equiv b \, \left( \text{mod} \, N_0 \right)$, the division $\frac{a-b}{N_0}$ has zero remainder, and $b \, \left( \text{mod} \, N_0 \right)$ denotes the remainder of the division of $b$ by $N_0$. 
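These definitions map directly onto Python's built-in `divmod` and `%` operators; a quick check of the worked numbers (an added sketch; note that $40 = 1\times 30 + 10$):

```python
# Quotient and remainder of 40 divided by 30: 40 = 1*30 + 10
q, r = divmod(40, 30)
print(q, r)                      # -> 1 10

# Congruence: 38 and 14 differ by a multiple of 12, so they agree mod 12
print(38 % 12, 14 % 12)          # -> 2 2
print((38 - 14) % 12 == 0)       # -> True
```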
Thus, both numbers have the same remainder when divided by $N_0$.

#### Example:
Equivalence of two integers modulo $N_0$:
$38 \equiv 14 \, \left( \text{mod} \, 12 \right)$
$38 - 14 = 24 = 2 \cdot 12 $

Both $\frac{38}{12}$ and $\frac{14}{12}$ have remainder $2$.

### 8.1.2 Continued Fraction Algorithm

The continued fraction algorithm is used to approximate a fraction $\frac{m}{n}$ by simpler fractions $\frac{u}{t}$. In general, the algorithm allows one to rewrite any real number as a finite/infinite sum of an integer part plus the reciprocal of a number. So, consider the fraction $\frac{m}{n}$; its continued fraction expansion is

$$ \frac{m}{n} = a_0 + \frac{1}{a_1 + \frac{1}{a_2+ \frac{1}{a_3+\frac{1}{a_4+\frac{1}{a_5+\frac{1}{...}}}}}} $$

Using the integers $a_i$, one can approximate the fraction $\frac{m}{n}$ by the convergents $\frac{u_i}{t_i}$. To find $u_i$ and $t_i$ one can use the following recursion:

$u_0 = a_0$, $u_1=1+a_0a_1$, ..., $u_i = a_i u_{i-1} + u_{i-2}$
$t_0=1$, $t_1 = a_1$, ..., $t_i = a_i t_{i-1} + t_{i-2}$

which gives the successive approximations of $\frac{m}{n}$:

$\frac{m}{n} \approx \frac{u_0}{t_0}$, $\frac{m}{n} \approx \frac{u_1}{t_1}$, ..., $\frac{m}{n} \approx \frac{u_i}{t_i}$

## 8.2 Factoring and order finding 

The possibility of running an efficient algorithm for factoring a product of two prime numbers arises from: i. the connection between factoring and order finding, ii. the ability of quantum computers to deal efficiently with periodic functions.
In this Section, the connection between the problem of factoring and the problem of finding the period of a periodic function, also called $\textit{order finding}$, is explained.
It is important to note that this equivalence holds in the realm of modular arithmetic.
Let us start from the concept of the order of a number. 
Given an integer $a$, the order of $a$ is the smallest integer $r$ for which the following condition holds

\begin{equation}
a^r \, \text{mod} \, N_0 = 1 \tag{1}
\end{equation}

#### Example:
Consider the case $a=2$ and $N_0=21$, and let us find the order of $a$ by trying different exponents $r$ sequentially until we find the one for which condition (1) is satisfied:
	$2^1 (\text{mod} \, 21) = 2$, 
	$2^2 (\text{mod} \, 21) = 4$, 
	$2^3 (\text{mod} \, 21) = 8$, 
	$2^4 (\text{mod} \, 21) = 16$,
	$2^5 (\text{mod} \, 21) = 11$, 
	$2^6 (\text{mod} \, 21) = 1$.

Therefore, the order of $a=2$ is $r=6$.

Now that we have clarified what the order of a number is, let us explore the connection between order finding and factoring.
Start by rewriting Eq. (1) as $a^r - 1 \, \text{mod} \, N_0 = 0$, which means that $a^r - 1$ is a multiple of $N_0$. Assuming that $r$ is even, we can write $ a^r - 1 \, \text{mod} \, N_0 = \left( a^{\frac{r}{2}} - 1\right)\left( a^{\frac{r}{2}} + 1\right) \, \text{mod} \, N_0 $. Finally, we check whether either $a^{\frac{r}{2}} - 1$ or $a^{\frac{r}{2}} + 1$ shares any factors with $N_0$. This can be checked by taking their greatest common divisor (gcd) with $N_0$, which outputs the biggest number that divides both inputs. If the output of $\text{gcd}\left( a^{\frac{r}{2}} \pm 1,N_0 \right)$ is greater than one, then that is one of the factors, the other being $N_0/ \text{gcd}\left( a^{\frac{r}{2}} \pm 1,N_0 \right)$.

#### Example:
Again consider the case $a=2$ and $N_0=21$; in the previous example we found that the order is $r = 6$. Let us now show how to find the two prime factors whose product gives $21$. 
$2^6 - 1 \, (\text{mod} \, 21) = \left( 2^3 +1 \right)\left( 2^3 -1 \right) (\text{mod} \, 21) $.
We need to check if the two numbers $ \left( 2^3 +1 \right) = 9$ and $ \left( 2^3 -1 \right)=7$ have any factors in common with $21$.
Let's start by checking $ \left( 2^3 +1 \right)$: $\text{gcd}\left( 9, 21\right) = 3$.
We obtained a $\text{gcd} \neq 1$, which means that there is a common factor between $a^{\frac{r}{2}} + 1$ and $N_0$. We can now calculate the first factor of $21$:
$\text{factor}_1 = \frac{21}{\text{gcd}\left( 9, 21\right)} = \frac{21}{3} = 7$.
The second factor follows immediately from $\text{factor}_2 = \frac{21}{\text{factor}_1} = \frac{21}{7} = 3$.

Therefore, we find the two prime factors whose product is $N_0=21$: $\text{factor}_1 = 7$ and $\text{factor}_2 = 3$.

It's important to notice that several assumptions are needed to translate the problem of factoring to the one of order finding. These are "weak points" of the algorithm, which fails to produce the correct result each time one of these assumptions is violated.

## 8.3 Quantum Fourier transform

The Quantum Fourier Transform (QFT) is the heart of Shor's factoring algorithm. Similarly to the discrete Fourier transform, which computes the Fourier transform of a sequence of complex numbers, the QFT computes the Fourier transform of a quantum state. 
This means that we can rewrite a generic state of a qubit register as a superposition of all the possible basis state vectors of the register, each with a certain phase.

The QFT of one of the basis states $\lvert j \rangle$ of an $n$ qubit register is

\begin{equation}
QFT \lvert j \rangle = \frac{1}{2^{n/2}} \sum_{k=0}^{2^{n}-1} e^{\frac{2 \pi i}{2^n} j k} \lvert k \rangle
\end{equation}

The QFT of a generic state can then be derived from this definition

\begin{equation}
QFT \sum_{j=0}^{2^n-1} x_j \lvert j \rangle = \frac{1}{2^{n/2}} \sum_{k=0}^{2^{n}-1} \sum_{l=0}^{2^{n}-1} x_l e^{\frac{2 \pi i}{2^n} l k} \lvert k \rangle
\end{equation}

By writing the basis state $\lvert j \rangle$ in binary representation we adopt the following convention: $\lvert j_1 j_2 ... j_n \rangle$ corresponds to $j= j_1 2^{n-1}+j_2 2^{n-2}+ ... +j_n 2^{0}$. Following the same convention, we can also write binary fractions $0.j_1 j_2 ... j_n$, meaning $j_1/2^{1} + j_2/2^{2} +...+ j_n/2^{n}$. With this notation it is possible to rewrite the QFT in a very simple form

\begin{aligned}
QFT \lvert j \rangle & = \frac{1}{2^{n/2}} \sum_{k=0}^{2^{n}-1} e^{2 \pi i jk / 2^n} \lvert k \rangle \\
& = \frac{1}{2^{n/2}} \sum_{k_1=0}^{1}...\sum_{k_n=0}^{1} e^{2 \pi i \left(\sum_{l=1}^{n} k_l 2^{-l}\right) j} \lvert k_1 ... k_n \rangle \\
& = \frac{1}{2^{n/2}}\sum_{k_1=0}^{1}...\sum_{k_n=0}^{1} \bigotimes_{l=1}^n e^{2 \pi i j k_l 2^{-l} } \lvert k_l \rangle \\
& = \frac{1}{2^{n/2}} \bigotimes_{l=1}^n \left(\lvert 0 \rangle + e^{2 \pi i j 2^{-l} } \lvert 1 \rangle \right) \\
& = \frac{1}{2^{n/2}} \left( \lvert 0 \rangle + e^{2 \pi i[0.j_n]} \lvert 1 \rangle \right) \otimes \left( \lvert 0 \rangle + e^{2 \pi i[0.j_{n-1} j_{n}]} \lvert 1 \rangle \right) \otimes ...\otimes \left(\lvert 0 \rangle + e^{2 \pi i[0.j_1j_2...j_{n-1}j_n]} \lvert 1 \rangle \right)
\end{aligned}

By writing the QFT in this way, it is easy to find a circuit which implements it. The QFT turns out to be composed of very simple gates, such as Hadamard gates and controlled rotations around the z-axis between pairs of qubits, $CR_k$.

$$ CR_k =
\begin{pmatrix} 
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\ 
0 & 0 & 1 & 0 \\ 
0 & 0 & 0 & e^{2 \pi i/ 2^k}
\end{pmatrix}$$

$$\text{1. Quantum circuit for the Quantum Fourier Transform.}$$

#### Example: QFT on two qubits
Let us calculate the QFT in the case of $n=2$ qubits

$$QFT\lvert j_1j_2\rangle = \frac{1}{2} \left(\lvert 0 \rangle + e^{2 \pi i[0.j_2]} \lvert 1 \rangle\right) \otimes \left(\lvert 0 \rangle + e^{2 \pi i[0.j_1 j_2]} \lvert 1 \rangle \right) $$

The steps to create the circuit for $\lvert k_1k_2\rangle = QFT\lvert j_1j_2\rangle$ would be:
1. Apply a Hadamard gate to $\lvert j_1 \rangle$, giving $\frac{1}{\sqrt{2}}\left(\lvert 0 \rangle + e^{2 \pi i [0.j_1]} \lvert 1\rangle\right) = \frac{1}{\sqrt{2}}\left(\lvert 0 \rangle + (-1)^{j_1} \lvert 1 \rangle\right)$ on the first qubit.
2. Apply a $CR_2$ gate controlled by $\lvert j_2 \rangle$ to the first qubit, giving $\frac{1}{\sqrt{2}}\left(\lvert0\rangle + e^{2 \pi i[0.j_1j_2]} \lvert1\rangle\right)$, and then a Hadamard gate to $\lvert j_2 \rangle$, giving $\frac{1}{\sqrt{2}}\left(\lvert 0 \rangle + e^{2 \pi i [0.j_2]} \lvert 1 \rangle\right)$ on the second qubit.
3. Read the qubits in reverse order, that is, $k_2$ corresponds to the first qubit and $k_1$ to the second qubit.
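Before turning to QISKit, it may help to check the product formula numerically. The following sketch (our addition, using only the standard library) builds the $4\times4$ QFT matrix from its definition and verifies that it is unitary and that $QFT\lvert 00 \rangle$ is the uniform superposition, consistent with the uniform counts measured in the QISKit implementation below:

```python
import cmath

n = 2
N = 2 ** n   # dimension of the two-qubit state space

# QFT matrix from the definition: F[k][j] = exp(2*pi*i*j*k/N) / sqrt(N)
F = [[cmath.exp(2j * cmath.pi * j * k / N) / N ** 0.5 for j in range(N)]
     for k in range(N)]

# Unitarity: F F^dagger = identity
for i in range(N):
    for j in range(N):
        s = sum(F[i][k] * F[j][k].conjugate() for k in range(N))
        assert abs(s - (1 if i == j else 0)) < 1e-12

# QFT of |00> (j = 0) is the uniform superposition: every amplitude 1/2
col0 = [F[k][0] for k in range(N)]
assert all(abs(amp - 0.5) < 1e-12 for amp in col0)
print(col0)
```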
\n\n\n$$\\text{2. Quantum Fourier Transform for two qubits.}$$\n\n\n```python\n# QISKIT: QFT code \n\n# calculate qft of two qubit as the example above\n```\n\n### QISKit: implement the quantum Fourier transform \n\n#### 1) Fourier transform of two qubits in the $\\lvert 00 \\rangle $ state \n\n\n```python\nfrom initialize import *\n\ndef qft(circ, q, n):\n \"\"\"n-qubit QFT on q in circ.\"\"\"\n for j in range(n):\n for k in range(j):\n circ.cu1(3.14/float(2**(j-k)), q[j], q[k])\n circ.h(q[j])\n\n#initialize quantum program\nmy_alg = initialize(circuit_name = 'fourier', qubit_number=2, bit_number=2, backend = 'local_qasm_simulator', shots = 1024)\n\n#add gates to the circuit\nqft(my_alg.q_circuit, my_alg.q_reg, 2) # qft of input state\nmy_alg.q_circuit.measure(my_alg.q_reg[0], my_alg.c_reg[0]) # measures first qubit\nmy_alg.q_circuit.measure(my_alg.q_reg[1], my_alg.c_reg[1]) # measures second qubit\n\n# print list of gates in the circuit\nprint('List of gates:')\nfor circuit in my_alg.q_circuit:\n print(circuit.name)\n\n#Execute the quantum algorithm\nresult = my_alg.Q_program.execute(my_alg.circ_name, backend=my_alg.backend, shots= my_alg.shots)\n\n#Show the results obtained from the quantum algorithm \ncounts = result.get_counts(my_alg.circ_name)\n\nprint('\\nThe measured outcomes of the circuits are:',counts)\n\n# credits to: https://github.com/QISKit/qiskit-tutorial\n```\n\n List of gates:\n h\n cu1\n h\n measure\n measure\n \n The measured outcomes of the circuits are: {'00': 243, '01': 241, '10': 272, '11': 268}\n\n\n## 8.4 Description of the algorithm\n\n\nShor's algorithm exploits both classical and quantum computation. Classical operations are carried out in steps where an efficient classical algorithm exists, while quantum operations are used to find the periodicity of the function needed to factor $N_0$. Here we present an outline of the algorithm.\n\n\n$$\\text{3. 
Quantum circuit for Shor's algorithm.}$$

To run the algorithm, we need two quantum registers. One contains the order or period, and is called the period register; the other contains the results of the computation, and is called the computational register. The size of both registers depends on the number $N_0$ to be factored. In particular, the period register should contain a number of qubits $n_p$ in the interval $2 \log_2 N_0 \leq n_p < 2 \log_2 (2N_0) $, and the computational register should be large enough to represent the number $N_0-1$, thus $n_q = \log_2 (N_0-1)$ qubits.

First, one needs to check if $N_0$ is even. If $N_0$ is even, one of the factors is $2$ and the other is $N_0/2$.

If $N_0$ is odd, a base $a$ is picked at random among the numbers from $0$ to $N_0-1$. Then, check if $a$ is a factor of $N_0$, by checking if gcd$\left(a,N_0 \right) \neq 1$. If $a$ has a common divisor with $N_0$, then one factor is given by gcd$\left(a,N_0 \right)$ and the other by $N_0/$gcd$\left(a,N_0 \right)$. If $a$ is co-prime with $N_0$ ( gcd$\left(a,N_0 \right) = 1$ ) then one needs to compute the function $a^x$ mod $N_0$, the modular exponentiation function (MEF), for $x=1,2,3,..,Q-1$ where $N_0^2 \leq Q < 2N_0^2$ and $Q=2^{n_p}$. 

At the beginning of the quantum algorithm used to compute the MEF, both registers are initialized to zero, $\lvert 00...0 \rangle \lvert 00...0 \rangle $. The first register (the period register) stores all the possible values of the exponent $x$ by creating a uniform superposition of all possible bit strings through Hadamard gates on all its qubits, $\frac{1}{\sqrt{Q}} \sum_{x=0}^{Q-1} \lvert x \rangle $, and the second register (the computational register) stores the results of the MEF, $\lvert a^x \, \text{mod} \, N_0 \rangle$. 
Thus, after the first step, one has

\begin{equation}
	\frac{1}{\sqrt{Q}} \sum_{x=0}^{Q-1} \lvert x \rangle \lvert a^x \, \text{mod} \, N_0 \rangle
\end{equation}

Then, we apply the quantum Fourier transform (QFT) to the first register, so that $\lvert x \rangle \rightarrow \frac{1}{\sqrt{Q}} \sum_{s=0}^{Q-1} e^{\frac{2\pi i s x}{Q}}\lvert s \rangle$. As a result of the QFT, interference between all the possible states occurs and only the periodic ones survive. That is, if one measures the first register, one will most likely see a value of $s$ such that $\frac{sr}{Q}$ is close to an integer $d$, which means that $\frac{s}{Q}\approx\frac{d}{r}$. Knowing the fraction $\frac{s}{Q}$, one can find the value of $r$ through the continued fraction algorithm.
Now, if $r$ is odd or $r=0$, the algorithm fails and one needs to restart by picking a different base $a$. If $r$ is even, one can factorize $a^r - 1$ mod $N_0$ into $\left( a^{\frac{r}{2}} - 1\right)\left( a^{\frac{r}{2}} + 1\right)$ mod $N_0$. The final step is to check that $a^{\frac{r}{2}} + 1 \, \text{mod} \, N_0 \neq 0$ (equivalently, $a^{\frac{r}{2}} \not\equiv -1 \, \text{mod} \, N_0$). If that is true, then $\text{gcd}\left(a^{\frac{r}{2}} + 1,N_0 \right)$ will be one factor and $\text{gcd}\left(a^{\frac{r}{2}} - 1,N_0 \right)$ the other.

The execution of this version of the algorithm requires $n=\log_2 \left( N_0 \right)$ qubits in the computational register to perform the modular exponentiation and another $2n$ qubits in the period register to perform the QFT. Thus, the algorithm requires a total of $3n$ qubits.

## 8.5 Example of Shor's factoring algorithm

Let us see here an example of Shor's factoring algorithm for $N_0=21$. Since $21$ needs five bits to be represented in binary, we need $n_p = 2\times 5 = 10$ qubits in the period register, so $Q = 2^{10} = 1024$.
    \n \n
  1. Check if $N_0=21$ is even: $21 \\, \\text{mod} \\, 2=1$, $N_0$ is not even.
  2. \n \n
  3. Pick a base $a$ at random. Let's say $a=2$.
  4. \n \n
  5. Check if $a$ has any common factors with $N_0$: $\\text{gcd}\\left(a,N_0 \\right) = 1$, it doesn't.
  6. \n \n
  7. Initialize two qubit registers with $n_p = 2\\times5=10$ qubits, as $5$ qubits are needed to represent $21$ in binary. We call the first register the $period$ register and the second register the $computational$ register\n $$ \\lvert \\psi \\rangle = \\lvert 0 \\rangle^{\\otimes 10}_p \\lvert 0 \\rangle^{\\otimes 10}_c $$
  8. \n \n
  9. Apply Hadamard gate on all ther qubits of the first register to create a uniform superposition of all $2^{10}$ possible values \n $$ \\lvert \\psi \\rangle =\\frac{1}{\\sqrt{2^{10}}} \\sum_{x=0}^{2^{10}-1} \\lvert x \\rangle_p \\lvert 0 \\rangle_c $$
  10. \n \n
  11. Apply the modular exponentiation function $a^x \\, \\text{mod} \\, N_0$ on the second register, for each of the stored value of $x$ in the first register.\n \n$$ \\lvert \\psi \\rangle =\\frac{1}{\\sqrt{1024}} \\sum_{x=0}^{1023} \\lvert x \\rangle_p \\lvert 2^x \\, \\text{mod} \\, 21 \\rangle_c\n = \\\\ = \\frac{1}{\\sqrt{1024}} \\left( \\lvert 0 \\rangle_p \\lvert 1 \\rangle_c + \\lvert 1 \\rangle_p \\lvert 2 \\rangle_c + \\lvert 2 \\rangle_p \\lvert 4 \\rangle_c + \\lvert 3 \\rangle_p \\lvert 8 \\rangle_c + \\lvert 4 \\rangle_p \\lvert 16 \\rangle_c + \\lvert 5 \\rangle_p \\lvert 11 \\rangle_c + \\lvert 6 \\rangle_p \\lvert 1 \\rangle_c + \\lvert 7 \\rangle_p \\lvert 2 \\rangle_c + \\lvert 8 \\rangle_p \\lvert 4 \\rangle_c ... \\right) $$\n By looking at the values stored in the second register, we can find out what is their periodicity. In particular, it can be seen that the values start repeating with order $r=6$ (value of the first register). However, we need to do a few more steps to allow the quantum computer to find the answer by itself.
  12. \n \n
  13. To simplify the example, we will adopt the \"Principle of implicit measurement\": Without loss of generality, any qubits which are not measured at the end of the quantum circuit may be assumed to be measured. Thus, let us use the principle of implicit measurement on the second register. Since each term of the superposition has equal weight, each outcome is equally likely, therefore one will see one of the following values: $ \\lvert 1 \\rangle_c$ , $\\lvert 2 \\rangle_c$, $ \\lvert 4 \\rangle_c$, $\\lvert 8 \\rangle_c$, $\\lvert 16 \\rangle_c$, $\\lvert 11 \\rangle_c$; with probability $1/6$.\n Assume that the state $ \\lvert 4 \\rangle_c$ is measured, then we are left with the composite state:\n \n $$ \\lvert \\psi \\rangle = \\frac{\\sqrt{6}}{\\sqrt{1024}} \\left( \\lvert 2 \\rangle_p \\lvert 4 \\rangle_c + \\lvert 8 \\rangle_p \\lvert 4 \\rangle_c + \\lvert 14 \\rangle_p \\lvert 4 \\rangle_c + \\lvert 20 \\rangle_p \\lvert 4 \\rangle_c ... \\right) $$
  14. \n \n
  15. Apply the QFT to the first register\n $$ \\lvert x \\rangle_p \\rightarrow \\frac{1}{\\sqrt{1024}} \\sum_{s=0}^{1023} e^{2\\pi i \\frac{s x}{1024}}\\lvert s \\rangle_p $$\n Which transform each of the terms in the period register as:\n $$ \\lvert 2 \\rangle_p \\rightarrow \\frac{1}{\\sqrt{1024}} \\sum_{s=0}^{1023} e^{2\\pi i \\frac{2 s}{1024}}\\lvert s \\rangle_p $$\n $$ \\lvert 8 \\rangle_p \\rightarrow \\frac{1}{\\sqrt{1024}} \\sum_{s=0}^{1023} e^{2\\pi i \\frac{8 s}{1024}}\\lvert s \\rangle_p $$\n $$ \\lvert 14 \\rangle_p \\rightarrow \\frac{1}{\\sqrt{1024}} \\sum_{s=0}^{1023} e^{2\\pi i \\frac{14 s}{1024}}\\lvert s \\rangle_p $$\n $$ \\lvert 20 \\rangle_p \\rightarrow \\frac{1}{\\sqrt{1024}} \\sum_{s=0}^{1023} e^{2\\pi i \\frac{20 s}{1024}}\\lvert s \\rangle_p $$\n $$...$$\n \n So, we can write:\n $$ \\lvert \\psi \\rangle = \\frac{\\sqrt{6}}{1024} \\sum_{s=0}^{1023} \\left( e^{2\\pi i \\frac{2s}{1024}} + e^{2\\pi i \\frac{8s}{1024}} + e^{2\\pi i \\frac{14s}{1024}} + e^{2\\pi i \\frac{20s}{1024}} ... \\right) \\lvert s \\rangle_p \\lvert 4 \\rangle_c = \\\\\n = \\frac{\\sqrt{6}}{1024} \\sum_{s=0}^{1023} e^{2\\pi i \\frac{2s}{1024}} \\left( 1 + e^{2\\pi i \\frac{6s}{1024}} + e^{2\\pi i \\frac{12s}{1024}} + e^{2\\pi i \\frac{18s}{1024}} ... \\right) \\lvert s \\rangle_p \\lvert 4 \\rangle_c$$\n
  16. \n\n
  17. Measure the period register. The probability of finding a certain value $\\lvert s \\rangle_p$ is:\n $$ P(s) = \\lvert \\frac{\\sqrt{6}}{1024} e^{2\\pi i \\frac{2s}{1024}} \\left( 1 + e^{2\\pi i \\frac{6s}{1024}} + e^{2\\pi i \\frac{12s}{1024}} + e^{2\\pi i \\frac{18s}{1024}} ... \\right) \\rvert^2 $$\n \n Because of the possible sign difference between all different terms, the values of $s$ which have the highest likelyhood to be observed are the ones for which the phase terms all have the same sign and they add up. That is, $\\frac{s}{1024}=\\frac{d}{6}$, where $d$ is an integer. Therefore, a value $s=\\frac{1024 \\cdot d}{6}$ where $d=1,2,3,...$ will most likely be observed.\n Let us assume that the value $s=853$ is measured.\n
  18. \n\n
  19. To find the period from the value of $s$ measured, one then uses the continued fraction algorithm in the following way: we find the fraction $\\frac{d}{r}$ which approximates the fraction $\\frac{s}{Q}$ to a fixed precision $2*Q$\n $$\\lvert \\frac{853}{1024} - \\frac{d}{r} \\rvert < \\frac{1}{2048} $$\n So, let's find $d$ and $r$ with the continued fraction algorithm:\n $$ \\frac{853}{1024} = 0 + \\frac{1}{1 + \\frac{1}{4+ \\frac{1}{1+\\frac{1}{1+\\frac{1}{1+\\frac{1}{84+\\frac{1}{2}}}}}}} $$\n which gives as possible fractions $\\frac{d}{r}$: $\\frac{1}{1}$, $\\frac{4}{5}$, $\\frac{5}{6}$, ...\n \n imposing the condition written above, we find that the only fraction satisfying it is $\\frac{d}{r} = \\frac{5}{6}$. Therefore, the period is $r=6$!\n
  21. Once the period has been found, we can find the factors of $N_0=21$ almost immediately. First check that $r$ is even, then check that $a^{\frac{r}{2}} + 1 \, \text{mod} \, N_0 \neq 0 $ :\n $$ 6 \, \text{mod} \, 2 = 0 $$\n $$ 2^{\frac{6}{2}} + 1 \, \text{mod} \, 21 = 9 \neq 0$$\n Therefore the two factors are: $ p = \text{gcd}\left(a^{\frac{r}{2}} + 1,N_0 \right)$ and $q = \text{gcd}\left(a^{\frac{r}{2}} - 1,N_0 \right)$\n $$p = \text{gcd}\left(9,21 \right) = 3$$\n $$q = \text{gcd}\left(7,21 \right) = 7$$\n
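Steps 19 to 21 are purely classical post-processing, and it is instructive to check them with a few lines of Python. The sketch below (an illustrative addition to this worked example, using only the standard library) computes the continued-fraction convergents of $\frac{853}{1024}$, picks out the one satisfying the precision condition, and extracts the factors of $N_0 = 21$:

```python
# Classical post-processing for the worked example: recover the period r
# from the measured value s = 853 (with Q = 1024), then factor N0 = 21.
from fractions import Fraction
from math import gcd

def convergents(num, den):
    """Yield the continued-fraction convergents (d, r) of num/den."""
    coeffs = []
    while den:
        a, rem = divmod(num, den)
        coeffs.append(a)
        num, den = den, rem
    p_prev, q_prev, p_cur, q_cur = 1, 0, coeffs[0], 1
    yield p_cur, q_cur
    for a in coeffs[1:]:
        p_prev, q_prev, p_cur, q_cur = (p_cur, q_cur,
                                        a * p_cur + p_prev, a * q_cur + q_prev)
        yield p_cur, q_cur

Q, s, N0, a = 1024, 853, 21, 2

# Step 19: the first convergent d/r with r < N0 approximating s/Q to within 1/(2Q).
d, r = next((d, r) for d, r in convergents(s, Q)
            if 0 < r < N0 and abs(Fraction(s, Q) - Fraction(d, r)) < Fraction(1, 2 * Q))

# Step 21: r must be even and a**(r//2) must not be congruent to -1 mod N0.
assert r % 2 == 0 and pow(a, r // 2, N0) != N0 - 1
p = gcd(pow(a, r // 2) + 1, N0)
q = gcd(pow(a, r // 2) - 1, N0)
print(d, r, p, q)  # 5 6 3 7
```

Running it reproduces the values derived above: the convergent $\frac{d}{r} = \frac{5}{6}$, the period $r=6$, and the factors $3$ and $7$.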
\n\n \n\n## Exercises\n\n  1. Calculate the following modular expressions:\n\n      1. $ 54 \, \text{(mod 5)}$\n      2. $ 54 \times 35 \, \text{(mod 17)}$\n      3. $ 9^{2} \, \text{(mod 6)}$\n\n  2. Rewrite the following fractions using the continued fraction algorithm:\n\n      1. $ \frac{45}{251}$\n      2. $ \frac{71}{385}$\n      3. $ \frac{512}{1027}$\n\n  3. What is the minimum number of qubits needed to factor:\n\n      1. $ 15 $\n      2. $ 4,367,398 $\n\n  4. Derive the expression of the Quantum Fourier Transform of three qubits, similarly to how it is done in the example shown in Section 8.3.\n\n  5. Following the steps highlighted in the example given in Section 8.5, show the factoring of $N_0=15$ with $a=2$.\n\n  6. Write a QISKit program that calculates the QFT of three qubits in the state $\lvert 010 \rangle$.\n\n
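As an aside for the modular-arithmetic exercises (this snippet is an addition, not part of the original notebook): Python's built-in three-argument `pow` computes modular exponentiation directly, which is convenient for checking work done by hand.

```python
# pow(b, e, m) computes b**e mod m efficiently (square-and-multiply),
# and it confirms the period found in the worked example above:
# 2**6 mod 21 == 1, i.e. r = 6 brings f(x) = 2**x mod 21 back to 1.
print(pow(2, 6, 21))  # 1

# The % operator reduces ordinary expressions, e.g. (7 * 11) mod 5:
print((7 * 11) % 5)   # 2
```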
# Example of DOV search methods for CPT measurements (sonderingen)\n\n## Use cases explained below\n* Get CPT measurements in a bounding box\n* Get CPT measurements with specific properties\n* Get CPT measurements in a bounding box based on specific properties\n* Select CPT measurements in a municipality and return depth\n* Get CPT measurements based on fields not available in the standard output dataframe\n* Get CPT measurement data, returning fields not available in the standard output dataframe\n* Get CPT measurements in a municipality and where groundwater related data are available\n\n\n```python\n%matplotlib inline\nimport inspect, sys\nimport warnings; warnings.simplefilter('ignore')\n```\n\n\n```python\n# check pydov path\nimport pydov\n```\n\n## Get information about the datatype 'Sondering'\n\n\n```python\nfrom pydov.search.sondering import SonderingSearch\nsondering = SonderingSearch()\n```\n\nA description is provided for the 'Sondering' datatype:\n\n\n```python\nsondering.get_description()\n```\n\n\n\n\n 'In DOV worden de resultaten van sonderingen ter beschikking gesteld. Bij het uitvoeren van de sondering wordt een sondeerpunt met conus bij middel van buizen statisch de grond ingedrukt. Continu of met bepaalde diepte-intervallen wordt de weerstand aan de conuspunt, de plaatselijke wrijvingsweerstand en/of de totale indringingsweerstand opgemeten. Eventueel kan aanvullend de waterspanning in de grond rond de conus tijdens de sondering worden opgemeten met een waterspanningsmeter. Het op diepte drukken van de sondeerbuizen gebeurt met een indrukapparaat. De nodige reactie voor het indrukken van de buizen wordt geleverd door een verankering en/of door het gewicht van de sondeerwagen. 
De totale indrukcapaciteit varieert van 25 kN tot 250 kN, afhankelijk van apparaat en opstellingswijze.'\n\n\n\nThe different fields that are available for objects of the 'Sondering' datatype can be requested with the get_fields() method:\n\n\n```python\nfields = sondering.get_fields()\n\n# print available fields\nfor f in fields.values():\n print(f['name'])\n```\n\n x\n formele_stratigrafie\n gemeente\n u\n diepte_sondering_tot\n pkey_sondering\n weerstandsdiagram\n informele_stratigrafie\n diepte_sondering_van\n id\n i\n diepte_gw_m\n opdrachten\n conus\n qc\n z\n y\n datum_gw_meting\n fs\n sondeermethode\n Qt\n generated_id\n start_sondering_mtaw\n hydrogeologische_stratigrafie\n uitvoerder\n sondeernummer\n datum_aanvang\n apparaat\n meetreeks\n\n\nYou can get more information of a field by requesting it from the fields dictionary:\n* *name*: name of the field\n* *definition*: definition of this field\n* *cost*: currently this is either 1 or 10, depending on the datasource of the field. 
It is an indication of the expected time it will take to retrieve this field in the output dataframe.\n* *notnull*: whether the field is mandatory or not\n* *type*: datatype of the values of this field\n\n\n```python\nfields['diepte_sondering_tot']\n```\n\n\n\n\n {'cost': 1,\n 'definition': 'Maximumdiepte van de sondering ten opzichte van het aanvangspeil, in meter.',\n 'name': 'diepte_sondering_tot',\n 'notnull': True,\n 'query': True,\n 'type': 'float'}\n\n\n\nOptionally, if the values of the field have a specific domain the possible values are listed as *values*:\n\n\n```python\nfields['conus']['values']\n```\n\n\n\n\n ['E', 'M1', 'M2', 'M4', 'U', 'onbekend']\n\n\n\n## Example use cases\n\n### Get CPT measurements in a bounding box\n\nGet data for all the CPT measurements that are geographically located within the bounds of the specified box.\n\nThe coordinates are in the Belgian Lambert72 (EPSG:31370) coordinate system and are given in the order of lower left x, lower left y, upper right x, upper right y.\n\n\n```python\nfrom pydov.util.location import Within, Box\n\ndf = sondering.search(location=Within(Box(152999, 206930, 153050, 207935)))\ndf.head()\n```\n\n [000/001] c\n\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
pkey_sonderingsondeernummerxystart_sondering_mtawdiepte_sondering_vandiepte_sondering_totdatum_aanvanguitvoerdersondeermethodeapparaatdatum_gw_metingdiepte_gw_mzqcQtfsui
0https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100KNNaNNaN0.21.62.0600NaNNaNNaN
1https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100KNNaNNaN0.43.64.2600NaNNaNNaN
2https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100KNNaNNaN0.62.63.4600NaNNaNNaN
3https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100KNNaNNaN0.84.05.6600NaNNaNNaN
4https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100KNNaNNaN1.03.06.5300NaNNaNNaN
\n
\n\n\n\nThe dataframe contains one CPT measurement where multiple measurement points. The available data are flattened to represent unique attributes per row of the dataframe.\n\nUsing the *pkey_sondering* field one can request the details of this borehole in a webbrowser:\n\n\n```python\nfor pkey_sondering in set(df.pkey_sondering):\n print(pkey_sondering)\n```\n\n https://www.dov.vlaanderen.be/data/sondering/1973-016812\n\n\n### Get CPT measurements with specific properties\n\nNext to querying CPT based on their geographic location within a bounding box, we can also search for CPT measurements matching a specific set of properties. For this we can build a query using a combination of the 'Sondering' fields and operators provided by the WFS protocol.\n\nA list of possible operators can be found below:\n\n\n```python\n[i for i,j in inspect.getmembers(sys.modules['owslib.fes'], inspect.isclass) if 'Property' in i]\n```\n\n\n\n\n ['PropertyIsBetween',\n 'PropertyIsEqualTo',\n 'PropertyIsGreaterThan',\n 'PropertyIsGreaterThanOrEqualTo',\n 'PropertyIsLessThan',\n 'PropertyIsLessThanOrEqualTo',\n 'PropertyIsLike',\n 'PropertyIsNotEqualTo',\n 'PropertyIsNull',\n 'SortProperty']\n\n\n\nIn this example we build a query using the *PropertyIsEqualTo* operator to find all CPT measuremetns that are within the community (gemeente) of 'Herstappe':\n\n\n```python\nfrom owslib.fes import PropertyIsEqualTo\n\nquery = PropertyIsEqualTo(propertyname='gemeente',\n literal='Elsene')\ndf = sondering.search(query=query)\n\ndf.head()\n```\n\n [000/029] ccccccccccccccccccccccccccccc\n\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
pkey_sonderingsondeernummerxystart_sondering_mtawdiepte_sondering_vandiepte_sondering_totdatum_aanvanguitvoerdersondeermethodeapparaatdatum_gw_metingdiepte_gw_mzqcQtfsui
0https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25KNNaN1.971.03.3NaNNaNNaNNaN
1https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25KNNaN1.971.12.9NaNNaNNaNNaN
2https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25KNNaN1.971.22.7NaNNaNNaNNaN
3https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25KNNaN1.971.32.4NaNNaNNaNNaN
4https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25KNNaN1.971.43.6NaNNaNNaNNaN
\n
\n\n\n\nOnce again we can use the *pkey_sondering* as a permanent link to the information of these CPT measurements:\n\n\n```python\nfor pkey_sondering in set(df.pkey_sondering):\n print(pkey_sondering)\n```\n\n https://www.dov.vlaanderen.be/data/sondering/1992-000337\n https://www.dov.vlaanderen.be/data/sondering/1992-000338\n https://www.dov.vlaanderen.be/data/sondering/1976-014638\n https://www.dov.vlaanderen.be/data/sondering/1992-000335\n https://www.dov.vlaanderen.be/data/sondering/1971-023091\n https://www.dov.vlaanderen.be/data/sondering/1975-014063\n https://www.dov.vlaanderen.be/data/sondering/1976-014640\n https://www.dov.vlaanderen.be/data/sondering/1971-022775\n https://www.dov.vlaanderen.be/data/sondering/1976-013899\n https://www.dov.vlaanderen.be/data/sondering/1971-022776\n https://www.dov.vlaanderen.be/data/sondering/1976-013900\n https://www.dov.vlaanderen.be/data/sondering/1971-023323\n https://www.dov.vlaanderen.be/data/sondering/1971-023321\n https://www.dov.vlaanderen.be/data/sondering/1980-024719\n https://www.dov.vlaanderen.be/data/sondering/1976-030140\n https://www.dov.vlaanderen.be/data/sondering/1971-023320\n https://www.dov.vlaanderen.be/data/sondering/1971-022777\n https://www.dov.vlaanderen.be/data/sondering/1975-014064\n https://www.dov.vlaanderen.be/data/sondering/1971-023322\n https://www.dov.vlaanderen.be/data/sondering/1971-023319\n https://www.dov.vlaanderen.be/data/sondering/1976-030150\n https://www.dov.vlaanderen.be/data/sondering/1976-030128\n https://www.dov.vlaanderen.be/data/sondering/1992-000336\n https://www.dov.vlaanderen.be/data/sondering/1974-016926\n https://www.dov.vlaanderen.be/data/sondering/1976-013898\n https://www.dov.vlaanderen.be/data/sondering/1974-016927\n https://www.dov.vlaanderen.be/data/sondering/1980-024720\n https://www.dov.vlaanderen.be/data/sondering/1992-000339\n https://www.dov.vlaanderen.be/data/sondering/1976-030148\n\n\n### Get CPT measurements in a bounding box based on specific 
properties\n\nWe can combine a query on attributes with a query on geographic location to get the CPT measurements within a bounding box that have specific properties.\n\nThe following example requests the CPT measurements with a depth greater than or equal to 20 meters within the given bounding box.\n\n(Note that the datatype of the *literal* parameter should be a string, regardless of the datatype of this field in the output dataframe.)\n\n\n```python\nfrom owslib.fes import PropertyIsGreaterThanOrEqualTo\n\nquery = PropertyIsGreaterThanOrEqualTo(\n propertyname='diepte_sondering_tot',\n literal='20')\n\ndf = sondering.search(\n location=Within(Box(200000, 211000, 205000, 214000)),\n query=query\n )\n\ndf.head()\n```\n\n [000/009] ccccccccc\n\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
pkey_sonderingsondeernummerxystart_sondering_mtawdiepte_sondering_vandiepte_sondering_totdatum_aanvanguitvoerdersondeermethodeapparaatdatum_gw_metingdiepte_gw_mzqcQtfsui
0https://www.dov.vlaanderen.be/data/sondering/2...GEO-10/095-S1200030.2212577.526.581.2520.652010-08-30VO - Afdeling Geotechniekcontinu elektrisch200kN - TRACK-TRUCK2010-08-30 12:50:001.451.301.22NaN1.0NaN0.8
1https://www.dov.vlaanderen.be/data/sondering/2...GEO-10/095-S1200030.2212577.526.581.2520.652010-08-30VO - Afdeling Geotechniekcontinu elektrisch200kN - TRACK-TRUCK2010-08-30 12:50:001.451.353.19NaN2.0NaN1.0
2https://www.dov.vlaanderen.be/data/sondering/2...GEO-10/095-S1200030.2212577.526.581.2520.652010-08-30VO - Afdeling Geotechniekcontinu elektrisch200kN - TRACK-TRUCK2010-08-30 12:50:001.451.407.21NaN63.0NaN1.2
3https://www.dov.vlaanderen.be/data/sondering/2...GEO-10/095-S1200030.2212577.526.581.2520.652010-08-30VO - Afdeling Geotechniekcontinu elektrisch200kN - TRACK-TRUCK2010-08-30 12:50:001.451.4512.75NaN138.0NaN1.2
4https://www.dov.vlaanderen.be/data/sondering/2...GEO-10/095-S1200030.2212577.526.581.2520.652010-08-30VO - Afdeling Geotechniekcontinu elektrisch200kN - TRACK-TRUCK2010-08-30 12:50:001.451.5015.26NaN143.0NaN1.4
\n
\n\n\n\nWe can look at one of the CPT measurements in a webbrowser using its *pkey_sondering*:\n\n\n```python\nfor pkey_sondering in set(df.pkey_sondering):\n print(pkey_sondering)\n```\n\n https://www.dov.vlaanderen.be/data/sondering/2009-000054\n https://www.dov.vlaanderen.be/data/sondering/2007-049200\n https://www.dov.vlaanderen.be/data/sondering/2015-054999\n https://www.dov.vlaanderen.be/data/sondering/2010-062407\n https://www.dov.vlaanderen.be/data/sondering/2015-054995\n https://www.dov.vlaanderen.be/data/sondering/2009-000053\n https://www.dov.vlaanderen.be/data/sondering/2015-055496\n https://www.dov.vlaanderen.be/data/sondering/2009-000052\n https://www.dov.vlaanderen.be/data/sondering/2007-049201\n\n\n### Select CPT measurements in a municipality and return depth\n\nWe can limit the columns in the output dataframe by specifying the *return_fields* parameter in our search.\n\nIn this example we query all the CPT measurements in the city of Ghent and return their depth:\n\n\n```python\nquery = PropertyIsEqualTo(propertyname='gemeente',\n literal='Gent')\ndf = sondering.search(query=query,\n return_fields=('diepte_sondering_tot',))\ndf.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
diepte_sondering_tot
02.7
11.4
27.6
311.5
418.6
\n
\n\n\n\n\n```python\ndf.describe()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
diepte_sondering_tot
count3560.000000
mean18.495803
std8.505536
min1.000000
25%11.400000
50%18.600000
75%24.600000
max52.600000
\n
\n\n\n\n\n```python\ndf.boxplot()\n```\n\n### Get CPT measurements based on fields not available in the standard output dataframe\n\nTo keep the output dataframe size acceptable, not all availabe WFS fields are included in the standard output. However, one can use this information to select CPT measurements as illustrated below.\n\nFor example, make a selection of the CPT measurements in municipality the of Antwerp, using a conustype 'U':\n\n\n```python\nfrom owslib.fes import And\n\nquery = And([PropertyIsEqualTo(propertyname='gemeente',\n literal='Antwerpen'),\n PropertyIsEqualTo(propertyname='conus', \n literal='U')]\n )\ndf = sondering.search(query=query,\n return_fields=('pkey_sondering', 'sondeernummer', 'x', 'y', 'diepte_sondering_tot', 'datum_aanvang'))\ndf.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
pkey_sonderingsondeernummerxydiepte_sondering_totdatum_aanvang
0https://www.dov.vlaanderen.be/data/sondering/1...GEO-93/023-SII-E152740.0215493.029.701993-03-02
1https://www.dov.vlaanderen.be/data/sondering/2...GEO-02/111-S1150347.3214036.429.952002-12-17
2https://www.dov.vlaanderen.be/data/sondering/2...GEO-04/123-SKD4-E146437.7222317.54.452004-07-12
3https://www.dov.vlaanderen.be/data/sondering/2...GEO-04/123-SKD6-E146523.9222379.77.402004-07-14
4https://www.dov.vlaanderen.be/data/sondering/2...GEO-04/123-SKD5-E146493.4222298.81.652004-07-16
\n
\n\n\n\n### Get CPT data, returning fields not available in the standard output dataframe\n\nAs denoted in the previous example, not all available fields are available in the default output frame to keep its size limited. However, you can request any available field by including it in the *return_fields* parameter of the search:\n\n\n```python\nquery = And([PropertyIsEqualTo(propertyname='gemeente', literal='Gent'),\n PropertyIsEqualTo(propertyname='conus', literal='U')])\n\ndf = sondering.search(query=query,\n return_fields=('pkey_sondering', 'sondeernummer', 'diepte_sondering_tot',\n 'conus', 'x', 'y'))\n\ndf.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
pkey_sonderingsondeernummerxydiepte_sondering_totconus
0https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SV110241.6204692.233.80U
1https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SI110062.5205051.415.65U
2https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SII110107.0204965.326.50U
3https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SIII110152.4204876.116.50U
4https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SIV110197.8204787.016.70U
\n
\n\n\n\n\n```python\ndf\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
pkey_sonderingsondeernummerxydiepte_sondering_totconus
0https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SV110241.6204692.233.80U
1https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SI110062.5205051.415.65U
2https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SII110107.0204965.326.50U
3https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SIII110152.4204876.116.50U
4https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SIV110197.8204787.016.70U
5https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SIX110479.5205240.727.60U
6https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SVI110288.5204608.816.80U
7https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SVII110334.3204519.826.70U
8https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SX110685.0204845.527.50U
9https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SXI109941.5204346.925.60U
10https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/020-SXII110412.2204398.126.50U
11https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/096-SIX(CPT9)105018.0190472.017.60U
12https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/096-SVII(CPT7)105046.0190550.026.05U
13https://www.dov.vlaanderen.be/data/sondering/1...GEO-94/096-SVIII(CPT8)104997.0190521.024.75U
14https://www.dov.vlaanderen.be/data/sondering/1...GEO-97/002-S2105376.6189104.329.90U
15https://www.dov.vlaanderen.be/data/sondering/1...GEO-97/002-S3105391.3189083.75.90U
16https://www.dov.vlaanderen.be/data/sondering/1...GEO-97/002-S1105399.3189065.230.60U
17https://www.dov.vlaanderen.be/data/sondering/2...GEO-01/162-S1106104.1188699.418.05U
18https://www.dov.vlaanderen.be/data/sondering/2...GEO-01/162-S2106045.3188708.417.30U
19https://www.dov.vlaanderen.be/data/sondering/2...GEO-01/162-S3106100.5188743.818.70U
20https://www.dov.vlaanderen.be/data/sondering/2...GEO-01/162-S5106130.0188712.017.30U
21https://www.dov.vlaanderen.be/data/sondering/2...GEO-01/162-S4106077.5188686.017.00U
\n
\n\n\n\n## Resistivity plot\n\nThe data for the reporting of resistivity plots with the online application, see for example [this report](https://www.dov.vlaanderen.be/zoeken-ocdov/proxy-sondering/sondering/1993-001275/rapport/identifygrafiek?outputformaat=PDF), is also accessible with the pydov package. Querying the data for this specific _sondering_:\n\n\n```python\nquery = PropertyIsEqualTo(propertyname='pkey_sondering',\n literal='https://www.dov.vlaanderen.be/data/sondering/1993-001275')\ndf_sond = sondering.search(query=query)\n\ndf_sond.head()\n```\n\n [000/001] c\n\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
pkey_sonderingsondeernummerxystart_sondering_mtawdiepte_sondering_vandiepte_sondering_totdatum_aanvanguitvoerdersondeermethodeapparaatdatum_gw_metingdiepte_gw_mzqcQtfsui
0https://www.dov.vlaanderen.be/data/sondering/1...GEO-93/023-SII-E152740.0215493.06.250.029.71993-03-02MVG - Afdeling Geotechniekcontinu elektrisch200KNNaNNaN0.611.60NaN130.069.0NaN
1https://www.dov.vlaanderen.be/data/sondering/1...GEO-93/023-SII-E152740.0215493.06.250.029.71993-03-02MVG - Afdeling Geotechniekcontinu elektrisch200KNNaNNaN0.76.30NaN100.029.0NaN
2https://www.dov.vlaanderen.be/data/sondering/1...GEO-93/023-SII-E152740.0215493.06.250.029.71993-03-02MVG - Afdeling Geotechniekcontinu elektrisch200KNNaNNaN0.86.22NaN120.0-4.0NaN
3https://www.dov.vlaanderen.be/data/sondering/1...GEO-93/023-SII-E152740.0215493.06.250.029.71993-03-02MVG - Afdeling Geotechniekcontinu elektrisch200KNNaNNaN0.94.92NaN120.0-48.0NaN
4https://www.dov.vlaanderen.be/data/sondering/1...GEO-93/023-SII-E152740.0215493.06.250.029.71993-03-02MVG - Afdeling Geotechniekcontinu elektrisch200KNNaNNaN1.04.40NaN80.0-35.0NaN
\n
\n\n\n\nWe have the depth (`z`) available, together with the measured values for each depth of the variables (in dutch):\n\n* `qc`: Opgemeten waarde van de conusweerstand, uitgedrukt in MPa.\n* `Qt`: Opgemeten waarde van de totale weerstand, uitgedrukt in kN.\n* `fs`: Opgemeten waarde van de plaatelijke kleefweerstand uitgedrukt in kPa.\n* `u`: Opgemeten waarde van de porienwaterspanning, uitgedrukt in kPa.\n* `i`: Opgemeten waarde van de inclinatie, uitgedrukt in graden.\n\nTo recreate the resistivity plot, we also need the `resistivity number` (wrijvingsgetal `rf`), see [DOV documentation](https://www.dov.vlaanderen.be/page/sonderingen).\n\n\\begin{equation}\nR_f = \\frac{f_s}{q_c}\n\\end{equation}\n\n**Notice:** $f_s$ is provide in kPa and $q_c$ in MPa.\n\nAdding `rf` to the dataframe:\n\n\n```python\ndf_sond[\"rf\"] = df_sond[\"fs\"]/df_sond[\"qc\"]/10 \n```\n\nRecreate the resistivity plot:\n\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n\n```python\ndef make_patch_spines_invisible(ax):\n ax.set_frame_on(True)\n ax.patch.set_visible(False)\n for sp in ax.spines.values():\n sp.set_visible(False)\n```\n\n\n```python\nfig, ax0 = plt.subplots(figsize=(8, 12))\n\n# Prepare the individual axis\nax_qc = ax0.twiny()\nax_fs = ax0.twiny()\nax_u = ax0.twiny()\nax_rf = ax0.twiny()\n\nfor i, ax in enumerate([ax_qc, ax_fs, ax_u]):\n ax.spines[\"top\"].set_position((\"axes\", 1+0.05*(i+1)))\n make_patch_spines_invisible(ax)\n ax.spines[\"top\"].set_visible(True)\n\n# Plot the data on the axis\ndf_sond.plot(x=\"rf\", y=\"z\", label=\"rf\", ax=ax_rf, color='purple', legend=False)\ndf_sond.plot(x=\"qc\", y=\"z\", label=\"qc (MPa)\", ax=ax_qc, color='black', legend=False)\ndf_sond.plot(x=\"fs\", y=\"z\", label=\"fs (kPa)\", ax=ax_fs, color='green', legend=False)\ndf_sond.plot(x=\"u\", y=\"z\", label=\"u (kPa)\", ax=ax_u, color='red', \n legend=False, xlim=(-100, 300)) # ! 
300 is hardcoded here for the example\n\n# styling and configuration\nax_rf.xaxis.label.set_color('purple')\nax_fs.xaxis.label.set_color('green')\nax_u.xaxis.label.set_color('red')\n\nax0.axes.set_visible(False)\nax_qc.axes.yaxis.set_visible(False)\nax_fs.axes.yaxis.set_visible(False)\nfor i, ax in enumerate([ax_rf, ax_qc, ax_fs, ax_u, ax0]):\n ax.spines[\"right\"].set_visible(False)\n ax.spines[\"bottom\"].set_visible(False)\n ax.xaxis.label.set_fontsize(15)\n ax.xaxis.set_label_coords(-0.05, 1+0.05*i)\n ax.spines['left'].set_position(('outward', 10))\n ax.spines['left'].set_bounds(0, 30)\nax_rf.set_xlim(0, 46)\n\nax0.invert_yaxis()\nax_rf.invert_xaxis()\nax_u.set_ylabel(\"Depth(m)\", fontsize=12)\nfig.legend(loc='lower center', ncol=4)\nfig.tight_layout()\n```\n\n## Visualize locations\n\nUsing Folium, we can display the results of our search on a map.\n\n\n```python\n# import the necessary modules (not included in the requirements of pydov!)\nimport folium\nfrom folium.plugins import MarkerCluster\nfrom pyproj import Proj, transform\n```\n\n\n```python\n# convert the coordinates to lat/lon for folium\ndef convert_latlon(x1, y1):\n inProj = Proj(init='epsg:31370')\n outProj = Proj(init='epsg:4326')\n x2,y2 = transform(inProj, outProj, x1, y1)\n return x2, y2\ndf['lon'], df['lat'] = zip(*map(convert_latlon, df['x'], df['y'])) \n# convert to list\nloclist = df[['lat', 'lon']].values.tolist()\n```\n\n\n```python\n# initialize the Folium map on the centre of the selected locations, play with the zoom until ok\nfmap = folium.Map(location=[df['lat'].mean(), df['lon'].mean()], zoom_start=11)\nmarker_cluster = MarkerCluster().add_to(fmap)\nfor loc in range(0, len(loclist)):\n folium.Marker(loclist[loc], popup=df['sondeernummer'][loc]).add_to(marker_cluster)\nfmap\n\n```\n
\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "383c71447622ab287f6a5c34d97c8a96d330551e", "size": 193891, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/notebooks/search_sonderingen.ipynb", "max_stars_repo_name": "jorisvandenbossche/pydov", "max_stars_repo_head_hexsha": "8b909209e63455fb06251d73e32958d66c6e14ed", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/notebooks/search_sonderingen.ipynb", "max_issues_repo_name": "jorisvandenbossche/pydov", "max_issues_repo_head_hexsha": "8b909209e63455fb06251d73e32958d66c6e14ed", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/notebooks/search_sonderingen.ipynb", "max_forks_repo_name": "jorisvandenbossche/pydov", "max_forks_repo_head_hexsha": "8b909209e63455fb06251d73e32958d66c6e14ed", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 84.0082322357, "max_line_length": 78484, "alphanum_fraction": 0.7540731648, "converted": true, "num_tokens": 14390, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.5, "lm_q2_score": 0.20946968133032523, "lm_q1q2_score": 0.10473484066516262}} {"text": "# 1. Chapter 1 - Plausible Reasoning\nAs we tread further into the twenty first century, almost everyone is expected to memorize the mantra \"we must make data driven decisions\" (well, at least most people in the technology space, and certainly data scientists). 
However, I want us to pause for a moment and think about what that really means.\n\nIn an idealized, rigid, platonic world this may simply mean to cast our first intuitions aside, instead using metrics related to the problem at hand, past decisions' outcomes, and so on. Now, in a simplistic system this is a satisfactory approach. Consider the following (overly simple) scenario: you want to figure out the fastest way to drive from your house to the grocery store at rush hour (note how _constrained_ this situation already is). You intuitively feel as though route $A$ will be faster, so in general you take that route. You know that on average it takes about 9 minutes. Then, one day your roommate decides to join you and recommends route $B$, stating that they always drive that way and that on average it takes 6 minutes. You give route $B$ a shot and sure enough it takes 6 and a half minutes. \n\nThis is an example of a very simplistic scenario that is conducive to basic data-driven decision making. You have (essentially) all the data/variables you need in order to represent the scenario at hand. In other words, the decision is based on a univariate function:\n\n$$\text{Time spent driving to grocery store} = f(\text{route})$$\n\nWe have data regarding the driving time of both routes, and in this toy example there is really nothing else we need to consider in order to make an optimal decision that reduces driving time. \n\nIt should be no surprise that this is _not_ how things work in the real world. The real world is messy, contains an abundance of variables, and these variables manifest into **uncertainty**. The question that I have become obsessed with is as follows:\n\n> How can we reason optimally in complex and uncertain situations? \n\nFor instance, let's now say that your company sells widgets. You, as a person in marketing, are in charge of coming up with sales offerings around the holiday season. 
Your initial intuition is that if you give out 20 dollar coupons to anyone who makes a purchase, you will get more purchases. However, there is a competing hypothesis from your colleague that suggests offering a discount to customers who make over 1000 dollars in purchases would actually be more effective at generating revenue. You have some historical time series data, but _none_ of it was collected under the exact conditions of the hypotheses you are both proposing (i.e., you have no data that was collected during a 20 dollar coupon period, or during a 1000 dollar purchase discount period). The data that you have is necessarily incomplete, and even if it weren't, we would still have to confront the following logical problem:\n\n> How do we use data (frequencies of events) to estimate plausibilities of beliefs?\n\nIn other words, how can we use the data at hand to estimate how plausible one hypothesis is compared to the other? This question is the central focus of _Probability Theory: The Logic of Science_, by E.T. Jaynes. Often viewed as the first text to make probability theory a \"hard\" branch of mathematics (compared to a group of ad hoc methods), it is an incredibly ambitious and thought-provoking book that should be on any data scientist's or statistician's bookshelf. With that said, at times it is rather dense, and I wanted to take the time to create a set of articles that serve as chapter summaries. Note, these are not meant to replace the original text; rather, they can be read in tandem to clear up any sources of confusion and ensure clear understanding. \n\nWith that said, let's begin digging into the book, starting with the preface and chapter 1.\n\n## 0.1 Preface\nJaynes starts the book by stating who the intended audience is; while this is generally not very informative, here it actually has a good deal of merit! 
He states that the material is meant to help those who are in fields such as physics, chemistry, biology, geology, medicine, economics, sociology, engineering, operations research, etc.; any field where **inference** is needed. He adds a footnote stating that by inference he means:\n\n> **Inference:** **Deductive** reasoning whenever enough information is at hand to permit it; **inductive** or _plausible_ reasoning when the necessary information is not available (as it almost never is in real problems). If a problem can be solved with deductive reasoning, probability theory is not needed for it. Thus, the topic of this book is **the optimal processing of incomplete information**.\n\nAs a data scientist this type of footnote should send tingles down your spine. In nearly every situation you will encounter, you are treading the line of making inferences based on a combination of deductive and inductive reasoning; who wouldn't want to be making those inferences in an optimal way? \n\nNow, for those unfamiliar with the concepts of deduction and induction I recommend checking out this section of my article on Bayesian Inference. But, for a quick recap, we can think of **deduction** as forming some hypothesis about the state of the world or its workings, gathering data about that state, and then seeing if our data confirms or denies our hypothesis. It can be visualized below:\n\n\n\nThat is deduction. We perform deduction every day with relative ease. On the other hand we have **induction**, which works in the opposite direction:\n\n\n\nHere, we gather data about the world around us, and after picking up on a pattern we induce a hypothesis, and then work towards concluding its validity. Now, it should be noted that this idea of reasoning from evidence to hypothesis can be thought of as a parallel of reasoning from effect to cause. \n\nAs a small example (that I came across when reading _The Book of Why_ by Judea Pearl), consider Sherlock Holmes. 
Imagine that Sherlock Holmes was called to investigate a broken window. Sure, he could go about it deductively and, while driving to the location of the window (i.e. before gathering any data), generate the hypothesis that it was a group of children who broke the window. This is what humans do with ease, daily. However, what truly made Sherlock Holmes so powerful was his ability to perform induction. In this case, he arrives at the scene of the broken window, sees glass scattered about, and surely a host of other interesting things; he takes all of this _data_ and then generates a hypothesis about how the window was broken. \n\nIf that example leads you to feel slightly intimidated by the process of induction, do not worry! Without digging into it too deeply, there is actually an _asymmetry_ going on here that can lead you into _chaos theory_. For this I will borrow another example, this time from Nassim Taleb. Imagine that you want to understand what happens when you leave a block of ice out at room temperature. You hypothesize that it will melt (forming your hypothesis). You then take 10 blocks of ice, leave them all out at room temperature, and see that they do indeed all melt (collecting data). You just performed a very straightforward deduction, making use of what is referred to as the **forward process**. \n\nConsider the inverse case now. Imagine that you walk into your kitchen and see that there is a puddle of water on the ground. You try to gather data, seeing if there is a leak in a pipe anywhere, if a cup spilled, etc. There is an incredibly long list of ways that this water could have gotten there. Trying to determine (from what is most likely incomplete data) that the water is actually there due to a block of ice that melted is incredibly challenging. This is known as the **backward process**. 
If you are interested in this type of problem I recommend looking at the [Butterfly Effect](https://en.wikipedia.org/wiki/Butterfly_effect); for now I will leave it here to prevent going down a rabbit hole. \n\n## 0.2 The Theme of the Book\nTo give us a north star to focus on, I want to take a moment to highlight what is essentially the theme of the book:\n\n> **Probability theory as extended logic**.\n\nWe will dig into this much further in subsequent posts, but what this book does is create a framework in which the rules of probability theory can be viewed as uniquely valid principles of _logic_ in general, leaving out reference to _chance_ or _random variables_. This allows the imaginary distinction between probability theory and statistical inference to disappear, allowing logical unity as well as greater technical power. \n\nThis theme amounts to recognition that the mathematical rules of probability theory are not merely rules for calculating frequencies of \"random variables\"; they are also the unique and consistent rules for conducting inference (i.e. plausible reasoning) of any kind. \n\nThis set of rules will automatically include all **Bayesian** calculations, as well as all **frequentist** calculations. Nevertheless, our basic rules are broader than either of these, and in many applications the calculations will not fit into either category. As explained by Jaynes:\n\n> The traditional frequentist methods which only use sampling distributions are usable and useful in particularly simple, idealized problems; however, they represent the most proscribed cases of probability theory, because they presuppose conditions (independent repetitions of a 'random experiment' but no relevant prior information) that are hardly ever met in real problems. This approach is quite inadequate for the current needs of science. 
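To make the role of relevant prior information concrete, here is a small hypothetical sketch (my own illustration, not from the book) of a conjugate Bayesian update for a coin's heads-probability. The Beta prior carries exactly the kind of prior information that a sampling-distribution-only analysis discards:

```python
# Hypothetical illustration: updating a Beta prior over a coin's heads-probability.
# Beta(a, b) is conjugate to the binomial likelihood, so the posterior is exact:
# observing `heads` heads and `tails` tails gives Beta(a + heads, b + tails).

def update_beta(a, b, heads, tails):
    """Posterior Beta parameters after observing `heads` and `tails`."""
    return a + heads, b + tails

# Two states of prior knowledge: no opinion vs. a strong belief the coin is fair.
priors = {"flat Beta(1,1)": (1, 1), "fair Beta(50,50)": (50, 50)}
data = (8, 2)  # the same observed data for both: 8 heads, 2 tails

for name, (a, b) in priors.items():
    a_post, b_post = update_beta(a, b, *data)
    mean = a_post / (a_post + b_post)  # posterior mean of P(heads)
    print(f"{name} -> posterior mean P(heads) = {mean:.3f}")
```

With only ten tosses, the flat prior is pulled all the way to 0.75 while the strong fair-coin prior barely moves (about 0.53): the data are identical, and the inferences differ precisely because the prior information differs.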
\n\nJaynes proceeds to dig into the idea of **prior** information/knowledge, and how it is essential that it is included in data analysis and inference. He writes that a false premise built into a model which is never questioned cannot be removed by any amount of new data. The use of models which correctly represent the prior information that scientists have about the mechanism at work can prevent such folly in the future. By ignoring prior information we not only set ourselves up to fail in the inference we are trying to make, but we also risk stalling scientific progress itself. _No amount of analyzing coin tossing data by a stochastic model could have led us to the discovery of Newtonian Mechanics, which alone determines those data_. \n\nWith our stage set, we are ready to dive into what is one of the most thought-provoking and eye-opening books in the realm of mathematics and science. Enter chapter 1. \n\n# 1. Chapter 1: Plausible Reasoning\nSuppose that on some dark night a policeman walks down a street, apparently deserted. Suddenly he hears a burglar alarm, looks across the street, and sees a jewelry store with a broken window. Then a gentleman wearing a mask comes crawling through the broken window, carrying a bag which turns out to be full of expensive jewelry. The policeman doesn't hesitate at all in deciding that this gentleman is dishonest. But the question is: **By what reasoning process does he arrive at this conclusion?**\n\n#### 1.1 Deductive and Plausible Reasoning\nKeep in mind the key theme of this book, these articles, and more importantly data science in general: _How can we reason optimally in complex and uncertain situations?_ It may seem as though this preface and introduction get slightly into the weeds; this is a book on probability after all, right? 
Jaynes goes to great lengths to work through the _philosophy_ behind this vantage point of probability (seen as extended logic instead of based on the standard Kolmogorov axioms); it is not unreasonable to ask why. The key lies in the fact that Jaynes is not simply trying to replace one axiomatic framework with another, but rather he is trying to build up a way to _reason_ consistently and logically, and apply that reasoning to any area of inference. I recommend taking a moment to appreciate that and let it sink in. Many of the problems that Jaynes works through in the forthcoming chapters could have been solved with standard orthodox or Bayesian statistics; what makes this framework so powerful is that it provides a way to _reason consistently_. You are not required to hold in your head a bag of ad hoc methods and techniques. Rather, you have a methodology that allows you to approach and reason about any problem logically and consistently. Surely this is what any scientifically minded problem solver would hope for! \n\nWith that said, let's return to our problem. A bit of thought makes it clear that the policeman's conclusion was not a logical deduction from the evidence; there may have been a perfectly reasonable explanation for everything! For instance, it is certainly _possible_ that the man was the owner of the jewelry store and was coming home from a masquerade party, and as he passed his store some teenagers threw a rock at the window, and he was simply trying to protect his property. \n\nSo, clearly the policeman's reasoning process was not strictly logical deduction. However, we can grant that it did possess a high degree of validity. The evidence did not make the gentleman's dishonesty _certain_, but it did make it extremely _plausible_. This is the type of reasoning that humans have become proficient with long before studying mathematical theories. We encounter an abundance of these decisions daily (will it rain or won't it?) 
where we do not have enough information to permit deductive reasoning; but still we must decide immediately what to do. \n\nNow, in spite of how familiar this process is to all of us, the formation of plausible conclusions is very subtle. This book allows us to replace intuitive judgements with definite theorems, and ad hoc procedures are replaced by rules that are determined uniquely by some elementary and inescapable criteria of rationality. \n\nNow, let's take a moment to try and place this within the context of Aristotelian logic; that is the most appropriate starting point to dissect the difference between **[deductive reasoning](https://en.wikipedia.org/wiki/Deductive_reasoning)** and **plausible reasoning**. \n\n#### Deductive Reasoning\nDeductive reasoning is defined as the process of reasoning from one or more **statements** (premises) in order to reach a logically certain conclusion. We can think of deductive reasoning as follows:\n\n1. All men are mortal. (First premise)\n2. Socrates is a man. (Second premise)\n3. Therefore, Socrates is mortal. (Conclusion)\n\nAnd this is defined mathematically below (known as **[modus ponens](https://en.wikipedia.org/wiki/Deductive_reasoning#Modus_ponens)**, the law of detachment): \n\n$$\n\begin{equation}\n\text{If A is true, then B is true} \\\n\frac{\text{A is true}}\n{\text{therefore, B is true}}\n\end{equation}\n$$\n\nIt is the primary deductive rule of inference. An example would be:\n1. If it is raining, then there are clouds in the sky.\n2. It is raining.\n3. Therefore, there are clouds in the sky.\n\nThere is also the contrapositive (known as **[modus tollens](https://en.wikipedia.org/wiki/Deductive_reasoning#Modus_tollens)**, the law of contrapositive), another deductive rule of inference:\n\n$$\n\begin{equation}\n\text{If A is true, then B is true} \\\n\frac{\text{B is false}}\n{\text{therefore, A is false}}\n\end{equation}\n$$\n\nAnd we have the corresponding simple example:\n1. 
If it is raining, then there are clouds in the sky.\n2. There are no clouds in the sky.\n3. Thus, it is not raining.\n\nBoth of the above are known as **strong syllogisms**. Now in general, Deductive reasoning (\"top-down logic\") contrasts with inductive reasoning (\"bottom-up logic\") in the following way; in deductive reasoning, a conclusion is reached reductively by applying general rules which hold over the entirety of a closed domain of discourse, narrowing the range under consideration until only the conclusion(s) is left. In inductive reasoning, the conclusion is reached by generalizing or extrapolating from specific cases to general rules, i.e., there is epistemic uncertainty (for example the black swan). However, the inductive reasoning mentioned here is not the same as induction used in mathematical proofs \u2013 mathematical induction is actually a form of deductive reasoning.\n\n\n```javascript\n%%javascript\nMathJax.Hub.Config({\n TeX: { equationNumbers: { autoNumber: \"AMS\" } }\n});\n```\n\n\n \n\n\n\n```python\n\n```\n", "meta": {"hexsha": "9169e4748da8a481f6d91a91b1962f6cf4fde708", "size": 18946, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Mathematics/07-Probability_Theory,_The_Logic_of_Science-01-Chapter-1.ipynb", "max_stars_repo_name": "NathanielDake/nathanieldake.github.io", "max_stars_repo_head_hexsha": "82b7013afa66328e06e51304b6af10e1ed648eb8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-03-30T06:28:21.000Z", "max_stars_repo_stars_event_max_datetime": "2018-04-25T15:43:24.000Z", "max_issues_repo_path": "Mathematics/07-Probability_Theory,_The_Logic_of_Science-01-Chapter-1.ipynb", "max_issues_repo_name": "NathanielDake/nathanieldake.github.io", "max_issues_repo_head_hexsha": "82b7013afa66328e06e51304b6af10e1ed648eb8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, 
"max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematics/07-Probability_Theory,_The_Logic_of_Science-01-Chapter-1.ipynb", "max_forks_repo_name": "NathanielDake/nathanieldake.github.io", "max_forks_repo_head_hexsha": "82b7013afa66328e06e51304b6af10e1ed648eb8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-02-07T22:21:33.000Z", "max_forks_repo_forks_event_max_datetime": "2018-05-04T20:16:43.000Z", "avg_line_length": 98.1658031088, "max_line_length": 1340, "alphanum_fraction": 0.7331890637, "converted": true, "num_tokens": 3524, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.45326184801538616, "lm_q2_score": 0.23091975234373585, "lm_q1q2_score": 0.104667113690577}} {"text": "# Final assignment\n\n## Deadline: March 31th 2022 at 16:00\n\n## Instructions\n\n* Write a report of all the problems that you have solved explaining clearly all the steps that you have taken in solving them. Include also the code that you used. **Remember to add comments to your answers and include also some testing part!**\n\n* You can use any packages or tools that you see most fit for the purpose.\n\n* Your work will be evaluated based on your report and the tools used. Therefore, pay also attention to how you prepare the report having a clear and logical structure.\n\n* **Submit your answers by using Moodle.** The link is https://moodle.jyu.fi/mod/assign/view.php?id=705286&forceview=1.\n\n* *Reminder about course grading: weekly exercises **65%**, assignment **35%*** **(modified after mixing exercise 8 and the final assignment)**.\n\n* Max 25 points (5 for the first three problems and 10 points for problem 4)\n\n## Problem 1\n\nA window is being built and the bottom is a rectangle and the top is a semicircle. 
If there is 12 m of framing materials, what must the dimensions of the window be to make the window area as big as possible?\n\nModel the decision problem as an optimization problem and solve it with a method of your choosing. **Analyze the result!**\n\n## Problem 2\n\nThe 10-dimensional Rosenbrock function (one of the variants) is defined as\n$$\nf(\mathbf{x}) = \sum_{i=1}^{9} 100 (x_{i+1} - x_i^2 )^2 + (1-x_i)^2\n$$\nfor $x\in\mathbb R^{10}$. \n\nCompare at least two different optimization methods' performance in minimizing this function over $\mathbb R^{10}$. You can decide on the method of comparison as the one that makes the most sense to you. **Analyze the results!**\n\n## Problem 3\n\nStudy the biobjective optimization problem\n$$\n\begin{align}\n\min \ &(\|x-(1,0)\|,\|x-(0,1)\|)\\\n\text{s.t. }&x\in \mathbb R^2.\n\end{align}\n$$\nTry to generate an evenly spread representation of the Pareto front. Plot the results in both the decision and objective spaces. **Analyze the results!**\n\n## Problem 4\n\nFind an application of optimization in a scientific paper, and replicate the results.\n\nIf you do not have the data or the model, then you need to come up with mock-up data or a model yourself. If you do not have the data and you cannot come up with any model or data that would somehow resemble the model or the data in the paper, then you need to write a description of why this could not be done.\n\nIf you want, you can apply any other optimization method than what was applied in the paper. 
This would actually be interesting, since then we can compare the results.\n", "meta": {"hexsha": "a7407d8c4f288d5a72e7b2a488d205e3c8abb185", "size": 4324, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Final assignment.ipynb", "max_stars_repo_name": "bshavazipour/TIES483-2022", "max_stars_repo_head_hexsha": "93dfabbfe1e953e5c5f83c44412963505ecf575a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Final assignment.ipynb", "max_issues_repo_name": "bshavazipour/TIES483-2022", "max_issues_repo_head_hexsha": "93dfabbfe1e953e5c5f83c44412963505ecf575a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Final assignment.ipynb", "max_forks_repo_name": "bshavazipour/TIES483-2022", "max_forks_repo_head_hexsha": "93dfabbfe1e953e5c5f83c44412963505ecf575a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-02-03T09:40:02.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-03T09:40:02.000Z", "avg_line_length": 31.5620437956, "max_line_length": 320, "alphanum_fraction": 0.6031452359, "converted": true, "num_tokens": 667, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.3886180267058489, "lm_q2_score": 0.26894141551050293, "lm_q1q2_score": 0.10451548219516943}} {"text": "```python\n# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)\n\n# Toggle cell visibility\n\nfrom IPython.display import HTML\ntag = HTML('''\nToggle cell visibility here.''')\ndisplay(tag)\n```\n\n\n\nToggle cell visibility here.\n\n\n\n```python\n%matplotlib notebook\nimport numpy as np\nfrom scipy.integrate import odeint\nimport matplotlib.pyplot as plt\nimport ipywidgets as widgets\n```\n\n## Linearization - Simple Pendulum\n\nWhen working with system models, linearization is the procedure of approximating a nonlinear system by a linear differential equation in the neighbourhood of some operating point (usually an equilibrium point). In this example, the procedure is demonstrated on a simple pendulum. The force that produces the oscillatory motion of the pendulum (shown in the figure below) is $-mg\sin\theta$. The equation of motion of the pendulum is:\n\n\begin{equation}\n mL^2\frac{d^2\theta}{dt^2}=-mg\sin\theta\, L.\n\end{equation}\n\nAfter rearranging, we obtain the following nonlinear second-order differential equation:\n\n\begin{equation}\n \frac{d^2\theta}{dt^2}+\frac{g}{L}\sin\theta=0.\n\end{equation}\n\nFor small angular displacements the small-angle approximation holds (i.e. $\sin\theta\approx\theta$), and the following linear second-order differential equation is obtained:\n\n\begin{equation}\n \frac{d^2\theta}{dt^2}+\frac{g}{L}\theta=0.\n\end{equation}\n\n---\n\n
Simple pendulum
\n\n### How to use this interactive example?\n\nMove the sliders to change the pendulum length $L$ and the values of the initial conditions $\theta_0$ and $\dot{\theta_0}$.\n\n\n```python\n# create figure\nfig = plt.figure(figsize=(9.8, 3),num='Linearization - simple pendulum')\n\n# add subplot\nax = fig.add_subplot(111)\nax.set_title('Time response')\nax.set_ylabel('output')\nax.set_xlabel('$t$ [s]')\nax.axhline(y=0, xmin=-1, xmax=6, color='k', linewidth=1)\n\nax.grid(which='both', axis='both', color='lightgray')\n\nnonlinear, = ax.plot([], [])\nlinear, = ax.plot([], [])\n\nstyle = {'description_width': 'initial'}\n\ng=9.81 # m/s^2\n\ndef model_nonlinear(ic,t,L):\n fi, fidot = ic\n return [fidot,-g/L*np.sin(fi)]\n\ndef model_linear(ic,t,L):\n fi, fidot = ic\n return [fidot,-g/L*fi]\n\ndef build_model(y0,ypika0,L):\n ic=[y0,ypika0]\n t=np.linspace(0,5,num=500)\n fi=odeint(model_nonlinear,ic,t,args=(L,))\n ys=fi[:,0]\n\n fi_linear=odeint(model_linear,ic,t,args=(L,))\n ys_linear=fi_linear[:,0]\n \n global nonlinear, linear\n\n ax.lines.remove(nonlinear)\n ax.lines.remove(linear)\n \n nonlinear, = ax.plot(t,ys,label='original',color='C0', linewidth=5)\n linear, = ax.plot(t,ys_linear,label='linearization',color='C3', linewidth=2)\n \n ax.legend()\n \n ax.relim()\n ax.autoscale_view()\n\nL_slider=widgets.FloatSlider(value=0.3, min=.01, max=2., step=.01,\n description='$L$ [m]:',disabled=False,continuous_update=False,\n orientation='horizontal',readout=True,readout_format='.2f',)\n\nypika0_slider=widgets.FloatSlider(value=1, min=-3, max=3, step=0.1,\n description='$\dot \\theta_0$ [rad/s]:',disabled=False,continuous_update=False,\n orientation='horizontal',readout=True,readout_format='.2f',) \n\ny0_slider=widgets.FloatSlider(value=1, min=-3, max=3, step=0.1,\n description='$\\theta_0$ [rad]:',disabled=False,continuous_update=False,\n orientation='horizontal',readout=True,readout_format='.2f',) \n\ninput_data=widgets.interactive_output(build_model, 
{'y0':y0_slider,'ypika0':ypika0_slider,'L':L_slider})\n\ndisplay(L_slider,y0_slider,ypika0_slider,input_data)\n```\n\n\n \n\n\n\n\n\n\n\n FloatSlider(value=0.3, continuous_update=False, description='$L$ [m]:', max=2.0, min=0.01, step=0.01)\n\n\n\n FloatSlider(value=1.0, continuous_update=False, description='$\\\\theta_0$ [rad]:', max=3.0, min=-3.0)\n\n\n\n FloatSlider(value=1.0, continuous_update=False, description='$\\\\dot \\\\theta_0$ [rad/s]:', max=3.0, min=-3.0)\n\n\n\n Output()\n\n", "meta": {"hexsha": "10f9e2a6c7a4a869d70ae620c26a9c1d4d2b5612", "size": 119128, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ICCT_hr/examples/02/TD-06-Linearizacija_njihalo.ipynb", "max_stars_repo_name": "ICCTerasmus/ICCT", "max_stars_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-05-22T18:42:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-03T14:10:22.000Z", "max_issues_repo_path": "ICCT_hr/examples/02/.ipynb_checkpoints/TD-06-Linearizacija_njihalo-checkpoint.ipynb", "max_issues_repo_name": "ICCTerasmus/ICCT", "max_issues_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICCT_hr/examples/02/.ipynb_checkpoints/TD-06-Linearizacija_njihalo-checkpoint.ipynb", "max_forks_repo_name": "ICCTerasmus/ICCT", "max_forks_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-24T11:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-29T16:36:18.000Z", "avg_line_length": 112.8106060606, "max_line_length": 75963, "alphanum_fraction": 0.7892686858, "converted": true, "num_tokens": 1356, 
"lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.480478678047907, "lm_q2_score": 0.21733752104706247, "lm_q1q2_score": 0.10442604480290174}} {"text": "\n\n## Data-driven Design and Analyses of Structures and Materials (3dasm)\n\n## Lecture 6\n\n### Miguel A. Bessa | M.A.Bessa@tudelft.nl | Associate Professor\n\n**What:** A lecture of the \"3dasm\" course\n\n**Where:** This notebook comes from this [repository](https://github.com/bessagroup/3dasm_course)\n\n**Reference for entire course:** Murphy, Kevin P. *Probabilistic machine learning: an introduction*. MIT press, 2022. Available online [here](https://probml.github.io/pml-book/book1.html)\n\n**How:** We try to follow Murphy's book closely, but the sequence of Chapters and Sections is different. The intention is to use notebooks as an introduction to the topic and Murphy's book as a resource.\n* If working offline: Go through this notebook and read the book.\n* If attending class in person: listen to me (!) but also go through the notebook in your laptop at the same time. Read the book.\n* If attending lectures remotely: listen to me (!) via Zoom and (ideally) use two screens where you have the notebook open in 1 screen and you see the lectures on the other. Read the book.\n\n**Optional reference (the \"bible\" by the \"bishop\"... pun intended \ud83d\ude06) :** Bishop, Christopher M. *Pattern recognition and machine learning*. Springer Verlag, 2006.\n\n**References/resources to create this notebook:**\n* [Car figure](https://korkortonline.se/en/theory/reaction-braking-stopping/)\n\nApologies in advance if I missed some reference used in this notebook. Please contact me if that is the case, and I will gladly include it here.\n\n## **OPTION 1**. Run this notebook **locally in your computer**:\n1. Confirm that you have the 3dasm conda environment (see Lecture 1).\n\n2. 
Go to the 3dasm_course folder in your computer and pull the last updates of the [repository](https://github.com/bessagroup/3dasm_course):\n```\ngit pull\n```\n3. Open command window and load jupyter notebook (it will open in your internet browser):\n```\nconda activate 3dasm\njupyter notebook\n```\n4. Open notebook of this Lecture.\n\n## **OPTION 2**. Use **Google's Colab** (no installation required, but times out if idle):\n\n1. go to https://colab.research.google.com\n2. login\n3. File > Open notebook\n4. click on Github (no need to login or authorize anything)\n5. paste the git link: https://github.com/bessagroup/3dasm_course\n6. click search and then click on the notebook for this Lecture.\n\n\n```python\n# Basic plotting tools needed in Python.\n\nimport matplotlib.pyplot as plt # import plotting tools to create figures\nimport numpy as np # import numpy to handle a lot of things!\nfrom IPython.display import display, Math # to print with Latex math\n\n%config InlineBackend.figure_format = \"retina\" # render higher resolution images in the notebook\nplt.style.use(\"seaborn\") # style for plotting that comes from seaborn\nplt.rcParams[\"figure.figsize\"] = (8,4) # rescale figure size appropriately for slides\n```\n\n## Outline for today\n\n* Continuation of previous lecture: Bayesian inference for one hidden rv\n - Prior\n - Likelihood\n - Marginal likelihood\n - Posterior\n - Gaussian pdf's product\n\n**Reading material**: This notebook + Chapter 3\n\n## Recap of Lecture 5: car stopping distance with known $x$ and $p(z_2)$\n\nWe focused on the car stopping distance problem with two rv's under the following conditions:\n* We kept $x=75$ m/s.\n* The \"true\" distribution of one of the rv's was known: $p(z_2)=\\mathcal{N}(\\mu_{z_2}=0.1,\\sigma_{z_2}^2=0.01^2)$\n* But the distribution of the other rv ($z \\equiv z_1$) is not known: $p(z)=?$\n\nUnder these conditions, recall the \"true\" model by observing the following plot, including some data 
observations.\n\n\n```python\n# This cell is hidden during presentation. It's just to define a function to plot the governing model of\n# the car stopping distance problem. Defining a function that creates a plot allows to repeatedly run\n# this function on cells used in this notebook.\ndef car_fig_2rvs(ax):\n x = np.linspace(3, 83, 1000)\n mu_z1 = 1.5; sigma_z1 = 0.5; # parameters of the \"true\" p(z_1)\n mu_z2 = 0.1; sigma_z2 = 0.01; # parameters of the \"true\" p(z_2)\n mu_y = mu_z1*x + mu_z2*x**2 # From Homework of Lecture 4\n sigma_y = np.sqrt( (x*sigma_z1)**2 + (x**2*sigma_z2)**2 ) # From Homework of Lecture 4\n ax.set_xlabel(\"x (m/s)\", fontsize=20) # create x-axis label with font size 20\n ax.set_ylabel(\"y (m)\", fontsize=20) # create y-axis label with font size 20\n ax.set_title(\"Car stopping distance problem with two rv's\", fontsize=20); # create title with font size 20\n ax.plot(x, mu_y, 'k:', label=\"Governing model $\\mu_y$\")\n ax.fill_between(x, mu_y - 1.9600 * sigma_y,\n mu_y + 1.9600 * sigma_y,\n color='k', alpha=0.2,\n label='95% confidence interval ($\\mu_y \\pm 1.96\\sigma_y$)') # plot 95% credence interval\n ax.legend(fontsize=15)\n```\n\n\n```python\n# This cell is also hidden during presentation.\nfrom scipy.stats import norm # import the normal dist, as we learned before!\ndef samples_y_with_2rvs(N_samples,x): # observations/measurements/samples for car stop. dist. prob. 
with 2 rv's\n mu_z1 = 1.5; sigma_z1 = 0.5;\n mu_z2 = 0.1; sigma_z2 = 0.01;\n samples_z1 = norm.rvs(mu_z1, sigma_z1, size=N_samples) # randomly draw samples from the normal dist.\n samples_z2 = norm.rvs(mu_z2, sigma_z2, size=N_samples) # randomly draw samples from the normal dist.\n samples_y = samples_z1*x + samples_z2*x**2 # compute the stopping distance for samples of z_1 and z_2\n return samples_y # return samples of y\n```\n\n\n```python\n# vvvvvvvvvvv this is just a trick so that we can run this cell multiple times vvvvvvvvvvv\nfig_car_new, ax_car_new = plt.subplots(1,2); plt.close() # create figure and close it\nif fig_car_new.get_axes():\n del ax_car_new; del fig_car_new # delete figure and axes if they exist\n fig_car_new, ax_car_new = plt.subplots(1,2) # create them again\n# ^^^^^^^^^^^ end of the trick ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nN_samples = 3 # CHANGE THIS NUMBER AND RE-RUN THE CELL\nx = 75; empirical_y = samples_y_with_2rvs(N_samples, x); # Empirical measurements of N_samples at x=75\nempirical_mu_y = np.mean(empirical_y); empirical_sigma_y = np.std(empirical_y); # empirical mean and std\ncar_fig_2rvs(ax_car_new[0]) # a function I created to include the background plot of the governing model\nfor i in range(2): # create two plots (one is zooming in on the error bar)\n ax_car_new[i].errorbar(x , empirical_mu_y,yerr=1.96*empirical_sigma_y, fmt='m*', markersize=15);\n ax_car_new[i].scatter(x*np.ones_like(empirical_y),empirical_y, s=40,\n facecolors='none', edgecolors='k', linewidths=2.0)\nprint(\"Empirical mean[y] is\",empirical_mu_y, \"(real mean[y]=675)\")\nprint(\"Empirical std[y] is\",empirical_sigma_y,\"(real std[y]=67.6)\")\nfig_car_new.set_size_inches(25, 5) # scale figure to be wider (since there are 2 subplots)\n```\n\n## Recap of Lecture 5: Summary of our model\n\n1. 
The **observation distribution**:\n\n$$\np(y|z) = \\mathcal{N}\\left(y | \\mu_{y|z}=w z+b, \\sigma_{y|z}^2\\right) = \\frac{1}{C_{y|z}} \\exp\\left[ -\\frac{1}{2\\sigma_{y|z}^2}(y-\\mu_{y|z})^2\\right]\n$$\n\nwhere $C_{y|z} = \\sqrt{2\\pi \\sigma_{y|z}^2}$ is the **normalization constant** of the Gaussian pdf, and where $\\mu_{y|z}=w z+b$, with $w$, $b$ and $\\sigma_{y|z}^2$ being constants.\n\n2. and the **prior distribution**: $p(z) = \\frac{1}{C_z}$\n\nwhere $C_z = z_{max}-z_{min}$ is the **normalization constant** of the Uniform pdf, i.e. the value that guarantees that $p(z)$ integrates to one.\n\n### Recap of Lecture 5: Data\n\n* Since we usually don't know the true process, we can only observe/collect data $y=\\mathcal{D}_y$:\n\n\n```python\nprint(\"Example of N=%1i data points for y at x=%1.1f m/s with :\" % (N_samples,x), empirical_y)\n```\n\n    Example of N=3 data points for y at x=75.0 m/s with : [641.47373038 733.92797742 637.52728953]\n\n\n## Recap of Lecture 5: Posterior from Bayes' rule applied to data\n\nUse Bayes' rule applied to data to determine the posterior:\n\n$\\require{color}$\n$$\n{\\color{green}p(z|y=\\mathcal{D}_y)} = \\frac{ {\\color{blue}p(y=\\mathcal{D}_y|z)}{\\color{red}p(z)} } {p(y=\\mathcal{D}_y)}\n$$\n\nThat requires calculating the likelihood (here, it results from a product of Gaussian densities):\n\n$$\n{\\color{blue}p(y=\\mathcal{D}_y | z)} = \\frac{1}{|w|^N} \\cdot C \\cdot \\frac{1}{\\sqrt{2\\pi \\sigma^2}} \\exp\\left[ -\\frac{1}{2\\sigma^2}(z-\\mu)^2\\right]\n$$\n\nwhere $\\mu = \\frac{w^2\\sigma^2}{\\sigma_{y|z}^2} \\sum_{i=1}^N \\mu_i$\n\n$\\sigma^2 = \\frac{\\sigma_{y|z}^2}{w^2 N}$, and\n\n$C = \\frac{1}{(2\\pi)^{(N-1)/2}} \\sqrt{\\frac{\\sigma^2}{\\left( \\frac{\\sigma_{y|z}^2}{w^2}\\right)^N}}\n$\n\nAfter calculating the likelihood, we determined the marginal likelihood:\n\n$$\np(y=\\mathcal{D}_y) = \\frac{C}{|w|^N C_z}\n$$\n\nFrom which we got the 
posterior:\n\n$$\\require{color}\\begin{align}\n{\\color{green}p(z|y=\\mathcal{D}_y)} &= \\frac{ p(y=\\mathcal{D}_y|z)p(z) } {p(y=\\mathcal{D}_y)} \\\\\n&= \\frac{1}{p(y=\\mathcal{D}_y)} \\cdot \\frac{1}{|w|^N} C \\cdot \\mathcal{N}(z|\\mu,\\sigma^2) \\cdot \\frac{1}{C_z} \\\\\n&= \\mathcal{N}(z|\\mu, \\sigma^2)\n\\end{align}\n$$\n\nwhich is a **normalized** Gaussian pdf in $z$ with mean and variance as shown in the previous cell.\n\n## Determining the Posterior Predictive Distribution (PPD) from the posterior\n\nHowever, as we mentioned, Bayes' rule is just a way to calculate the posterior:\n\n$$\np(z|y=\\mathcal{D}_y) = \\frac{ p(y=\\mathcal{D}_y|z)p(z) } {p(y=\\mathcal{D}_y)}\n$$\n\nWhat we really want is the Posterior Predictive Distribution (PPD) . This comes after calculating the posterior given some data $\\mathcal{D}_y$:\n\n$$\\require{color}\n{\\color{orange}p(y|y=\\mathcal{D}_y)} = \\int p(y|z) p(z|y=\\mathcal{D}_y) dz\n$$\n\nwhich is often written in simpler notation: $p(y|\\mathcal{D}_y) = \\int p(y|z) p(z|\\mathcal{D}_y) dz$\n\n$$\\require{color}\n{\\color{orange}p(y|\\mathcal{D}_y)} = \\int \\underbrace{p(y|z)}_{\\text{observation}\\\\ \\text{distribution}} \\overbrace{p(z|y=\\mathcal{D}_y)}^{\\text{posterior}} dz\n$$\n\nConsidering the terms we found before, we get:\n\n$$\\begin{align}\np(y|\\mathcal{D}_y) &= \\int \\underbrace{\\frac{1}{|w|}\\frac{1}{\\sqrt{2\\pi \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}} \\exp\\left\\{ -\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\left[z-\\left(\\frac{y-b}{w}\\right)\\right]^2\\right\\} }_{\\text{observation}\\\\ \\text{distribution}} \\overbrace{\\mathcal{N}(z|\\mu, \\sigma^2)}^{\\text{posterior}} dz\n\\end{align}\n$$\n\n$$\\require{color}\n{\\color{orange}p(y|\\mathcal{D}_y)} = \\frac{1}{|w|} \\int {\\color{blue}\\frac{1}{\\sqrt{2\\pi \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}} \\exp\\left\\{ 
-\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\left[z-\\left(\\frac{y-b}{w}\\right)\\right]^2\\right\\}} \\mathcal{N}(z|\\mu, \\sigma^2) dz\n$$\n\n$$\\require{color}\n{\\color{orange}p(y|\\mathcal{D}_y)} = \\frac{1}{|w|} \\int \\mathcal{N}\\left(z\\left|\\frac{y-b}{w}, \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2\\right.\\right) \\mathcal{N}(z|\\mu, \\sigma^2) dz\n$$\n\nThis is (again!) the product of two Gaussians!\n\nIn Lecture 5 (and the Homework!) you saw (and demonstrated!) that the product of two or more univariate (and multivariate!) Gaussians is...\n\n* Another Gaussian! Although it needs to be scaled by a constant...\n\nSo, we conclude that the PPD is an integral of a Gaussian:\n\n$$\\require{color}\n{\\color{orange}p(y|\\mathcal{D}_y)} = \\frac{1}{|w|} \\int C^* \\mathcal{N}\\left(z|\\mu^*, \\left(\\sigma^*\\right)^2\\right) dz\n$$\n\n\nwhere $\\mu^* = \\left(\\sigma^* \\right)^2 \\left( \\frac{\\mu}{\\sigma^2} + \\frac{(y-b)/w}{\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2} \\right) = \\left(\\sigma^* \\right)^2 \\left( \\frac{\\mu}{\\sigma^2} + \\frac{(y-b)\\cdot w}{\\sigma_{y|z}^2} \\right)$\n\n$\\left( \\sigma^* \\right)^2 = \\frac{1}{\\frac{1}{\\sigma^2}+\\frac{1}{\\left( \\frac{\\sigma_{y|z}}{w}\\right)^2}}= \\frac{1}{\\frac{1}{\\sigma^2}+\\frac{w^2}{\\sigma_{y|z}^2}}$\n\n$C^* = \\frac{1}{\\sqrt{2\\pi \\left( \\sigma^2 + \\frac{\\sigma_{y|z}^2}{w^2} \\right)}}\\exp\\left[ - \\frac{\\left(\\mu - \\frac{y-b}{w}\\right)^2}{2\\left( \\sigma^2+\\frac{\\sigma_{y|z}^2}{w^2}\\right)}\\right]$\n\nThis integral is simple to solve!\n\n$$\\require{color}\n\\begin{align}\n{\\color{orange}p(y|\\mathcal{D}_y)} &= \\frac{1}{|w|} \\int C^* \\mathcal{N}\\left(z|\\mu^*, \\left(\\sigma^*\\right)^2\\right) dz \\\\\n&= \\frac{C^*}{|w|} \\int {\\color{blue}\\mathcal{N}\\left(z|\\mu^*, \\left(\\sigma^*\\right)^2\\right)} dz\n\\end{align}\n$$\n\nWhat's the result of integrating the blue term?\n\n$$\\require{color}\n{\\color{orange}p(y|\\mathcal{D}_y)} = 
\\frac{C^*}{|w|}\n$$\n\n## Exercise 1\n\nRewrite the PPD to show that it becomes:\n\n$$\\require{color}\n{\\color{orange}p(y|\\mathcal{D}_y)} = \\mathcal{N}\\left(y| b+\\mu w, w^2\\sigma^2+\\sigma_{y|z}^2 \\right)\n$$\n\na normalized univariate Gaussian!\n\n## A long way to show that the PPD is a simple Gaussian...\n\n$$\\require{color}\n{\\color{orange}p(y|\\mathcal{D}_y)} = \\mathcal{N}\\left(y| b+\\mu w, w^2\\sigma^2+\\sigma_{y|z}^2 \\right)\n$$\n\nwhere we recall that each constant is:\n\n$b = 0.1x^2 = 562.5$\n\n$w = x = 75$\n\n$\\sigma_{y|z}^2 = (x^2 \\sigma_{z_2})^2=(75^2\\cdot0.01)^2=56.25^2$\n\n$\\sigma^2 = \\frac{\\sigma_{y|z}^2}{w^2 N} = \\frac{(x^2\\cdot\\sigma_{z_2})^2}{x^2 N} = \\frac{x^2\\cdot\\sigma_{z_2}^2}{N} $\n\n$\\mu = \\frac{w^2 \\sigma^2}{\\sigma_{y|z}^2} \\sum_{i=1}^{N} \\mu_i = \\cdots = \\frac{\\sum_{i=1}^N y_i}{w N}-\\frac{b}{w}$\n\n\n#### Note on algebra to determine $\\mu$ parameter:\n\n$$\n\\begin{align}\n\\mu &= \\frac{w^2 \\sigma^2}{\\sigma_{y|z}^2} \\sum_{i=1}^{N} \\mu_i \\\\\n&= \\frac{\\sum_{i=1}^{N} \\mu_i}{N} = \\frac{1}{N} \\sum_{i=1}^{N} \\left(\\frac{y_i-b}{w}\\right) \\\\\n&= \\frac{1}{w N} \\sum_{i=1}^N\\left( y_i-b\\right) \\\\\n&= \\frac{1}{w N} \\left( \\sum_{i=1}^N y_i-N b\\right) \\\\\n& = \\frac{\\sum_{i=1}^N y_i}{w N}-\\frac{b}{w}\n\\end{align}\n$$\n\n## A long way to show that the PPD is a simple Gaussian...\n\n$$\\require{color}\n\\begin{align}\n{\\color{orange}p(y|\\mathcal{D}_y)} &= \\mathcal{N}\\left(y| b+\\mu w, w^2\\sigma^2+\\sigma_{y|z}^2 \\right) \\\\\n&= \\mathcal{N}\\left(y \\left| \\left(\\sum_{i=1}^N \\frac{y_i}{N}\\right), \\sigma_{y|z}^2 \\left(\\frac{1}{N} + 1 \\right) \\right. \\right)\n\\end{align}\n$$\n\nwhere $y_i$ are each of the $N$ data points of the observed data $\\mathcal{D}_y$, and $\\sigma_{y|z}^2 = (x^2 \\sigma_{z_2})^2=(75^2\\cdot0.01)^2=56.25^2$ is the variance arising from the contribution of $z_2$ on $y$.\n\n* **Very Important Questions (VIQs)**: What does this result tell us? 
Did you expect this predicted distribution for $y$?\n\n\n```python\nfig_car_PPD, ax_car_PPD = plt.subplots(1,2); plt.close() # create figure and close it\nif fig_car_PPD.get_axes():\n    del ax_car_PPD; del fig_car_PPD; fig_car_PPD, ax_car_PPD = plt.subplots(1,2) # delete fig & axes & create them\nN_samples = 30000 # CHANGE THIS NUMBER AND RE-RUN THE CELL\nx = 75; empirical_y = samples_y_with_2rvs(N_samples, x); # Empirical measurements of N_samples at x=75\nempirical_mu_y = np.mean(empirical_y); empirical_sigma_y = np.std(empirical_y); # empirical mean and std\n# Calculate PPD mean and standard deviation:\nPPD_mu_y = np.mean(empirical_y); sigma_z2 = 0.01; PPD_sigma_y = np.sqrt( (x**2*sigma_z2)**2*(1/N_samples + 1) )\ncar_fig_2rvs(ax_car_PPD[0]) # a function I created to include the background plot of the governing model\nfor i in range(2): # create two plots (one is zooming in on the error bar)\n    ax_car_PPD[i].errorbar(x , empirical_mu_y,yerr=1.96*empirical_sigma_y, fmt='m*', markersize=15, elinewidth=6);\n    ax_car_PPD[i].errorbar(x , PPD_mu_y,yerr=1.96*PPD_sigma_y, color='#F39C12', fmt='*', markersize=5, elinewidth=3);\n    ax_car_PPD[i].scatter(x*np.ones_like(empirical_y),empirical_y, s=100,facecolors='none', edgecolors='k', linewidths=2.0)\nprint(\"PPD & empirical mean[y] are the same:\",empirical_mu_y, \"(real mean[y]=675)\")\nprint(\"PPD std[y] is\",PPD_sigma_y, \"& empirical std[y] is\",empirical_sigma_y,\"(real std[y]=67.6)\")\nfig_car_PPD.set_size_inches(25, 5) # scale figure to be wider (since there are 2 subplots)\n```\n\n### Reflection on what we are observing\n\n1. Generally speaking, our PPD is quite reasonable!\n * For few data points it is more reasonable than just calculating the standard deviation directly from the data.\n\n2. However, as the number of data points increases it starts getting \"overconfident\" (see PPD as $N\\rightarrow \\infty$ or play with the figure above by increasing $N$).\n * This results from our choice of prior... 
Our belief was incorrect.\n * The hidden rv $z$ is actually a Gaussian distribution, instead of a noninformative Uniform distribution\n\nPlease keep this in your head:\n* (Bayesian) ML is not magic. Every modeling choice you make affects the predictions you get.\n* Of course, there are ways of getting \"closer\" to the truth! We'll take some steps in that direction in the remainder of the course. \n\n# HOMEWORK\n\nConsider the same problem, but now starting from a different model:\n\n1. Same **observation distribution** as before:\n\n$$\np(y|z) = \\mathcal{N}\\left(y | \\mu_{y|z}=w z+b, \\sigma_{y|z}^2\\right) = \\frac{1}{C_{y|z}} \\exp\\left[ -\\frac{1}{2\\sigma_{y|z}^2}(y-\\mu_{y|z})^2\\right]\n$$\n\n2. but now assuming a different **prior distribution**: $p(z) = \\mathcal{N}\\left(z| \\overset{\\scriptscriptstyle <}{\\mu}_z=3, \\overset{\\scriptscriptstyle <}{\\sigma}_z^2=2^2\n\\right)$\n\nIn my notation, the superscript $\\overset{\\scriptscriptstyle <}{(\\cdot)}$ indicates a parameter of the prior distribution.\n\n### Notes about the prior distribution\n\n* We would have to be very lucky if our \"belief\" coincided with the \"true\" distribution of $z$.\n * Usually, we have beliefs but they are not really true (not talking about religion \ud83d\ude06).\n - Our hope is that our beliefs are at least reasonable!\n\n* When defining a prior we are making a decision about two things:\n 1. The distribution.\n * For example, in this exercise we are assuming that the prior is Gaussian (before we assumed a noninformative Uniform prior). In this case we hit the jackpot! But remember that we are cheating here... That's why we know the actual distribution of $z$ is a Gaussian!\n 2. The parameters of the distribution.\n * For example, in this exercise we are assuming values that are not the true ones! This is normal! As I said, usually we don't know the truth about the \"hidden\" variable. 
Most times we don't even know how many hidden variables we have...\n\n### See you next class\n\nHave fun!\n
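For the homework, a numerical sanity check can be handy: the closed-form posterior from a Gaussian prior times a Gaussian likelihood should match a brute-force posterior evaluated on a grid. The sketch below is an illustrative assumption, not part of the course material: it reuses the lecture's $w=x$, $b=0.1x^2$ and $\sigma_{y|z}=x^2\cdot 0.01$ at $x=75$, the homework prior $\mathcal{N}(3, 2^2)$, and simulated data.

```python
# Sanity check for the homework: conjugate (Gaussian prior x Gaussian likelihood)
# posterior vs. a brute-force grid posterior. All numbers are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = 75.0
w, b = x, 0.1 * x**2                  # mu_{y|z} = w z + b, as in the lecture
sigma_yz = x**2 * 0.01                # std of the observation distribution
mu_prior, sigma_prior = 3.0, 2.0      # the homework's Gaussian prior on z

# Simulate N noisy observations around a fixed "true" z
N = 5
z_star = 1.5
y = w * z_star + b + rng.normal(0.0, sigma_yz, size=N)

# Closed-form conjugate posterior for z
post_var = 1.0 / (1.0 / sigma_prior**2 + N * w**2 / sigma_yz**2)
post_mu = post_var * (mu_prior / sigma_prior**2 + w * np.sum(y - b) / sigma_yz**2)

# Brute-force check: unnormalized log-posterior on a grid, then normalize
z = np.linspace(-5.0, 10.0, 40001)
log_post = norm.logpdf(z, mu_prior, sigma_prior) \
         + norm.logpdf(y[:, None], w * z + b, sigma_yz).sum(axis=0)
p = np.exp(log_post - log_post.max())
p /= np.trapz(p, z)
grid_mu = np.trapz(z * p, z)
grid_sd = np.sqrt(np.trapz((z - grid_mu)**2 * p, z))

print(post_mu, np.sqrt(post_var))     # closed-form posterior mean and std
print(grid_mu, grid_sd)               # grid estimate; should agree closely
```

If the two printed rows agree, the algebra behind your homework posterior is probably right.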
```python\nfrom resources.workspace import *\n```\n\n$\n% START OF MACRO DEF\n% DO NOT EDIT IN INDIVIDUAL NOTEBOOKS, BUT IN macros.py\n%\n\\newcommand{\\Reals}{\\mathbb{R}}\n\\newcommand{\\Expect}[0]{\\mathbb{E}}\n\\newcommand{\\NormDist}{\\mathcal{N}}\n%\n\\newcommand{\\DynMod}[0]{\\mathscr{M}}\n\\newcommand{\\ObsMod}[0]{\\mathscr{H}}\n%\n\\newcommand{\\mat}[1]{{\\mathbf{{#1}}}} \n%\\newcommand{\\mat}[1]{{\\pmb{\\mathsf{#1}}}}\n\\newcommand{\\bvec}[1]{{\\mathbf{#1}}} \n%\n\\newcommand{\\trsign}{{\\mathsf{T}}} \n\\newcommand{\\tr}{^{\\trsign}} \n\\newcommand{\\tn}[1]{#1} \n\\newcommand{\\ceq}[0]{\\mathrel{\u2254}}\n%\n\\newcommand{\\I}[0]{\\mat{I}} \n\\newcommand{\\K}[0]{\\mat{K}}\n\\newcommand{\\bP}[0]{\\mat{P}}\n\\newcommand{\\bH}[0]{\\mat{H}}\n\\newcommand{\\bF}[0]{\\mat{F}}\n\\newcommand{\\R}[0]{\\mat{R}}\n\\newcommand{\\Q}[0]{\\mat{Q}}\n\\newcommand{\\B}[0]{\\mat{B}}\n\\newcommand{\\C}[0]{\\mat{C}}\n\\newcommand{\\Ri}[0]{\\R^{-1}}\n\\newcommand{\\Bi}[0]{\\B^{-1}}\n\\newcommand{\\X}[0]{\\mat{X}}\n\\newcommand{\\A}[0]{\\mat{A}}\n\\newcommand{\\Y}[0]{\\mat{Y}}\n\\newcommand{\\E}[0]{\\mat{E}}\n\\newcommand{\\U}[0]{\\mat{U}}\n\\newcommand{\\V}[0]{\\mat{V}}\n%\n\\newcommand{\\x}[0]{\\bvec{x}}\n\\newcommand{\\y}[0]{\\bvec{y}}\n\\newcommand{\\z}[0]{\\bvec{z}}\n\\newcommand{\\q}[0]{\\bvec{q}}\n\\newcommand{\\br}[0]{\\bvec{r}}\n\\newcommand{\\bb}[0]{\\bvec{b}}\n%\n\\newcommand{\\bx}[0]{\\bvec{\\bar{x}}}\n\\newcommand{\\by}[0]{\\bvec{\\bar{y}}}\n\\newcommand{\\barB}[0]{\\mat{\\bar{B}}}\n\\newcommand{\\barP}[0]{\\mat{\\bar{P}}}\n\\newcommand{\\barC}[0]{\\mat{\\bar{C}}}\n\\newcommand{\\barK}[0]{\\mat{\\bar{K}}}\n%\n\\newcommand{\\D}[0]{\\mat{D}}\n\\newcommand{\\Dobs}[0]{\\mat{D}_{\\text{obs}}}\n\\newcommand{\\Dmod}[0]{\\mat{D}_{\\text{mod}}}\n%\n\\newcommand{\\ones}[0]{\\bvec{1}} \n\\newcommand{\\AN}[0]{\\big( \\I_N - \\ones \\ones\\tr / N 
\\big)}\n%\n% END OF MACRO DEF\n$\nIn this tutorial we shall derive:\n\n# the Kalman filter for multivariate systems. \n\nThe [forecast step](T3%20-%20Univariate%20Kalman%20filtering.ipynb#Exc-3.7:-The-forecast-step:)\nremains essentially unchanged.\nThe only difference is that $\\DynMod$ is now a matrix, as well as the use of the transpose ${}^T$ in the covariance equation:\n$\\begin{align}\n\\bb_k\n&= \\DynMod_{k-1} \\hat{\\x}_{k-1} \\, , \\tag{1a} \\\\\\\n\\B_k\n&= \\DynMod_{k-1} \\bP_{k-1} \\DynMod_{k-1}^T + \\Q_{k-1} \\, . \\tag{1b}\n\\end{align}$\n\nHowever, the analysis step [[Exc 2.18](T2%20-%20Bayesian%20inference.ipynb#Exc--2.18-'Gaussian-Bayes':)] gets a little more complicated...\n\n#### Exc 2 (The likelihood):\n\nThe analysis step is only concerned with a single time (index). We therefore drop the $k$ subscript in the following.\n\n\nSuppose the observation, $\\y$, is related to the true state, $\\x$, via a (possibly rectangular) matrix, $\\bH$:\n\\begin{align*}\n\\y &= \\bH \\x + \\br \\, , \\;\\; \\qquad (2)\n\\end{align*}\nwhere the noise follows the law $\\br \\sim \\NormDist(\\bvec{0}, \\R)$ for some $\\R>0$ (i.e. 
$\\R$ is symmetric-positive-definite).\n\n\nDerive the expression for the likelihood, $p(\\y|\\x)$.\n\n\n```python\n#show_answer('Likelihood derivation')\n```\n\nThe following exercise derives the analysis step\n\n#### Exc 4 (The 'precision' form of the KF):\nSimilarly to [Exc 2.18](T2%20-%20Bayesian%20inference.ipynb#Exc--2.18-'Gaussian-Bayes':),\nit may be shown that the prior $p(\\x) = \\NormDist(\\x \\mid \\bb,\\B)$\nand likelihood $p(\\y|\\x) = \\NormDist(\\y \\mid \\bH \\x,\\R)$,\nyield the posterior:\n\\begin{align}\np(\\x|\\y)\n&= \\NormDist(\\x \\mid \\hat{\\x}, \\bP) \\tag{4}\n\\, ,\n\\end{align}\nwhere the posterior/analysis mean (vector) and covariance (matrix) are given by:\n\\begin{align}\n\t\t\t\\bP &= (\\bH\\tr \\Ri \\bH + \\Bi)^{-1} \\, , \\tag{5} \\\\\n\t\t\t\\hat{\\x} &= \\bP\\left[\\bH\\tr \\Ri \\y + \\Bi \\bb\\right] \\tag{6}\u00a0\\, ,\n\\end{align}\nProve eqns (4-6). \nHint: as in [Exc 2.18](T2%20-%20Bayesian%20inference.ipynb#Exc--2.18-'Gaussian-Bayes':), the main part lies in \"completing the square\" in $\\x$.\n\n\n```python\n#show_answer('KF precision')\n```\n\n\nWe have now derived (one form of) the Kalman filter. In the multivariate case,\nwe know how to:\n
* Propagate our estimate of $\\x$ to the next time step using eqns (1a) and (1b).\n* Update our estimate of $\\x$ by assimilating the latest observation $\\y$, using eqns (5) and (6).\n
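As a concrete illustration of these two steps, here is a minimal NumPy sketch for a made-up 2-dimensional toy system. The matrices `Mod`, `Q`, `H`, `R` and the observation are illustrative assumptions, not values from the tutorial; it propagates with eqns (1a)-(1b) and assimilates with the precision form, eqns (5)-(6).

```python
# Toy multivariate KF step: forecast (1a)-(1b) + precision-form analysis (5)-(6).
# All matrices below are made-up illustrative values.
import numpy as np

Mod = np.array([[1.0, 1.0],
                [0.0, 1.0]])        # dynamics matrix (constant-velocity toy model)
Q = 0.01 * np.eye(2)                # model-error covariance
H = np.array([[1.0, 0.0]])          # observation operator (observe 1st component)
R = np.array([[0.25]])              # observation-error covariance

xhat = np.zeros(2)                  # analysis mean at time k-1
P = np.eye(2)                       # analysis covariance at time k-1

# Forecast step, eqns (1a)-(1b)
bb = Mod @ xhat
B = Mod @ P @ Mod.T + Q

# Analysis step, "precision" form, eqns (5)-(6)
y = np.array([1.3])                 # a made-up observation
Ri, Bi = np.linalg.inv(R), np.linalg.inv(B)
P_a = np.linalg.inv(H.T @ Ri @ H + Bi)
x_a = P_a @ (H.T @ Ri @ y + Bi @ bb)

print(x_a)                          # analysis (posterior) mean
print(P_a)                          # analysis (posterior) covariance
```

Note that the precision form inverts matrices of the size of the state, which is what makes it expensive for large systems.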
\n\nHowever, the computations can be pretty expensive...\n\n#### Exc 5: Suppose $\\x$ is $M$-dimensional and has a covariance matrix $\\B$.\n * (a). What's the size of $\\B$?\n * (b). How many \"flops\" (approximately, i.e. to leading order) are required \n to compute the \"precision form\" of the KF update equation, eqn (5)?\n * (c). How much memory (bytes) is required to hold its covariance matrix $\\B$?\n * (d). How many megabytes is this if $M$ is a million?\n\n\n```python\n#show_answer('Cov memory')\n```\n\nThis is one of the principal reasons why the basic extended KF is infeasible for DA. \nThe following derives another, often more practical, form of the KF analysis update.\n\n#### Exc 6 (The \"Woodbury\" matrix inversion identity):\nThe following is known as the Sherman-Morrison-Woodbury lemma/identity,\n$$\\begin{align}\n \\bP = \\left( \\B^{-1} + \\V\\tr \\R^{-1} \\U \\right)^{-1}\n =\n \\B - \\B \\V\\tr \\left( \\R + \\U \\B \\V\\tr \\right)^{-1} \\U \\B \\, ,\n \\tag{W}\n\\end{align}$$\nwhich holds for any (suitably shaped matrices)\n$\\B$, $\\R$, $\\V,\\U$ *such that the above exists*.\n\nProve the identity. 
Hint: don't derive it, just prove it!\n\n\n```python\n#show_answer('Woodbury')\n```\n\n#### Exc 7:\n- Show that $\\B$ and $\\R$ must be square.\n- Show that $\\U$ and $\\V$ are not necessarily square, but must have the same dimensions.\n- Show that $\\B$ and $\\R$ are not necessarily of equal size.\n\n\nExc 7 makes it clear that the Woodbury identity may be used to compute $\\bP$ by inverting matrices of the size of $\\R$ rather than the size of $\\B$.\nOf course, if $\\R$ is bigger than $\\B$, then the identity is useful the other way around.\n\n#### Exc 8 (Corollary 1):\nProve that, for any symmetric, positive-definite (SPD) matrices $\\R$ and $\\B$, and any matrix $\\bH$,\n$$\\begin{align}\n \t\\left(\\bH\\tr \\R^{-1} \\bH + \\B^{-1}\\right)^{-1}\n &=\n \\B - \\B \\bH\\tr \\left( \\R + \\bH \\B \\bH\\tr \\right)^{-1} \\bH \\B \\tag{C1}\n \\, .\n\\end{align}$$\nHint: consider the properties of [SPD](https://en.wikipedia.org/wiki/Definiteness_of_a_matrix#Properties) matrices.\n\n\n```python\n#show_answer('Woodbury C1')\n```\n\n#### Exc 10 (Corollary 2):\nProve that, for the same matrices as for Corollary C1,\n$$\\begin{align}\n\t\\left(\\bH\\tr \\R^{-1} \\bH + \\B^{-1}\\right)^{-1}\\bH\\tr \\R^{-1}\n &= \\B \\bH\\tr \\left( \\R + \\bH \\B \\bH\\tr \\right)^{-1}\n \\tag{C2}\n \\, .\n\\end{align}$$\n\n\n```python\n#show_answer('Woodbury C2')\n```\n\n#### Exc 12 (The \"gain\" form of the KF):\nNow, let's go back to the KF, eqns (5) and (6). Since $\\B$ and $\\R$ are covariance matrices, they are symmetric-positive. In addition, we will assume that they are full-rank, making them SPD and invertible. \n\nDefine the Kalman gain by:\n $$\\begin{align}\n \\K &= \\B \\bH\\tr \\big(\\bH \\B \\bH\\tr + \\R\\big)^{-1} \\, . \\tag{K1}\n\\end{align}$$\n * (a) Apply (C1) to eqn (5) to obtain the Kalman gain form of analysis/posterior covariance matrix:\n$$\\begin{align}\n \\bP &= [\\I_M - \\K \\bH]\\B \\, . 
\\tag{8}\n\\end{align}$$\n\n* (b) Apply (C2) to (5) to obtain the identity\n$$\\begin{align}\n \\K &= \\bP \\bH\\tr \\Ri \\, . \\tag{K2}\n\\end{align}$$\n\n* (c) Show that $\\bP \\Bi = [\\I_M - \\K \\bH]$.\n* (d) Use (b) and (c) to obtain the Kalman gain form of the analysis/posterior mean\n$$\\begin{align}\n \\hat{\\x} &= \\bb + \\K\\left[\\y - \\bH \\bb\\right] \\, . \\tag{9}\n\\end{align}$$\n\nTogether, eqns (8) and (9) define the Kalman gain form of the KF update.\nThe inversion involved in eqn (K1) is of the size of $\\R$, while in eqn (5) it is of the size of $\\B$.\n\n## In summary: \nWe have derived two forms of the multivariate KF analysis update step: the \"precision matrix\" form, and the \"Kalman gain\" form. The latter is especially practical when the number of observations is smaller than the length of the state vector.\n\n### Next: [Time series analysis](T5%20-%20Time%20series%20analysis.ipynb)\n
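The equivalence of the two forms is easy to check numerically. The sketch below uses random SPD matrices with arbitrary illustrative sizes (not from the tutorial) and verifies that the precision form (5) and the gain form (K1)+(8) give the same $\bP$, while the gain form only inverts an $\R$-sized matrix.

```python
# Numerical check: precision form (5) vs. gain form (K1)+(8) of the KF update.
# B and R are random SPD matrices; sizes are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
M, Pobs = 6, 2                       # state dim (large) vs. obs dim (small)

A1 = rng.standard_normal((M, M))
B = A1 @ A1.T + M * np.eye(M)        # SPD prior covariance
A2 = rng.standard_normal((Pobs, Pobs))
R = A2 @ A2.T + Pobs * np.eye(Pobs)  # SPD obs-error covariance
H = rng.standard_normal((Pobs, M))   # observation operator

# Precision form, eqn (5): requires M x M inverses
P_prec = np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B))

# Gain form, eqns (K1) and (8): only a Pobs x Pobs inverse
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
P_gain = (np.eye(M) - K @ H) @ B

print(np.allclose(P_prec, P_gain))   # → True
```

The same check with $M$ large and few observations makes the cost difference between the two forms tangible.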
"max_forks_repo_forks_event_min_datetime": "2021-08-21T05:32:21.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-24T07:18:24.000Z", "avg_line_length": 33.7052341598, "max_line_length": 252, "alphanum_fraction": 0.5075602779, "converted": true, "num_tokens": 2875, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4035668537353746, "lm_q2_score": 0.25683199138751883, "lm_q1q2_score": 0.1036488787028518}} {"text": "\n\n# \u30e1\u30e2\ncolab \u3067 latex \u3092\u52c9\u5f37\u3059\u308b\u305f\u3081\u306e\u30ce\u30fc\u30c8\u30d6\u30c3\u30af\u3067\u3059\u3002\n\ncolab \u3067\u958b\u3044\u3066\u304f\u3060\u3055\u3044\u3002\n\nhttps://colab.research.google.com/github/kalz2q/mycolabnotebooks/blob/master/learnlatex.ipynb\n\ncolab \u3067\u30c6\u30ad\u30b9\u30c8\u30bb\u30eb\u5185\u3067 $ \u30de\u30fc\u30af\u3092\u4f7f\u3046\u3068\u6570\u5f0f\u3092 latex \u3067\u51e6\u7406\u3057\u3066\u7f8e\u3057\u304f\u8868\u793a\u3067\u304d\u308b\u3002\n$$ \\int f(x)dx $$\n\n\u3053\u308c\u3092 latex \u3067\u3069\u3046\u66f8\u3044\u3066\u3044\u308b\u304b\u306f\u3001\u30bb\u30eb\u3092\u30c0\u30d6\u30eb\u30af\u30ea\u30c3\u30af\u3059\u308b\u304b\u30bb\u30eb\u3092\u9078\u629e\u3057\u3066 Ctrl+Enter \u3092\u62bc\u3057\u3066\u3001\u7de8\u96c6\u30e2\u30fc\u30c9\u306b\u3059\u308b\u3053\u3068\u3067\u898b\u308b\u3053\u3068\u304c\u3067\u304d\u308b\u3002\n\n\n\u3082\u3063\u3068\u660e\u793a\u7684\u306b\u30b3\u30fc\u30c9\u30bb\u30eb\u3067 `%%latex` \u30de\u30b8\u30c3\u30af\u3092\u4f7f\u3063\u3066\u66f8\u304f\u3053\u3068\u304c\u3067\u304d\u308b\u3002\n\n\u3053\u306e\u5834\u5408\u306f\u30b3\u30fc\u30c9\u30bb\u30eb\u3092\u30bb\u30eb\u306e\u5de6\u5074\u306e\u4e38\u306b\u53f3\u5411\u304d\u4e09\u89d2\u306e\u5b9f\u884c\u30dc\u30bf\u30f3\u3092\u62bc\u3059\u304b\u3001Ctrl+Enter \u3067\u5b9f\u884c\u3057\u3066\u51fa\u529b\u3068\u3057\u3066\u30ec\u30f3\u30c0\u30ea\u30f3\u30b0\u3055\u308c\u305f\u8868\u793a\u3092\u898b\u308b\u3053\u3068\u306b\u306a\u308b\u3002\n\ncolab \u306e latex 
\u306f\u5b8c\u5168\u306a latex \u3067\u306f\u306a\u3044\u3002 mathjax \u30d9\u30fc\u30b9\u306e\u30b5\u30d6\u30bb\u30c3\u30c8\u3067\u3042\u308b\u3002 \u6570\u5f0f\u3092\u66f8\u304f\u306e\u306b\u4fbf\u5229\u3002\n\n\u3053\u306e\u30ce\u30fc\u30c8\u30d6\u30c3\u30af\u3067\u306f\u30c6\u30ad\u30b9\u30c8\u30bb\u30eb\u3067\u8868\u793a\u3057\u3066\u540c\u3058\u3053\u3068\u3092\u3001\u30b3\u30fc\u30c9\u30bb\u30eb\u3067\u5b9f\u884c\u3067\u304d\u308b\u3088\u3046\u306b\u3059\u308b\u3053\u3068\u3067\u3001\u3044\u3061\u3044\u3061\u7de8\u96c6\u30e2\u30fc\u30c9\u306b\u5165\u3089\u305a\u306b\u8aad\u307f\u9032\u3081\u3089\u308c\u308b\u3088\u3046\u306b\u3057\u305f\u3044\u3002\n\n\n\n```python\n# \u4f8b\n%%latex\n\\int f(x)dx\n```\n\n\n\\int f(x)dx\n\n\n# \u53c2\u8003\u30b5\u30a4\u30c8\n \nTeX\u5165\u9580 ( http://www.comp.tmu.ac.jp/tsakai/lectures/intro_tex.html ) \nTeX\u5165\u9580Wiki ( https://texwiki.texjp.org/) \nLearn LaTeX in 30 minutes ( https://www.overleaf.com/learn/latex/ \nLearn_LaTeX_in_30_minutes ) \nMathJax ( https://docs.mathjax.org/en/v2.5-latest/tex.html ) \nMathJax\u306e\u4f7f\u3044\u65b9(\nhttp://www.eng.niigata-u.ac.jp/~nomoto/download/mathjax.pdf)\n\n# \u6570\u5f0f\u8a18\u53f7\u4e00\u89a7\n\nThe Comprehensive LaTeX Symbol List - The CTAN archive ( http://tug.ctan.org/info/symbols/comprehensive/symbols-a4.pdf )\n\nShort Math Guide for LaTeX ( https://ftp.yz.yamagata-u.ac.jp/pub/CTAN/info/short-math-guide/short-math-guide.pdf )\n\n\u30ae\u30ea\u30b7\u30e3\u6587\u5b57, \u30c9\u30a4\u30c4\u6587\u5b57, \u82b1\u6587\u5b57, \u7b46\u8a18\u4f53\u306e TeX \u8868\u8a18\u3092\u307e\u3068\u3081\u3066\u304a\u3044\u305f ( https://phasetr.com/blog/2013/04/14/\u30ae\u30ea\u30b7\u30e3\u6587\u5b57-\u30c9\u30a4\u30c4\u6587\u5b57-\u7b46\u8a18\u4f53\u306e-tex-\u8868\u8a18\u3092/ )\n\n# \u306f\u3058\u3081\u306b \uff5e \u5b9f\u9a13\u5b66\u7fd2\n\n\n\u30de\u30fc\u30af\u30c0\u30a6\u30f3\u3067\u30c6\u30ad\u30b9\u30c8\u306e\u4e2d\u306b latex 
can be entered.

You can also write math inline, like $x = 3$, or as a display equation:

$$
x = 3
$$

and so on.

In a text cell, a formula wrapped in single `$` marks is rendered inline; wrapped in `$$` it is set off in its own centered paragraph.

To see the LaTeX source of a rendered formula, put the cell into edit mode.

To enter edit mode, select the cell and double-click it, or press the Enter key.


```python
# Display math in a code cell with %%latex.
# You can write LaTeX without switching the cell to edit mode, which makes this handy for learning LaTeX.
# The formula is not typeset until the code cell is run.
# Run a code cell with the run button to its left (the right-pointing triangle), or press Ctrl+Enter.
# Try changing the number in x = 3 and re-running the cell.
%%latex
x = 3
```


x = 3



```python
# Experiment: math can also be displayed from a Python program. Not covered further in this notebook.
from sympy import *
from IPython.display import Markdown
display(Markdown("Experiment: write $x = 3$"))
x = symbols('x')
display(Eq(x,3))
```


Experiment: write $x = 3$



$\displaystyle x = 3$


# Differences between markdown and %%latex (reference)
1. Writing %%latex in a code cell applies the MathJax rules to everything after that line.
1. After %%latex, a `$` mark that is not escaped with a backslash `\` is not interpreted as LaTeX and is output as typed.
1. The `%` sign is a comment character, so it is legal syntax, but everything after a `%` disappears from the output; to print one it must be escaped with a backslash `\`.
1. The backslash `\` itself is written `\backslash`. The tilde `~` is `\tilde{}`, and the caret (hat, circumflex) `^` is `\hat{}`.
1. Ordinary running text is also interpreted as math, so Latin letters change font. To get normal text, wrap it in `\text{}`.
1. A line break is two trailing spaces in markdown, but two backslashes `\\` in %%latex.
1. Markdown conveniences such as automatic numbering of list items are not available.


```python
# Experiment
%%latex
this is a pen. \\
\text{this is a pen} \\
\backslash \\
\tilde{x}\\
\tilde{} \quad x\\
\hat{x} \\
\hat{} \quad x \\
x^3 \\
```


this is a pen. \\
\text{this is a pen} \\
\backslash \\
\tilde{x}\\
\tilde{} \quad x\\
\hat{x} \\
\hat{} \quad x \\
x^3 \\


Now try the same thing in markdown.

$this is a pen.$ 
$\text{this is a pen}$ 
$\backslash$ 
$\tilde{x}$ 
$\tilde{} \quad x$ 
$\hat{x} $ 
$\hat{} \quad x $ 
$x^3 $ 

```python
# This is a comment
%%latex
y = 5 \\
% This is a comment
x = 3
```


y = 5 \\
% This is a comment
x = 3


# A simple equation


```latex
%%latex
E = mc^2
```


E = mc^2


In markdown, text wrapped in dollar signs becomes LaTeX (MathJax). Wrapping it in two dollar signs moves it to its own line as a centered equation.

$$ E=mc^2$$

Here $c$ is the speed of light, whose value is as follows. 

$$ c = 299{,}792{,}458 \, \mathrm{m/s} $$
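As an aside, the headline numbers quoted below for the speed of light can be checked with plain Python arithmetic. The Earth-circumference figure of 40,075 km is our own assumption, not something stated in this notebook.

```python
# Check the headline speed-of-light numbers with plain arithmetic.
# The Earth's equatorial circumference (40,075 km) is an assumed figure,
# not something given in this notebook.
c_m_per_s = 299_792_458               # exact value of c in m/s
c_km_per_s = c_m_per_s / 1000         # ~299,792 km/s, i.e. roughly 300,000 km/s
earth_circumference_km = 40_075
laps_per_second = c_km_per_s / earth_circumference_km
print(c_km_per_s, laps_per_second)    # about 7.5 laps around the Earth per second
```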
The speed of light — the famous 300,000 kilometers per second!!!!

Seven and a half times around the Earth.



```latex
%%latex
E =mc^2 \\[0.8em]
c = 299{,}792{,}458 \, \mathrm{m/s}
```


E =mc^2 \\[0.8em]
c = 299{,}792{,}458 \, \mathrm{m/s}


The comma in a number is entered as `{,}`, wrapped in curly braces (a brace pair).


# The dollar `$` mark: escaping the dollar sign

This book costs \$35.40. This is markdown.

This book costs $\$35.40$ . This is LaTeX.

In LaTeX-aware (MathJax) markdown, the dollar sign is the MathJax begin/end delimiter, so it has to be escaped.

In the Colab environment it can be escaped with a backslash `\`.

There is also the option of writing the dollar sign itself as an HTML character reference.

`$ `
`$`
`$`

HTML character references cannot be used inside LaTeX.


A dollar sign that is not paired within the same cell is displayed as-is.

$

$$

# Horizontal spaces


```python
# Experiments with horizontal spaces
# \ plus semicolon, \ plus space, and the tilde all seem to give a standard one-character space
# For now we use \; since the semicolon is easiest to spot
%%latex
a\;b\;c\;d\;e\;f\;g\; semicolon\\
a\ b\ c\ d\ e\ f\ g\ space\\
a~b~c~d~e~f~g~ tilde\\
a\,b\,c\,d\,e\,f\,g\, comma\\
a~~b~~c~~d~~e~~f~~g~~ tilde2\\
a\quad b\quad c\quad d\quad e\quad f\quad g\quad quad\\
```


a\;b\;c\;d\;e\;f\;g\; semicolon\\
a\ b\ c\ d\ e\ f\ g\ space\\
a~b~c~d~e~f~g~ tilde\\
a\,b\,c\,d\,e\,f\,g\, comma\\
a~~b~~c~~d~~e~~f~~g~~ tilde2\\
a\quad b\quad c\quad d\quad e\quad f\quad g\quad quad\\


# Line breaks and blank lines


A line break inside LaTeX within markdown is ignored.


The lines are joined into a single statement.

For a line break, put two `\` characters at the end of the line. 

Opening up vertical space (inserting a blank line) is rather awkward. One trick is an empty group followed by a line break:

\{\}+\\\\ 
`{}\\`

for example.



# Integrals
$$
\frac{\pi}{2} =
\left( \int_{0}^{\infty} \frac{\sin x}{\sqrt{x}} dx \right)^2 =
\sum_{k=0}^{\infty} \frac{(2k)!}{2^{2k}(k!)^2} \frac{1}{2k+1} =
\prod_{k=1}^{\infty} \frac{4k^2}{4k^2 - 1}
$$


```python
# The integral sign
# \int in LaTeX, Integral in sympy
%%latex
\displaystyle
\frac{\pi}{2}
=
\left( \int_{0}^{\infty} \frac{\sin x}{\sqrt{x}} dx \right)^2
=
\sum_{k=0}^{\infty} \frac{(2k)!}{2^{2k}(k!)^2} \frac{1}{2k+1}
=
\prod_{k=1}^{\infty} \frac{4k^2}{4k^2 - 1}
```


\displaystyle
\frac{\pi}{2}
=
\left( \int_{0}^{\infty} \frac{\sin x}{\sqrt{x}} dx \right)^2
=
\sum_{k=0}^{\infty} \frac{(2k)!}{2^{2k}(k!)^2} \frac{1}{2k+1}
=
\prod_{k=1}^{\infty} \frac{4k^2}{4k^2 - 1}


Without `\displaystyle`, the formula is set inline-style and the font comes out small.



# Font experiments

In markdown, Latin letters look like 
abcdefABC 
but wrapped in dollar signs as LaTeX they switch to the math font. 
$abcdefABC$ 
Inside that LaTeX, `\text` switches the math font off again. 
$\text{This is a text}$ 

* latex -> $abcdefABC$ $\quad$ the usual math italic
* \mathrm -> $\mathrm{abcdefABC}$ $\quad$ upright, as in sin, cos
* \boldsymbol -> $\boldsymbol{abcdefABC}$ $\quad$ used for vectors
* \mathbf -> $\mathbf{abcdefABC}$ $\quad$ $\mathbf{NZRC}$
* \mathbb -> $\mathbb{abcdefABC}$ $\quad$ $\mathbb{NZRC}$
* \mathcal -> $\mathcal{abcdefABC}$ $\quad$ 
* \mathfrak -> $\mathfrak{abcdefABC}$ $\quad$ 

In mathematical text the convention is italic letters for ordinary symbols, with upright serif (roman) capitals for things like coordinate systems.
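The sum and product forms of the $\pi/2$ identity in the Integrals section can be sanity-checked numerically. This is our own quick sketch; the improper integral needs more care, so it is skipped.

```python
import math

# Partial sum of  sum_k (2k)!/(2^(2k) (k!)^2) * 1/(2k+1),
# updating the central-binomial factor incrementally to avoid huge factorials.
s, term = 0.0, 1.0                      # term = (2k)!/(4^k (k!)^2) at k = 0
for k in range(10_000):
    s += term / (2 * k + 1)
    term *= (2 * k + 1) / (2 * k + 2)

# Partial Wallis product  prod_k 4k^2/(4k^2 - 1)
p = 1.0
for k in range(1, 10_000):
    p *= 4 * k * k / (4 * k * k - 1)

print(s, p, math.pi / 2)                # both partial results approach pi/2
```

The sum converges slowly (its terms shrink like $k^{-3/2}$), which is why so many terms are used.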
When a vector is written as a lowercase letter, boldface is sometimes used instead of an arrow on top.

There is also `\mathbb` (blackboard bold), which imitates the partially hollow letters of handwriting, and some texts are set in it.

&nbsp;

# Matrices

$
A =\begin{pmatrix}
 a_{11} & \ldots & a_{1n} \\
 \vdots & \ddots & \vdots \\
 a_{m1} & \ldots & a_{mn}
\end{pmatrix}$


```latex
%%latex
\displaystyle
A =\begin{pmatrix}
 a_{11} & \ldots & a_{1n} \\
 \vdots & \ddots & \vdots \\
 a_{m1} & \ldots & a_{mn}
\end{pmatrix} \quad
A =\begin{bmatrix}
 a_{11} & \ldots & a_{1n} \\
 \vdots & \ddots & \vdots \\
 a_{m1} & \ldots & a_{mn}
\end{bmatrix}
```


\displaystyle
A =\begin{pmatrix}
 a_{11} & \ldots & a_{1n} \\
 \vdots & \ddots & \vdots \\
 a_{m1} & \ldots & a_{mn}
\end{pmatrix} \quad
A =\begin{bmatrix}
 a_{11} & \ldots & a_{1n} \\
 \vdots & \ddots & \vdots \\
 a_{m1} & \ldots & a_{mn}
\end{bmatrix}


Orthogonal matrices: a real symmetric matrix $A$ is diagonalized by an orthogonal matrix $P$, 
$D = P^{-1} A P$, 
giving the diagonal matrix $D$. 



```python
# Orthogonal matrix
%%latex
D = P^{-1} A P
```


D = P^{-1} A P


Series sums: sigma, infinity, factorials
$$
\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} x^{2n+1}
$$


```latex
%%latex
\displaystyle
\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} x^{2n+1}
```


\displaystyle
\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} x^{2n+1}


# Integral signs, epsilon, limits
$$
\int_{0}^{1} \log x \,dx
= \lim_{\epsilon \to +0} \int_{\epsilon}^{1} \log x \,dx
= \lim_{\epsilon \to +0} [x \log x - x]_{\epsilon}^{1}
= -1
$$


```python
# \int, Integral, \lim
%%latex
\displaystyle
\int_{0}^{1} \log x \,dx
= \lim_{\epsilon \to +0} \int_{\epsilon}^{1} \log x \,dx
= \lim_{\epsilon \to +0} [x \log x - x]_{\epsilon}^{1}
= -1
```


\displaystyle
\int_{0}^{1} \log x \,dx
= \lim_{\epsilon \to +0} \int_{\epsilon}^{1} \log x \,dx
= \lim_{\epsilon \to +0} [x \log x - x]_{\epsilon}^{1}
= -1


# array, eqnarray, align, cases
They each behave a little differently, but array is probably the most general-purpose.
$$
\begin{array}{lcl}
 \displaystyle \int_{0}^{1} \log x dx
 & = \quad & \displaystyle \lim_{\epsilon \to +0} \int_{\epsilon}^{1} \log x dx \\
 & = & \displaystyle \lim_{\epsilon \to +0} [x \log x - x]_{\epsilon}^{1} \\
 & = & -1
\end{array}
$$

&nbsp;
$$
\begin{eqnarray}
 \int_{0}^{1} \log x dx
 & = & \displaystyle \lim_{\epsilon \to +0} \int_{\epsilon}^{1} \log x dx \\
 & = & \displaystyle \lim_{\epsilon \to +0} [x \log x - x]_{\epsilon}^{1} \\
 & = & -1
\end{eqnarray}
$$

&nbsp;

$$
\begin{align}
 \int_{0}^{1} \log x dx
 & = & \displaystyle \lim_{\epsilon \to +0} \int_{\epsilon}^{1} \log x dx \\
 & = & \displaystyle \lim_{\epsilon \to +0} [x \log x - x]_{\epsilon}^{1} \\
 & = & -1
\end{align}
$$

&nbsp;


$$
\begin{align}
\int_1^x \{ye^{-t^2}\}'dt &=&\int_1^x e^{-t^2}tdt \\
\left[ye^{-t^2}\right]_1^x &=& \left[-{1\over 2}e^{-t^2}\right]_1^x \\
ye^{-x^2}-2e^{-1} &=& -{1\over 2}e^{-x^2}+{1\over 2}e^{-1} \\
ye^{-x^2} &=& -{1\over 2}e^{-x^2}+{5\over 2}e^{-1}
\end{align}
$$


```latex
%%latex
\begin{array}{lcl}
 \displaystyle 
 \int_{0}^{1} \log x dx
 & = & \displaystyle \lim_{\epsilon \to +0} \int_{\epsilon}^{1} \log x dx \\
 & = & \displaystyle \lim_{\epsilon \to +0} [x \log x - x]_{\epsilon}^{1} \\
 & = & -1
\end{array}
```


\begin{array}{lcl}
 \displaystyle 
 \int_{0}^{1} \log x dx
 & = & \displaystyle \lim_{\epsilon \to +0} \int_{\epsilon}^{1} \log x dx \\
 & = & \displaystyle \lim_{\epsilon \to +0} [x \log x - x]_{\epsilon}^{1} \\
 & = & -1
\end{array}



```python
# How to use \begin{array}. 
%%latex
\displaystyle
\begin{array}
 lkj;lkj & = & jk \\
 & = & kj;ljk;jk;j
\end{array}
```


\displaystyle
\begin{array}
 lkj;lkj & = & jk \\
 & = & kj;ljk;jk;j
\end{array}


# When array rows are too close together
When the rows of an array are too tight, add something like [0.3em] after the \\

$$
\displaystyle
\begin{array}{lcl}
 \sin(\alpha \pm \beta) & = & \sin \alpha \cos \beta \pm \cos \alpha \sin \beta \\
 \cos(\alpha \pm \beta) & = & \cos \alpha \cos \beta \mp \sin \alpha \sin \beta \\[0.5em]
 \tan(\alpha \pm \beta) & = & \displaystyle \frac{\tan \alpha \pm \tan \beta}{1 \mp \tan \alpha \tan \beta}
\end{array}
$$


```python
# When the rows are too tight, add something like [0.3em] after the \\
%%latex
\displaystyle
\begin{array}{lcl}
 \sin(\alpha \pm \beta) & = & \sin \alpha \cos \beta \pm \cos \alpha \sin \beta \\
 \cos(\alpha \pm \beta) & = & \cos \alpha \cos \beta \mp \sin \alpha \sin \beta \\[0.3em]
 \tan(\alpha \pm \beta) & = & \displaystyle \frac{\tan \alpha \pm \tan \beta}{1 \mp \tan \alpha \tan \beta}
\end{array}
```


\displaystyle
\begin{array}{lcl}
 \sin(\alpha \pm \beta) & = & \sin \alpha \cos \beta \pm \cos \alpha \sin \beta \\
 \cos(\alpha \pm \beta) & = & \cos \alpha \cos \beta \mp \sin \alpha \sin \beta \\[0.3em]
 \tan(\alpha \pm \beta) & = & \displaystyle \frac{\tan \alpha \pm \tan \beta}{1 \mp \tan \alpha \tan \beta}
\end{array}
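The Maclaurin series for $\sin x$ shown earlier in this section can be checked against `math.sin` with a handful of terms (our own quick sketch):

```python
import math

def sin_series(x, n_terms=10):
    """Partial sum of sin x = sum_n (-1)^n x^(2n+1) / (2n+1)!."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(n_terms))

print(sin_series(1.0), math.sin(1.0))   # the two values agree to many decimal places
```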

```python
# Assorted brackets: parentheses, curly braces, square brackets, absolute value
%%latex
[ (x) ] \\
\{ x \}\\
\| x \| \\
| x | \\
\langle x \rangle
```


[ (x) ] \\
\{ x \}\\
\| x \| \\
| x | \\
\langle x \rangle



```latex
%%latex 
\displaystyle
\Bigg( \bigg[ \Big\{ \big\| \langle x \rangle \big\| \Big\} \bigg] \Bigg)
```


\displaystyle
\Bigg( \bigg[ \Big\{ \big\| \langle x \rangle \big\| \Big\} \bigg] \Bigg)


Putting `\left` and `\right` on a pair of brackets makes them resize automatically; they grow with the contents. 
`\left` and `\right` must always come as a pair; to use only one side, pair it with a period `.`.
$$
\displaystyle 
\left( \frac{a}{b} \right)
\left( \int_a^\infty x \, dx \right)
$$

Left-aligned, centered, right-aligned

$$
\begin{array}{lcr}
111 & 222 & 333 \\
44 & 55 & 66 \\
7 & 8 & 9
\end{array}
$$


```latex
%%latex

\begin{array}{lcr}
111 & 222 & 333 \\
44 & 55 & 66 \\
7 & 8 & 9
\end{array}
```



\begin{array}{lcr}
111 & 222 & 333 \\
44 & 55 & 66 \\
7 & 8 & 9
\end{array}


# Assorted brackets: parentheses, curly braces, square brackets, absolute value, double bars, resizable brackets, one-sided brackets, angle brackets
$$
[ (x) ] \\
\{ x \}\\
\| x \| \\
| x | \\
\langle x \rangle
$$

Putting `\left` and `\right` on a pair of brackets makes them resize automatically; they grow with the contents.
`\left` and `\right` must always come as a pair; to use only one side, pair it with a period `.`.

$$
\left( \frac{a}{b} \right)
\left( \int_a^\infty x \, dx \right)
$$


```python
# Putting `\left` and `\right` on brackets makes them resize automatically; they grow.
# `\left` and `\right` must always be a pair; to use one side only, pair it with a period `.`.
%%latex
\displaystyle 
\left( \frac{a}{b} \right)
\left( \int_a^\infty x \, dx \right)
```


\displaystyle 
\left( \frac{a}{b} \right)
\left( \int_a^\infty x \, dx \right)


# One-sided brackets
$$
\displaystyle
\left\{ \frac{a}{b} \right.
$$

$$
\left\{
\begin{array}{lcl}
 \sin(\alpha \pm \beta) & = & \sin \alpha \cos \beta \pm \cos \alpha \sin \beta \\
 \cos(\alpha \pm \beta) & = & \cos \alpha \cos \beta \mp \sin \alpha \sin \beta \\[0.3em]
 \tan(\alpha \pm \beta) & = & \displaystyle \frac{\tan \alpha \pm \tan \beta}{1 \mp \tan \alpha \tan \beta}
\end{array}
\right.
$$


```python
# Trying just one side of a bracket pair
%%latex
\displaystyle
\left\{ \frac{a}{b} \right.
```


\displaystyle
\left\{ \frac{a}{b} \right.



```python
# Trying just one side of a bracket pair
%%latex
\displaystyle
\left\{
\begin{array}{lcl}
 \sin(\alpha \pm \beta) & = & \sin \alpha \cos \beta \pm \cos \alpha \sin \beta \\
 \cos(\alpha \pm \beta) & = & \cos \alpha \cos \beta \mp \sin \alpha \sin \beta \\[0.3em]
 \tan(\alpha \pm \beta) & = & \displaystyle \frac{\tan \alpha \pm \tan \beta}{1 \mp \tan \alpha \tan \beta}
\end{array}
\right.
```


\displaystyle
\left\{
\begin{array}{lcl}
 \sin(\alpha \pm \beta) & = & \sin \alpha \cos \beta \pm \cos \alpha \sin \beta \\
 \cos(\alpha \pm \beta) & = & \cos \alpha \cos \beta \mp \sin \alpha \sin \beta \\[0.3em]
 \tan(\alpha \pm \beta) & = & \displaystyle \frac{\tan \alpha \pm \tan \beta}{1 \mp \tan \alpha \tan \beta}
\end{array}
\right.


To wrap a matrix in brackets, use pmatrix rather than array. 
\left( and \right) can also be used.
$$
\begin{pmatrix}
 111 & 222 & 333 \\
 44 & 55 & 66 \\
 7 & 8 & 9
\end{pmatrix}
$$ 
$$
\left(
\begin{array}{rrr}
 111 & 222 & 333 \\
 44 & 55 & 66 \\
 7 & 8 & 9
\end{array}
\right)
$$


```python
# To wrap a matrix in brackets, use pmatrix rather than array.
# \left(,\right) can also be used
%%latex
\begin{pmatrix}
 111 & 222 & 333 \\
 44 & 55 & 66 \\
 7 & 8 & 9
\end{pmatrix}
{}\\
\left(
\begin{array}{rrr}
 111 & 222 & 333 \\
 44 & 55 & 66 \\
 7 & 8 & 9
\end{array}
\right)
```


\begin{pmatrix}
 111 & 222 & 333 \\
 44 & 55 & 66 \\
 7 & 8 & 9
\end{pmatrix}
{}\\
\left(
\begin{array}{rrr}
 111 & 222 & 333 \\
 44 & 55 & 66 \\
 7 & 8 & 9
\end{array}
\right)


pmatrix cannot control item alignment within a column the way array can.

$$
\begin{pmatrix}
a & longitem \\
128 & 3.1419
\end{pmatrix}
$$


```python
# pmatrix apparently cannot control item alignment the way array can
%%latex
\displaystyle
\begin{pmatrix}
a & longitem \\
128 & 3.1419
\end{pmatrix}
```


\displaystyle
\begin{pmatrix}
a & longitem \\
128 & 3.1419
\end{pmatrix}
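Going back to the matrix section, the diagonalization $D = P^{-1} A P$ by an orthogonal matrix can be verified numerically. The particular $2 \times 2$ symmetric matrix and its eigenvectors below are our own example, not taken from this notebook.

```python
import math

r = 1 / math.sqrt(2)
A = [[2.0, 1.0], [1.0, 2.0]]       # a real symmetric matrix
P = [[r, r], [r, -r]]              # columns: unit eigenvectors for eigenvalues 3 and 1

def matmul(X, Y):
    """2x2 matrix product with plain nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

# For an orthogonal P, the inverse is simply the transpose: P^{-1} = P^T.
D = matmul(transpose(P), matmul(A, P))
print(D)                            # approximately [[3, 0], [0, 1]]
```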
# Complex functions


**Complex functions**

A complex function
$$
f(z) = f(x + i y ) = u (x, y) + iv(x, y)
$$
is holomorphic at a point
$$
z_0 = x_0 + iy_0 
$$
if and only if the Cauchy-Riemann equations

$$
\begin {array}{ccc}
 \displaystyle \frac{\partial u}{\partial x} &=& \displaystyle \frac{\partial v}{\partial y} \\
 \displaystyle \frac{\partial u}{\partial y} &=& \displaystyle - \frac{\partial v}{\partial x}
\end {array}
$$

hold in some $\varepsilon$-neighborhood $\Delta (z_0, \varepsilon)$ of $z_0$.


```latex
%%latex
\text{A complex function} \\
f(z) = f(x + i y ) = u (x, y) + iv(x, y) \\
\text{is holomorphic at a point} \\
z_0 = x_0 + iy_0 \\
\text{if and only if the Cauchy-Riemann equations} \\

\begin {array}{ccc}
 \displaystyle \frac{\partial u}{\partial x} &=& \displaystyle \frac{\partial v}{\partial y} \\
 \displaystyle \frac{\partial u}{\partial y} &=& \displaystyle - \frac{\partial v}{\partial x}
\end {array}
\\
\text{hold in some } \varepsilon \text{-neighborhood } \Delta (z_0, \varepsilon) \text{ of } z_0.
```


\text{A complex function} \\
f(z) = f(x + i y ) = u (x, y) + iv(x, y) \\
\text{is holomorphic at a point} \\
z_0 = x_0 + iy_0 \\
\text{if and only if the Cauchy-Riemann equations} \\

\begin {array}{ccc}
 \displaystyle \frac{\partial u}{\partial x} &=& \displaystyle \frac{\partial v}{\partial y} \\
 \displaystyle \frac{\partial u}{\partial y} &=& \displaystyle - \frac{\partial v}{\partial x}
\end {array}
\\
\text{hold in some } \varepsilon \text{-neighborhood } \Delta (z_0, \varepsilon) \text{ of } z_0.


# Space curves


**Space curves**

For the space curve $c$ given by
$$
c(t) = (x (t), y(t), z(t)) $$

let $s(t)$ be the arc length from the starting point $c(0)$ to $c(t)$. Then 

$$
s(t) = \displaystyle \int_0^t \sqrt { (\frac {dx}{dt})^2 
 + (\frac {dy}{dt})^2 + (\frac {dz}{dt})^2} \, dt
$$


```latex
%%latex
c(t) = (x (t), y(t), z(t)) \\
\text{gives a space curve } c. \\
\text{Let } s(t) \text{ be the arc length from the starting point } c(0) \text{ to } c(t) \text{. Then} \\

s(t) = \displaystyle \int_0^t \sqrt { (\frac {dx}{dt})^2 
 + (\frac {dy}{dt})^2 + (\frac {dz}{dt})^2} \, dt
```


c(t) = (x (t), y(t), z(t)) \\
\text{gives a space curve } c. \\
\text{Let } s(t) \text{ be the arc length from the starting point } c(0) \text{ to } c(t) \text{. Then} \\

s(t) = \displaystyle \int_0^t \sqrt { (\frac {dx}{dt})^2 
 + (\frac {dy}{dt})^2 + (\frac {dz}{dt})^2} \, dt


# Differentiability


**Differentiability**

Suppose the function $f$ is $n$ times differentiable on an open interval $I$. 
Then for $a, b \in I$, there exists a $c$ between $a$ and $b$ satisfying

$$
f(b) = \displaystyle f(a)+ \frac{f'(a)}{1!} (b - a) 
 + \frac{f''(a)}{2!} (b - a)^2 + \cdots 
 + \frac{f^{(n - 1)}(a)}{(n - 1)!} (b - a)^{n - 1} + R_n(c)
$$


```python
# Differentiability
%%latex
\text{Suppose the function } f \text{ is } n \text{ times differentiable on an open interval } I. \\
\text{Then for } a, b \in I \text{, there exists a } c \text{ between } a \text{ and } b \text{ satisfying} \\

f(b) = \displaystyle f(a)+ \frac{f'(a)}{1!} (b - a) 
 + \frac{f''(a)}{2!} (b - a)^2 + \cdots 
 + \frac{f^{(n - 1)}(a)}{(n - 1)!} (b - a)^{n - 1} + R_n(c)
```


\text{Suppose the function } f \text{ is } n \text{ times differentiable on an open interval } I. \\
\text{Then for } a, b \in I \text{, there exists a } c \text{ between } a \text{ and } b \text{ satisfying} \\

f(b) = \displaystyle f(a)+ \frac{f'(a)}{1!} (b - a) 
 + \frac{f''(a)}{2!} (b - a)^2 + \cdots 
 + \frac{f^{(n - 1)}(a)}{(n - 1)!} (b - a)^{n - 1} + R_n(c)


# Square matrices of order $n$


**Square matrices of order $n$**
$$
J (\alpha, m) = \begin {bmatrix}
 \alpha & 1 & 0 & \ldots & 0 \\
 0 & \alpha & 1 & \ddots & \vdots \\
 \vdots & \ddots & \ddots & \ddots & 0 \\
 \vdots & & \ddots & \ddots & 1 \\
 0 & \ldots & \ldots & 0 & \alpha
 \end {bmatrix}
$$

is called a Jordan block. When a square matrix $A$ is brought by an invertible matrix $P$ to a direct sum of Jordan blocks,

$$
\begin {array} {lcl}
P^{-1} A P &=& J(\alpha_1, m_1) \oplus J(\alpha_2, m_2) \oplus \cdots \oplus J(\alpha_k, m_k) \\
&=& \begin {bmatrix}
 J(\alpha_1, m_1) & & & \\
 & J(\alpha_2, m_2) & & \\
 & & \ddots & \\
 & & & J(\alpha_k, m_k)
 \end {bmatrix}
\end {array}
$$

this is called the Jordan normal form of $A$.



```python
# Square matrix of order n
%%latex
J (\alpha, m) = \begin {bmatrix}
 \alpha & 1 & 0 & \ldots & 0 \\
 0 & \alpha & 1 & \ddots & \vdots \\
 \vdots & \ddots & \ddots & \ddots & 0 \\
 \vdots & & \ddots & \ddots & 1 \\
 0 & \ldots & \ldots & 0 & \alpha
 \end {bmatrix} \\
\text{is called a Jordan block. When a square matrix } A \text{ is brought by an invertible matrix } P \text{ to} \\
\begin {array} {lcl}
P^{-1} A P &=& J(\alpha_1, m_1) \oplus J(\alpha_2, m_2) \oplus \cdots \oplus J(\alpha_k, m_k) \\
&=& \begin {bmatrix}
 J(\alpha_1, m_1) & & & \\
 & J(\alpha_2, m_2) & & \\
 & & \ddots & \\
 & & & J(\alpha_k, m_k)
 \end {bmatrix}
\end {array} \\
\text{a direct sum of Jordan blocks, this is called the Jordan normal form of } A.
```


J (\alpha, m) = \begin {bmatrix}
 \alpha & 1 & 0 & \ldots & 0 \\
 0 & \alpha & 1 & \ddots & \vdots \\
 \vdots & \ddots & \ddots & \ddots & 0 \\
 \vdots & & \ddots & \ddots & 1 \\
 0 & \ldots & \ldots & 0 & \alpha
 \end {bmatrix} \\
\text{is called a Jordan block. When a square matrix } A \text{ is brought by an invertible matrix } P \text{ to} \\
\begin {array} {lcl}
P^{-1} A P &=& J(\alpha_1, m_1) \oplus J(\alpha_2, m_2) \oplus \cdots \oplus J(\alpha_k, m_k) \\
&=& \begin {bmatrix}
 J(\alpha_1, m_1) & & & \\
 & J(\alpha_2, m_2) & & \\
 & & \ddots & \\
 & & & J(\alpha_k, m_k)
 \end {bmatrix}
\end {array} \\
\text{a direct sum of Jordan blocks, this is called the Jordan normal form of } A.
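Returning to the Differentiability section: Taylor's theorem there can be checked concretely using the Lagrange form of the remainder, $R_n(c) = \frac{f^{(n)}(c)}{n!}(b-a)^n$. The choice $f = e^x$, $a = 0$, $b = 1$, $n = 3$ is our own example, not from this notebook.

```python
import math

a, b, n = 0.0, 1.0, 3
# Taylor polynomial of exp around a: the terms k = 0 .. n-1
poly = sum(math.exp(a) / math.factorial(k) * (b - a) ** k for k in range(n))
remainder = math.exp(b) - poly
# Solve  e^c / n! * (b-a)^n = remainder  for c; the theorem says c lies in (a, b)
c = math.log(remainder * math.factorial(n) / (b - a) ** n)
print(c)                            # a value strictly between 0 and 1
```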
# Binary relations


**Definition 1** For a binary relation $\rho$ on a set $X$, consider the following properties.

1. For every $x \in X$, $x \;\rho\; x$ holds. (reflexive law)

1. For $x, y \in X$, if $x \;\rho\; y$ then $y \;\rho\; x$. (symmetric law)

1. For $x, y, z \in X$, if $x \;\rho\; y$ and $y \;\rho\; z$ then $x \;\rho\; z$. (transitive law)

1. For $x, y \in X$, if $x \;\rho\; y$ and $y \;\rho\; x$ then $x = y$. (antisymmetric law)

A binary relation satisfying properties $\it{1, 2, 3}$ is called an **equivalence relation**; one satisfying properties $\it{1, 3, 4}$ is called an **order relation**.

* reflexive law
* transitive law
* symmetric law
* antisymmetric law


```latex
%%latex
\text{Definition 1. For a binary relation } \rho \text{ on a set } X \text{, consider the following properties.} \\

1.\ \text{For every } x \in X \text{, } x \;\rho\; x \text{ holds. (reflexive law)} \\
2.\ \text{For } x, y \in X \text{, if } x \;\rho\; y \text{ then } y \;\rho\; x \text{. (symmetric law)} \\
3.\ \text{For } x, y, z \in X \text{, if } x \;\rho\; y \text{ and } y \;\rho\; z \text{ then } x \;\rho\; z \text{. (transitive law)} \\
4.\ \text{For } x, y \in X \text{, if } x \;\rho\; y \text{ and } y \;\rho\; x \text{ then } x = y \text{. (antisymmetric law)} \\
\text{A relation satisfying properties 1, 2, 3 is an equivalence relation; one satisfying 1, 3, 4 is an order relation.} \\
```


\text{Definition 1. For a binary relation } \rho \text{ on a set } X \text{, consider the following properties.} \\

1.\ \text{For every } x \in X \text{, } x \;\rho\; x \text{ holds. (reflexive law)} \\
2.\ \text{For } x, y \in X \text{, if } x \;\rho\; y \text{ then } y \;\rho\; x \text{. (symmetric law)} \\
3.\ \text{For } x, y, z \in X \text{, if } x \;\rho\; y \text{ and } y \;\rho\; z \text{ then } x \;\rho\; z \text{. (transitive law)} \\
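The four properties in Definition 1 can be checked mechanically on a small finite set. Congruence mod 3 and $\le$ are our own example relations, not taken from this notebook.

```python
from itertools import product

X = range(-3, 4)                     # a small finite test set

def reflexive(rel):
    return all(rel(x, x) for x in X)

def symmetric(rel):
    return all(rel(y, x) for x, y in product(X, X) if rel(x, y))

def transitive(rel):
    return all(rel(x, z) for x, y, z in product(X, X, X)
               if rel(x, y) and rel(y, z))

def antisymmetric(rel):
    return all(x == y for x, y in product(X, X) if rel(x, y) and rel(y, x))

same_mod3 = lambda x, y: x % 3 == y % 3   # satisfies 1, 2, 3: an equivalence relation
leq = lambda x, y: x <= y                 # satisfies 1, 3, 4: an order relation

print(reflexive(same_mod3), symmetric(same_mod3), transitive(same_mod3))
print(reflexive(leq), transitive(leq), antisymmetric(leq))
```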
x, y \\in X \u306b\u3064\u3044\u3066\u3001x \\;\\rho\\; y \u304b\u3064 y \\;\\rho\\; x \u306a\u3089\u3070 x = y \u304c\u6210\u308a\u7acb\u3064\u3002(\u53cd\u5bfe\u79f0\u5f8b) \\\\\n\u6027\u8cea \\it{1, 2, 3} \u3092\u6e80\u305f\u3059\u4e8c\u9805\u95a2\u4fc2\u3092\u540c\u5024\u95a2\u4fc2\u3068\u547c\u3073\u3001\u6027\u8cea \\it{1, 3, 4} \u3092\u6e80\u305f\u3059\u4e8c\u9805\u95a2\u4fc2\u3092\u9806\u5e8f\u95a2\u4fc2\u3068\u547c\u3076\u3002 \\\\\n\n\n# \u96c6\u5408\u306e\u5185\u5305\u8868\u8a18 set comprehension\n\n\u53c2\u8003 \u96c6\u5408\u3067\u7fd2\u3046\u96c6\u5408\u306e\u5185\u5305\u8868\u8a18\u306f\u6570\u5f0f\u3067\u6b21\u306e\u69d8\u306b\u66f8\u304f\u3002\n\n$\nS= \\{2x \\mid x \\in \\mathbb{N}, \\ x \\leq 10 \\}\n$\n\n\u30d7\u30ed\u30b0\u30e9\u30df\u30f3\u30b0\u3067\u306f list comprehension \u3068\u8a00\u3046\u3068\u601d\u3046\u304c\u3002\n\n\n\n```python\n# \u53c2\u8003 \u96c6\u5408\u3067\u7fd2\u3046\u96c6\u5408\u306e\u5185\u5305\u8868\u8a18\u306f\u6570\u5f0f\u3067\u6b21\u306e\u69d8\u306b\u66f8\u304f\u3002\n%%latex\n\nS= \\{2x \\mid x \\in \\mathbb{N}, \\ x \\leq 10 \\}\n\n```\n\n\n\nS= \\{2x \\mid x \\in \\mathbb{N}, \\ x \\leq 10 \\}\n\n\n# \u7df4\u7fd2\u554f\u984c\n\n---\n2 \u6b21\u65b9\u7a0b\u5f0f\n$$ ax^{2}+bx+c=0 $$ \n\u306e\u89e3\u306f\n$$ x = \\frac{-b\\pm\\sqrt{b^{2}-4ac}}{2a} \\tag{1}$$\n\u3067\u3042\u308b\u3002\n\n---\n\u7dcf\u548c\u8a18\u53f7 \u30b7\u30b0\u30de \u306f\u3053\u3093\u306a\u611f\u3058\u3002\n\n$$\\sum_{k=1}^{n} a_{k} = a_{1} + a_{2} + \\dots + a_{n}$$\n\n---\n\u30ac\u30a6\u30b9\u7a4d\u5206\n\n$$\n\\int_{-\\infty}^{\\infty} e^{-x^{2}} \\, dx = \\sqrt{\\pi}\n$$\n\n---\n\u95a2\u6570 $f(x)$ \u306e\u5c0e\u95a2\u6570\u306f\n$$\nf\u2019(x) = \\lim_{\\varDelta x \\to 0} \\frac{ f(x+\\varDelta x) - f(x) }{\\varDelta x}$$\n\u3067\u3042\u308b\u3002\n\n---\n\u4e09\u89d2\u95a2\u6570\u306e\u7a4d\u5206\n\n$$\\int \\tan\\theta \\, d\\theta = \\int \\frac{\\sin\\theta}{\\cos\\theta} \\, d\\theta= -\\log |\\cos\\theta| + 
C$$

---
Rewriting an expression
$$
\begin{align}\cos 2\theta &= \cos^{2} \theta - \sin^{2} \theta \\&= 2\cos^{2} \theta - 1 \\&= 1 - 2\sin^{2} \theta\end{align}
$$

---
One-sided brackets, large brackets, large braces, and the cases environment

$$
|x| = 
\begin{cases}
x & \text{when } x \ge 0\\
-x & \text{when } x \lt 0
\end{cases}
$$

---
Matrices: an $n \times n$ matrix

$$A =\begin{pmatrix}
a_{11} & a_{12} & \ldots & a_{1n} \\
a_{21} & a_{22} & \ldots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \ldots & a_{nn}
\end{pmatrix} $$

has an inverse matrix $A^{-1}$ if and only if $ \det A \neq 0 $.

---
Various brackets around a matrix

Parentheses, square brackets, curly braces, vertical bars, double vertical bars, and no brackets

$$
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix},\;
\begin{bmatrix}
a & b \\
c & d
\end{bmatrix},\;
\begin{Bmatrix}
a & b \\
c & d
\end{Bmatrix},\;
\begin{vmatrix}
a & b \\
c & d
\end{vmatrix},\;
\begin{Vmatrix}
a & b \\
c & d
\end{Vmatrix},\;
\begin{matrix}
a & b \\
c & d
\end{matrix}
$$


---

# Defining macros
$$
\def\RR{{\mathbb R}}
\def\bol#1{{\bf #1}}
\RR \\
\bol {crazy\;rich\;tycoon}\\
\def \x {\times}
3 \x 3 = 9\\
\def\dd#1#2{\frac{\partial #1}{\partial #2}}
\dd{x}{y}
$$


```latex
%%latex
\def\RR{{\mathbb R}}
\def\bol#1{{\bf #1}}
\RR \\
\bol {crazy\;rich\;tycoon}\\
\def \x {\times}
3 \x 3 = 9\\
\def\dd#1#2{\frac{\partial #1}{\partial #2}}
\dd{x}{y}
```


\def\RR{{\mathbb R}}
\def\bol#1{{\bf #1}}
\RR \\
\bol {crazy\;rich\;tycoon}\\
\def \x 
{\times}
3 \x 3 = 9\\
\def\dd#1#2{\frac{\partial #1}{\partial #2}}
\dd{x}{y}


---
When a vector field $\boldsymbol B (x,y,z)$ can be written in the form

$$
\def \x {\times}
\boldsymbol B = \nabla \x\boldsymbol A \tag{1.1}
$$

its divergence

$$
\def\dd#1#2{\frac{\partial #1}{\partial #2}}
\nabla \cdot\boldsymbol{B} = \dd{B_{x}}{x} + \dd{B_{y}}{y} + \dd{B_{z}}{z} \tag{1.2}
$$

is $0$. The field $\boldsymbol{A}$ appearing in equation (1.1) is called the vector potential of $\boldsymbol{B}$.

# Exercises

---
Euler's formula

$$
e^{i\theta}=\cos \theta + i \sin \theta
$$

---
Taylor expansion

$$
f(x) = \sum^\infty_{n=0}\frac{f^{(n)}(a)}{n !} (x-a)^n
$$

---
The normal distribution

$$
f(x)=\frac 1 {\sqrt{2\pi \sigma^2}}\exp\left (-\frac{(x-\mu)^2}{2\sigma^2}\right)
$$

---
Newton's equation of motion

$$
m \frac{d^2 \overrightarrow r}{d t^2}=\overrightarrow F
$$

---
Lagrange's equation of motion

$$
\frac d {dt}\left(\frac{\partial \mathcal L}{\partial \dot q} \right) - \frac{\partial \mathcal L}{\partial q} = 0
$$

---
The Fourier transform

$$
\hat f (\xi) = \int_{\mathbb R ^n} f(x) e ^{-2 \pi i x \cdot \xi} dx
$$


---
Cauchy's integral formula

$$
f(\alpha)=\frac 1 {2\pi i} \oint_C \frac{f(z)}{z - \alpha} d z
$$


---
Gauss's divergence theorem

$$
\iiint_V \nabla \cdot\boldsymbol A \; dV = \iint_{\partial V}\boldsymbol A \cdot\boldsymbol n \; dS
$$

---
The Schrödinger equation

$$
i \hbar \frac \partial 
{\partial t} \psi (r,t) = \left (-\frac{\hbar^{2}}{2m}\nabla^2+V(r,t) \right)\psi(r,t)
$$



---
Note $\quad$ Above we used `\left (, \right )`, but there is also the fixed-size delimiter family:
* \bigl,\Bigl,\biggl,\Biggl
* \bigr,\Bigr,\biggr,\Biggr
* \bigm,\Bigm,\biggm,\Biggm \\


---
A thermochemical equation

$$
\mathrm{H_2(g) + {1 \over 2} O_2(g) \rightarrow H_2O(l)} \quad \varDelta H^\circ = -286\ \mathrm{kJ}
$$


Set symbols and set-builder notation (list comprehension)

$$
A \cap B = \{x \;|\; x \in A \land x \in B\}
$$



---
Notes
* \\{ , \\}: $\{$, $\}$ — bare curly braces are not displayed unless escaped
* \cap, \cup, \wedge, \land, \lor, \vee: $\cap, \cup, \wedge, \land, \lor, \vee$
* \in, \ni, \notin, \subset, \supset: $\in, \ni, \notin, \subset, \supset$
* \emptyset, \forall, \exists, \neg: $\emptyset, \forall, \exists, \neg$

---
The binomial coefficient

$$
{}_n C_r = \binom n r = \frac{n!}{r! 
(n-r)!}
$$

---
Maxwell's equations

$$
\begin{array}{ll}
\displaystyle
\nabla \cdot E = \frac \rho {\varepsilon_0},
&\qquad
\displaystyle
\nabla \times E = - \frac {\partial B}{\partial t}\\
\nabla \cdot B = 0,
&\qquad
\nabla \times B = \mu_0 i + \displaystyle \frac 1 {c^2} \frac {\partial E}{\partial t}
\end{array}
$$

# Python Basics
© Explore Data Science Academy

## Learning Objectives:
By the end of this train, you should be able to:
* Perform basic print functions and string manipulation; and
* Create basic Python functions.

## Outline
In this train we will:
* Introduce print statements;
* Perform basic string manipulation; and
* Break down the different aspects of Python functions.

## Print Statements and Strings

The standard introduction to any programming language is the "Hello world!" program. This is a computer program that outputs "Hello world!" to your console window. In Python, this program can be implemented using the **print** built-in function as follows: 


```python
print("Hello world!")
```

    Hello world!


The **print** function "prints" the value stored in Python variables/objects as a **string** to the console, or to other standard output devices. A **string** is a data type used to represent text, i.e. a sequence of characters. In the Python programming language, strings can be specified by encasing a sequence of characters within single or double quotes.


```python
String_1 = "This is a string."
String_2 = ' This is also a string :)'

print(String_1)
print(String_2)
```

    This is a string.
     This is also a string :)


Sometimes, strings will also contain quotation marks or other special characters. To avoid syntax errors in such cases, we need to use the escape character, or backslash ```"\"```. Prefixing a special character with ```"\"``` turns it into an ordinary character. Additionally, the backslash ```"\"``` can be used to specify special characters such as the newline ```"\n"```, tab ```"\t"```, carriage return ```"\r"```, etc. 
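For instance, a tab, a newline, and an escaped quotation mark can all be combined in a single string (the sample text below is an arbitrary illustration, not part of the lesson's own examples):

```python
# \t inserts a tab, \n starts a new line, and \" places a literal
# double quote inside a double-quoted string.
line = "Name:\tAda\nQuote:\t\"Hello!\""
print(line)
```

    Name:	Ada
    Quote:	"Hello!"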


```python
# Note: this cell fails with a SyntaxError - the unescaped apostrophe ends the string early.
String_3 = 'This shouldn't work.'
```


```python
String_3 = 'This shouldn\'t break.'
print(String_3)
```

    This shouldn't break.


The escape character can also be used to specify characters using Unicode escape codes.


```python
print("Grinning face: \U0001f600")
print("Squinting face: \U0001F606")
print("ROFL face: \U0001F923")
```

    Grinning face: 😀
    Squinting face: 😆
    ROFL face: 🤣


As you might have noticed, the print function appends a newline character ```"\n"``` to its output. This forces consecutive print calls to start on a new line. We can avoid this by setting the `end` argument of the print function to an empty string.


```python
print(String_1, end='')
print(String_2)
```

    This is a string. This is also a string :)


We can achieve the same effect by using string concatenation. This operation allows us to combine two or more strings together.


```python
print(String_1 + String_2)
```

    This is a string. This is also a string :)


We can also concatenate strings with other data types by first converting them into strings using the built-in **str()** function.


```python
num_chars = len(String_1)
print("String_1 is "+str(num_chars)+" characters long")
```

    String_1 is 17 characters long


Alternatively, a similar result can be obtained by passing multiple comma-separated arguments to the print function. 


```python
print("String_1 is",num_chars,"characters long")
```

    String_1 is 17 characters long


Notice that we didn't need to add extra spacing around the first and last strings and didn't need to convert the num_chars integer into a string.

The **print** function can be extremely useful for debugging purposes. 
For example, you can print out the value of a variable before or after a mathematical operation to check that the correct operation occurred.


```python
a = 3
b = a%2
c = b**2
d = a/(c*5) + b

print(d)
```

    1.6


## Functions

A function is a block of organised, reusable code that is used to perform an action. 


The image above points out all the components of a function in Python. 

### def

We can define a function using the `def` keyword. The `def` keyword is followed by a name for the function and two brackets (we'll get back to this later). It is important to note that everything inside the function must be indented one tab deeper than `def`.


```python
def name_of_your_function(a, b, c):
    some_result = do_something_with(a and b and c)
    return some_result
```

Here is a simple example.


```python
def monthly_expenses(rent, food):
    total_expenses = rent + food
    return total_expenses
```

Now let's consider the following function.


```python
def print_something():
    print('SoMeThInG')
```

We can run this function by writing the name of the function, followed by two brackets:


```python
print_something()
```

    SoMeThInG


### return

In the above example, we printed something in the function. But, more often than not, we would want to **return** something from the function. It's useful (at least at the start) to think of **return** as the function passing something back to whoever ran it.


```python
def return_something():
    return 'SoMeThInG'
```


```python
return_something()
```

    'SoMeThInG'


We notice ``return_something`` returns a string. This is different from `print_something`, which won't give us any result **out**, but merely *prints it*:


```python
print_something()
```

    SoMeThInG


### Arguments 

Let's say we want to write a function that returns the result of the future value equation:

$(1 + i)^n$

where $i$ and $n$ are both numbers. 
We can pass the values of $i$ and $n$ into the function by defining it as follows.


```python
def future_value(i, n):
    result = (1 + i)**n
    return result
```

We can then call our function with any values of i and n.


```python
future_value(0.05, 20)
```

    2.653297705144422


The `i` and the `n` inside `future_value(i, n)` are called **arguments** to the function. Function arguments allow us to make generic functions that can be used with infinitely many variations. 


```python
future_value(0.1, 20)
```

    6.727499949325611


```python
future_value(0.15, 20)
```

    16.36653739294609


## Scope of Variables

Variable scope refers to how accessible a variable is to different parts of the program. The scope of a variable can be **local** or **global**; we illustrate the difference in the example below.


```python
y = 10
def my_function():
    x = 2
    print("Inside function, x =",x) 
    print("Inside function, y =",y) 
    
    return 

my_function()
print("Outside function, y =",y)
print("Outside function, x =",x)

```

**Local variables** only exist within a context - in the above example, the body of the function - and can only be accessed within that context. On the other hand, **global variables** can be accessed from anywhere in the code. ```x``` is a local variable and only exists within ```my_function```; attempting to access it outside the function results in an error. 
```y```, however, is a global variable and can be accessed both inside and outside of the function.

To declare global variables within a context, we can use the ```global``` keyword as follows:


```python
y = 9
def my_other_function():
    global x
    x = 3
    print("Inside function, x =",x) 
    print("Inside function, y =",y) 
    
    return 

my_other_function()
print("Outside function, y =",y)
print("Outside function, x =",x)
```

    Inside function, x = 3
    Inside function, y = 9
    Outside function, y = 9
    Outside function, x = 3


## Exercises
### Exercise 1: Interest rates

You just turned 20 and you want to buy a new pair of shoes to wear at your party. The shoes cost R1000. 
You're broke right now, but you know that in a year's time - when you turn 21 - you will get a lot of money from your relatives for your 21st birthday.

FedBank is willing to lend you R1000, at 20% interest per year.

Assuming that you take the loan - how much will you have to pay back in one year?

***
Loan summary:

* $PV$: **R1000**
* $n$: **1 year**
* $i$: **20% interest** per annum, compounded annually

Given a present value loan amount, $PV$, the formula for a future repayment ($FV$) is given by:


\begin{equation}
FV = PV(1 + i)^n
\end{equation}


***

In Python we'd calculate this value as follows:


```python
# Present Value of the Loan amount:
PV = 1000

# Interest rate, i:
i = 20 / 100

# Term in years, n:
n = 1

# Calculate the Future Value, FV:
PV*(1 + i)**n
```

    1200.0


So, if you decide to go ahead with the purchase, you'll need to pay an extra R200 to FedBank after 1 year.

### Exercise 2: Future Value Formula

Now perform the exact same calculation, just using a function! Create a function called `future_value_of` that takes the following arguments: a present value $PV$, an interest rate $i$, and a term $n$, and returns the future repayment value ($FV$) of that loan. 
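Try it yourself first. If you get stuck, the formula from Exercise 1 can be wrapped directly; one possible implementation (using the same `future_value_of` name as the exercise skeleton) looks like this:

```python
# One possible solution: wrap the future value formula FV = PV * (1 + i)**n
def future_value_of(PV, i, n):
    FV = PV * (1 + i) ** n
    return FV

print(future_value_of(100, 0.1, 20))   # 672.7499949325611
print(future_value_of(500, 0.15, 10))  # 2022.7788678539534
```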


```python
def future_value_of(PV, i, n):
    # YOUR CODE HERE:
    # FV = some formula
    return FV
```


```python
future_value_of(500, 0.15, 10)
```

Your code should give the following results:


* `future_value_of(100, 0.1, 20) = 672.7499949325611`
* `future_value_of(500, 0.15, 10) = 2022.7788678539534`


## Conclusion

In this train, you learned to perform basic operations using print statements and strings, as well as the basic aspects of Python functions and the scope of variables. You are expected to complete the exercises before moving forward, to ensure familiarity with these concepts.


## Appendix

- [Print Statement](https://www.w3schools.com/python/ref_func_print.asp)

- [Functions](https://www.w3schools.com/python/python_functions.asp)

# Lambda School Data Science Module 143

## Introduction to Bayesian Inference

!['Detector! What would the Bayesian statistician say if I asked him whether the--' [roll] 'I AM A NEUTRINO DETECTOR, NOT A LABYRINTH GUARD. SERIOUSLY, DID YOUR BRAIN FALL OUT?' [roll] '... yes.'](https://imgs.xkcd.com/comics/frequentists_vs_bayesians.png)

*[XKCD 1132](https://www.xkcd.com/1132/)*


## Prepare - Bayes' Theorem and the Bayesian mindset

Bayes' theorem possesses a near-mythical quality - a bit of math that somehow magically evaluates a situation. But that reputation has more to do with its advanced applications than with its actual core - deriving it is remarkably straightforward.

### The Law of Total Probability

By definition, the total probability of all outcomes (events) of some variable (event space) $A$ is 1. That is:

$$P(A) = \sum_n P(A_n) = 1$$

The law of total probability takes this further, considering two variables ($A$ and $B$) and relating their marginal probabilities (their likelihoods considered independently, without reference to one another) and their conditional probabilities (their likelihoods considered jointly). A marginal probability is simply notated as e.g. 
$P(A)$, while a conditional probability is notated $P(A|B)$, which reads "probability of $A$ *given* $B$".

The law of total probability states:

$$P(A) = \sum_n P(A | B_n) P(B_n)$$

In words - the total probability of $A$ is the sum, over all possible events $B_n$, of the conditional probability of $A$ given $B_n$ times the probability of that event $B_n$.

### The Law of Conditional Probability

What's the probability of something conditioned on something else? To determine this we have to go back to set theory and think about the intersection of sets:

The formula for actual calculation:

$$P(A|B) = \frac{P(A \cap B)}{P(B)}$$

Think of the overall rectangle as the whole probability space, $A$ as the left circle, $B$ as the right circle, and their intersection as the red area. Try to visualize the ratio being described in the above formula, and how it is different from just $P(A)$ (not conditioned on $B$).

We can see how this relates back to the law of total probability - multiply both sides by $P(B)$ and you get $P(A|B)P(B) = P(A \cap B)$; substituted back into the law of total probability, this gives $P(A) = \sum_n P(A \cap B_n)$.

This may not seem like an improvement at first, but try to relate it back to the above picture - if you think of sets as physical objects, we're saying that the total probability of $A$ is all the little pieces of it intersected with $B$, added together. 
The conditional probability is then just that again, but divided by the probability of $B$ itself happening in the first place.

\begin{align}
P(A|B) &= \frac{P(A \cap B)}{P(B)}\\
\Rightarrow P(A|B)P(B) &= P(A \cap B)\\
P(B|A) &= \frac{P(B \cap A)}{P(A)}\\
\Rightarrow P(B|A)P(A) &= P(B \cap A)\\
P(A \cap B) &= P(B \cap A)\\
\Rightarrow P(A|B)P(B) &= P(B|A)P(A)\\
\Rightarrow P(A|B) &= \frac{P(B|A) \times P(A)}{P(B)}
\end{align}

### Bayes Theorem

Here it is, the seemingly magic tool:

$$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$$

In words - the probability of $A$ conditioned on $B$ is the probability of $B$ conditioned on $A$, times the probability of $A$, divided by the probability of $B$. The unconditioned probabilities are referred to as "prior" beliefs, and the conditioned probabilities as "updated" (posterior) beliefs.

Why is this important? Scroll back up to the XKCD example - the Bayesian statistician draws a less absurd conclusion because their prior belief in the likelihood that the sun will go nova is extremely low. So, even when updated based on evidence from a detector that is $35/36 = 0.972$ accurate, the prior belief doesn't shift enough to change their overall opinion.

There are many examples of Bayes' theorem - one less absurd example is its application to [breathalyzer tests](https://www.bayestheorem.net/breathalyzer-example/). You may think that a breathalyzer test that is 100% accurate for true positives (detecting somebody who is drunk) is pretty good, but what if it also has 8% false positives (indicating somebody is drunk when they're not)? And furthermore, the rate of drunk driving (and thus our prior belief) is 1/1000.

What is the likelihood somebody really is drunk if they test positive? Some may guess it's 92% - the difference between the true positives and the false positives. But we have a prior belief of the background/true rate of drunk driving. 
Sounds like a job for Bayes' theorem! The denominator $P(Positive)$ comes from the law of total probability: the true positives plus the false positives.

$$
\begin{aligned}
P(Drunk | Positive) &= \frac{P(Positive | Drunk)P(Drunk)}{P(Positive)} \\
&= \frac{1 \times 0.001}{1 \times 0.001 + 0.08 \times 0.999} \\
&\approx 0.0124
\end{aligned}
$$

In other words, the likelihood that somebody is drunk given they tested positive with a breathalyzer in this situation is only about 1.2% - probably much lower than you'd guess. This is why, in practice, it's important to have a repeated test to confirm (the probability of two false positives in a row is $0.08 \times 0.08 = 0.0064$, much lower), and Bayes' theorem has been relevant in court cases where proper consideration of evidence was important.


## Live Lecture - Deriving Bayes' Theorem, Calculating Bayesian Confidence

Notice that $P(A|B)$ appears in the above laws - in Bayesian terms, this is the belief in $A$ updated for the evidence $B$. So all we need to do is solve for this term to derive Bayes' theorem. Let's do it together!


```
# Activity 2 - Use SciPy to calculate Bayesian confidence intervals
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.bayes_mvs.html#scipy.stats.bayes_mvs
```


```
from scipy import stats
import numpy as np

np.random.seed(seed=42)

coinflips = np.random.binomial(n=1, p=.5, size=100)
print(coinflips)
```

    [0 1 1 1 0 0 0 1 1 1 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 1 0 1 0 0 1 1 1 0
     0 1 0 0 0 0 1 0 1 0 1 1 0 1 1 1 1 1 1 0 0 0 0 0 0 1 0 0 1 0 1 0 1 1 0 0 1
     1 1 1 0 0 0 1 1 0 0 0 0 1 1 1 0 0 1 1 1 1 0 1 0 0 0]


```
def confidence_interval(data, confidence=.95):
    n = len(data)
    mean = sum(data)/n
    data = np.array(data)
    stderr = stats.sem(data)
    interval = stderr * stats.t.ppf((1 + confidence) / 2.0, n-1)
    return (mean, mean-interval, mean+interval)
```


```
confidence_interval(coinflips, confidence=.95)
```

    (0.47, 0.3704689875017368, 0.5695310124982632)


```
mean_CI, _, _ = stats.bayes_mvs(coinflips, alpha=.95)
 
\nmean_CI\n```\n\n\n\n\n Mean(statistic=0.47, minmax=(0.37046898750173674, 0.5695310124982632))\n\n\n\n\n```\n??stats.bayes_mvs\n```\n\n\n```\ncoinflips_mean_dist, _, _ = stats.mvsdist(coinflips)\ncoinflips_mean_dist\n```\n\n\n\n\n \n\n\n\n\n```\ncoinflips_mean_dist.rvs(1000)\n```\n\n\n\n\n array([0.47447628, 0.51541425, 0.54722018, 0.4589882 , 0.51501386,\n 0.53819192, 0.43382292, 0.53546659, 0.47026173, 0.44967562,\n 0.4621107 , 0.42691904, 0.37324325, 0.47531437, 0.46052277,\n 0.48711257, 0.52456771, 0.43332181, 0.49545882, 0.44671454,\n 0.47520117, 0.47047251, 0.41828918, 0.50159477, 0.42965501,\n 0.45273383, 0.48045849, 0.45342529, 0.48238344, 0.53966291,\n 0.48230241, 0.48073422, 0.48553525, 0.47962228, 0.41274185,\n 0.42892633, 0.5170948 , 0.42678096, 0.42249309, 0.51499109,\n 0.47059199, 0.39903942, 0.41790336, 0.46406817, 0.42232382,\n 0.42163269, 0.47848227, 0.48232842, 0.4731858 , 0.51077244,\n 0.3957508 , 0.48504646, 0.49014295, 0.53252732, 0.45495376,\n 0.47883978, 0.60393033, 0.4492549 , 0.44797902, 0.54782121,\n 0.43380002, 0.5760073 , 0.36941266, 0.44467418, 0.4939245 ,\n 0.45278835, 0.55635162, 0.48695459, 0.39080983, 0.45948606,\n 0.2941779 , 0.35950718, 0.44805696, 0.4725126 , 0.42218381,\n 0.45985418, 0.47545393, 0.44317753, 0.46267013, 0.4458753 ,\n 0.44204707, 0.51334913, 0.50914181, 0.49923748, 0.46895674,\n 0.43892798, 0.45984946, 0.44984632, 0.53560791, 0.45865723,\n 0.48646824, 0.55937503, 0.41464303, 0.50701457, 0.46934196,\n 0.37681534, 0.42748113, 0.49812825, 0.48278895, 0.4964763 ,\n 0.3891381 , 0.43956744, 0.48413544, 0.45477873, 0.48725027,\n 0.49464113, 0.50575373, 0.47327346, 0.47520013, 0.58130199,\n 0.5845843 , 0.46478398, 0.4258629 , 0.52948199, 0.48513203,\n 0.49687534, 0.41137211, 0.46621924, 0.3914774 , 0.48360179,\n 0.38619449, 0.48277886, 0.47026304, 0.45226139, 0.47583911,\n 0.51800201, 0.48765985, 0.47519588, 0.56197092, 0.41764152,\n 0.49955199, 0.4476301 , 0.53072591, 0.51503605, 0.54521753,\n 0.51825987, 0.38392617, 
0.46969675, 0.40735953, 0.41644585,\n 0.46704857, 0.44673322, 0.44172829, 0.39682358, 0.56863866,\n 0.49382431, 0.46425614, 0.43441607, 0.45352793, 0.43280667,\n 0.49838641, 0.42134069, 0.39030482, 0.46056071, 0.43477593,\n 0.48030697, 0.46963763, 0.58135074, 0.41707759, 0.54735952,\n 0.40234266, 0.44587394, 0.43824819, 0.34994202, 0.45715098,\n 0.48171551, 0.49707708, 0.56201387, 0.43796178, 0.48736057,\n 0.48396275, 0.4137432 , 0.43730294, 0.44127354, 0.49414193,\n 0.37391405, 0.48951459, 0.49203495, 0.48750347, 0.4535989 ,\n 0.4826649 , 0.45727017, 0.35957717, 0.52627891, 0.48671508,\n 0.5146115 , 0.40126273, 0.49351532, 0.47899387, 0.41170621,\n 0.47372827, 0.45349404, 0.45541059, 0.44761163, 0.50985422,\n 0.38946749, 0.38924167, 0.477608 , 0.47523283, 0.48057958,\n 0.55631265, 0.47918939, 0.41974198, 0.59314567, 0.46179892,\n 0.52111564, 0.39858206, 0.39293582, 0.45738699, 0.51094648,\n 0.55605523, 0.42063349, 0.4553239 , 0.47003479, 0.47070228,\n 0.46428309, 0.46828548, 0.55559626, 0.54327956, 0.48485723,\n 0.39503943, 0.45169487, 0.51312502, 0.43261878, 0.44449548,\n 0.45205734, 0.50467902, 0.55919291, 0.50052268, 0.39552378,\n 0.44554284, 0.54545754, 0.41285254, 0.37820216, 0.4433361 ,\n 0.51902109, 0.45162443, 0.57347586, 0.47871392, 0.40561444,\n 0.48058706, 0.56598937, 0.48203328, 0.42126387, 0.368201 ,\n 0.45272922, 0.43585457, 0.54199909, 0.42996167, 0.474737 ,\n 0.44127776, 0.39061556, 0.46844006, 0.38929335, 0.49974341,\n 0.38804905, 0.46641358, 0.52312717, 0.49613505, 0.44815583,\n 0.49130684, 0.51080517, 0.41943377, 0.52715474, 0.51901749,\n 0.40173031, 0.48157307, 0.45698766, 0.54181905, 0.5128087 ,\n 0.4738456 , 0.53469041, 0.58876563, 0.37350851, 0.44841936,\n 0.41531469, 0.46828303, 0.41863695, 0.52030773, 0.59197971,\n 0.47809192, 0.39139708, 0.43735205, 0.44473506, 0.54450722,\n 0.4877697 , 0.48142576, 0.4282081 , 0.43828492, 0.49536959,\n 0.46056192, 0.51769419, 0.44435832, 0.2833451 , 0.44709257,\n 0.39013597, 0.49752388, 0.48941684, 
..., 0.42497657, 0.48451716])\n\n\n\n## Assignment - Code it up!\n\nMost of the above was pure math - now write Python code to reproduce the results! This is purposefully open-ended - you'll have to think about how you should represent probabilities and events. You can and should look things up, and as a stretch goal, refactor your code into helpful reusable functions!\n\nSpecific goals/targets:\n\n1. Write a function `def prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk)` that reproduces the example from lecture, and use it to calculate and visualize a range of situations\n2. Explore `scipy.stats.bayes_mvs` - read its documentation, and experiment with it on data you've tested in other ways earlier this week\n3. Create a visualization comparing the results of a Bayesian approach to a traditional/frequentist approach\n4. 
In your own words, summarize the difference between Bayesian and Frequentist statistics\n\nIf you're unsure where to start, check out [this blog post of Bayes theorem with Python](https://dataconomy.com/2015/02/introduction-to-bayes-theorem-with-python/) - you could and should create something similar!\n\nStretch goals:\n\n- Apply a Bayesian technique to a problem you previously worked on (in an assignment or project) from a frequentist (standard) perspective\n- Check out [PyMC3](https://docs.pymc.io/) (note this goes beyond hypothesis tests into modeling) - read the guides and work through some examples\n- Take PyMC3 further - see if you can build something with it!\n\n### Imports\n\n\n```\nfrom scipy import stats\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n```\n\n### Bayes' Theorem Definition\n\n\n```\ndef prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk):\n    # P(drunk | positive) = P(positive | drunk) * P(drunk) / P(positive)\n    return (prob_positive_drunk * prob_drunk_prior) / prob_positive\n```\n\n\n```\n# Let's check it\n\nprob_drunk_prior = 1/1000\nprob_positive = 8/100\nprob_positive_drunk = 1\n```\n\n\n```\nprob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk)\n```\n\n\n\n\n    0.0125\n\n\n\n### Likelihood over Ranges of `prob_drunk_prior` and `prob_positive`\n\n\n```\ndf = pd.DataFrame()\n```\n\n\n```\ndrunk_list = [i/100 for i in range(1, 51)]\npositive_list = [i/100 for i in range(1, 10)]\n```\n\n\n```\ndrunk = []\npositive = []\nlikelihood = []\n```\n\n\n```\nfor x in drunk_list:\n    for y in positive_list:\n        drunk.append(x)\n        positive.append(y)\n        likelihood.append(prob_drunk_given_positive(x, y, 1))\n```\n\n\n```\ndf['prob_drunk'] = drunk\ndf['prob_positive'] = positive\ndf['likelihood'] = likelihood\n```\n\n\n```\n# df.plot.scatter creates its own figure, so pass figsize directly\nax = df.plot.scatter('prob_drunk', 'prob_positive', c='likelihood', figsize=(10, 10))\n\n# Title\nax.text(x=-.05, y=.11, 
s=\"Likelihood the Breathalyzer is Correct\", fontsize=14, fontweight='bold');\n\n# Set x-axis label\nplt.xlabel(\"The proportion of citizens drunk at any given time\", fontsize=12, fontweight=\"bold\", labelpad=15);\nplt.xticks([.1, .2, .3, .4, .5], labels=['.1', '.2', '.3', '.4', '.5']);\n\n# Set y-axis label\nplt.ylabel(\"False Positive Rate\", fontsize=12, fontweight=\"bold\", labelpad=15);\nplt.yticks([0, .02, .04, .06, .08, .1]);\n```\n\n### Recursive Definition\n\n\n```\ni = 1\ndf = pd.DataFrame()\nind = []\npost_list = []\n```\n\n\n```\ndef prob_drunk_given_positive_recur(prob_drunk_prior, prob_positive, prob_positive_drunk, n):\n    global i\n\n    # Simplified posterior update used throughout this notebook\n    post_prob = (prob_positive_drunk*prob_drunk_prior) / (prob_positive + prob_drunk_prior)\n    ind.append(int(i))\n    post_list.append(post_prob)\n    #print(i, post_prob)\n    i += 1\n    if i < n:\n        prob_drunk_given_positive_recur(post_prob, prob_positive, prob_positive_drunk, n)\n    return post_list\n```\n\n\n```\nprob_drunk_given_positive_recur(prob_drunk_prior, prob_positive, prob_positive_drunk, 20);\n```\n\n\n```\ndf['index'] = ind\ndf['post_prob'] = post_list\n```\n\n\n```\ndf.plot.scatter('index', 'post_prob')\n```\n\nRunning the test over and over again will cause the posterior probability to converge to 1 - (False Positive Rate) = 0.92, as we would expect.\n\n\n```\ndf.head()\n```\n\n\n\n\n
\n\n| | index | post_prob |\n|---|---|---|\n| 0 | 1 | 0.012346 |\n| 1 | 2 | 0.133690 |\n| 2 | 3 | 0.625626 |\n| 3 | 4 | 0.886625 |\n
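As a cross-check on the converging posteriors tabulated above, the repeated-update calculation can also be written without globals or recursion. This is a sketch (the name `iterate_posterior` is mine, not from the assignment) that applies the same simplified update rule used in the cells above:

```python
def iterate_posterior(prior, prob_positive, prob_positive_drunk, n):
    """Apply the notebook's simplified Bayes update n times in a row."""
    posteriors = []
    for _ in range(n):
        # same simplified denominator (prob_positive + prior) as above
        prior = (prob_positive_drunk * prior) / (prob_positive + prior)
        posteriors.append(prior)
    return posteriors

posts = iterate_posterior(1/1000, 8/100, 1, 20)
# posts[:4] reproduces the post_prob column above:
# 0.012346, 0.133690, 0.625626, 0.886625
```

After enough repetitions the sequence settles at 1 - 0.08 = 0.92, the fixed point of this update, which is the convergence claimed above.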
\n\n\n\n### Frequentist vs. Bayesian Statistics\n\nFrequentist and Bayesian Statistics are two sides of the same coin, with fundamentally different philosophies. Frequentists are primarily concerned with the frequency with which events happen, and Bayesians are concerned with our own uncertainty of events.\n\n\nBayesian approaches factor in our own observations and perceptions of events more naturally than Frequentist approaches.\n\n## Resources\n\n- [Worked example of Bayes rule calculation](https://en.wikipedia.org/wiki/Bayes'_theorem#Examples) (helpful as it fully breaks out the denominator)\n- [Source code for mvsdist in scipy](https://github.com/scipy/scipy/blob/90534919e139d2a81c24bf08341734ff41a3db12/scipy/stats/morestats.py#L139)\n\n\n```\ndef prob_drunk_given_positive_recur(prob_drunk_prior, prob_positive, prob_positive_drunk, n):\n global result\n post_prob = (prob_positive_drunk*prob_drunk_prior) / (prob_positive + prob_drunk_prior)\n global i\n i += 1\n while i < n:\n result.append(prob_drunk_given_positive_recur(post_prob, prob_positive, prob_positive_drunk, n))\n \n return(i, result) # This will give x, y.\n```\n\n\n```\npd.DataFrame(columns=[x, y])\n```\n", "meta": {"hexsha": "901f7ce1557d3370eadd847fb598521cda1f8ecf", "size": 85586, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "EricWuerfel_LS_DS5_143_Assignment.ipynb", "max_stars_repo_name": "ewuerfel66/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments", "max_stars_repo_head_hexsha": "182f57fdafa2a9477ae2f0c6107fc5c9118f2edd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "EricWuerfel_LS_DS5_143_Assignment.ipynb", "max_issues_repo_name": "ewuerfel66/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments", "max_issues_repo_head_hexsha": "182f57fdafa2a9477ae2f0c6107fc5c9118f2edd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, 
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "EricWuerfel_LS_DS5_143_Assignment.ipynb", "max_forks_repo_name": "ewuerfel66/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments", "max_forks_repo_head_hexsha": "182f57fdafa2a9477ae2f0c6107fc5c9118f2edd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 76.6899641577, "max_line_length": 31000, "alphanum_fraction": 0.7086673054, "converted": true, "num_tokens": 10916, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.23370635691404026, "lm_q2_score": 0.44167300566462553, "lm_q1q2_score": 0.1032217891011539}} {"text": "# [ATM 623: Climate Modeling](../index.ipynb)\n\n[Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany\n\n# Lecture 6: Elementary greenhouse models\n\n### About these notes:\n\nThis document uses the interactive [`Jupyter notebook`](https://jupyter.org) format. 
The notes can be accessed in several different ways:\n\n- The interactive notebooks are hosted on `github` at https://github.com/brian-rose/ClimateModeling_courseware\n- The latest versions can be viewed as static web pages [rendered on nbviewer](http://nbviewer.ipython.org/github/brian-rose/ClimateModeling_courseware/blob/master/index.ipynb)\n- A complete snapshot of the notes as of May 2017 (end of spring semester) are [available on Brian's website](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2017/Notes/index.html).\n\n[Also here is a legacy version from 2015](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/Notes/index.html).\n\nMany of these notes make use of the `climlab` package, available at https://github.com/brian-rose/climlab\n\n\n```python\n# Ensure compatibility with Python 2 and 3\nfrom __future__ import print_function, division\n```\n\n## Contents\n\n1. [A single layer atmosphere](#section1)\n2. [Introducing the two-layer grey gas model](#section2)\n3. [Tuning the grey gas model to observations](#section3)\n4. [Level of emission](#section4)\n5. [Radiative forcing in the 2-layer grey gas model](#section5)\n6. [Radiative equilibrium in the 2-layer grey gas model](#section6)\n7. [Summary](#section7)\n\n____________\n\n\n## 1. 
A single layer atmosphere\n____________\n\nWe will make our first attempt at quantifying the greenhouse effect in the simplest possible greenhouse model: a single layer of atmosphere that is able to absorb and emit longwave radiation.\n\n\n\n### Assumptions\n\n- Atmosphere is a single layer of air at temperature $T_a$\n- Atmosphere is **completely transparent to shortwave** solar radiation.\n- The **surface** absorbs shortwave radiation $(1-\\alpha) Q$\n- Atmosphere is **completely opaque to infrared** radiation\n- Both surface and atmosphere emit radiation as **blackbodies** ($\\sigma T_s^4, \\sigma T_a^4$)\n- Atmosphere radiates **equally up and down** ($\\sigma T_a^4$)\n- There are no other heat transfer mechanisms\n\nWe can now use the concept of energy balance to ask what the temperatures need to be in order to balance the energy budgets at the surface and the atmosphere, i.e. the **radiative equilibrium temperatures**.\n\n\n### Energy balance at the surface\n\n\\begin{align}\n\\text{energy in} &= \\text{energy out} \\\\\n(1-\\alpha) Q + \\sigma T_a^4 &= \\sigma T_s^4 \\\\\n\\end{align}\n\nThe presence of the atmosphere above means there is an additional source term: downwelling infrared radiation from the atmosphere.\n\nWe call this the **back radiation**.\n\n### Energy balance for the atmosphere\n\n\\begin{align}\n\\text{energy in} &= \\text{energy out} \\\\\n\\sigma T_s^4 &= A\\uparrow + A\\downarrow = 2 \\sigma T_a^4 \\\\\n\\end{align}\n\nwhich means that \n$$ T_s = 2^\\frac{1}{4} T_a \\approx 1.2 T_a $$\n\nSo we have just determined that, in order to have a purely **radiative equilibrium**, we must have $T_s > T_a$. 
\n\n*The surface must be warmer than the atmosphere.*\n\n### Solve for the radiative equilibrium surface temperature\n\nNow plug this into the surface equation to find\n\n$$ \\frac{1}{2} \\sigma T_s^4 = (1-\\alpha) Q $$\n\nand use the definition of the emission temperature $T_e$ to write\n\n$$ (1-\\alpha) Q = \\sigma T_e^4 $$\n\n*In fact, in this model, $T_e$ is identical to the atmospheric temperature $T_a$, since all the OLR originates from this layer.*\n\nSolve for the surface temperature:\n$$ T_s = 2^\\frac{1}{4} T_e $$\n\nPutting in observed numbers, $T_e = 255$ K gives a surface temperature of \n$$T_s = 303 ~\\text{K}$$\n\nThis model is one small step closer to reality: surface is warmer than atmosphere, emissions to space generated in the atmosphere, atmosphere heated from below and helping to keep surface warm.\n\nBUT our model now overpredicts the surface temperature by about 15\u00b0C (or K).\n\nIdeas about why?\n\nBasically we just need to read our **list of assumptions** above and realize that none of them are very good approximations:\n\n- Atmosphere absorbs some solar radiation.\n- Atmosphere is NOT a perfect absorber of longwave radiation\n- Absorption and emission vary strongly with wavelength *(atmosphere does not behave like a blackbody)*.\n- Emissions are not determined by a single temperature $T_a$ but by the detailed *vertical profile* of air temperature.\n- Energy is redistributed in the vertical by a variety of dynamical transport mechanisms (e.g. convection and boundary layer turbulence).\n\n\n\n____________\n\n\n## 2. Introducing the two-layer grey gas model\n____________\n\nLet's generalize the above model just a little bit to build a slightly more realistic model of longwave radiative transfer.\n\nWe will address two shortcomings of our single-layer model:\n1. No vertical structure\n2. 
100% longwave opacity\n\nRelaxing these two assumptions gives us what turns out to be a very useful prototype model for **understanding how the greenhouse effect works**.\n\n### Assumptions\n\n- The atmosphere is **transparent to shortwave radiation** (still)\n- Divide the atmosphere up into **two layers of equal mass** (the dividing line is thus at 500 hPa pressure level)\n- Each layer **absorbs only a fraction $\\epsilon$** of whatever longwave radiation is incident upon it.\n- We will call the fraction $\\epsilon$ the **absorptivity** of the layer.\n- Assume $\\epsilon$ is the same in each layer\n\nThis is called the **grey gas** model, where grey here means the emission and absorption have no spectral dependence.\n\nWe can think of this model informally as a \"leaky greenhouse\".\n\nNote that the assumption that $\\epsilon$ is the same in each layer is appropriate if the absorption is actually carried out by a gas that is **well-mixed** in the atmosphere.\n\nOut of our two most important absorbers:\n\n- CO$_2$ is well mixed\n- H$_2$O is not (mostly confined to lower troposphere due to strong temperature dependence of the saturation vapor pressure).\n\nBut we will ignore this aspect of reality for now.\n\nIn order to build our model, we need to introduce one additional piece of physics known as **Kirchhoff's Law**:\n\n$$ \\text{absorptivity} = \\text{emissivity} $$\n\nSo if a layer of atmosphere at temperature $T$ absorbs a fraction $\\epsilon$ of incident longwave radiation, it must emit\n\n$$ \\epsilon ~\\sigma ~T^4 $$\n\nboth up and down.\n\n### A sketch of the radiative fluxes in the 2-layer atmosphere\n\n\n\n- Surface temperature is $T_s$\n- Atm. 
temperatures are $T_0, T_1$ where $T_0$ is closest to the surface.\n- absorptivity of atm layers is $\\epsilon$\n- Surface emission is $\\sigma T_s^4$\n- Atm emission is $\\epsilon \\sigma T_0^4, \\epsilon \\sigma T_1^4$ (up and down)\n- Absorptivity = emissivity for atmospheric layers\n- a fraction $(1-\\epsilon)$ of the longwave beam is **transmitted** through each layer\n\n### A fun aside: symbolic math with the `sympy` package\n\nThis two-layer grey gas model is simple enough that we can work out all the details algebraically. There are three temperatures to keep track of $(T_s, T_0, T_1)$, so we will have 3x3 matrix equations.\n\nWe all know how to work these things out with pencil and paper. But it can be tedious and error-prone. \n\nSymbolic math software lets us use the computer to automate a lot of tedious algebra.\n\nThe [sympy](http://www.sympy.org/en/index.html) package is a powerful open-source symbolic math library that is well-integrated into the scientific Python ecosystem. 
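Before setting up the symbolic machinery, it is worth confirming the single-layer result from Section 1 with plain floating-point arithmetic. A quick sketch, using the observed insolation and albedo values that also appear in the tuning dictionary below:

```python
sigma = 5.67e-8        # Stefan-Boltzmann constant, W m-2 K-4
Q = 341.3              # global mean insolation, W m-2
alpha = 101.9 / Q      # observed planetary albedo, about 0.30

# Emission temperature from (1 - alpha) * Q = sigma * Te**4
Te = ((1 - alpha) * Q / sigma) ** 0.25

# Single-layer radiative equilibrium: Ts = 2**(1/4) * Te
Ts = 2 ** 0.25 * Te

print(round(Te, 1), round(Ts, 1))  # about 255 K and 303 K
```

This is just arithmetic; the value of `sympy` below is that it carries the same calculation through symbolically, so we can see *why* the answer comes out this way.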
\n\n\n```python\nimport sympy\n# Allow sympy to produce nice looking equations as output\nsympy.init_printing()\n# Define some symbols for mathematical quantities\n# Assume all quantities are positive (which will help simplify some expressions)\nepsilon, T_e, T_s, T_0, T_1, sigma = \\\n sympy.symbols('epsilon, T_e, T_s, T_0, T_1, sigma', positive=True)\n# So far we have just defined some symbols, e.g.\nT_s\n```\n\n\n```python\n# We have hard-coded the assumption that the temperature is positive\nsympy.ask(T_s>0)\n```\n\n\n\n\n True\n\n\n\n### Longwave emissions\n\nLet's denote the emissions from each layer as\n\\begin{align}\nE_s &= \\sigma T_s^4 \\\\\nE_0 &= \\epsilon \\sigma T_0^4 \\\\\nE_1 &= \\epsilon \\sigma T_1^4 \n\\end{align}\n\nrecognizing that $E_0$ and $E_1$ contribute to **both** the upwelling and downwelling beams.\n\n\n```python\n# Define these operations as sympy symbols \n# And display as a column vector:\nE_s = sigma*T_s**4\nE_0 = epsilon*sigma*T_0**4\nE_1 = epsilon*sigma*T_1**4\nE = sympy.Matrix([E_s, E_0, E_1])\nE\n```\n\n### Shortwave radiation\nSince we have assumed the atmosphere is transparent to shortwave, the incident beam $Q$ passes unchanged from the top to the surface, where a fraction $\\alpha$ is reflected upward out to space.\n\n\n```python\n# Define some new symbols for shortwave radiation\nQ, alpha = sympy.symbols('Q, alpha', positive=True)\n# Create a dictionary to hold our numerical values\ntuned = {}\ntuned[Q] = 341.3 # global mean insolation in W/m2\ntuned[alpha] = 101.9/Q.subs(tuned) # observed planetary albedo\ntuned[sigma] = 5.67E-8 # Stefan-Boltzmann constant in W/m2/K4\ntuned\n# Numerical value for emission temperature\n#T_e.subs(tuned)\n```\n\n### Upwelling beam\n\nLet $U$ be the upwelling flux of longwave radiation. 
\n\nThe upward flux from the surface to layer 0 is\n$$ U_0 = E_s $$\n(just the emission from the surface).\n\n\n```python\nU_0 = E_s\nU_0\n```\n\nFollowing this beam upward, we can write the upward flux from layer 0 to layer 1 as the sum of the transmitted component that originated below layer 0 and the new emissions from layer 0:\n\n$$ U_1 = (1-\\epsilon) U_0 + E_0 $$\n\n\n```python\nU_1 = (1-epsilon)*U_0 + E_0\nU_1\n```\n\nContinuing to follow the same beam, the upwelling flux above layer 1 is\n$$ U_2 = (1-\\epsilon) U_1 + E_1 $$\n\n\n```python\nU_2 = (1-epsilon) * U_1 + E_1\n```\n\nSince there is no more atmosphere above layer 1, this upwelling flux is our Outgoing Longwave Radiation for this model:\n\n$$ OLR = U_2 $$\n\n\n```python\nU_2\n```\n\nThe three terms in the above expression represent the **contributions to the total OLR that originate from each of the three levels**. \n\nLet's code this up explicitly for future reference:\n\n\n```python\n# Define the contributions to OLR originating from each level\nOLR_s = (1-epsilon)**2 * sigma*T_s**4\nOLR_0 = epsilon*(1-epsilon)*sigma*T_0**4\nOLR_1 = epsilon*sigma*T_1**4\n\nOLR = OLR_s + OLR_0 + OLR_1\n\nprint('The expression for OLR is')\nOLR\n```\n\n### Downwelling beam\n\nLet $D$ be the downwelling longwave beam. Since there is no longwave radiation coming in from space, we begin with \n\n\n```python\nfromspace = 0\nD_2 = fromspace\n```\n\nBetween layer 1 and layer 0 the beam contains emissions from layer 1:\n\n$$ D_1 = (1-\\epsilon)D_2 + E_1 = E_1 $$\n\n\n```python\nD_1 = (1-epsilon)*D_2 + E_1\nD_1\n```\n\nFinally between layer 0 and the surface the beam contains a transmitted component and the emissions from layer 0:\n\n$$ D_0 = (1-\\epsilon) D_1 + E_0 = \\epsilon(1-\\epsilon) \\sigma T_1^4 + \\epsilon \\sigma T_0^4$$\n\n\n```python\nD_0 = (1-epsilon)*D_1 + E_0\nD_0\n```\n\nThis $D_0$ is what we call the **back radiation**, i.e. 
the longwave radiation from the atmosphere to the surface.\n\n____________\n\n\n## 3. Tuning the grey gas model to observations\n____________\n\nIn building our new model we have introduced exactly one parameter, the absorptivity $\\epsilon$. We need to choose a value for $\\epsilon$.\n\nWe will tune our model so that it **reproduces the observed global mean OLR** given **observed global mean temperatures**.\n\nTo get appropriate temperatures for $T_s, T_0, T_1$, let's revisit the [global, annual mean lapse rate plot from NCEP Reanalysis data](Lecture05 -- Radiation.ipynb) from the previous lecture.\n\n### Temperatures\n\nFirst, we set \n$$T_s = 288 \\text{ K} $$\n\nFrom the lapse rate plot, an average temperature for the layer between 1000 and 500 hPa is \n\n$$ T_0 = 275 \\text{ K}$$\n\nDefining an average temperature for the layer between 500 and 0 hPa is more ambiguous because of the lapse rate reversal at the tropopause. We will choose\n\n$$ T_1 = 230 \\text{ K}$$\n\nFrom the graph, this is approximately the observed global mean temperature at 275 hPa or about 10 km.\n\n\n```python\n# add to our dictionary of values:\ntuned[T_s] = 288.\ntuned[T_0] = 275.\ntuned[T_1] = 230.\ntuned\n```\n\n### OLR\n\nFrom the [observed global energy budget](Lecture01 -- Planetary energy budget.ipynb) we set \n\n$$ OLR = 238.5 \\text{ W m}^{-2} $$\n\n### Solving for $\\epsilon$\n\nWe wrote down the expression for OLR as a function of temperatures and absorptivity in our model above. \n\nWe just need to equate this to the observed value and solve a **quadratic equation** for $\\epsilon$.\n\nThis is where the real power of the symbolic math toolkit comes in. 
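Before turning the crank symbolically, note that the tuning condition is just a quadratic in $\epsilon$, which can be checked numerically with plain Python. A sketch (my own cross-check, not part of the original notes), with the OLR expression collected into powers of epsilon by hand:

```python
sigma = 5.67e-8
Ts, T0, T1 = 288.0, 275.0, 230.0   # tuned temperatures from above
OLR_obs = 238.5                    # observed global mean OLR, W m-2

A = sigma * Ts**4   # surface blackbody emission
B = sigma * T0**4   # layer 0 blackbody emission
C = sigma * T1**4   # layer 1 blackbody emission

# OLR(eps) = (1-eps)**2 * A + eps*(1-eps) * B + eps * C
# Collecting powers of eps: (A-B)*eps**2 + (B + C - 2*A)*eps + A
a, b, c = A - B, B + C - 2*A, A - OLR_obs
eps_numeric = (-b - (b**2 - 4*a*c) ** 0.5) / (2 * a)   # root with 0 < eps < 1
print(round(eps_numeric, 3))  # about 0.586
```

The symbolic solution below finds the same two roots, and makes it easy to discard the unphysical one.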
\n\nSubstitute in the numerical values we are interested in:\n\n\n```python\n# the .subs() method for a sympy symbol means\n# substitute values in the expression using the supplied dictionary\n# Here we use observed values of Ts, T0, T1 \nOLR2 = OLR.subs(tuned)\nOLR2\n```\n\nWe have a quadratic equation for $\\epsilon$.\n\nNow use the `sympy.solve` function to solve the quadratic:\n\n\n```python\n# The sympy.solve method takes an expression equal to zero\n# So in this case we subtract the tuned value of OLR from our expression\neps_solution = sympy.solve(OLR2 - 238.5, epsilon)\neps_solution\n```\n\nThere are two roots, but the second one is unphysical since we must have $0 < \\epsilon < 1$.\n\nJust for fun, here is a simple example of *filtering a list* using powerful Python *list comprehension* syntax:\n\n\n```python\n# Give me only the roots that are between zero and 1!\nlist_result = [eps for eps in eps_solution if 0 < eps < 1]\nlist_result\n```\n\nStore this tuned value of the absorptivity for use in the rest of the notebook:\n\n\n```python\ntuned[epsilon] = list_result[0]\ntuned[epsilon]\n```\n\n____________\n\n\n## 4. Level of emission\n____________\n\nEven in this very simple greenhouse model, there is **no single level** at which the OLR is generated.\n\nThe three terms in our formula for OLR tell us the contributions from each level.\n\n\n```python\nOLRterms = sympy.Matrix([OLR_s, OLR_0, OLR_1])\nOLRterms\n```\n\nNow evaluate these expressions for our tuned temperature and absorptivity:\n\n\n```python\nOLRtuned = OLRterms.subs(tuned)\nOLRtuned\n```\n\nSo we are getting about 67 W m$^{-2}$ from the surface, 79 W m$^{-2}$ from layer 0, and 93 W m$^{-2}$ from the top layer.\n\nIn terms of fractional contributions to the total OLR, we have (limiting the output to two decimal places):\n\n\n```python\nsympy.N(OLRtuned / 239., 2)\n```\n\nNotice that the largest single contribution is coming from the top layer. This is in spite of the fact that the emissions from this layer are weak, because it is so cold.\n\nComparing to observations, the actual contribution to OLR from the surface is about 22 W m$^{-2}$ (or about 9% of the total), not 67 W m$^{-2}$. 
So we certainly don't have all the details worked out yet!\n\nAs we will see later, to really understand what sets that observed 22 W m$^{-2}$, we will need to start thinking about the spectral dependence of the longwave absorptivity.\n\n____________\n\n\n## 5. Radiative forcing in the 2-layer grey gas model\n____________\n\nAdding some extra greenhouse absorbers will mean that a greater fraction of incident longwave radiation is absorbed in each layer.\n\nThus **$\\epsilon$ must increase** as we add greenhouse gases.\n\nSuppose we have $\\epsilon$ initially, and the absorptivity increases to $\\epsilon_2 = \\epsilon + \\delta_\\epsilon$.\n\nSuppose further that this increase happens **abruptly** so that there is no time for the temperatures to respond to this change. **We hold the temperatures fixed** in the column and ask how the radiative fluxes change.\n\n**Do you expect the OLR to increase or decrease?**\n\nLet's use our two-layer leaky greenhouse model to investigate the answer.\n\nThe components of the OLR before the perturbation are\n\n\n```python\nOLRterms\n```\n\nAfter the perturbation we have\n\n\n```python\ndelta_epsilon = sympy.symbols('delta_epsilon')\nOLRterms_pert = OLRterms.subs(epsilon, epsilon+delta_epsilon)\nOLRterms_pert\n```\n\nLet's take the difference\n\n\n```python\ndeltaOLR = OLRterms_pert - OLRterms\ndeltaOLR\n```\n\nTo make things simpler, we will neglect the terms in $\\delta_\\epsilon^2$. This is perfectly reasonable because we are dealing with **small perturbations** where $\\delta_\\epsilon \\ll \\epsilon$.\n\nTelling `sympy` to set the quadratic terms to zero gives us\n\n\n```python\ndeltaOLR_linear = sympy.expand(deltaOLR).subs(delta_epsilon**2, 0)\ndeltaOLR_linear\n```\n\nRecall that the three terms are the contributions to the OLR from the three different levels. 
In this case, the **changes** in those contributions after adding more absorbers.\n\nNow let's divide through by $\\delta_\\epsilon$ to get the normalized change in OLR per unit change in absorptivity:\n\n\n```python\ndeltaOLR_per_deltaepsilon = \\\n sympy.simplify(deltaOLR_linear / delta_epsilon)\ndeltaOLR_per_deltaepsilon\n```\n\nNow look at the **sign** of each term. Recall that $0 < \\epsilon < 1$. **Which terms in the OLR go up and which go down?**\n\n**THIS IS VERY IMPORTANT, SO STOP AND THINK ABOUT IT.**\n\nThe contribution from the **surface** must **decrease**, while the contribution from the **top layer** must **increase**.\n\n**When we add absorbers, the average level of emission goes up!**\n\n### \"Radiative forcing\" is the change in radiative flux at TOA after adding absorbers\n\nIn this model, only the longwave flux can change, so we define the radiative forcing as\n\n$$ R = - \\delta OLR $$\n\n(with the minus sign so that $R$ is positive when the climate system is gaining extra energy).\n\nWe just worked out that whenever we add some extra absorbers, the emissions to space (on average) will originate from higher levels in the atmosphere. \n\nWhat does this mean for OLR? Will it increase or decrease?\n\nTo get the answer, we just have to sum up the three contributions we wrote above:\n\n\n```python\nR = -sum(deltaOLR_per_deltaepsilon)\nR\n```\n\nIs this a positive or negative number? The key point is this:\n\n**It depends on the temperatures, i.e. on the lapse rate.**\n\n### Greenhouse effect for an isothermal atmosphere\n\nStop and think about this question:\n\nIf the **surface and atmosphere are all at the same temperature**, does the OLR go up or down when $\\epsilon$ increases (i.e. 
we add more absorbers)?\n\nUnderstanding this question is key to understanding how the greenhouse effect works.\n\n#### Let's solve the isothermal case\n\nWe will just set $T_s = T_0 = T_1$ in the above expression for the radiative forcing.\n\n\n```python\nR.subs([(T_0, T_s), (T_1, T_s)])\n```\n\nwhich then simplifies to\n\n\n```python\nsympy.simplify(R.subs([(T_0, T_s), (T_1, T_s)]))\n```\n\n#### The answer is zero\n\nFor an isothermal atmosphere, there is **no change** in OLR when we add extra greenhouse absorbers. Hence, no radiative forcing and no greenhouse effect.\n\nWhy?\n\nThe level of emission still must go up. But since the temperature at the upper level is the **same** as everywhere else, the emissions are exactly the same.\n\n### The radiative forcing (change in OLR) depends on the lapse rate!\n\nFor a more realistic example of radiative forcing due to an increase in greenhouse absorbers, we can substitute in our tuned values for temperature and $\\epsilon$. \n\nWe'll express the answer in W m$^{-2}$ for a 1% increase in $\\epsilon$.\n\nThe three components of the OLR change are\n\n\n```python\ndeltaOLR_per_deltaepsilon.subs(tuned) * 0.01\n```\n\nAnd the net radiative forcing is\n\n\n```python\nR.subs(tuned) * 0.01\n```\n\nSo in our example, **the OLR decreases by 2.2 W m$^{-2}$**, or equivalently, the radiative forcing is +2.2 W m$^{-2}$.\n\nWhat we have just calculated is this:\n\n*Given the observed lapse rates, a small increase in absorbers will cause a small decrease in OLR.*\n\nThe greenhouse effect thus gets stronger, and energy will begin to accumulate in the system -- which will eventually cause temperatures to increase as the system adjusts to a new equilibrium.\n\n____________\n\n\n## 6. Radiative equilibrium in the 2-layer grey gas model\n____________\n\nIn the previous section we:\n\n- made no assumptions about the processes that actually set the temperatures. 
\n- used the model to calculate radiative fluxes, **given observed temperatures**. \n- stressed the importance of knowing the lapse rates in order to know how an increase in emission level would affect the OLR, and thus determine the radiative forcing.\n\nA key question in climate dynamics is therefore this:\n\n**What sets the lapse rate?**\n\nIt turns out that lots of different physical processes contribute to setting the lapse rate. \n\nUnderstanding how these processes act together, and how they change as the climate changes, is one of the key reasons we need more complex climate models.\n\nFor now, we will use our prototype greenhouse model to do the most basic lapse rate calculation: the **radiative equilibrium temperature**.\n\nWe assume that\n\n- the only exchange of energy between layers is longwave radiation\n- equilibrium is achieved when the **net radiative flux convergence** in each layer is zero.\n\n### Compute the radiative flux convergence\n\nFirst, the **net upwelling flux** is just the difference between flux up and flux down:\n\n\n```python\n# Upwelling and downwelling beams as matrices\nU = sympy.Matrix([U_0, U_1, U_2])\nD = sympy.Matrix([D_0, D_1, D_2])\n# Net flux, positive up\nF = U-D\nF\n```\n\n#### Net absorption is the flux convergence in each layer\n\n(difference between what's coming in the bottom and what's going out the top of each layer)\n\n\n```python\n# define a vector of absorbed radiation -- same size as emissions\nA = E.copy()\n\n# absorbed radiation at surface\nA[0] = F[0]\n# Compute the convergence\nfor n in range(2):\n    A[n+1] = -(F[n+1]-F[n])\n\nA\n```\n\n#### Radiative equilibrium means net absorption is ZERO in the atmosphere\n\nThe only other heat source is the **shortwave heating** at the **surface**.\n\nIn matrix form, here is the system of equations to be solved:\n\n\n```python\nradeq = sympy.Equality(A, sympy.Matrix([(1-alpha)*Q, 0, 0]))\nradeq\n```\n\nJust as we did for the 1-layer model, it is helpful to 
rewrite this system using the definition of the **emission temperature** $T_e$\n\n$$ (1-\\alpha) Q = \\sigma T_e^4 $$\n\n\n```python\nradeq2 = radeq.subs([((1-alpha)*Q, sigma*T_e**4)])\nradeq2\n```\n\nIn this form we can see that we actually have a **linear system** of equations for a set of variables $T_s^4, T_0^4, T_1^4$.\n\nWe can solve this matrix problem to get these as functions of $T_e^4$.\n\n\n```python\n# Solve for radiative equilibrium \nfourthpower = sympy.solve(radeq2, [T_s**4, T_1**4, T_0**4])\nfourthpower\n```\n\nThis produces a dictionary of solutions for the fourth power of the temperatures!\n\nA little manipulation gets us the solutions for temperatures that we want:\n\n\n```python\n# need the symbolic fourth root operation\nfrom sympy.simplify.simplify import nthroot\n\nfourthpower_list = [fourthpower[key] for key in [T_s**4, T_0**4, T_1**4]]\nsolution = sympy.Matrix([nthroot(item,4) for item in fourthpower_list])\n# Display result as matrix equation!\nT = sympy.Matrix([T_s, T_0, T_1])\nsympy.Equality(T, solution)\n```\n\nIn more familiar notation, the radiative equilibrium solution is thus\n\n\\begin{align} \nT_s &= T_e \\left( \\frac{2+\\epsilon}{2-\\epsilon} \\right)^{1/4} \\\\\nT_0 &= T_e \\left( \\frac{1+\\epsilon}{2-\\epsilon} \\right)^{1/4} \\\\\nT_1 &= T_e \\left( \\frac{1}{2 - \\epsilon} \\right)^{1/4}\n\\end{align}\n\nPlugging in the tuned value $\\epsilon = 0.586$ gives\n\n\n```python\nTsolution = solution.subs(tuned)\n# Display result as matrix equation!\nsympy.Equality(T, Tsolution)\n```\n\nNow we just need to know the Earth's emission temperature $T_e$!\n\n(Which we already know is about 255 K)\n\n\n```python\n# Here's how to calculate T_e from the observed values\nsympy.solve(((1-alpha)*Q - sigma*T_e**4).subs(tuned), T_e)\n```\n\n\n```python\n# Need to unpack the list\nTe_value = sympy.solve(((1-alpha)*Q - sigma*T_e**4).subs(tuned), T_e)[0]\nTe_value\n```\n\n#### Now we finally get our solution for radiative 
equilibrium\n\n\n```python\n# Output 4 significant digits\nTrad = sympy.N(Tsolution.subs([(T_e, Te_value)]), 4)\nsympy.Equality(T, Trad)\n```\n\nCompare these to the values we derived from the **observed lapse rates**:\n\n\n```python\nsympy.Equality(T, T.subs(tuned))\n```\n\nThe **radiative equilibrium** solution is substantially **warmer at the surface** and **colder in the lower troposphere** than reality.\n\nThis is a very general feature of radiative equilibrium, and we will see it again very soon in this course.\n\n____________\n\n\n## 7. Summary\n____________\n\n## Key physical lessons\n\n- Putting a **layer of longwave absorbers** above the surface keeps the **surface substantially warmer**, because of the **backradiation** from the atmosphere (greenhouse effect).\n- The **grey gas** model assumes that each layer absorbs and emits a fraction $\\epsilon$ of its blackbody value, independent of wavelength.\n\n- With **incomplete absorption** ($\\epsilon < 1$), there are contributions to the OLR from every level and the surface (there is no single **level of emission**).\n- Adding more absorbers means that **contributions to the OLR** from **upper levels** go **up**, while contributions from the surface go **down**.\n- This upward shift in the weighting of different levels is what we mean when we say the **level of emission goes up**.\n\n- The **radiative forcing** caused by an increase in absorbers **depends on the lapse rate**.\n- For an **isothermal atmosphere**, the radiative forcing is zero and there is **no greenhouse effect**.\n- The radiative forcing is positive for our atmosphere **because tropospheric temperatures tend to decrease with height**.\n- Pure **radiative equilibrium** produces a **warm surface** and **cold lower troposphere**.\n- This is unrealistic, and suggests that crucial heat transfer mechanisms are missing from our model.\n\n### And on the Python side...\n\nDid we need `sympy` to work all this out? No, of course not. 
We could have solved the 3x3 matrix problems by hand. But computer algebra can be very useful, saving you a lot of time and preventing many errors, so it's good to invest some effort into learning how to use it. \n\nHopefully these notes provide a useful starting point.\n\n### A follow-up assignment\n\nYou are now ready to tackle [Assignment 5](../Assignments/Assignment05 -- Radiative forcing in a grey radiation atmosphere.ipynb), where you are asked to extend this grey-gas analysis to many layers. \n\nFor more than a few layers, the analytical approach we used here is no longer very useful. You will code up a numerical solution to calculate OLR given temperatures and absorptivity, and look at how the lapse rate determines radiative forcing for a given increase in absorptivity.\n\n
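To see what such a numerical solution might look like, here is a minimal sketch of an N-layer grey-gas OLR calculation (the function name `grey_gas_olr` and the bottom-up layer ordering are illustrative choices, not part of the assignment): the surface emission is attenuated by a factor $(1-\epsilon)$ for each layer it passes through, and each layer's own emission is attenuated by the layers above it.

```python
import numpy as np

def grey_gas_olr(Ts, T, eps, sigma=5.67e-8):
    """OLR (W/m2) for an N-layer grey gas with uniform absorptivity eps.

    T is an array of layer temperatures ordered from the bottom up.
    Every emission is attenuated by (1 - eps) for each layer it must
    pass through on its way to space.
    """
    N = len(T)
    trans = 1 - eps                      # transmissivity of one layer
    olr = sigma * Ts**4 * trans**N       # surface emission through N layers
    for n in range(N):
        # layer n emits eps*sigma*T^4 upward; N-1-n layers sit above it
        olr += eps * sigma * T[n]**4 * trans**(N - 1 - n)
    return olr

# Sanity check: an isothermal column has no greenhouse effect,
# so OLR equals sigma*Ts**4 regardless of eps
print(grey_gas_olr(288., np.array([288., 288.]), 0.586))  # equals sigma*288**4
```

With `N = 2` this loop reproduces the two-layer expression derived symbolically above, $(1-\epsilon)^2 \sigma T_s^4 + \epsilon(1-\epsilon)\sigma T_0^4 + \epsilon \sigma T_1^4$, and the same code generalizes to any number of layers.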
\n[Back to ATM 623 notebook home](../index.ipynb)\n
\n\n____________\n## Version information\n____________\n\n\n\n```python\n%load_ext version_information\n%version_information sympy\n```\n\n\n\n\n
| Software | Version |
| --- | --- |
| Python | 3.6.2 64bit [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] |
| IPython | 6.1.0 |
| OS | Darwin 17.7.0 x86_64 i386 64bit |
| sympy | 1.1.1 |

Tue Jan 15 13:50:53 2019 EST
\n\n\n\n____________\n\n## Credits\n\nThe author of this notebook is [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany.\n\nIt was developed in support of [ATM 623: Climate Modeling](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/), a graduate-level course in the [Department of Atmospheric and Environmental Sciences](http://www.albany.edu/atmos/index.php).\n\nDevelopment of these notes and the [climlab software](https://github.com/brian-rose/climlab) is partially supported by the National Science Foundation under award AGS-1455071 to Brian Rose. Any opinions, findings, conclusions or recommendations expressed here are mine and do not necessarily reflect the views of the National Science Foundation.\n____________\n\n# Detect and Mitigate Unfairness in Models\n\nMachine learning models can incorporate unintentional bias, which can lead to issues with *fairness*. For example, a model that predicts the likelihood of diabetes might work well for some age groups, but not for others - subjecting a subset of patients to unnecessary tests, or depriving them of tests that would confirm a diabetes diagnosis.\n\nIn this notebook, you'll use the **Fairlearn** package to analyze a model and explore disparity in prediction performance for different subsets of patients based on age.\n\n> **Note**: Integration with the Fairlearn package is in preview at this time. You may experience some unexpected errors.\n\n## Important - Considerations for fairness\n\n> This notebook is designed as a practical exercise to help you explore the Fairlearn package and its integration with Azure Machine Learning. However, there are a great number of considerations that an organization or data science team must discuss related to fairness before using the tools. 
Fairness is a complex *sociotechnical* challenge that goes beyond simply running a tool to analyze models.\n>\n> Microsoft Research has co-developed a [fairness checklist](https://www.microsoft.com/en-us/research/publication/co-designing-checklists-to-understand-organizational-challenges-and-opportunities-around-fairness-in-ai/) that provides a great starting point for the important discussions that need to take place before a single line of code is written.\n\n## Install the required SDKs\n\nTo use the Fairlearn package with Azure Machine Learning, you need the Azure Machine Learning and Fairlearn Python packages, so run the following cell verify that the **azureml-contrib-fairness** package is installed. \n\n\n```python\n!pip show azureml-contrib-fairness\n```\n\n Name: azureml-contrib-fairness\r\n Version: 1.34.0\r\n Summary: Uploads fairness dashboards to AzureML (preview).\r\n Home-page: https://docs.microsoft.com/python/api/overview/azure/ml/?view=azure-ml-py\r\n Author: Microsoft Corp\r\n Author-email: None\r\n License: Proprietary https://aka.ms/azureml-preview-sdk-license \r\n Location: /anaconda/envs/azureml_py36/lib/python3.6/site-packages\r\n Requires: azureml-core, jsonschema\r\n Required-by: \r\n\n\nYou'll also need the **fairlearn** package itself, and the **raiwidgets** package (which is used by Fairlearn to visualize dashboards). 
Run the following cell to install them.\n\n\n```python\n!pip install --upgrade fairlearn==0.7.0 raiwidgets\n```\n\n Requirement already up-to-date: fairlearn==0.7.0 in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (0.7.0)\n Collecting raiwidgets\n Downloading raiwidgets-0.13.0-py3-none-any.whl (1.9 MB)\n ...\n Installing collected packages: interpret-community, erroranalysis, dice-ml, responsibleai, jinja2, raiwidgets, Flask-Cors, greenlet, gevent, parso\n ...\n\n\n## Train a model\n\nYou'll start by training a classification model to predict the likelihood of diabetes. In addition to splitting the data into training and test sets of features and labels, you'll extract *sensitive* features that are used to define subpopulations of the data for which you want to compare fairness. In this case, you'll use the **Age** column to define two categories of patient: those over 50 years old, and those 50 or younger.\n\n\n```python\nimport pandas as pd\nimport numpy as np\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.tree import DecisionTreeClassifier\n\n# load the diabetes dataset\nprint(\"Loading Data...\")\ndata = pd.read_csv('data/diabetes.csv')\n\n# Separate features and labels\nfeatures = ['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']\nX, y = data[features].values, data['Diabetic'].values\n\n# Get sensitive features\nS = data[['Age']].astype(int)\n# Change value to represent age groups\nS['Age'] = np.where(S.Age > 50, 'Over 50', '50 or younger')\n\n# Split data into training set and test set\nX_train, X_test, y_train, y_test, S_train, S_test = train_test_split(X, y, S, test_size=0.20, random_state=0, stratify=y)\n\n# Train a classification model\nprint(\"Training model...\")\ndiabetes_model = DecisionTreeClassifier().fit(X_train, y_train)\n\nprint(\"Model trained.\")\n```\n\nNow that you've trained a model, you can use the Fairlearn package to compare its behavior for different sensitive feature values. In this case, you'll:\n\n- Use the fairlearn **selection_rate** function to return the selection rate (percentage of positive predictions) for the overall population.\n- Use **scikit-learn** metric functions to calculate overall accuracy, recall, and precision metrics.\n- Use a **MetricFrame** to calculate selection rate, accuracy, recall, and precision for each age group in the **Age** sensitive feature. 
Note that a mix of **fairlearn** and **scikit-learn** metric functions is used to calculate the performance values.\n\n\n```python\nfrom fairlearn.metrics import selection_rate, MetricFrame\nfrom sklearn.metrics import accuracy_score, recall_score, precision_score\n\n# Get predictions for the withheld test data\ny_hat = diabetes_model.predict(X_test)\n\n# Get overall metrics\nprint(\"Overall Metrics:\")\n# Get selection rate from fairlearn\noverall_selection_rate = selection_rate(y_test, y_hat)\nprint(\"\\tSelection Rate:\", overall_selection_rate)\n# Get standard metrics from scikit-learn\noverall_accuracy = accuracy_score(y_test, y_hat)\nprint(\"\\tAccuracy:\", overall_accuracy)\noverall_recall = recall_score(y_test, y_hat)\nprint(\"\\tRecall:\", overall_recall)\noverall_precision = precision_score(y_test, y_hat)\nprint(\"\\tPrecision:\", overall_precision)\n\n# Get metrics by sensitive group from fairlearn\nprint('\\nMetrics by Group:')\nmetrics = {'selection_rate': selection_rate,\n 'accuracy': accuracy_score,\n 'recall': recall_score,\n 'precision': precision_score}\n\ngroup_metrics = MetricFrame(metrics=metrics,\n y_true=y_test,\n y_pred=y_hat,\n sensitive_features=S_test['Age'])\n\nprint(group_metrics.by_group)\n```\n\nFrom these metrics, you should be able to discern that a larger proportion of the older patients are predicted to be diabetic. *Accuracy* should be more or less equal for the two groups, but a closer inspection of *precision* and *recall* indicates some disparity in how well the model predicts for each age group.\n\nIn this scenario, consider *recall*. This metric indicates the proportion of positive cases that were correctly identified by the model. In other words, of all the patients who are actually diabetic, how many did the model find? The model does a better job of this for patients in the older age group than for younger patients.\n\nIt's often easier to compare metrics visually. 
To do this, you'll use the Fairlearn fairness dashboard:\n\n1. Run the cell below to generate a dashboard from the model you created previously.\n2. When the widget is displayed, use the **Get started** link to start configuring your visualization.\n3. Select the sensitive features you want to compare (in this case, there's only one: **Age**).\n4. Select the model performance metric you want to compare (in this case, it's a binary classification model so the options are *Accuracy*, *Balanced accuracy*, *Precision*, and *Recall*). Start with **Recall**.\n5. Select the type of fairness comparison you want to view. Start with **Demographic parity difference**.\n6. View the dashboard visualization, which shows:\n - **Disparity in performance** - how the selected performance metric compares for the subpopulations, including *underprediction* (false negatives) and *overprediction* (false positives).\n - **Disparity in predictions** - A comparison of the number of positive cases per subpopulation.\n7. Edit the configuration to compare the predictions based on different performance and fairness metrics.\n\n\n```python\nfrom raiwidgets import FairnessDashboard\n\n# View this model in Fairlearn's fairness dashboard, and see the disparities which appear:\nFairnessDashboard(sensitive_features=S_test,\n y_true=y_test,\n y_pred={\"diabetes_model\": diabetes_model.predict(X_test)})\n```\n\nThe results show a much higher selection rate for patients over 50 than for younger patients. However, in reality, age is a genuine factor in diabetes, so you would expect more positive cases among older patients.\n\nIf we base model performance on *accuracy* (in other words, the percentage of predictions the model gets right), then it seems to work more or less equally for both subpopulations. 
However, based on the *precision* and *recall* metrics, the model tends to perform better for patients who are over 50 years old.\n\nLet's see what happens if we exclude the **Age** feature when training the model.\n\n\n```python\n# Separate features and labels\nageless = features.copy()\nageless.remove('Age')\nX2, y2 = data[ageless].values, data['Diabetic'].values\n\n# Split data into training set and test set\nX_train2, X_test2, y_train2, y_test2, S_train2, S_test2 = train_test_split(X2, y2, S, test_size=0.20, random_state=0, stratify=y2)\n\n# Train a classification model\nprint(\"Training model...\")\nageless_model = DecisionTreeClassifier().fit(X_train2, y_train2)\nprint(\"Model trained.\")\n\n# View this model in Fairlearn's fairness dashboard, and see the disparities which appear:\nFairnessDashboard(sensitive_features=S_test2,\n y_true=y_test2,\n y_pred={\"ageless_diabetes_model\": ageless_model.predict(X_test2)})\n```\n\nExplore the model in the dashboard.\n\nWhen you review *recall*, note that the disparity has reduced, but the overall recall has also reduced because the model now significantly underpredicts positive cases for older patients. Even though **Age** was not a feature used in training, the model still exhibits some disparity in how well it predicts for older and younger patients.\n\nIn this scenario, simply removing the **Age** feature slightly reduces the disparity in *recall*, but increases the disparity in *precision* and *accuracy*. 
This underlines one of the key difficulties in applying fairness to machine learning models - you must be clear about what *fairness* means in a particular context, and optimize for that.\n\n## Register the model and upload the dashboard data to your workspace\n\nYou've trained the model and reviewed the dashboard locally in this notebook; but it might be useful to register the model in your Azure Machine Learning workspace and create an experiment to record the dashboard data so you can track and share your fairness analysis.\n\nLet's start by registering the original model (which included **Age** as a feature).\n\n> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.\n\n\n```python\nfrom azureml.core import Workspace, Experiment, Model\nimport joblib\nimport os\n\n# Load the Azure ML workspace from the saved config file\nws = Workspace.from_config()\nprint('Ready to work with', ws.name)\n\n# Save the trained model\nmodel_file = 'diabetes_model.pkl'\njoblib.dump(value=diabetes_model, filename=model_file)\n\n# Register the model\nprint('Registering model...')\nregistered_model = Model.register(model_path=model_file,\n model_name='diabetes_classifier',\n workspace=ws)\nmodel_id = registered_model.id\n\n\nprint('Model registered.', model_id)\n```\n\nNow you can use the FairLearn package to create binary classification group metric sets for one or more models, and use an Azure Machine Learning experiment to upload the metrics.\n\n> **Note**: This may take a while, and may result in some warning messages (which you can ignore).
When the experiment has completed, the dashboard data will be downloaded and displayed to verify that it was uploaded successfully.\n\n\n```python\nfrom fairlearn.metrics._group_metric_set import _create_group_metric_set\nfrom azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id\n\n# Create a dictionary of model(s) you want to assess for fairness \nsf = { 'Age': S_test.Age}\nys_pred = { model_id:diabetes_model.predict(X_test) }\ndash_dict = _create_group_metric_set(y_true=y_test,\n predictions=ys_pred,\n sensitive_features=sf,\n prediction_type='binary_classification')\n\nexp = Experiment(ws, 'mslearn-diabetes-fairness')\nprint(exp)\n\nrun = exp.start_logging()\n\n# Upload the dashboard to Azure Machine Learning\ntry:\n dashboard_title = \"Fairness insights of Diabetes Classifier\"\n upload_id = upload_dashboard_dictionary(run,\n dash_dict,\n dashboard_name=dashboard_title)\n print(\"\\nUploaded to id: {0}\\n\".format(upload_id))\n\n # To test the dashboard, you can download it\n downloaded_dict = download_dashboard_by_upload_id(run, upload_id)\n print(downloaded_dict)\nfinally:\n run.complete()\n```\n\nThe preceding code downloaded the metrics generated in the experiment just to confirm it completed successfully. The real benefit of uploading the metrics to an experiment is that you can now view the FairLearn dashboard in Azure Machine Learning studio.\n\nRun the cell below to see the experiment details, and click the **View Run details** link in the widget to see the run in Azure Machine Learning studio.
Then view the **Fairness** tab of the experiment run to view the dashboard for the fairness ID assigned to the metrics you uploaded, which behaves the same way as the widget you viewed previously in this notebook.\n\n\n```python\nfrom azureml.widgets import RunDetails\n\nRunDetails(run).show()\n```\n\nYou can also find the fairness dashboard by selecting a model in the **Models** page of Azure Machine Learning studio and reviewing its **Fairness** tab. This enables your organization to maintain a log of fairness analysis for the models you train and register.\n\n## Mitigate unfairness in the model\n\nNow that you've analyzed the model for fairness, you can use any of the *mitigation* techniques supported by the FairLearn package to find a model that balances predictive performance and fairness.\n\nIn this exercise, you'll use the **GridSearch** feature, which trains multiple models in an attempt to minimize the disparity of predictive performance for the sensitive features in the dataset (in this case, the age groups). You'll optimize the models by applying the **EqualizedOdds** parity constraint, which tries to ensure that models exhibit similar true and false positive rates for each sensitive feature grouping.
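To make the **EqualizedOdds** constraint concrete, the sketch below computes the two rates it tries to balance - the true positive rate and false positive rate within each group - from scratch with NumPy on a toy dataset (the `rates_by_group` helper and the toy arrays are illustrative, not part of this lab):\n\n```python\nimport numpy as np\n\ndef rates_by_group(y_true, y_pred, groups):\n    """Return {group: (TPR, FPR)}, the two rates EqualizedOdds tries to equalize."""\n    rates = {}\n    for g in np.unique(groups):\n        mask = groups == g\n        yt, yp = y_true[mask], y_pred[mask]\n        tpr = yp[yt == 1].mean()  # true positive rate within the group\n        fpr = yp[yt == 0].mean()  # false positive rate within the group\n        rates[g] = (tpr, fpr)\n    return rates\n\n# Toy labels and predictions for two age groups with different error profiles\ny_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])\ny_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])\ngroups = np.array(['young', 'young', 'young', 'young', 'old', 'old', 'old', 'old'])\n\nprint(rates_by_group(y_true, y_pred, groups))\n```\n\nA model satisfies equalized odds when these per-group pairs match; the grid search sweep searches for predictors that shrink the gap while keeping overall performance reasonable.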
\n\n> *This may take some time to run*\n\n\n```python\nfrom fairlearn.reductions import GridSearch, EqualizedOdds\nimport joblib\nimport os\n\nprint('Finding mitigated models...')\n\n# Train multiple models\nsweep = GridSearch(DecisionTreeClassifier(),\n constraints=EqualizedOdds(),\n grid_size=20)\n\nsweep.fit(X_train, y_train, sensitive_features=S_train.Age)\nmodels = sweep.predictors_\n\n# Save the models and get predictions from them (plus the original unmitigated one for comparison)\nmodel_dir = 'mitigated_models'\nos.makedirs(model_dir, exist_ok=True)\nmodel_name = 'diabetes_unmitigated'\nprint(model_name)\njoblib.dump(value=diabetes_model, filename=os.path.join(model_dir, '{0}.pkl'.format(model_name)))\npredictions = {model_name: diabetes_model.predict(X_test)}\ni = 0\nfor model in models:\n i += 1\n model_name = 'diabetes_mitigated_{0}'.format(i)\n print(model_name)\n joblib.dump(value=model, filename=os.path.join(model_dir, '{0}.pkl'.format(model_name)))\n predictions[model_name] = model.predict(X_test)\n\n```\n\nNow you can use the FairLearn dashboard to compare the mitigated models:\n\nRun the following cell and then use the wizard to visualize **Age** by **Recall**.\n\n\n```python\nFairnessDashboard(sensitive_features=S_test,\n y_true=y_test,\n y_pred=predictions)\n```\n\nThe models are shown on a scatter plot. You can compare the models by measuring the disparity in predictions (in other words, the selection rate) or the disparity in the selected performance metric (in this case, *recall*). In this scenario, we expect disparity in selection rates (because we know that age *is* a factor in diabetes, with more positive cases in the older age group). What we're interested in is the disparity in predictive performance, so select the option to measure **Disparity in recall**.\n\nThe chart shows clusters of models with the overall *recall* metric on the X axis, and the disparity in recall on the Y axis. 
Therefore, the ideal model (with high recall and low disparity) would be at the bottom right corner of the plot. You can choose the right balance of predictive performance and fairness for your particular needs, and select an appropriate model to see its details.\n\nAn important point to reinforce is that applying fairness mitigation to a model is a trade-off between overall predictive performance and disparity across sensitive feature groups - generally you must sacrifice some overall predictive performance to ensure that the model predicts fairly for all segments of the population.\n\n> **Note**: Viewing the *precision* metric may result in a warning that precision is being set to 0.0 due to no predicted samples - you can ignore this.\n\n## Upload the mitigation dashboard metrics to Azure Machine Learning\n\nAs before, you might want to keep track of your mitigation experimentation. To do this, you can:\n\n1. Register the models found by the GridSearch process.\n2. Compute the performance and disparity metrics for the models.\n3. 
Upload the metrics in an Azure Machine Learning experiment.\n\n\n```python\n# Register the models\nregistered_model_predictions = dict()\nfor model_name, prediction_data in predictions.items():\n model_file = os.path.join(model_dir, model_name + \".pkl\")\n registered_model = Model.register(model_path=model_file,\n model_name=model_name,\n workspace=ws)\n registered_model_predictions[registered_model.id] = prediction_data\n\n# Create a group metric set for binary classification based on the Age feature for all of the models\nsf = { 'Age': S_test.Age}\ndash_dict = _create_group_metric_set(y_true=y_test,\n predictions=registered_model_predictions,\n sensitive_features=sf,\n prediction_type='binary_classification')\n\nexp = Experiment(ws, \"mslearn-diabetes-fairness\")\nprint(exp)\n\nrun = exp.start_logging()\nRunDetails(run).show()\n\n# Upload the dashboard to Azure Machine Learning\ntry:\n dashboard_title = \"Fairness Comparison of Diabetes Models\"\n upload_id = upload_dashboard_dictionary(run,\n dash_dict,\n dashboard_name=dashboard_title)\n print(\"\\nUploaded to id: {0}\\n\".format(upload_id))\nfinally:\n run.complete()\n```\n\n> **Note**: A warning that precision is being set to 0.0 due to no predicted samples may be displayed - you can ignore this.\n\n\nWhen the experiment has finished running, click the **View Run details** link in the widget to view the run in Azure Machine Learning studio (you may need to scroll past the initial output to see the widget), and view the FairLearn dashboard on the **fairness** tab.\n", "meta": {"hexsha": "298851a2f323958028dffcdf0433c2394950388b", "size": 42563, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": ".ipynb_checkpoints/15 - Detect Unfairness-checkpoint.ipynb", "max_stars_repo_name": "ldtanh/MS-Azure-DP-100", "max_stars_repo_head_hexsha": "099a474d00aec1d68d3e123ee827eb081fa0cfdd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, 
"max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": ".ipynb_checkpoints/15 - Detect Unfairness-checkpoint.ipynb", "max_issues_repo_name": "ldtanh/MS-Azure-DP-100", "max_issues_repo_head_hexsha": "099a474d00aec1d68d3e123ee827eb081fa0cfdd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": ".ipynb_checkpoints/15 - Detect Unfairness-checkpoint.ipynb", "max_forks_repo_name": "ldtanh/MS-Azure-DP-100", "max_forks_repo_head_hexsha": "099a474d00aec1d68d3e123ee827eb081fa0cfdd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.6836419753, "max_line_length": 518, "alphanum_fraction": 0.6684914127, "converted": true, "num_tokens": 9537, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.480478692926354, "lm_q2_score": 0.21469141911224193, "lm_q1q2_score": 0.10315465243755406}} {"text": "# Click \"Edit App\" to see the code\n\nPython is a very powerful object oriented programming language, which is used in most fields of science both for computing and for data analysis.\nThis document is a very simplistic introduction to Python written by a non-Python expert, and is designed to be a quick introduction to Python for CHEM2000 students.\nIn many places the nomenclature is likely to be inaccurate and you are encouraged to consult more rigorous resources developed by real Python programmers.\n\nThis document is aimed at providing enough knowledge to solve all the numerical laboratories that are part of the CHEM2000 unit using Python, but it is far from comprehensive. 
There are many different ways to solve problems numerically, different libraries/packages can be used, and here we are providing only one (or a few) possible ways of doing things.\n\nThe entire document and the examples therein have been developed using a **Jupyter Notebook**, and work best there.\n\nA Jupyter notebook consists of a series of cells that can be either _text_ or _code_; a cell can then be _executed_ by pressing the **run** button or _Shift+Return_ (if you want to look like a pro). If things go pear-shaped, restarting the kernel and rerunning all the cells may help; this can be done by pressing the _fast forward_ button.\n\nText cells accept _Markdown_ syntax and once executed will render the text. Markdown provides a simple way of producing formatted text and it accepts _LaTeX_ commands for equations, which look much nicer than those made with the equation editor in MS Word. For a Markdown tutorial click [here](https://www.markdowntutorial.com)\n\nThe code cells contain Python 3 commands. Note that Python 3 is different from Python 2 in places, _e.g._ the **print** command works differently.\n\n# My first Jupyter Notebook\nIn general you would use the first _code_ cell to import all the packages that we need to use in the remainder of the code.
We can import entire packages, part of the packages and assign aliases to the package name, _e.g._\n\n\n```python\n# python packages\nimport pandas as pd # Dataframes and reading CSV files\nimport numpy as np # Numerical libraries\nimport matplotlib.pyplot as plt # Plotting library\nfrom lmfit import Model # Least squares fitting library\n```\n\nthis is particularly important because it ensures that if we define a variable that has the same name as a function belonging to that package there is no confusion about what the code does.\n```python\n# This is a variable\nmean = 10 \n\n# This is the NumPy function to compute the mean of an array of numbers\nnp.mean([1,2,3])\n```\n\nThings to keep in mind when programming in Python are\n\n1. It is Case sensitivE\n\n```python\naverage = 0\n```\nis different from\n```python\naVeRagE = 0\n```\n\n2. Variables' names cannot have spaces. This line would give you a _syntax error_\n```python\nnumber of values = 10\n```\n\n3. Spaces between operators are ignored, _1+2_ is the same as _1 + 2_\n\n\n```python\nprint(1 + 2)\nprint(1+2)\n```\n\n4. \\# is a comment and everything on its right is ignored\n\n\n```python\n# This is a comment\nprint(\"Hello world\") # This is also a comment\n```\n\n5. A single command can spread over multiple lines. If we split the line between variables no continuation line character is required. Otherwise we can use the \"\\\" character. Note the different results of the two commands below.\n\n\n```python\nprint(\"Hello\", \n \"world\")\nprint(\"Hello \\\n world\")\n```\n\n6. Indentation matters! Indentation is used to define the content of loops, functions... This will be more clear when we start using functions.\n\n# Good programming practices\n\n1. Use meaningful names for your variables so that you know what's inside; there is no character limit.\n2. Use a consistent style and convention for your variables, the code will look neater.
I like to use the _camel case_ style (numberOfValues) that allows me to have separation between words without using spaces. Alternatively you can use the _Pascal case_ style (NumberOfValues) or use the underscore character to separate words (number_of_values) or create your own style.\n3. Comment your code well, it may be obvious what it does when you write it, but it won't be so obvious after a year or more.\n\n# Python as a simple calculator\n\nLet's start by doing some simple mathematical operations using this Jupyter Notebook; addition, multiplication, division $\\dots$, just to get familiar with the Jupyter Notebook.\n\n\n```python\n2 + 3\n```\n\n\n```python\n4 * 3\n```\n\n\n```python\n12 / 3\n```\n\n\n```python\n2**3\n```\n\nThe same operations can be done using variables. We can first define two variables _a_ and _b_ and then use them in the following cells\n\n\n```python\na = 12\nb = 3\na + b\n```\n\nThis is not a very efficient way of working because the result of the operation is not available to the rest of the code after the cell has been executed. Typically, we want to create new variables and then write the result using the _print_ command.\n\n\n```python\na = 10\nb = 20\nc = a + b\nprint(\"Result :\",c)\n```\n\n# Python as a scientific calculator\nNot all scientific operators are natively available in Python, but they can be accessed through optional packages that are loaded at the beginning of your notebook, _e.g._ **NumPy**.\nNumPy is one of the most commonly used Python libraries, it contains all the operators for square root, logarithm, exponential, the trigonometric functions, a suite of constants and much more.
\nFor more information see [https://numpy.org](https://numpy.org).\n\nIn the examples below we show how to use NumPy to access some of these functions.\n* Note that we access the NumPy functions using the _np_ prefix because of the way we imported the NumPy library at the beginning of this notebook.\n\n\n```python\nprint(\"The approximate value of pi is :\",np.pi)\nprint(\"The approximate value of the Euler constant (e) is :\",np.e)\nprint(\"The square root of two is :\",np.sqrt(2))\nprint(\"The natural logarithm of two is :\",np.log(2))\nprint(\"The logarithm base 10 of two is :\",np.log10(2))\nprint(\"The sine of pi is :\",np.sin(np.pi))\nprint(\"The cosine of pi is :\",np.cos(np.pi))\n```\n\nEven more useful than variables are arrays, which allow us to store many values in one place. Arrays can be created by hand or be the output of other Python functions.\nThey can contain numbers, strings or other variables, or mixed types.\n\n\n```python\narrayOfNumbers = [300, 2, 3.2]\nprint(\"Three element array of numbers :\",arrayOfNumbers)\narrayOfStrings = [\"Temperature\" , \"Pressure\" , \"Volume\"]\nprint(\"Three element array of strings :\",arrayOfStrings)\nmixedArray = [\"temperature\" , 300]\nprint(\"The mixed array is :\",mixedArray)\n```\n\nWe can easily access one of the elements of the array, by specifying its location in the array.\n* Note that Python starts counting from zero!\n\n\n```python\nprint(\"Second element of the array of numbers :\",arrayOfNumbers[1])\nprint(\"Second element of the array of strings :\",arrayOfStrings[1])\n```\n\nWe can also construct a loop to cycle over the elements of the array.\nIf we are interested in cycling over one array only we can use the **in** operator.\n* It's important to note here that the indentation of the code determines where the loop finishes\n\n\n```python\nfor value in arrayOfNumbers:\n print(\"--- This is inside the loop ----\",value)\nprint(\"--- This is outside the loop ---\")\n```\n\nAlternatively we can create a loop over the indices of the array using the **range** iterator.\n* Note the use of the function **len** to compute the size of the array!\n* Note that the upper limit of the **range** iterator is not included!\n\n\n```python\nnumberOfElements = len(arrayOfNumbers)\nprint(\"Number of elements :\",numberOfElements)\nfor index in range(0,numberOfElements):\n print(index, # index\n arrayOfStrings[index], # string\n arrayOfNumbers[index]) # number\n```\n\n**range** is special to Python 3, and it produces a list of indices only when part of a loop. This is at variance with the NumPy **arange** function, which instead will produce an array that we can use normally.\nBoth the **range** and **arange** functions typically take three arguments: the lower limit of the range (included), the upper limit of the range (not included) and the step. The main difference is that **range**, being an iterator, works only with integer numbers, while **arange** can also work with floating point numbers. \nIf the step is omitted, one is assumed.\nLet's have a look at a couple of examples.\n\n\n```python\nfor i in range(0,4):\n print(i)\n```\n\n\n```python\nfor i in range(0,4,2):\n print(i)\n```\n\n\n```python\nfor i in np.arange(0,4,2):\n print(i)\n```\n\n\n```python\nfor i in np.arange(0,4,0.5):\n print(i)\n```\n\nLet's now see how we can use the **range** iterator and the **np.arange** function to generate arrays of equally spaced values. While this is straightforward with the NumPy function, it is more complicated with the **range** iterator, and we need to _recast_ the output into a list.\n\n\n```python\nvalues = np.arange(1.1,4,0.7)\nprint(values)\n\nvalues = list(range(0,5,2))\nprint(values)\n```\n\n# Operations with arrays\nAs an illustrative example of using arrays and variables we can now compute the sum and average of all the elements in an array of numbers.
In order to do that we initialise a variable to zero, and progressively add the array elements to it.\n* Note how we can increment the value of the variable using the += operator. These two commands are equivalent\n```python\nsumm += value\nsumm = summ + value\n```\nThose two lines are also equivalent to the following, where we use a temporary variable, to explicitly show what the code does\n```python\ntmp = summ + value\nsumm = tmp\n```\nThe operators -=, \\*= and /= have analogous meanings.\n\n\n```python\nsumm = 0.\nfor value in arrayOfNumbers:\n summ += value \n\nprint(\"Result of += :\",summ)\n```\n\nThe average can then be computed directly inside the print statement.\n\n\n```python\nprint(\"Average :\",summ/len(arrayOfNumbers))\n```\n\nMany simple operations on arrays can however be more efficiently performed using libraries such as NumPy, _e.g._ summation, average, standard deviation, etc.\nUsing these functions will also make your code slimmer and easier to read.\n\n\n```python\ntally = np.sum(arrayOfNumbers)\naverage = np.mean(arrayOfNumbers)\nStDev = np.std(arrayOfNumbers)\n\nprint(\"Sum :\",tally)\nprint(\"Average :\",average)\nprint(\"Standard Deviation :\",StDev)\n```\n\nUnfortunately there is no NumPy function for computing the standard error, but we can easily compute that from its definition\n\n\\begin{equation}\nStdErr = \\frac{\\sigma}{\\sqrt{N}}\n\\end{equation}\nwhere $\\sigma$ is the standard deviation and $N$ the number of values used in the calculation.\n\n\n```python\nprint(\"Standard Error :\",StDev/np.sqrt(len(arrayOfNumbers)))\n```\n\n# Using DataFrames\nDataframes are powerful objects that are part of the _pandas_ package. The most simplistic description of a dataframe is that it is a multi-dimensional mixed array.
Dataframes are more than that, as they also include functions that operate on the DataFrame content.\nThis definition of DataFrames would probably horrify a Python programmer, but it would suffice for the purpose of this course.\n\nDataframes can be defined by hand, or created by other functions, _e.g._ by reading a Comma Separated Values file (.csv). Let's first see how we can create an empty DataFrame, with three columns named \"Temperature\", \"Volume\" and \"Pressure\", with their units.\n\n\n```python\n# This array is used to define the names of the columns\nheader = [\"Temperature (K)\" , \"Volume (L)\" , \"Pressure (bar)\"] \n\n# This is our new dataframe\ndf = pd.DataFrame(data=None, columns=header)\nprint(df)\n```\n\nLet's now fill the dataframe using the ideal gas law\n\n\\begin{equation}\npV = nRT\n\\end{equation}\n\nwhere $p$ is the pressure, $V$ the volume, $T$ the temperature and $R=8.314\\ J/mol/K$ is the ideal gas constant, each expressed with the units specified in the header of the dataframe.\n\nLet's compute the volume of an ideal gas at different pressures and temperatures.\n\nFor simplicity we fix the number of moles to 1.\n* Note how we use the **range** function to create an array of integers and the NumPy **arange** function to create an array of _floating point numbers_.\n* The variable _index_ is used to count the number of elements that we already have in the DataFrame, and to add the next one.
This works because Python starts counting from zero.\n* Note we used the **loc** function to add an array to the DataFrame at a specific position.\n* Note how we also created an array to store all the temperatures we generate; the array is created empty using **= []** and then we append elements to it using the **.append()** function\n\n\n```python\nR = 8.314 # J/(mol K)\nn = 1\n# Conversion factor from J/bar to litres\nconversionFactor = 0.01 \n\nlistOfTemperatures = []\nfor T in range(100,301,50):\n listOfTemperatures.append(T)\n \n for p in np.arange(0.1,1,0.02):\n V = (n * R * T / p) * conversionFactor \n index = len(df.index)\n df.loc[index] = [T , V , p] # a vector is added to the DataFrame\n \nprint(df)\n```\n\nThere are many ways to access the data in a DataFrame. Here we'll show you two; one to quickly get an entire column of the array, and one to get selected chunks of data using the **iloc** function.\n\n\n```python\nprint(df[\"Temperature (K)\"])\n```\n\nFor the DataFrame that we have, the **iloc** function takes two arguments, the row and column indices.\n\n\n```python\nprint(df.iloc[0,1])\n```\n\nWe can then use \"**:**\" to specify a range of elements that we want to use.\n* Note that the lower limit of the range is included while the upper limit is not!\n* Note that if one limit of the range is missing, the start/end of the array is assumed\n* Note how we have used **.values** to cast the output data in an array.\n\n\n```python\nprint(df.iloc[0:3 , 1 ].values) # the first three volumes\nprint(df.iloc[0 , 0:3].values) # the first row\nprint(df.iloc[0 , : ].values) # the first row\nprint(df.iloc[0 , 1: ].values) # the last two elements of the first row\nprint(df.iloc[0 , :2].values) # the first two elements of the first row\n```\n\n# Making a graph\nLet's now make a graph with these data using the **matplotlib** library.
\nAs a start we can make a plot of the entire DataFrame.\n\nThe **subplots** function creates two objects, the _figure_ and the _axes_ of the figure itself.\nEach of those objects contains functions that we can use to customise the final plot. More in another tutorial.\n\n\n```python\n# Create the figure and axes objects\nfig , ax = plt.subplots()\n# Add the data to the plot from the DataFrame\nax.scatter(df[\"Pressure (bar)\"] , df[\"Volume (L)\"])\n# Display the figure\nplt.show()\n```\n\nLet's now do some data manipulation to make a better plot, using a line for each isotherm.\nThere are many ways of doing this, but here we'll take an educational approach and use _conditional_ statements to select parts of the dataframe, add them to arrays and plot them.\n\nWhat we are going to do is to choose a temperature, _e.g._ 100K, and create two arrays (p,v) with the corresponding pressures and volumes; then we will use them for plotting.\n* Note that we used the **scatter** and **plot** functions to plot the data as circles with a line overlaid.\n* Note that we used the **set** function to add the labels to the axes (_ax_)\n\n\n```python\nT=100\npressure = []\nvolume = []\nfor index in range(0,len(df.index)):\n if df.iloc[index,0] == T:\n pressure.append(df.iloc[index,2])\n volume.append(df.iloc[index,1])\n\nprint(\"Pressure array :\",pressure[0:3],\"...\")\nprint(\"Volume array :\",volume[0:3],\"...\")\n\nfig , ax = plt.subplots()\nax.scatter(pressure , volume)\nax.plot(pressure , volume)\n\n# Let's add the labels to the axes\nax.set(xlabel=\"Pressure (bar)\")\nax.set(ylabel=\"Volume (L)\")\n\nplt.show()\n```\n\nIf we then want to plot all the isotherms in one graph we can wrap the code above in a loop over all the temperatures we have created.
\n\n* Note the indentation of the **for** loops and **if** conditional statement.\n* Note how in this example we select portions of the DataFrame including a conditional statement in **[]** when we call the DataFrame\n* Note that we used **.values** to transform the DataFrame into an array\n\n\n```python\n# First we have to create the figure\nfig , ax = plt.subplots()\n\nfor T in listOfTemperatures:\n pressure = df[df[\"Temperature (K)\"] == T][\"Pressure (bar)\"].values\n volume = df[df[\"Temperature (K)\"] == T][\"Volume (L)\"].values\n\n # add one line to the plot for each temperature\n ax.plot(pressure , volume, label=T)\n\n# Let's add the labels to the axes\nax.set(xlabel=\"Pressure (bar)\")\nax.set(ylabel=\"Volume (L)\")\n\n# Let's also add the legend\nax.legend()\n\nplt.show()\n```\n", "meta": {"hexsha": "314ec75f646c6903882ddc72af170db54c82f345", "size": 26589, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "codeSnippets/0_introductionToPython.ipynb", "max_stars_repo_name": "praiteri/TeachingNotebook", "max_stars_repo_head_hexsha": "75ee8baf8ef81154dffcac556d4739bf73eba712", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "codeSnippets/0_introductionToPython.ipynb", "max_issues_repo_name": "praiteri/TeachingNotebook", "max_issues_repo_head_hexsha": "75ee8baf8ef81154dffcac556d4739bf73eba712", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "codeSnippets/0_introductionToPython.ipynb", "max_forks_repo_name": "praiteri/TeachingNotebook", "max_forks_repo_head_hexsha": "75ee8baf8ef81154dffcac556d4739bf73eba712", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-02-23T11:36:12.000Z", 
```python\nimport matplotlib.pyplot as plt\nimport plotly as py\nimport plotly.graph_objs as go\nimport numpy as np\nimport math\nimport ipywidgets as widgets\nfrom IPython.display import display, Math, Latex, HTML, IFrame\nfrom astropy.table import Table, Column\nfrom ipywidgets import interact, interactive\n\npy.offline.init_notebook_mode(connected=True)\n%matplotlib inline\n\nfont = {'family' : 'sans-serif',\n        'weight' : 'normal',\n        'size' : 14}\n\nplt.rc('font', **font)\n\n'''Above, we are importing all the necessary modules in order to run the notebook. \nNumpy allows us to define arrays of values for our variables to plot them\nmatplotlib is what we use to create the figures\nthe display and widgets are to make the notebook look neat\n'''\n\nHTML('''\n
''')\n \n \n\n```\n\n***\n

**Light and Optics**

\n***\n\n\n
Gif taken from https://giphy.com/gifs/fandor-sun-eclipse-3o7OsM9vKFH2ESl0KA/links, August 1st, 2018.
\n
Figure 1: For hundreds of years, scientists have tried to understand the nature of light. With advances in technology, and inventions like telescopes, we have been able to see farther than ever before.
\n\n***\n\n## Introduction\n\nThroughout most of history, humans did not understand light as we do today. As science and technology have progressed over time, so too has our knowledge of the nature of light. \n\nIn this lesson, when we say the word \"light\", we will be referring to visible light (light that comes from the sun, lightbulbs etc.). We will go over how a few key experiments kickstarted a new way of thinking, and a few of the ways that we are able to manipulate light. We will also talk about how our eyes enable us to see.\n\n## Background\n\nIf you had to describe to someone what light is, you may have a hard time. Some people think of light as the absence of darkness, but even that doesn't say much about light itself.\n\nOur understanding of light truly began around the 17th century, when a few individuals started to realize that light was not a mystical substance. Scientists (or \"natural philosophers\", as they were called during that time) recognized that certain properties of light were measurable, and that some properties could be manipulated. Sir Isaac Newton and Ole R\u00f8mer were among the first scientists to take a step in this direction.\n\n\n> ### Isaac Newton's Prism Experiment\n\nSir Isaac Newton has made contributions to many fields of science and mathematics. In 1666, while spending time at his childhood home in Lincolnshire, England, Newton began experimenting with light. \n\nUsing a small slit in his window shutters, Newton passed a narrow beam of sunlight through a glass prism. The light travelled through the prism, and projected a rainbow of color on the other side!\n\n\n
Picture taken from http://lightingmatters.com.au/wp/what-is-the-colour-of-white/white-light-prism-experiment/, July 30th, 2018.
\n
Figure 2: This picture shows how a prism can create a spectrum of color. This is what Newton would have seen in 1666.
\n\nLater on, scientists determined that the prism was actually splitting light into its component parts. This phenomenon is called **dispersion**.\n\nThrough this experiment, Newton demonstrated that white light was actually made up of all the individual colors of the rainbow!\n\n> ### Ole R\u00f8mer and the Speed of Light\n\nFor many years, people thought that if somebody lit a match, the light from that match would be instantly visible to everyone, no matter how far away they were. However, in 1676 Ole R\u00f8mer proved that this is not the case.\n\nR\u00f8mer spent a long time studying the orbit of Io, one of Jupiter's moons. As part of his study, he began predicting the times when Io should be hidden behind Jupiter's shadow (these periods are called eclipses). However, R\u00f8mer saw that his predictions for when these eclipses should occur were not always accurate. \n\n\n
Gif taken from https://giphy.com/gifs/timelapse-DXIa1beDspYRy, August 1st, 2018.
\n
Figure 3: Here we can see Jupiter as it looks through a telescope. You might be able to see a black spot move from the left to the right across Jupiter's surface. This is actually one of Jupiter's many moons!
\n\nR\u00f8mer then realized that these errors may be because the distance between Io and the Earth was always changing. R\u00f8mer thought that when the distance between Io and the Earth increased, it might take a longer time for light coming from Io to reach Earth. If this were the case, then the light must be travelling at a finite speed!\n\nAfter taking many measurements and using some clever mathematics, R\u00f8mer calculated the speed of light to be roughly 220,000,000 m/s, or 792,000,000 km/h.\n\nToday, we have measured the speed of light to be 299,792,458 m/s. Although he was not exactly right, R\u00f8mer provided one of the first mathematical calculations for the speed of light. \n\n***\nSince the time of R\u00f8mer and Newton, scientists have made many new discoveries about the nature of light. While not all of these discoveries agree with one another, here are two things we know for sure:\n- Light is made up of a spectrum of color\n- Light travels at a speed of 299,792,458 m/s\n\nNow let's talk about some of the ways we can manipulate light.\n***\n\n## Reflection\n\nWe are all familiar with reflection; chances are, you look at your reflection more than once a day. But have you ever stopped to wonder what is really going on? \n\nReflection is the term used to describe how light can change direction when it comes into contact with certain surfaces. \n\nWhen incoming light rays encounter a reflective surface, they bounce off the surface and continue moving in a new direction. 
The new direction in which it moves is determined by the **law of reflection**.\n\n\\begin{equation} \n\\rm Law\\: of\\: Reflection: Angle\\: of\\: Incidence = Angle\\: of\\: Reflection\n\\end{equation}\n\nOn the animation below, click on the flashlight to turn it on, and move your mouse to change the angle of incidence.\n\n\n```python\nIFrame('Animations/reflect.html',width=500,height=320)\n```\n\nAs seen above, the **normal** is what we call the line that forms a 90$^{\\circ}$ angle with the surface. The **angle of incidence** is what we call the angle between the flash lights beam and the normal. Similarly, the **angle of reflection** is the angle that the newly reflected light beam makes with the normal. The law of reflection states that these two angles will always be equal.\n\n\n\n## Refraction\n\nHave you ever tried to reach down and grab an object sitting at the bottom of a pool of water? If you have, you may have noticed that the object isn't actually in the location that you thought it was.\n\n\n
Image taken from http://legacy.sciencelearn.org.nz/Contexts/Light-and-Sight/Sci-Media/Video/Refraction/(quality)/hi on August 3rd, 2018.
\n
Figure 4: When you are looking into a body of water from above, the objects you see beneath the surface are not actually where they appear to be.
\n\nThis phenomenon occurs because the light travelling to your eyes from the bottom of the pool **refracts**, or changes its direction of travel, when it transitions from water to air. \n\nThe **index of refraction** is a value that we use to show how much light will bend when travelling through a substance. For example, the index of refraction for air is approximately 1.00, and the index of refraction for water is about 1.33. Because these indexes are different, light will bend when passing from water to air, or vice versa.\n\nUse the animation below to see how light refracts when passing from air to water. Click on the flashlight to turn it on.\n\n\n```python\nIFrame('Animations/refract.html',width=520,height=320)\n```\n\nMathematically, reflection can be described using the following equation, known as Snell's Law:\n\n\\begin{equation} \n\\textrm{Snells Law:}\\: n_1\\sin(\\theta_1) = n_2\\sin(\\theta_2)\n\\end{equation}\n\nwhere $n_1$ is the index of refraction for the first medium, $\\theta_1$ is the incident angle, $n_2$ is the index of refraction for the second medium, and $\\theta_2$ is the angle of refraction.\n\nLight will bend *towards* the normal when travelling from a medium with a *lower* index of refraction to one with a *higher* index of refraction, and vice versa.\n\n***\nSome of the most beautiful sights in nature are caused by reflection and refraction. Here are a couple of examples:\n\n### Rainbows\n\nRainbows are a result of both reflection and refraction. As its raining, each water droplet acts like a tiny prism, just like the one we saw in Figure 2. The water droplets split visible light into colors, and these colors are then reflected back towards our eyes. \n\n\n
Image taken from https://waterstories.nestle-waters.com/environment/how-does-a-rainbow-form/ on August 3rd, 2018.
\n
Figure 5: Water droplets use reflection and refraction to create the beautiful rainbows that we see while it is raining.
\n\n\n\n### Mirages\n\nHave you ever been driving on a sunny day, and up ahead it looks as though a stream of water is running across the road? You are really seeing a mirage.\nMirages also occur because of refraction, but they do not result in a display of color like a rainbow. This type of refraction occurs due to a difference in temperature between separate layers of air.\n\nAs we were describing before, refraction occurs when light travels from one substance to another. Well, it turns out that hot air and cold air are actually different enough to act as different substances. Therefore, light will refract when passing through one to the other. \n\n\n
Image taken from https://edexcellence.net/articles/what-the-mirage-gets-wrong-on-teacher-development on August 3rd, 2018.
\n
Figure 6: Although it may look like water running across the road, it is actually a mirage. These commonly occur in desert areas, where the road can become very hot.
\n\nWhen you are looking at a mirage, it can look as though the air is wavy and fluid, which is why it is common to think that you are looking at water. This appearance occurs when layers of hot and cold air are mixing together, and light passing through these layers is constantly being refracted in different directions.\n\nYou may see a mirage appear on top of a hot roadway, behind the exhaust pipe of a plane or car, or around any other source of heat.\n\n## Applications of Reflection and Refraction\n\n### Lenses\n\nIf you have glasses, or contact lenses, then you are constantly using refraction in order to help you see! Lenses use refraction to point light in specific directions.\n\nGenerally speaking, there are two types of lenses: **convex** and **concave**.\n\nTo see how each type of lens affects light, use the following animation.\n\n\n```python\nIFrame('Animations/convex.html',width=520,height=420)\n```\n\nAs seen above, a convex lens focuses light towards a specific point, while a concave lens will spread light away from a point. These lenses can be combined in many ways in order to produce different effects. For example, a camera lens uses a series of both convex and concave lenses in order to direct incoming light towards the back of the camera.\n\n\n
Image taken from https://www.reddit.com/r/pic/comments/3o3b7w/camera_lens_cut_in_half/ on August 3rd, 2018.
\n
Figure 7: This is what the inside of a camera lens looks like. The photographer can adjust how they want the picture to look by changing the distance between the individual lenses.
\n\n\n\n\n## Vision\n\nOur eyes are very complex organs, but the process that enables us to see is actually pretty simple. The basic steps are as follows:\n\n1. Light enters the eye through the **pupil**\n2. The convex **lens** behind the pupil directs incoming light towards the **retina**, which is like a screen at the back of our eye.\n3. The retina then sends this image to the brain.\n4. The brain then interprets the image. \n\n\n
Image taken from https://openclipart.org/detail/261647/eye-diagram on August 3rd, 2018.
\n
Figure 8: This diagram shows some of the key components of the eye that enable us to see.
\n\nHowever, the image that is projected onto the retina is actually upside down! \n\n\n
Image taken from https://www.eetimes.com/author.asp?section_id=14&doc_id=1282795 on August 3rd, 2018.
\n
Figure 9: The convex lens at the front of our eye actually flips images upside down.
\n\nSo the retina actually sends an upside down image to the brain, and the brain automatically flips the image rightside up.\n\nUse the following link to see an animation showing how a convex lens flips images upside down:https://phet.colorado.edu/sims/geometric-optics/geometric-optics_en.html.\n\n\n## Technology & Inventions\n\n### The Telescope\n\nThe first telescope was made by Hans Lippershey in 1608, but it was Galileo Galilee who became famous by using it for astronomy. There are many different types of telescopes, but they all use reflection and refraction to make far away objects appear closer.\n\nA telescope uses a large opening to collect incoming light, and then directs this light towards your eye by using mirrors and lenses.\n\n\n
Image taken from https://www.skyandtelescope.com/press-releases/tips-for-first-time-telescope-buyers/ on August 3rd, 2018.
\n
Figure 10: Telescopes come in many different shapes and sizes.
\n\nThe reason why things look bigger when looking through a telescope is because of the lenses.\n\n\n### The Microscope\n\n\n- talk about invention of microscope\n- why it works\n\n\n## Conclusion\n\nOur understanding of light is the result of hundreds of years of research and innovation. Along the way, we have created incredible new technologies that have allowed us to look further than ever before.\n\n\n
Image taken from https://xenlife.com.au/hubble-space-telescope-important/ on August 3rd, 2018.
\n
Figure 11: The Hubble Space Telescope has shown us pictures of galaxies that are billions of light years away.
\n \n\n\n\n[License](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)\n
```python\n%matplotlib widget\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport mplcursors\n```\n\n\n```python\n# Scale plot to fit page \nplt.rcParams[\"figure.figsize\"] = (8, 6)\nplt.rcParams[\"font.size\"] = 18\n```\n\n# Radiation Interactions with Matter\n\n### Learning Objectives\n\n- Define uncollided flux\n- Define linear interaction coefficient\n- Apply linear interaction coefficients to a slab problem\n- Identify the units of intensity, flux density, fluence, reaction rate\n- Compare linear interaction coefficient and cross section\n- Calculate uncollided flux in a medium \n- Calculate mean free path of a particle in a medium\n- Define the half thickness in a medium\n- Apply the concept of buildup factor to attenuation in a slab\n- Define microscopic cross section\n- Calculate macroscopic cross sections, given a microscopic cross section\n- Calculate the mass interaction coefficients of mixtures\n- Calculate flux density\n- Calculate Reaction Rate Density\n- Recognize the dependence of flux on energy, position, and time\n- Define radiation fluence\n- Calculate uncollided flux density from isotropic point sources\n- Apply the Klein-Nishina formula to Compton Scattering\n- Compare energy dependence of photon interaction cross sections\n- Describe energy dependence of neutron interaction cross sections\n- Recognize the comparative range of heavy vs. light particles \n- Recognize the comparative range of charged particles\n\n## Linear Interaction Coefficient\n\n- The interaction of radiation with matter is always statistical in nature, and, therefore, must be described in probabilistic terms. 
\n\nConsider a particle travelling through a homogeneous material.\n\n\\begin{align}\nP_i(\\Delta x) &= \\mbox{probability that the particle causes a reaction of type i in distance }\\Delta x\\\\\n\\end{align}\n\nEmpirically, we find that the ratio $P_i(\\Delta x)/\\Delta x$ approaches a constant as $\\Delta x \\longrightarrow 0$. Thus:\n\n\n\\begin{align}\n\\mu_i &= \\lim_{\\Delta x \\rightarrow 0}\\frac{P_i(\\Delta x)}{\\Delta x}\\\\\n\\end{align}\n\nFacts about $\\mu_i$:\n\n- $\\mu_i$ is an *intrinsic* property of the material for a given incident particle and interaction. \n- $\\mu_i$ is independent of the path length traveled prior to the interaction. \n- $\\mu_i$ may represent many types of interaction (scattering: $\\mu_s$, absorption: $\\mu_a$, ...)\n- $\\mu_i$ typically depends on particle energy\n\n\nThe probability, per unit path length, that a neutral particle undergoes some sort of reaction is the sum of the probabilities, per unit path length of travel, for each type:\n\n\\begin{align}\n\\mu_t(E) = \\sum_i \\mu_i(E)\n\\end{align}\n\n## Think Pair Share:\n\nWhat are the units of the linear interaction coefficient?\n\n### Attenuation of Uncollided Flux\n\nImagine a plane of neutral particles striking a slab of some material, normal to the surface. \n\nWe can describe this using $\\mu_t$ or, equivalently, the macroscopic total cross section $\\Sigma_t$. 
\n\n\n\\begin{align}\nI(x) &= I_0e^{-\\mu_t x}\\\\\nI(x) &= I_0e^{-\\Sigma_t x}\\\\\n\\end{align}\n\nwhere\n\n\\begin{align}\n I(x) &= \\mbox{uncollided intensity at distance x}\\\\\n I_0 &= \\mbox{initial uncollided intensity}\\\\\n \\mu_t &= \\mbox{total linear interaction coefficient} \\\\\n \\Sigma_t &= \\mbox{macroscopic total cross section} \\\\\n x &= \\mbox{distance into material [m]}\\\\\n\\end{align}\n\n\n\n```python\nimport math\ndef attenuation(distance, initial=100, sig_t=1):\n \"\"\"This function describes neutron attenuation into the slab\"\"\"\n return initial*math.exp(-sig_t*distance)\n\n```\n\nRather than intensity, one can find the probability density:\n\nWe have a strong analogy between decay and attenuation, as above. In the case of decay the probability of decay in a time interval dt is:\n\n\\begin{align}\nP(t)dt &= \\lambda e^{-\\lambda t}dt\\\\\n &= \\mbox{probability of decay in interval dt}\n\\end{align}\n\nFrom this, one can find the mean lifetime of a neutron before decay:\n\n\\begin{align}\n\\bar{t} &= \\int_0^\\infty t'P(t')dt'\\\\\n &= \\int_0^\\infty t'\\lambda e^{-\\lambda t'}dt'\\\\ \n &= \\frac{1}{\\lambda}\n\\end{align}\n\nIn the case of attenuation:\n\\begin{align}\nP(x)dx &= \\Sigma_te^{-\\Sigma_tx}dx\n\\end{align}\n\nSuch that: \n\n\\begin{align}\nP(x)dx &= \\mu_t e^{-\\mu_t x}dx\\\\\n &= \\Sigma_t e^{-\\Sigma_t x}dx\\\\\n &= \\mbox{probability of interaction in interval dx}\n\\end{align}\n\n\nSo, the mean free path is:\n\n\\begin{align}\n\\bar{l} &= \\int_0^\\infty x'P(x')dx'\\\\\n &= \\int_0^\\infty x'\\Sigma_te^{-\\Sigma_t x'}dx'\\\\ \n &= \\frac{1}{\\Sigma_t}\n\\end{align}\n\n\nOr, equivalently in $\\mu_t$ notation:\n\n\\begin{align}\n\\bar{x} &= \\int_0^\\infty x'P(x')dx'\\\\\n &= \\int_0^\\infty x'\\mu_te^{-\\mu_t x'}dx'\\\\ \n &= \\frac{1}{\\mu_t}\n\\end{align}\n\n\n\n```python\ndef prob_dens(distance, initial=100, sig_t=1):\n return sig_t*attenuation(distance, initial=100, sig_t=1)\n\n```\n\n\n```python\nsig_t = 
0.2\ni_0 = 100\n\n# This code plots attenuation\nimport numpy as np\n# use float arrays so the attenuation values are not truncated to integers\nz = np.zeros(24)\ny = np.zeros(24)\nx = np.zeros(24)\nfor h in range(0,24):\n    x[h] = h\n    y[h] = attenuation(h, initial=i_0, sig_t=sig_t)\n    z[h] = prob_dens(h, initial=i_0, sig_t=sig_t)\n\n# creates a figure and axes with matplotlib\nfig, ax = plt.subplots()\nscatter = plt.scatter(x, y, color='blue', s=y*20, alpha=0.4) \nax.plot(x, y, color='red') \nax.plot(x, z, color='green') \n\n\n# adds labels to the plot\nax.set_ylabel('Percent of Neutrons')\nax.set_xlabel('Distance into slab')\nax.set_title('Attenuation')\n\n# Add mpl widget for interactivity\nlabels = ['{0:.1f}% intensity'.format(i) for i in y]\nmplcursors.cursor(ax).connect(\n    \"add\", lambda sel: sel.annotation.set_text(labels[sel.index]))\n\nplt.show()\n```\n\n## Half-thickness\n\nIn another analog to decay, the **half-thickness** of a material is the distance required for half of the incident radiation to interact with a medium:\n\n\\begin{align}\n\\frac{I(x_{1/2})}{I(0)} &= e^{-\\mu_t x_{1/2}}\\\\\n\\implies x_{1/2} &= \\frac{\\ln{2}}{\\mu_t}\n\\end{align}\n\n## Think pair share: \nWhat is the concept in the context of decay that is analogous to the half-thickness?\n\n\n## Microscopic and Macroscopic Cross Sections\n\n- The microscopic cross section $\\sigma_i$ is the likelihood of the event per unit area. 
\n- The macroscopic cross section $\\Sigma_i$ is the likelihood of the event per unit area of a certain density of target isotopes.\n- The macroscopic cross section $\\Sigma_i$ is equivalent to the linear interaction coefficient $\\mu_i$, but we tend to use $\\Sigma_i$ in nuclear interactions, reserving $\\mu_i$ for photon interactions.\n\n\\begin{align}\n\\mu_i &= \\mbox{linear interaction coefficient}\\\\\n\\Sigma_i &= \\mbox{macroscopic cross section}\\\\\\\\\n &= \\sigma_i N\\\\\n &= \\sigma_i \\frac{\\rho N_a}{A}\\\\\n \\mbox{where }& \\\\\n N &= \\mbox{atom density of medium}\\\\\n \\rho &= \\mbox{mass density of the medium}\\\\\n N_a &= \\mbox{Avogadro's number}\\\\\n A &= \\mbox{atomic weight of the medium}\n\\end{align}\n\n\n\n```python\ndef macroscopic_xs(micro, N):\n \"\"\"Returns the macroscopic cross section [cm^2] or [barns]\n \n Parameters\n ----------\n micro: double\n microscopic cross section [cm^2] or [barns]\n N: double\n atom density in the medium [atoms/cm^3]\n \"\"\"\n return micro*N\n```\n\n\n```python\ndef NA():\n \"\"\"Returns Avogadro's number \n 6.022x10^23 atoms per mole\n \"\"\"\n return 6.022E23\n\ndef num_dens_from_rho(rho, na, a):\n \"\"\"The atomic number density. \n That is, the concentration of atoms or molecules per unit volume (V)\n \n Parameters\n -----------\n rho : double\n material density (in units like g/cm^3 or kg/m^3) of the sample\n na : double\n Avogadro's number\n a : double\n The atomic or molecular weight of the atom or molecule of interest \n \"\"\"\n return rho*na/a\n```\n\n## Example: \nImagine a beam of neutrons striking a body of water, $H_2O$. Many will be absorbed by the hydrogen in the water, particularly $^1H$. 
\n\nFind the macroscopic absorption cross section for $^1H$ in the water.\n\n\n```python\n# Find the macroscopic absorption cross section \n# of the 1H in H2O\nsig_1h = 0.333 # barns\n\n# First, molecular density of water\nrho_h2o = 1 # g/cm^3\na_h2o = 18.0153 # g/mol\nn_h2o = num_dens_from_rho(rho_h2o, NA(), a_h2o) # molecules water / cm^3\nn_h2o_barn = n_h2o/10**(24) # 10^24 molecules water / cm^3\nprint('n_h2o [1/cm^3] = ', n_h2o)\nprint('n_h2o [10^(24)/cm^3] = ', n_h2o_barn)\n\n# Now, there are two Hydrogens in each molecule of water, so:\nmacroscopic_h1 = macroscopic_xs(sig_1h, 2*n_h2o_barn)\nprint('absorption in water from 1H = ', macroscopic_h1)\n```\n\n    n_h2o [1/cm^3] =  3.342714248444378e+22\n    n_h2o [10^(24)/cm^3] =  0.033427142484443784\n    absorption in water from 1H =  0.02226247689463956\n\n\n### Mixtures\nIn a medium that is a mixture of isotopes (e.g. $H_2O$), we can calculate the total macroscopic cross section based on individual microscopic cross sections and number densities for each component of the mixture. We may need to include information about relative isotopic abundances (f).\n\nFor the same problem as above (neutrons striking a body of water) we can calculate the absorption by *all* isotopes in the $H_2O$.\n\n\n\\begin{align}\n\\mu^{H_2O} \\equiv \\Sigma^{H_2O} &= N^1\\sigma_a^1 + N^2\\sigma_a^2 + N^{16}\\sigma_a^{16}\n+ N^{17}\\sigma_a^{17} + N^{18}\\sigma_a^{18}\\\\\n&= f^1N^H\\sigma_a^1 + f^2N^H\\sigma_a^2 + f^{16}N^O\\sigma_a^{16} + f^{17}N^O\\sigma_a^{17} + f^{18}N^O\\sigma_a^{18}\n\\end{align}\n\nSuperscripts 1, 2, 16, 17, and 18 indicate isotopes $^1H$, $^2H$, $^{16}O$, $^{17}O$, and $^{18}O$. 
\n\n\\begin{align}\nN^H = 2N^{H_2O}\\\\\nN^{O} = N^{H_2O}\\\\\nN^{H_2O} = \\frac{\\rho^{H_2O}N_a}{A^{H_2O}}\n\\end{align}\n\nThus:\n\\begin{align}\n\\mu^{H_2O} \\equiv \\Sigma^{H_2O} &= N^{H_2O}\\left[2f^1\\sigma_a^1 + 2f^2\\sigma_a^2 + f^{16}\\sigma_a^{16} + f^{17}\\sigma_a^{17} + f^{18}\\sigma_a^{18}\\right]\n\\end{align}\n\n\n\n```python\n# We need a lot of data\n\n# Abundances\nf_1 = 0.99985\nf_2 = 0.00015\nf_16 = 0.99756\nf_17 = 0.00039\nf_18 = 0.00205\n\n# Then, microscopic absorption cross sections\nsig_1 = 0.333\nsig_2 = 0.000506\nsig_16 = 0.000190\nsig_17 = 0.239\nsig_18 = 0.000160\n\nmacroscopic_h2o = n_h2o_barn*(2*f_1*sig_1 \n                              + 2*f_2*sig_2\n                              + f_16*sig_16\n                              + f_17*sig_17 \n                              + f_18*sig_18) \nprint('absorption in water from all isos = ', macroscopic_h2o,\"\\n\",\n      'while absorption in water from 1H = ', macroscopic_h1,\"\\n\",\n      'Thus, absorption in water is mostly from 1H.')\n```\n\n    absorption in water from all isos =  0.02226860496564809 \n     while absorption in water from 1H =  0.02226247689463956 \n     Thus, absorption in water is mostly from 1H.\n\n\n### Reaction Rates\n\n- The microscopic cross section is just the likelihood of the event per unit area. \n- The macroscopic cross section is just the likelihood of the event per unit area of a certain density of target isotopes.\n- The reaction rate is the macroscopic cross section times the flux of incident neutrons.\n\n\\begin{align}\nR_{i,j}(\\vec{r}) &= N_j(\\vec{r})\\int dE \\phi(\\vec{r},E)\\sigma_{i,j}(E)\\\\\nR_{i,j}(\\vec{r}) &= \\mbox{reactions of type i involving isotope j } [reactions/cm^3s]\\\\\nN_j(\\vec{r}) &= \\mbox{number of nuclei participating in the reactions } [\\#/cm^3]\\\\\nE &= \\mbox{energy} [MeV]\\\\\n\\phi(\\vec{r},E)&= \\mbox{flux of neutrons with energy E at position } \\vec{r}\\: [\\#/cm^2s]\\\\\n\\sigma_{i,j}(E)&= \\mbox{cross section } [cm^2]\\\\\n\\end{align}\n\n\nThis can be written more simply as $R_x = \\Sigma_x \\phi$. 
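As a quick numerical sketch of $R_x = \Sigma_x \phi$: take the $^1$H absorption coefficient computed above (about 0.0223 cm$^{-1}$) and an assumed one-group flux of $10^{13}$ neutrons/cm$^2\cdot$s — a hypothetical value, chosen only as a typical order of magnitude:

```python
sigma_a = 0.0223  # macroscopic absorption coefficient of 1H in water [1/cm], from above
phi = 1.0e13      # assumed one-group flux [neutrons/cm^2 s] -- hypothetical value

# reaction rate density: R = Sigma * phi [absorptions / cm^3 s]
rate = sigma_a * phi
print("absorption rate density =", rate)
```

This gives a rate density on the order of $10^{11}$ absorptions per cm$^3$ per second.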
\n\nUsing flux notation, the density of the ith type of neutron interaction with isotope j, per unit time, is:\n\n\n\\begin{align}\nR_{i,j}(\\vec{r}) = \\Sigma_{i,j}\\phi(\\vec{r})\n\\end{align}\n\n## Flux density from Point Source\nFinding $\\phi(\\vec{r})$ generally requires *particle transport calculations.*\n\nHowever, in some simple practical situations, the flux density can be approximated by the flux density of uncollided source particles.\n\n### Point Source in Vacuum\n\nConsider a source of particles:\n\n- it emits $S_p$ particles per unit time\n- all particles have energy E\n- and they are emitted radially outward into an infinite vacuum\n- isotropically (equally in all directions)\n- from a single point in space\n\n### Think-pair share: \n\n- How many interactions occur?\n\n\n### At a radius r: \nBecause the source is isotropic, each unit area on an imaginary spherical shell of radius $r$ has the same number of particles crossing it. Thus:\n\n\\begin{align}\n\\phi^o(r) &= \\mbox{uncollided flux at radius r in any direction}\\\\\n&= \\frac{S_p}{4\\pi r^2}\n\\end{align}\n\n\n```python\ndef phi_o_r(r, s):\n    \"\"\"Returns the uncollided flux at radius r\n    due to an isotropic point source in a vacuum\n    \n    Parameters\n    -----------\n    r : double\n        radius away from the point [length]\n    s : double\n        point source strength [particles/time]\n    \"\"\"\n    return s/(4*math.pi*pow(r,2))\n```\n\n\n```python\ns = 200\nplt.clf()  # clear the current figure\nplt.plot(range(1,10), [phi_o_r(r, s) for r in range(1,10)])\nplt.show()\n```\n\nThe plot above shows the $1/r^2$ reduction in flux and reaction rate, which is called **geometric attenuation**.\n\n### Think-pair-share\n\nWhat other phrase do you think we use for attenuation in a medium? \n\n## Point Source in an Attenuating Medium\nSo, the uncollided flux is \n\\begin{align}\n\\phi^o(r) &= \\frac{S_p}{4\\pi r^2}\n\\end{align}\n\n### A small volume\n\nAt a distance r, we place a homogeneous mass with a volume $\\Delta V_d$. 
The interaction rate $R_d$ in the mass is: \n\n\\begin{align}\n&R^o(r)=\\mu_d(E)\\Delta V_d\\frac{S_p}{4\\pi r^2}\\\\\n\\mbox{where}&\\\\\n&\\mu_d(E)=\\mbox{linear interaction coefficient in the volume}\n\\end{align}\n\n### An infinite volume\n\nFrom this, we can imagine the point source embedded in an infinite medium of this material. A detector is at distance r in the volume:\n\n\\begin{align}\n&\\phi^o(r) = \\frac{S_p}{4\\pi r^2}e^{-\\mu r}\\\\\n\\mbox{where}&\\\\\n&e^{-\\mu r}=\\mbox{material attenuation} \\\\\n&\\frac{S_p}{4\\pi r^2}=\\mbox{geometric attenuation}\n\\end{align}\n\n### A slab shield\n\nImagine a slab shield, thickness t, at a distance r, between the point source and a detector.\n\n\\begin{align}\n&\\phi^o(r) = \\frac{S_p}{4\\pi r^2}e^{-\\mu t}\\\\\n\\mbox{where}&\\\\\n&t=\\mbox{thickness of the slab}\n\\end{align}\n\nIf it were made of a series of materials $i$, with coefficients $\\mu_i$, and thicknesses $t_i$:\n\n\\begin{align}\n&\\phi^o(r) = \\frac{S_p}{4\\pi r^2}e^{\\sum_i -\\mu_i t_i}\\\\\n\\mbox{where}&\\\\\n&\\mu_i=\\mbox{linear interaction coefficient of ith slab}\\\\\n&t_i=\\mbox{thickness of ith slab}\n\\end{align}\n\n### Heterogeneous Medium\n\nAn arbitrary heterogeneous medium can be described as having an interaction coefficient $\\mu(\\vec{r})$ at any point $\\vec{r}$ in the medium, a function of position in the medium.\n\n\\begin{align}\n&\\phi^o(r) = \\frac{S_p}{4\\pi r^2}e^{\\left[-\\int_0^r \\mu(s) ds\\right]}\\\\\n\\end{align}\n\n# Announcements\n* If you had trouble tagging pages based on problems for HW06, this should be fixed by the end of class. If you already submitted, please edit your submission to tag problems appropriately. \n* There will be a HW assigned next Friday. It will be entirely review material based on things that were difficult on previous assignments and the exam. \n* The quiz and HW are posted. The quiz is still due on Monday. 
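The layered-slab expression above is straightforward to evaluate. A minimal sketch — the source strength, distance, coefficients, and thicknesses here are hypothetical values chosen only for illustration:

```python
import math

def uncollided_flux_slabs(s_p, r, mus, ts):
    """Uncollided flux from an isotropic point source of strength s_p
    [particles/s] at distance r [cm], behind slabs with linear
    interaction coefficients mus [1/cm] and thicknesses ts [cm]."""
    geometric = s_p / (4.0 * math.pi * r**2)                 # 1/(4 pi r^2) spreading
    material = math.exp(-sum(mu * t for mu, t in zip(mus, ts)))  # exp(-sum mu_i t_i)
    return geometric * material

# hypothetical example: 1e8 particle/s source, detector at 100 cm,
# behind 5 cm of a material with mu = 0.2/cm and 2 cm with mu = 0.5/cm
phi = uncollided_flux_slabs(1.0e8, 100.0, [0.2, 0.5], [5.0, 2.0])
print(phi)
```

Each slab simply contributes one more factor of $e^{-\mu_i t_i}$, so the exponents add.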
\n\n## Polyenergetic Point Source\n\n- Previous examples assume a **monoenergetic** point source (particles of a single energy, E). \n- But, a single source can emit particles at several discrete energies, or even a continuum of energies.\n\nQuestion: From last lecture, what do we expect of fission neutrons? \n\nLet's define some variables:\n\n\\begin{align}\nf_i &= \\mbox{fraction of the source emitted with energy }E_i\\\\\nE_i &= \\mbox{discrete energy of }f_iS_p\\mbox{ particles}\\\\\nS_p &= \\mbox{still the number of particles emitted from the point source}\n\\end{align}\n\nThe total interaction rate caused by uncollided particles streaming through a small mass of volume $\\Delta V_d$ at distance r from the source is the following, **for some set of i discrete energies**.\n\n\\begin{align}\nR^o(r)=\\sum_i\\frac{S_p f_i\\mu_d(E_i) \\Delta V_d}{4\\pi r^2}e^{\\left[-\\int_0^r \\mu(s,E_i) ds\\right]}\\\\\n\\end{align}\n\nIf the source emits a continuum of energies, it's best to define the fraction $f_i$ as a differential probability:\n\n\\begin{align}\nN(E)dE\\mbox{ the probability that a source particle is emitted with energy in dE about E}\n\\end{align}\n\n\nWith this definition, the sum over discrete energies becomes an integral.\n\n\\begin{align}\nR^o(r)=\\int_0^\\infty \\left[\\frac{S_p N(E)\\mu_d(E) \\Delta V_d}{4\\pi r^2}e^{\\left[-\\int_0^r \\mu(s,E) ds\\right]}\\right]dE\\\\\n\\end{align}\n\nPlease note, you may see many nuclear texts list the dE first in the integral... don't be bamboozled. This is equivalent to the above:\n\n\\begin{align}\nR^o(r)=\\int_0^\\infty dE\\frac{S_p N(E)\\mu_d(E) \\Delta V_d}{4\\pi r^2}e^{\\left[-\\int_0^r \\mu(s,E) ds\\right]}\n\\end{align}\n\n\n### Example 7.4 from your book (Shultis & Faw)\n\nA point source with an activity of 500 Ci emits 2-MeV photons with a frequency of 70% per decay. \n\n\\begin{align}\nS_p = 500 Ci\\\\\nf_2 = 0.7\\\\\n\\end{align}\n\nWhat is the flux density of 2-MeV photons 1 meter from the source? 
\n\n\n\n```python\ns_p = 500 # Ci\nf_2 = 0.7 # fraction emitted at 2MeV\nmu = 1.0/187.0 # mean free path of 2MeV photon in air is 187m\n\n# first, convert s_p from Ci to decays per second (1 Ci = 3.7e10 Bq)\nbq_to_ci = 3.7e10 # Bq/Ci\ns_p = s_p*bq_to_ci \n\n# Now, find uncollided flux of 2MeV photons at 1 m\nr = 1.0 #m\ns = s_p*f_2 # just want 2MeV photons\nphi = phi_o_r(r, s)\nprint(\"Uncollided flux is : \", phi)\n\n# Uh oh, we forgot the material attenuation!\nphi = phi_o_r(r, s)*math.exp(-mu*r)\nprint(\"Flux after material attenuation is : \", phi)\n```\n\n    Uncollided flux is :  1030528256520.0223\n    Flux after material attenuation is :  1025032118881.2917\n\n\n### Think Pair Share\n\nWhat are the units of $\phi^o$, above?\n\n\n# Photon Interactions\n\n**Recall:** \n \n\begin{align}\nc &= \mbox{speed of light}\\ \n  &=2.9979\times10^8\left[\frac{m}{s}\right]\\\nE &= \mbox{photon energy}\\\n  &=h\nu\\\n  &=\frac{hc}{\lambda}\\\nh &= \mbox{Planck's constant}\\\n  &= 6.62608\times10^{-34} [J\cdot s] \\\n\nu &=\mbox{photon frequency}\\\n\lambda &= \mbox{photon wavelength}\n\end{align}\n\n**Nota bene:**\n- **10eV - 20MeV** photons are important in radiation shielding\n- At **10eV - 20MeV**, only photoelectric effect, pair production, and Compton Scattering are significant\n\n\n
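The energy relations recalled above can be checked numerically. A small sketch computing the wavelength and frequency of a 2-MeV photon from $E = h\nu = hc/\lambda$ (the eV-to-joule conversion factor is standard, everything else is taken from the constants listed above):

```python
import math

h = 6.62608e-34          # Planck's constant [J*s]
c = 2.9979e8             # speed of light [m/s]
J_per_eV = 1.602177e-19  # conversion factor [J/eV]

E = 2.0e6 * J_per_eV     # 2-MeV photon energy in joules
lam = h * c / E          # wavelength, from E = hc/lambda
nu = E / h               # frequency, from E = h*nu

print(f"lambda = {lam:.3e} m, nu = {nu:.3e} Hz")
```

For a 2-MeV photon the wavelength comes out below a picometer, far smaller than atomic dimensions, which is consistent with these photons interacting with individual electrons and nuclei.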
Figure from: \"Radiation Interactions with Tissue.\" Radiology Key. Jan 8 2016.
\n\n\n\n
Figure from: Cullen, D. E. 1994. \"Photon and Electron Interaction Databases and Their Use in Medical Applications.\" UCRL-JC--117419. Lawrence Livermore National Lab. http://inis.iaea.org/Search/search.aspx?orig_q=RN:26035330.
\n\n\n\n## Klein Nishina\n\nThe total Compton cross section, per atom with Z electrons, based on the free-electron approximation, is given by the well-known Klein-Nishina formula [Evans 1955]:\n\n\begin{align}\n\sigma_c(E) =\pi Zr_e^2\lambda\left[(1-2\lambda - 2\lambda^2)\ln{\left(1+\frac{2}{\lambda}\right)} + \frac{2(1+9\lambda + 8\lambda^2 + 2\lambda^3)}{(\lambda + 2)^2}\right]\n\end{align}\n\nHere $\lambda \equiv \frac{m_ec^2}{E}$, a dimensionless quantity, and $r_e$ is the classical electron radius. The value of $r_e$ is given by:\n\n\begin{align}\nr_e &\equiv \frac{e^2}{4\pi\epsilon_om_ec^2}\\\n&= 2.8179\times10^{-13}cm\n\end{align}\n\n\n### Think pair share:\nConceptually, in the above equation:\n\n- what is $r_e$?\n- what is $e$?\n- what is $\epsilon_o$?\n- what is $m_ec^2$?\n\n\n\n### Total Photon Cross Section\nVarious types of incoherent scattering, including Compton, are actually present in that intermediate energy range. It is occasionally important to correct for all types of incoherent scattering, but it can typically be assumed to be primarily Compton scattering. \n\nFor photons, then, $\mu$ becomes:\n\n\begin{align} \n\mu(E)&\equiv N\left[\sigma_{ph}(E) + \sigma_{inc}(E) + \sigma_{pp}(E)\right]\\\n     &\simeq N\left[\sigma_{ph}(E) + \sigma_{c}(E) + \sigma_{pp}(E)\right]\\\n N &= \mbox{atom density}\\\n &= \frac{\rho N_a}{A} \n\end{align}\n\nIt is common to denote this as the total mass interaction coefficient:\n\n\begin{align}\n\frac{\mu}{\rho} &= \frac{N_a}{A}\left[\sigma_{ph}(E) + \sigma_{c}(E) + \sigma_{pp}(E)\right]\\\n&= \frac{\mu_{ph}(E)}{\rho} + \frac{\mu_{c}(E)}{\rho} + \frac{\mu_{pp}(E)}{\rho}\n\end{align}\n\n## Neutron Interactions\n\nPhotons tend to interact with electrons in a target atom. 
**Neutrons tend to interact with the nucleus.**\n\nNeutron cross sections:\n\n- Vary rapidly with the incident neutron energy,\n- Vary erratically from one element to another \n- Even vary dramatically between isotopes of the same element.\n\nThere are lots of sources of neutron cross sections. The best place to start is the Brookhaven National Laboratory National Nuclear Data Center [https://www.nndc.bnl.gov/](https://www.nndc.bnl.gov/).\n\nYour book has a clever table (7.1) listing some of the data needed for high and low energy interaction calculations. These include:\n\n- Elastic scattering cross sections \n- Angular distribution of elastically scattered neutrons \n- Inelastic scattering cross sections \n- Angular distribution of inelastically scattered neutrons \n- Gamma-photon yields from inelastic neutron scattering \n- Resonance absorption cross sections \n- Thermal-averaged absorption cross sections \n- Yield of neutron-capture gamma photons\n- Fission cross sections and associated gamma-photon and neutron yields\n\n\n# Total cross sections\n\n**For light nuclei** ($A<25$) and $E<1keV$, the cross section typically varies as:\n\n\begin{align}\n\sigma_t &= \sigma_1 + \frac{\sigma_2}{\sqrt{E}}\\\n\mbox{where}&\\\n\sigma_1 \mbox{ and }\sigma_2 \mbox{ are constants}& \\\n\sigma_1&=\mbox{elastic scattering}\\\n\frac{\sigma_2}{\sqrt{E}}&=\mbox{radiative capture}\\\n\end{align}\n\n**For solids** at energies less than about 0.01 eV, Bragg cutoffs apply. 
These are energies below which no coherent scattering is possible from the material's crystalline planes.\n\n\n**For heavy nuclei**, the total cross section has a $\frac{1}{\sqrt{E}}$ behavior with low energy, narrow resonances and high energy broad resonances:\n\n\begin{align}\n\sigma_t \propto \frac{1}{\sqrt{E}}\n\end{align}\n\n### Recall fission cross sections:\n\n\n\n```python\n\n```\n
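The low-energy behavior described above for light nuclei can be sketched numerically. The constants $\sigma_1$ and $\sigma_2$ below are hypothetical placeholders (not data for any real nuclide); the point is only the shape of $\sigma_t(E)$:

```python
import math

def sigma_total(E_eV, sigma_1, sigma_2):
    """Low-energy total cross section for a light nucleus (A < 25, E < 1 keV):
    a constant elastic-scattering term plus a 1/sqrt(E) radiative-capture term."""
    return sigma_1 + sigma_2 / math.sqrt(E_eV)

# Hypothetical constants, in barns (with E in eV)
s1, s2 = 4.0, 0.1

# The 1/sqrt(E) (i.e. 1/v) capture term grows as the neutron slows down
for E in [1000.0, 1.0, 0.0253]:  # 1 keV, 1 eV, thermal
    print(f"E = {E:9.4f} eV -> sigma_t = {sigma_total(E, s1, s2):.4f} b")
```

At 1 keV the cross section is essentially the constant elastic term; at thermal energy the capture term contributes noticeably, which is the 1/v behavior referred to in the text.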
\n```python\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport scipy.stats\nimport matplotlib.pyplot as plt\n\nfrom matplotlib import animation\nfrom matplotlib import rcParams\nrcParams['figure.dpi'] = 120\nfrom IPython.display import HTML\nfrom IPython.display import YouTubeVideo\nfrom functools import partial\nYouTubeVideo_formato = partial(YouTubeVideo, modestbranding=1, disablekb=0,\n                               width=640, height=360, autoplay=0, rel=0, showinfo=0)\n```\n\n# Estad\u00edstica inferencial\n\nLa inferencia busca\n\n> Extraer **conclusiones** a partir de **hechos u observaciones** a trav\u00e9s de un **m\u00e9todo o premisa**\n\nEn el caso particular de la **inferencia estad\u00edstica** podemos realizar las siguientes asociaciones\n\n- Hechos: Datos\n- Premisa: Modelo probabil\u00edstico\n- Conclusi\u00f3n: Una cantidad no observada que es interesante\n\nY lo que buscamos es\n\n> Cuantificar la incerteza de la conclusi\u00f3n dados los datos y el modelo \n\nLa inferencia estad\u00edstica puede dividirse en los siguientes tres niveles\n\n1. Ajustar un modelo a nuestros datos\n1. Verificar que el modelo sea confiable\n1. Responder una pregunta usando el modelo\n\nEn esta lecci\u00f3n estudiaremos las herramientas m\u00e1s utilizadas asociadas a cada uno de estos niveles\n\n1. **Estimador de m\u00e1xima verosimilitud**\n1. **Bondad de ajuste** e **Intervalos de confianza**\n1. **Test de hip\u00f3tesis**\n\n## Ajuste de modelos: Estimaci\u00f3n de m\u00e1xima verosimilitud\n\nEn este nivel de inferencia se busca **ajustar** un modelo te\u00f3rico sobre nuestros datos. En esta lecci\u00f3n nos enfocaremos en **modelos de tipo param\u00e9trico**. Un modelo param\u00e9trico es aquel donde **se explicita una distribuci\u00f3n de probabilidad**. \n\nRecordemos que una distribuci\u00f3n tiene **par\u00e1metros**. 
Por ejemplo la distribuci\u00f3n Gaussiana (univariada) se describe por su media $\\mu$ y su varianza $\\sigma^2$. Luego ajustar una distribuci\u00f3n Gaussiana corresponde a encontrar el valor de $\\mu$ y $\\sigma$ que hace que el modelo se parezca lo m\u00e1s posible a la distribuci\u00f3n emp\u00edrica de los datos.\n\nA continuaci\u00f3n veremos los pasos necesarios para ajustar una distribuci\u00f3n a nuestros datos\n\n### \u00bfQu\u00e9 distribuci\u00f3n ajustar?\n\nAntes de ajustar debemos realizar un supuesto sobre la distribuci\u00f3n para nuestro modelo. En general podemos ajustar cualquier distribuci\u00f3n pero un mal supuesto podr\u00eda invalidar nuestra inferencia\n\nPodemos usar las herramientas de **estad\u00edstica descriptiva** para estudiar nuestros datos y tomar esta decisi\u00f3n de manera informada\n\nEn el siguiente ejemplo, un histograma de los datos revela que un modelo gaussiano no es una buena decisi\u00f3n \n\n\n\n\u00bfPor qu\u00e9? La distribuci\u00f3n emp\u00edrica es claramente asim\u00e9trica, su cola derecha es m\u00e1s pesada que su cola izquierda. La distribuci\u00f3n Gaussiana es sim\u00e9trica por lo tanto no es apropiada en este caso \u00bfQu\u00e9 distribuci\u00f3n podr\u00eda ser m\u00e1s apropiada?\n\n\n\n### \u00bfC\u00f3mo ajustar mi modelo? 
Estimaci\u00f3n de m\u00e1xima verosimilitud\n\nA continuaci\u00f3n describiremos un procedimiento para ajustar modelos param\u00e9tricos llamado *maximum likelihood estimation* (MLE)\n\nSea un conjunto de datos $\{x_1, x_2, \ldots, x_N\}$\n\n**Supuesto 1** Los datos siguen el modelo $f(x;\theta)$ donde $f(\cdot)$ es una distribuci\u00f3n y $\theta$ son sus par\u00e1metros\n\n$$\nf(x_1, x_2, \ldots, x_N |\theta)\n$$\n\n**Supuesto 2** Las observaciones son independientes e id\u00e9nticamente distribuidas (iid)\n\n- Si dos variables son independientes se cumple que $P(x, y) = P(x)P(y)$\n- Si son adem\u00e1s id\u00e9nticamente distribuidas entonces tienen **la misma distribuci\u00f3n y par\u00e1metros**\n\nUsando esto podemos escribir\n\n$$\n\begin{align}\nf(x_1, x_2, \ldots, x_N |\theta) &= f(x_1|\theta) f(x_2|\theta) \ldots f(x_N|\theta) \nonumber \\\n& = \prod_{i=1}^N f(x_i|\theta) \nonumber \\\n& = \mathcal{L}(\theta)\n\end{align}\n$$\n\ndonde $\mathcal{L}(\theta)$ se conoce como la verosimilitud o probabilidad inversa de $\theta$ \n\nSi consideramos que los datos son fijos podemos buscar el valor de $\theta$ de m\u00e1xima verosimilitud\n\n$$\n\begin{align}\n\hat \theta &= \text{arg} \max_\theta \mathcal{L}(\theta) \nonumber \\\n&= \text{arg} \max_\theta \log \mathcal{L}(\theta) \nonumber \\\n&= \text{arg} \max_\theta \sum_{i=1}^N \log f(x_i|\theta) \n\end{align}\n$$\n\nEl segundo paso es v\u00e1lido porque el m\u00e1ximo de $g(x)$ y $\log(g(x))$ es el mismo. El logaritmo es mon\u00f3tonamente creciente. Adem\u00e1s aplicar el logaritmo es muy conveniente ya que convierte la multiplicatoria en una sumatoria. \n\nAhora s\u00f3lo falta encontrar el m\u00e1ximo. 
Podemos hacerlo\n\n- Anal\u00edticamente, derivando con respecto a $\theta$ e igualando a cero\n- Usando t\u00e9cnicas de optimizaci\u00f3n iterativas como gradiente descendente\n\n**Ejemplo:** La pesa defectuosa\n\n\n\nSu profesor quiere medir su peso pero sospecha que su pesa est\u00e1 defectuosa. Para comprobarlo mide su peso $N$ veces obteniendo un conjunto de observaciones $\{x_i\}$. \u00bfEs posible obtener un estimador del peso real $\hat x$ a partir de estas observaciones?\n\nModelaremos las observaciones como\n\n$$\nx_i = \hat x + \varepsilon_i\n$$\n\ndonde $\varepsilon_i$ corresponde al ruido o error del instrumento y asumiremos que $\varepsilon_i \sim \mathcal{N}(0, \sigma_\varepsilon^2)$, es decir que el ruido es **independiente** y **Gaussiano** con media cero y **varianza** $\sigma_\varepsilon^2$ **conocida**\n\nEntonces la distribuci\u00f3n de $x_i$ es\n\n$$\nf(x_i|\hat x) = \mathcal{N}(\hat x, \sigma_\varepsilon^2)\n$$\n\nPara encontrar $\hat x$, primero escribimos el logaritmo de la **verosimilitud**\n\n$$\n\begin{align}\n\log \mathcal{L}(\hat x) &= \sum_{i=1}^N \log f(x_i|\hat x) \nonumber \\\n&= \sum_{i=1}^N \log \frac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left ( - \frac{1}{2\sigma_\varepsilon^2} (x_i - \hat x)^2 \right) \nonumber \\\n&= -\frac{N}{2}\log(2\pi\sigma_\varepsilon^2) - \frac{1}{2\sigma_\varepsilon^2} \sum_{i=1}^N (x_i - \hat x)^2 \nonumber\n\end{align}\n$$\n\nLuego debemos resolver\n\n$$\n\begin{align}\n\hat x &= \text{arg} \max_{\hat x} \log \mathcal{L}(\hat x) \nonumber \\\n&= \text{arg} \max_{\hat x} - \frac{1}{2\sigma_\varepsilon^2} \sum_{i=1}^N (x_i - \hat x)^2\n\end{align}\n$$\n\ndonde podemos ignorar el primer t\u00e9rmino de la verosimilitud ya que no depende de $\hat x$. 
Para encontrar el m\u00e1ximo derivamos la expresi\u00f3n anterior e igualamos a cero \n\n$$\n-\frac{1}{2\sigma_\varepsilon^2} \sum_{i=1}^N 2(x_i - \hat x ) = 0.\n$$\n\nFinalmente si despejamos llegamos a que\n\n$$\n\hat x = \frac{1}{N} \sum_{i=1}^N x_i,\n$$\n\nque se conoce como el estimador de m\u00e1xima verosimilitud **para la media de una Gaussiana**\n\nRecordemos que podemos comprobar que es un m\u00e1ximo utilizando la segunda derivada\n\n### Estimaci\u00f3n MLE con `scipy`\n\nComo vimos en la lecci\u00f3n anterior el m\u00f3dulo [`scipy.stats`](https://docs.scipy.org/doc/scipy/reference/stats.html) provee de un gran n\u00famero de distribuciones te\u00f3ricas organizadas como \n\n- continuas de una variable\n- discretas de una variable\n- multivariadas\n\nLas distribuciones comparten muchos de sus m\u00e9todos, a continuaci\u00f3n revisaremos los m\u00e1s importantes. A modo de ejemplo consideremos la distribuci\u00f3n Gaussiana (Normal)\n\n```python\nfrom scipy.stats import norm\ndist = norm() # Esto crea una Gaussiana con media 0 y desviaci\u00f3n est\u00e1ndar (std) 1\ndist = norm(loc=2, scale=2) # Esto crea una Gaussiana con media 2 y std 2\n```\n\n**Crear una muestra aleatoria con `rvs`**\n\nLuego de crear un objeto distribuci\u00f3n podemos obtener una muestra aleatoria usando el m\u00e9todo `rvs` \n\n```python\ndist = norm(loc=2, scale=2)\ndist.rvs(size=10, # Cantidad de n\u00fameros aleatorios generados\n         random_state=None #Semilla aleatoria\n        )\n```\n\nEsto retorna un arreglo de 10 n\u00fameros generados aleatoriamente a partir de `dist`\n\n**Evaluar la funci\u00f3n de densidad de probabilidad** \n\nLa funci\u00f3n de densidad de la Gaussiana es\n\n$$\nf(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi \sigma^2}} \exp \left( -\frac{1}{2\sigma^2} (x-\mu)^2 \right) \n$$\n\nLa densidad de un objeto distribuci\u00f3n continuo puede obtenerse con el m\u00e9todo `pdf` el cual es funci\u00f3n de `x`\n\n\n```python\ndist = 
norm(loc=2, scale=2)\np = dist.pdf(x # Un ndarray que representa x en la ecuaci\u00f3n superior\n            )\nplt.plot(x, p) # Luego podemos graficar la fdp\n```\n\nDe forma equivalente, si deseamos la funci\u00f3n de densidad acumulada usamos el m\u00e9todo `cdf`\n\nPara objetos distribuci\u00f3n discretos debemos usar el m\u00e9todo `pmf` \n\n\n**Ajustar los par\u00e1metros con MLE**\n\nPara hacer el ajuste se usa el m\u00e9todo `fit`\n\n```python \nparams = norm.fit(data # Un ndarray con los datos\n                 ) \n```\n\nEn el caso de la Gaussiana el vector `params` tiene dos componentes `loc` y `scale`. La cantidad de par\u00e1metros depende de la distribuci\u00f3n que estemos ajustando. Tambi\u00e9n es importante notar que para ajustar se usa `norm` (clase abstracta) y no `norm()` (instancia)\n\nUna vez que tenemos los par\u00e1metros ajustados podemos usarlos con\n\n```python\ndist = norm(loc=params[0], scale=params[1])\n```\n\nPara distribuciones que tienen m\u00e1s de dos par\u00e1metros podemos usar\n\n```python\ndist = norm(*params[:-2], loc=params[-2], scale=params[-1])\n```\n\n### Ejercicio\n\nObserve la siguiente distribuci\u00f3n y reflexione \u00bfQu\u00e9 caracter\u00edsticas resaltan de la misma? \u00bfQu\u00e9 distribuci\u00f3n ser\u00eda apropiado ajustar en este caso?\n\n\n```python\ndf = pd.read_csv('../data/cancer.csv', index_col=0)\ndf = df[[\"diagnosis\", \"radius1\", \"texture1\"]]\nx = df[\"radius1\"].values\nfig, ax = plt.subplots(figsize=(5, 3), tight_layout=True)\nax.hist(x, bins=20, density=True)\nax.set_xlabel('Radio del nucleo');\n```\n\n- Seleccione una distribuci\u00f3n de `scipy.stats` y aj\u00fastela a los datos\n- Grafique la pdf te\u00f3rica sobre el histograma\n\n\n```python\n\n```\n\n## Verificaci\u00f3n de modelos: Tests de bondad de ajuste\n\nUna vez que hemos ajustado un modelo es buena pr\u00e1ctica verificar qu\u00e9 tan confiable es este ajuste. 
Las herramientas m\u00e1s t\u00edpicas para medir qu\u00e9 tan bien se ajusta nuestra distribuci\u00f3n te\u00f3rica son\n\n- el [criterio de informaci\u00f3n de Akaike](https://en.wikipedia.org/wiki/Akaike_information_criterion)\n- los [gr\u00e1ficos cuantil-cuantil](https://es.wikipedia.org/wiki/Gr%C3%A1fico_Q-Q) (QQ plot)\n- el test no-param\u00e9trico de Kolmogorov-Smirnov (KS)\n\nA continuaci\u00f3n revisaremos el test de KS para bondad de ajuste\n\n**El test de Kolmogorov-Smirnov**\n\nEs un test no-param\u00e9trico que compara una muestra de datos estandarizados (distribuci\u00f3n emp\u00edrica) con una distribuci\u00f3n de densidad acumulada (CDF) te\u00f3rica. Este test busca refutar la siguiente hip\u00f3tesis\n\n> **Hip\u00f3tesis nula:** Las distribuciones son id\u00e9nticas\n\nPara aplicar el test primero debemos **estandarizar** los datos. Estandarizar se refiere a la transformaci\u00f3n\n\n$$\nz = \frac{x - \mu_x}{\sigma_x}\n$$\n\nes decir los datos estandarizados tienen media cero y desviaci\u00f3n est\u00e1ndar uno\n\nEsto puede hacerse f\u00e1cilmente con NumPy usando\n\n```python\nz = (x - np.mean(x))/np.std(x)\n```\n\n### Test de KS con `scipy`\n\nPodemos realizar el test de KS con la funci\u00f3n [`scipy.stats.kstest`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.kstest.html) donde\n\n```python\nscipy.stats.kstest(rvs, # Una muestra de observaciones estandarizadas\n                   cdf, # Una distribuci\u00f3n acumulada te\u00f3rica, por ejemplo scipy.stats.norm.cdf\n                   ...\n                  )\n```\n\nEsta funci\u00f3n retorna el valor del estad\u00edstico de KS y su *p-value* asociado. Mientras m\u00e1s cerca de cero sea el estad\u00edstico de KS mejor es el ajuste. \n\nM\u00e1s adelante haremos un repaso de tests de hip\u00f3tesis en detalle. 
De momento recordemos que si el *p-value* es menor que un nivel de significancia $\alpha=0.05$ entonces rechazamos la hip\u00f3tesis nula con confianza $1-\alpha = 0.95$ o $95\%$\n\n\n### Ejercicio \n\nConsidere la muestra de datos anterior\n- Seleccione un conjunto de distribuciones te\u00f3ricas \n- Encuentre la que tiene mejor ajuste usando `kstest`\n\n\n```python\n\n```\n\n## Responder preguntas con nuestro modelo: Test de hip\u00f3tesis\n\nSe aplica un tratamiento nuevo a una muestra de la poblaci\u00f3n \n\n- \u00bfEs el tratamiento efectivo?\n- \u00bfExiste una diferencia entre los que tomaron el tratamiento y los que no?\n\nEl test de hip\u00f3tesis es un procedimiento estad\u00edstico para comprobar si el resultado de un experimento es significativo en la poblaci\u00f3n\n\nPara esto formulamos dos escenarios cada uno con una hip\u00f3tesis asociada\n\n- Hip\u00f3tesis nula ($H_0$): Por ejemplo\n    - \"El experimento no produjo diferencia\"\n    - \"El experimento no tuvo efecto\"\n    - \"Las observaciones son producto del azar\"\n- Hip\u00f3tesis alternativa ($H_A$): Usualmente el complemento de $H_0$\n\n> El test de hip\u00f3tesis se dise\u00f1a para medir qu\u00e9 tan fuerte es la evidencia **en contra** de la hip\u00f3tesis nula\n\n### Algoritmo general de un test de hip\u00f3tesis\n\nEl siguiente es el algoritmo general de un test de hip\u00f3tesis param\u00e9trico\n\n1. Definimos $H_0$ y $H_A$\n1. Definimos un estad\u00edstico $T$\n1. Asumimos una distribuci\u00f3n para $T$ dado que $H_0$ es cierto\n1. Seleccionamos un nivel de significancia $\alpha$ \n1. Calculamos el $T$ para nuestros datos $T_{data}$\n1. 
Calculamos el **p-value**\n    - Si nuestro test es de una cola:\n        - Superior: $p = P(T>T_{data})$\n        - Inferior: $p = P(T<T_{data})$\n    - Si nuestro test es de dos colas: $p = P(T>T_{data}) + P(T<-T_{data})$\n\n`Si` $p < \alpha$\n\n> Rechazamos la hip\u00f3tesis nula con confianza (1-$\alpha$)\n\n`De lo contrario`\n \n> No hay suficiente evidencia para rechazar la hip\u00f3tesis nula\n\nEl valor de $\alpha$ nos permite controlar el **[Error tipo I](https://es.wikipedia.org/wiki/Errores_de_tipo_I_y_de_tipo_II)**, es decir el error que cometemos si rechazamos $H_0$ cuando en realidad era cierta (falso positivo)\n\nT\u00edpicamente se usa $\alpha=0.05$ o $\alpha=0.01$ \n\n\n**Errores de interpretaci\u00f3n comunes**\n\nMuchas veces se asume que el p-value es la probabilidad de que $H_0$ sea cierta dadas nuestras observaciones\n\n$$\np = P(H_0 | T> T_{data})\n$$\n\nEsto es un **grave error**. Formalmente el **p-value** es la probabilidad de observar un valor de $T$ m\u00e1s extremo que el observado, es decir \n\n$$\np = P(T> T_{data} | H_0) \n$$\n\nOtro error com\u00fan es creer que no ser capaz de rechazar $H_0$ es lo mismo que aceptar $H_0$\n\nNo tener suficiente evidencia para rechazar no es lo mismo que aceptar\n\n### Un primer test de hip\u00f3tesis: El t-test de una muestra \n\nSea un conjunto de $N$ observaciones iid $X = \{x_1, x_2, \ldots, x_N\}$ con media muestral $\bar x = \frac{1}{N}\sum_{i=1}^N x_i$ \n\nEl t-test de una muestra es un test de hip\u00f3tesis que busca verificar si $\bar x$ es significativamente distinta de la **media poblacional** $\mu$, en el caso de que **no conocemos la varianza poblacional** $\sigma^2$\n\nLas hip\u00f3tesis son\n\n- $H_0:$ $\bar x = \mu$\n- $H_A:$ $\bar x \neq \mu$ (dos colas)\n\nEl estad\u00edstico de prueba es \n\n$$\nt = \frac{\bar x - \mu}{\hat \sigma /\sqrt{N-1}}\n$$\n\ndonde $\hat \sigma = \sqrt{ \frac{1}{N} \sum_{i=1}^N (x_i - \bar x)^2}$ es la desviaci\u00f3n est\u00e1ndar muestral (sesgada)\n\nSi asumimos que $\bar x$ se distribuye $\mathcal{N}(\mu, \frac{\sigma^2}{N})$ entonces\n$t$ se 
distribuye [t-student](https://en.wikipedia.org/wiki/Student%27s_t-distribution) con $N-1$ grados de libertad\n\n- Para muestras iid y $N$ grande el supuesto se cumple por teorema central del l\u00edmite\n- Si $N$ es peque\u00f1o debemos verificar la normalidad de los datos\n\n\n### Aplicaci\u00f3n de t-test para probar que la regresi\u00f3n es significativa\n\nEn un modelo de regresi\u00f3n lineal donde tenemos $N$ ejemplos\n\n$$\ny_i = x_i \theta_1 + \theta_0, ~ i=1, 2, \ldots, N\n$$\n\nPodemos probar que la correlaci\u00f3n entre $x$ e $y$ es significativa con un test sobre $\theta_1$\n\nPor ejemplo podemos plantear las siguientes hip\u00f3tesis\n\n- $H_0:$ La pendiente es nula $\theta_1= 0$ \n- $H_A:$ La pendiente no es nula: $\theta_1\neq 0$ (dos colas)\n\nY asumiremos que $\theta_1$ es normal pero que desconocemos su varianza. Bajo este supuesto se puede formular el siguiente estad\u00edstico de prueba \n\n$$\nt = \frac{(\theta_1-\theta^*) }{\text{SE}_{\theta_1}/\sqrt{N-2}} = \frac{ r\sqrt{N-2}}{\sqrt{1-r^2}},\n$$\n\ndonde $r$ es el coeficiente de correlaci\u00f3n de Pearson (detalles m\u00e1s adelante) y la \u00faltima expresi\u00f3n se obtiene reemplazando $\theta^*=0$ y $\text{SE}_{\theta_1} = \sqrt{ \frac{\frac{1}{N} \sum_i (y_i - \hat y_i)^2}{\text{Var}(x)}}$. \n\nEl estad\u00edstico tiene distribuci\u00f3n t-student con $N-2$ grados de libertad ($N$ datos menos los dos par\u00e1metros del modelo) \n\n\n## Ejercicio formativo: Regresi\u00f3n lineal \n\nEn lecciones anteriores estudiamos el modelo de regresi\u00f3n lineal el cual nos permite estudiar si existe correlaci\u00f3n entre variables continuas. Tambi\u00e9n vimos c\u00f3mo ajustar los par\u00e1metros del modelo usando el m\u00e9todo de m\u00ednimos cuadrados. 
En este ejercicio formativo veremos como verificar si el modelo de regresi\u00f3n ajustado es correcto\n\nLuego de revisar este ejercicio usted habr\u00e1 aprendido\n\n- La interpretaci\u00f3n probabil\u00edstica de la regresi\u00f3n lineal y la relaci\u00f3n entre m\u00ednimos cuadrados ordinarios y la estimaci\u00f3n por m\u00e1xima verosimilitud\n- El estad\u00edstico $r$ para medir la fuerza de la correlaci\u00f3n entre dos variables\n- Un test de hip\u00f3tesis para verificar que la correlaci\u00f3n encontrada es estad\u00edstica significativa\n\nUsaremos el siguiente dataset de consumo de helados. Referencia: [A handbook of small datasets](https://www.routledge.com/A-Handbook-of-Small-Data-Sets/Hand-Daly-McConway-Lunn-Ostrowski/p/book/9780367449667), estudio realizado en los a\u00f1os 50\n\n\n```python\ndf = pd.read_csv('../data/helados.csv', header=0, index_col=0)\ndf.columns = ['consumo', 'ingreso', 'precio', 'temperatura']\ndisplay(df.head())\n```\n\nEl dataset tiene la temperatura promedio del d\u00eda (grados Fahrenheit), el precio promedio de los helados comprados (dolares), el ingreso promedio familiar semanal de las personas que compraron helado (dolares) y el consumo ([pintas](https://en.wikipedia.org/wiki/Pint) per capita).\n\nA continuaci\u00f3n se muestra un gr\u00e1fico de dispersi\u00f3n del consumo en funci\u00f3n de las dem\u00e1s variables. 
\u00bfCree usted que existe correlaci\u00f3n en este caso?\n\n\n```python\nfig, ax = plt.subplots(1, 3, figsize=(8, 3), tight_layout=True, sharey=True)\nfor i, col in enumerate(df.columns[1:]):\n    ax[i].scatter(df[col], df[\"consumo\"], s=10)\n    ax[i].set_xlabel(col)\nax[0].set_ylabel(df.columns[0]);\n```\n\n### Interpretaci\u00f3n probabil\u00edstica y MLE de la regresi\u00f3n lineal\n\nSea $y$ el consumo y $x$ la temperatura.\n\nAsumiremos errores gaussianos iid\n\n$$\ny_i = \hat y_i + \epsilon_i, \epsilon_i \sim \mathcal{N}(0, \sigma^2),\n$$\n\ny un modelo lineal de **dos par\u00e1metros** (l\u00ednea recta)\n\n$$\n\hat y_i = \theta_0 + \theta_1 x_i\n$$\n\nBajo estos supuestos el estimador de m\u00e1xima verosimilitud es \n\n$$\n\begin{align}\n\hat \theta &= \text{arg}\max_\theta \log \mathcal{L}(\theta) \nonumber \\\n&=\text{arg}\max_\theta - \frac{1}{2\sigma^2} \sum_{i=1}^N (y_i - \theta_0 - \theta_1 x_i)^2 \nonumber\n\end{align}\n$$\n\nEs decir que el estimador de m\u00e1xima verosimilitud es equivalente al de m\u00ednimos cuadrados ordinarios $\hat \theta= (X^T X)^{-1} X^T y$ que vimos anteriormente\n\n**Importante:** Cuando utilizamos la soluci\u00f3n de m\u00ednimos cuadrados estamos asumiendo impl\u00edcitamente que las observaciones son iid y que la verosimilitud es Gaussiana\n\n\nDerivando con respecto a los par\u00e1metros e igualando a cero tenemos que\n\n$$\n\begin{align}\n\sum_i y_i - N\theta_0 - \theta_1 \sum_i x_i &= 0 \nonumber \\\n\sum_i y_i x_i - \theta_0 \sum_i x_i - \theta_1 \sum_i x_i^2 &= 0 \nonumber\n\end{align}\n$$\n\nFinalmente podemos despejar\n\n$$\n\begin{align}\n\theta_0 &= \bar y - \theta_1 \bar x \nonumber \\\n\theta_1 &= \frac{\sum_i x_i y_i - N \bar x \bar y}{\sum_i x_i^2 - N \bar x^2} \nonumber \\\n&= \frac{ \sum_i (y_i - \bar y)(x_i - \bar x)}{\sum_i (x_i - \bar x)^2} = \frac{\text{COV}(x, y)}{\text{Var}(x)}\n\end{align}\n$$\n\nde donde reconocemos las expresiones 
para la covarianza entre $x$ e $y$ y la varianza de $x$\n\n\n### Coeficiente de correlaci\u00f3n de Pearson\n\nLa fuerza de la correlaci\u00f3n se suele medir usando\n\n$$\nr^2 = 1 - \frac{\sum_i ( y_i - \hat y_i)^2}{\sum_i ( y_i - \bar y)^2} = 1 - \frac{\frac{1}{N} \sum_i (y_i - \hat y_i)^2}{\text{Var}(y)} = \frac{\text{COV}^2(x, y)}{\text{Var}(x) \text{Var}(y)}\n$$\n\ndonde $r = \frac{\text{COV}(x, y)}{\sqrt{\text{Var}(x) \text{Var}(y)}} \in [-1, 1]$ se conoce como [coeficiente de correlaci\u00f3n de Pearson](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient)\n\ndonde\n\n- si $r=1$ existe una correlaci\u00f3n lineal perfecta\n- si $r=-1$ existe una anticorrelaci\u00f3n lineal perfecta\n- si $r=0$ no hay correlaci\u00f3n lineal entre las variables\n\nEn general un $r>0.5$ se considera una correlaci\u00f3n importante\n\n**Calculando $r$ y los par\u00e1metros de la regresi\u00f3n lineal**\n\nPodemos usar el m\u00e9todo del dataframe\n\n```python\ndf.corr()\n```\n\nque retorna la matriz de correlaciones lineales\n\n\n```python\ndf.corr()\n```\n\nSi queremos tambi\u00e9n el valor de los par\u00e1metros podemos usar la funci\u00f3n de scipy \n\n```python\nscipy.stats.linregress(x, # Variable independiente unidimensional\n                       y # Variable dependiente unidimensional\n                      )\n```\n\nEsta funci\u00f3n retorna una tupla con\n\n- Valor de la pendiente: $\theta_1$\n- Valor de la intercepta: $\theta_0$\n- Coeficiente de correlaci\u00f3n $r$\n- p-value\n- Error est\u00e1ndar del ajuste\n\n\n```python\nfig, ax = plt.subplots(1, 3, figsize=(8, 3), tight_layout=True, sharey=True)\nax[0].set_ylabel(df.columns[0]);\n\n\nfor i, col in enumerate(df.columns[1:]):\n    res = scipy.stats.linregress(df[col], df[\"consumo\"])\n    x_plot = np.linspace(np.amin(df[col]), np.
amax(df[col]), num=100)\n    ax[i].scatter(df[col], df[\"consumo\"], label='datos', s=10) \n    ax[i].plot(x_plot, res.slope*x_plot + res.intercept, lw=2, c='r', label='modelo');\n    ax[i].set_xlabel(col)\n    ax[i].set_title(f\"$r$: {res.rvalue:0.5f}\")\n    ax[i].legend()\n```\n\nEs decir que visualmente parece existir\n\n- una correlaci\u00f3n positiva alta entre consumo y temperatura\n- una correlaci\u00f3n negativa moderada entre consumo y precio\n- una correlaci\u00f3n cercana a cero entre consumo e ingreso\n\n### Test de hip\u00f3tesis y conclusiones\n\nLa funci\u00f3n `linregress` implementa el t-test sobre $\theta_1$ que vimos anteriormente. Usemos estos resultados para verificar si las correlaciones son estad\u00edsticamente significativas\n\n\n```python\nalpha = 0.05\n\nfor i, col in enumerate(df.columns[1:]):\n    res = scipy.stats.linregress(df[col], df[\"consumo\"])\n    print(f\"{col}: \t p-value:{res.pvalue:0.4f} \t \u00bfMenor que {alpha}?: {res.pvalue < alpha}\") \n```\n\nComo complemento visualicemos \n\n- las distribuciones bajo la hip\u00f3tesis nula: linea azul\n- los l\u00edmites dados por $\alpha$: linea punteada negra\n- El valor observado para cada una de las variables: linea roja \n\n\n```python\nfig, ax = plt.subplots(1, 3, figsize=(8, 2), tight_layout=True, sharey=True)\nax[0].set_ylabel(df.columns[0]);\n\nN = df.shape[0]\nt = np.linspace(-7, 7, num=1000)\ndist = scipy.stats.t(loc=0, scale=1, df=N-2) # N-2 grados de libertad\n\n\nfor i, col in enumerate(df.columns[1:]):\n    res = scipy.stats.linregress(df[col], df[\"consumo\"])\n    t_data = res.rvalue*np.sqrt(N-2)/np.sqrt(1.-res.rvalue**2)\n    ax[i].plot(t, dist.pdf(t))\n    ax[i].plot([dist.ppf(alpha/2)]*2, [0, np.amax(dist.pdf(t))], 'k--')\n    ax[i].plot([dist.ppf(1-alpha/2)]*2, [0, np.amax(dist.pdf(t))], 'k--')\n    ax[i].plot([t_data]*2, [0, np.amax(dist.pdf(t))], 'r-')\n    ax[i].set_xlabel(col) \n```\n\n**Conclusi\u00f3n**\n\nBasado en los p-values y considerando $\alpha=0.05$\n\n\u00bfQu\u00e9 podemos decir 
de las correlaciones con el consumo de helados?\n\n> Rechazamos la hip\u00f3tesis nula de que no existe correlaci\u00f3n entre temperatura y consumo con un 95% de confianza\n\nPara las variables ingreso y precio no existe suficiente evidencia para rechazar $H_0$\n\n### Reflexi\u00f3n final\n\nEn el ejercicio anterior usamos t-test para una regresi\u00f3n lineal entre dos variables \u00bfQu\u00e9 prueba puedo usar si quiero hacer regresi\u00f3n lineal multivariada? \n\n> Se puede usar [ANOVA](https://pythonfordatascience.org/anova-python/)\n\n\u00bfQu\u00e9 pasa si...\n\n- mis datos tienen una relaci\u00f3n que no es lineal? \n- $\theta_1$ no es Gaussiano/normal? \n- si el ruido no es Gaussiano? \n- si el ruido es Gaussiano pero su varianza cambia en el tiempo? \n\n> En estos casos no se cumplen los supuestos del modelo o del test, por ende el resultado no es confiable\n\nSi mis supuestos no se cumplen con ninguna prueba param\u00e9trica, la opci\u00f3n es utilizar pruebas no-param\u00e9tricas\n\n## Prueba no-param\u00e9trica: *Bootstrap*\n\nPodemos estimar la incerteza de un estimador de forma no-param\u00e9trica usando **muestreo tipo *bootstrap***\n\nEsto consiste en tomar nuestro conjunto de datos de tama\u00f1o $N$ y crear $T$ nuevos conjuntos que \"se le parezcan\". Luego se calcula el valor del estimador que estamos buscando en los $T$ conjuntos. Con esto obtenemos una distribuci\u00f3n para el estimador como muestra el siguiente diagrama\n\n\n\n\n\nPara crear los subconjuntos podr\u00edamos suponer independencia y utilizar **muestreo con reemplazo**. Esto consiste en tomar $N$ muestras al azar permitiendo repeticiones, como muestra el siguiente diagrama\n\n\n\nSi no es posible suponer independencia se puede realizar bootstrap basado en residuos y bootstrap dependiente. 
You can find more details on [*bootstrap*](https://www.stat.cmu.edu/~cshalizi/402/lectures/08-bootstrap/lecture-08.pdf) [here](http://homepage.divms.uiowa.edu/~rdecook/stat3200/notes/bootstrap_4pp.pdf) and [here](https://www.sagepub.com/sites/default/files/upm-binaries/21122_Chapter_21.pdf). In what follows we focus on classic sampling with replacement and how to implement it in Python

### Implementation with NumPy and SciPy

The `numpy.random.choice` function lets us resample a dataset

For example, for linear regression we must resample the pairs/tuples $(x_i, y_i)$

We then compute and store the model parameters for each resample. In this example we will draw $1000$ resampled versions of the dataset


```python
df = pd.read_csv('../data/helados.csv', header=0, index_col=0)
df.columns = ['consumo', 'ingreso', 'precio', 'temperatura']

x, y = df["temperatura"].values, df["consumo"].values
params = scipy.stats.linregress(x, y)

def muestreo_con_reemplazo(x, y):
    N = len(x)
    idx = np.random.choice(N, size=N, replace=True)
    return x[idx], y[idx]

def boostrap_linregress(x, y, T=100):
    # parameters: t0, t1 and r
    params = np.zeros(shape=(T, 3))
    for t in range(T):
        res = scipy.stats.linregress(*muestreo_con_reemplazo(x, y))
        params[t, :] = [res.intercept, res.slope, res.rvalue]
    return params

boostrap_params = boostrap_linregress(x, y, T=1000)
```

### Empirical confidence intervals

Let us look at the empirical distribution of $r$ obtained using bootstrap

In the figure below we have

- Blue histogram: bootstrap distribution of $r$
- Red line: $r$ computed from the data
- Black dashed lines: empirical 95% confidence interval


```python
r_bootstrap = boostrap_params[:, 2]

fig, ax = plt.subplots(figsize=(4, 3), tight_layout=True)
hist_val, hist_lim, _ = ax.hist(r_bootstrap, bins=20, 
density=True)

ax.plot([params.rvalue]*2, [0, np.max(hist_val)], 'r-', lw=2)
IC = np.percentile(r_bootstrap, [2.5, 97.5])
ax.plot([IC[0]]*2, [0, np.max(hist_val)], 'k--', lw=2)
ax.plot([IC[1]]*2, [0, np.max(hist_val)], 'k--', lw=2)

print(f"95% confidence interval for r: {IC}")
```

From the figure we can note that 95% of the empirical distribution lies above $r=0.5$

We can also note that the empirical distribution of $r$ is not symmetric, so applying a parametric t-test to $r$ would not have been correct

### Visualizing the model uncertainty

Using the empirical distribution of the parameters $\theta_0$ and $\theta_1$ we can visualize the uncertainty of our linear regression model

In the figure below we have

- Blue dots: data
- Red line: linear regression model fitted to the data
- Light red band: $\pm 2$ standard deviations of the model, based on the empirical distribution


```python
fig, ax = plt.subplots(figsize=(4, 3), tight_layout=True)
ax.set_ylabel('Consumo')
ax.set_xlabel('Temperatura')
ax.scatter(x, y, zorder=100, s=10, label='datos')

def model(theta0, theta1, x):
    return x*theta1 + theta0

ax.plot(x_plot, model(params.intercept, params.slope, x_plot),
        c='r', lw=2, label='mejor ajuste')

dist_lines = model(boostrap_params[:, 0], boostrap_params[:, 1], x_plot.reshape(-1, 1)).T
mean_lines, std_lines = np.mean(dist_lines, axis=0), np.std(dist_lines, axis=0)
ax.fill_between(x_plot,
                mean_lines - 2*std_lines,
                mean_lines + 2*std_lines,
                color='r', alpha=0.25, label='incerteza')
plt.legend();
```
# Neuromatch Academy: Week 1, Day 1, Tutorial 2
# Model Types: "How" models
__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording

__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael Waskom

___
# Tutorial Objectives
This is tutorial 2 of a 3-part series on different flavors of models used to understand neural data. In this tutorial we will explore models that can potentially explain *how* the spiking data we have observed is produced.

To understand the mechanisms that give rise to the neural data we saw in Tutorial 1, we will build simple neuronal models and compare their spiking response to real data. 
We will:\n- Write code to simulate a simple \"leaky integrate-and-fire\" neuron model \n- Make the model more complicated \u2014 but also more realistic\u00a0\u2014\u00a0by adding more physiologically-inspired details\n\n\n```python\n#@title Video 1: \"How\" models\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id='PpnagITsb3E', width=854, height=480, fs=1)\nprint(\"Video available at https://youtube.com/watch?v=\" + video.id)\nvideo\n```\n\n Video available at https://youtube.com/watch?v=PpnagITsb3E\n\n\n\n\n\n\n\n\n\n\n\n# Setup\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\n```\n\n\n```python\n#@title Figure Settings\nimport ipywidgets as widgets #interactive display\n\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\nplt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle\")\n\n\n```\n\n\n```python\n#@title Helper Functions\ndef histogram(counts, bins, vlines=(), ax=None, ax_args=None, **kwargs):\n \"\"\"Plot a step histogram given counts over bins.\"\"\"\n if ax is None:\n _, ax = plt.subplots()\n \n # duplicate the first element of `counts` to match bin edges\n counts = np.insert(counts, 0, counts[0])\n\n ax.fill_between(bins, counts, step=\"pre\", alpha=0.4, **kwargs) # area shading\n ax.plot(bins, counts, drawstyle=\"steps\", **kwargs) # lines\n\n for x in vlines:\n ax.axvline(x, color='r', linestyle='dotted') # vertical line\n\n if ax_args is None:\n ax_args = {}\n\n # heuristically set max y to leave a bit of room\n ymin, ymax = ax_args.get('ylim', [None, None])\n if ymax is None: \n ymax = np.max(counts)\n if ax_args.get('yscale', 'linear') == 'log':\n ymax *= 1.5\n else:\n ymax *= 1.1\n if ymin is None:\n ymin = 0\n\n if ymax == ymin:\n ymax = None\n \n ax_args['ylim'] = [ymin, ymax]\n \n ax.set(**ax_args)\n ax.autoscale(enable=False, axis='x', tight=True)\n\n\ndef plot_neuron_stats(v, spike_times):\n fig, (ax1, ax2) = 
plt.subplots(ncols=2, figsize=(12, 5))\n \n # membrane voltage trace \n ax1.plot(v[0:100])\n ax1.set(xlabel='Time', ylabel='Voltage')\n # plot spike events\n for x in spike_times:\n if x >= 100:\n break\n ax1.axvline(x, color='red')\n\n # ISI distribution \n isi = np.diff(spike_times)\n n_bins = np.arange(isi.min(), isi.max() + 2) - .5\n counts, bins = np.histogram(isi, n_bins)\n vlines = []\n if len(isi) > 0:\n vlines = [np.mean(isi)] \n xmax = max(20, int(bins[-1])+5)\n histogram(counts, bins, vlines=vlines, ax=ax2, ax_args={\n 'xlabel': 'Inter-spike interval',\n 'ylabel': 'Number of intervals',\n 'xlim': [0, xmax]\n }) \n plt.show()\n```\n\n# Section 1: The Linear Integrate-and-Fire Neuron\n\nHow does a neuron spike? \n\nA neuron charges and discharges an electric field across its cell membrane. The state of this electric field can be described by the _membrane potential_. The membrane potential rises due to excitation of the neuron, and when it reaches a threshold a spike occurs. The potential resets, and must rise to a threshold again before the next spike occurs.\n\nOne of the simplest models of spiking neuron behavior is the linear integrate-and-fire model neuron. In this model, the neuron increases its membrane potential $V_m$ over time in response to excitatory input currents $I$ scaled by some factor $\\alpha$:\n\n\\begin{align}\n dV_m = {\\alpha}I\n\\end{align}\n\nOnce $V_m$ reaches a threshold value a spike is produced, $V_m$ is reset to a starting value, and the process continues.\n\nHere, we will take the starting and threshold potentials as $0$ and $1$, respectively. So, for example, if $\\alpha I=0.1$ is constant---that is, the input current is constant---then $dV_m=0.1$, and at each timestep the membrane potential $V_m$ increases by $0.1$ until after $(1-0)/0.1 = 10$ timesteps it reaches the threshold and resets to $V_m=0$, and so on.\n\nNote that we define the membrane potential $V_m$ as a scalar: a single real (or floating point) number. 
However, a biological neuron's membrane potential will not be exactly constant at all points on its cell membrane at a given time. We could capture this variation with a more complex model (e.g. with more numbers). Do we need to? \n\nThe proposed model is a 1D simplification. There are many details we could add to it, to preserve different parts of the complex structure and dynamics of a real neuron. If we were interested in small or local changes in the membrane potential, our 1D simplification could be a problem. However, we'll assume an idealized \"point\" neuron model for our current purpose.\n\n#### Spiking Inputs\n\nGiven our simplified model for the neuron dynamics, we still need to consider what form the input $I$ will take. How should we specify the firing behavior of the presynaptic neuron(s) providing the inputs to our model neuron? \n\nUnlike in the simple example above, where $\\alpha I=0.1$, the input current is generally not constant. Physical inputs tend to vary with time. We can describe this variation with a distribution.\n\nWe'll assume the input current $I$ over a timestep is due to equal contributions from a non-negative ($\\ge 0$) integer number of input spikes arriving in that timestep. Our model neuron might integrate currents from 3 input spikes in one timestep, and 7 spikes in the next timestep. We should see similar behavior when sampling from our distribution.\n\nGiven no other information about the input neurons, we will also assume that the distribution has a mean (i.e. mean rate, or number of spikes received per timestep), and that the spiking events of the input neuron(s) are independent in time. 
Are these reasonable assumptions in the context of real neurons?\n\nA suitable distribution given these assumptions is the Poisson distribution, which we'll use to model $I$:\n\n\\begin{align}\n I \\sim \\mathrm{Poisson}(\\lambda)\n\\end{align}\n\nwhere $\\lambda$ is the mean of the distribution: the average rate of spikes received per timestep.\n\n### Exercise 1: Compute $dV_m$\n\nFor your first exercise, you will write the code to compute the change in voltage $dV_m$ (per timestep) of the linear integrate-and-fire model neuron. The rest of the code to handle numerical integration is provided for you, so you just need to fill in a definition for `dv` in the `lif_neuron` function below. The value of $\\lambda$ for the Poisson random variable is given by the function argument `rate`.\n\n\n\nThe [`scipy.stats`](https://docs.scipy.org/doc/scipy/reference/stats.html) package is a great resource for working with and sampling from various probability distributions. We will use the `scipy.stats.poisson` class and its method `rvs` to produce Poisson-distributed random samples. 
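Sampling from a Poisson distribution with `scipy.stats` can be previewed in isolation (a quick sketch; the rate of 10 is arbitrary):

```python
from scipy import stats

# draw spike counts for ten timesteps, mean rate 10 spikes per timestep
exc = stats.poisson(10).rvs(10)
print(exc)
```

The draws are non-negative integers whose sample mean approaches the rate $\lambda$ as the number of samples grows.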
In this tutorial, we have imported this package with the alias `stats`, so you should refer to it in your code as `stats.poisson`.


```python
def lif_neuron(n_steps=1000, alpha=0.01, rate=10):
    """ Simulate a linear integrate-and-fire neuron.

    Args:
        n_steps (int): The number of time steps to simulate the neuron's activity.
        alpha (float): The input scaling factor
        rate (int): The mean rate of incoming spikes
    """
    # precompute Poisson samples for speed
    exc = stats.poisson(rate).rvs(n_steps)

    v = np.zeros(n_steps)
    spike_times = []

    ################################################################################
    # Students: compute dv, then comment out or remove the next line
    # raise NotImplementedError("Exercise: compute the change in membrane potential")
    ################################################################################

    for i in range(1, n_steps):
        dv = alpha*exc[i]
        v[i] = v[i-1] + dv
        if v[i] > 1:
            spike_times.append(i)
            v[i] = 0

    return v, spike_times

v, spike_times = lif_neuron()
plot_neuron_stats(v, spike_times)
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial2_Solution_f8960ca1.py)

*Example output:*

## Interactive Demo: Linear-IF neuron
Like last time, you can now explore how various parameters of the LIF model influence the ISI distribution.


```python
#@title

#@markdown You don't need to worry about how the code works – but you do need to **run the cell** to enable the sliders.

def _lif_neuron(n_steps=1000, alpha=0.01, rate=10):
    exc = stats.poisson(rate).rvs(n_steps)
    v = np.zeros(n_steps)
    spike_times = []
    for i in range(1, n_steps):
        dv = alpha * exc[i]
        v[i] = v[i-1] + dv
        if v[i] > 1:
            spike_times.append(i)
            v[i] = 0
    return v, spike_times

@widgets.interact(
    
n_steps=widgets.FloatLogSlider(1000.0, min=2, max=4),\n alpha=widgets.FloatLogSlider(0.01, min=-2, max=-1),\n rate=widgets.IntSlider(10, min=5, max=20)\n)\ndef plot_lif_neuron(n_steps=1000, alpha=0.01, rate=10):\n v, spike_times = _lif_neuron(int(n_steps), alpha, rate)\n plot_neuron_stats(v, spike_times)\n```\n\n\n interactive(children=(FloatLogSlider(value=1000.0, description='n_steps', min=2.0), FloatLogSlider(value=0.01,\u2026\n\n\n\n```python\n#@title Video 2: Linear-IF models\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id='QBD7kulhg4U', width=854, height=480, fs=1)\nprint(\"Video available at https://youtube.com/watch?v=\" + video.id)\nvideo\n```\n\n Video available at https://youtube.com/watch?v=QBD7kulhg4U\n\n\n\n\n\n\n\n\n\n\n\n# Section 2: Inhibitory signals\n\n\n\nOur linear integrate-and-fire neuron from the previous section was indeed able to produce spikes. However, our ISI histogram doesn't look much like empirical ISI histograms seen in Tutorial 1, which had an exponential-like shape. What is our model neuron missing, given that it doesn't behave like a real neuron?\n\nIn the previous model we only considered excitatory behavior -- the only way the membrane potential could decrease was upon a spike event. We know, however, that there are other factors that can drive $V_m$ down. First is the natural tendency of the neuron to return to some steady state or resting potential. We can update our previous model as follows:\n\n\\begin{align}\n dV_m = -{\\beta}V_m + {\\alpha}I\n\\end{align}\n\nwhere $V_m$ is the current membrane potential and $\\beta$ is some leakage factor. This is a basic form of the popular Leaky Integrate-and-Fire model neuron (for a more detailed discussion of the LIF Neuron, see the Appendix).\n\nWe also know that in addition to excitatory presynaptic neurons, we can have inhibitory presynaptic neurons as well. 
We can model these inhibitory neurons with another Poisson random variable:

\begin{align}
I = I_{exc} - I_{inh} \\
I_{exc} \sim \mathrm{Poisson}(\lambda_{exc}) \\
I_{inh} \sim \mathrm{Poisson}(\lambda_{inh})
\end{align}

where $\lambda_{exc}$ and $\lambda_{inh}$ are the average spike rates (per timestep) of the excitatory and inhibitory presynaptic neurons, respectively.

### Exercise 2: Compute $dV_m$ with inhibitory signals

For your second exercise, you will again write the code to compute the change in voltage $dV_m$, though now for the LIF model neuron described above. Like last time, the rest of the code needed to handle the neuron dynamics is provided for you, so you just need to fill in a definition for `dv` below.



```python
def lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
 """ Simulate a simplified leaky integrate-and-fire neuron with both excitatory
 and inhibitory inputs.
 
 Args:
 n_steps (int): The number of time steps to simulate the neuron's activity.
 alpha (float): The input scaling factor
 beta (float): The membrane potential leakage factor
 exc_rate (int): The mean rate of the incoming excitatory spikes
 inh_rate (int): The mean rate of the incoming inhibitory spikes
 """

 # precompute Poisson samples for speed
 exc = stats.poisson(exc_rate).rvs(n_steps)
 inh = stats.poisson(inh_rate).rvs(n_steps)

 v = np.zeros(n_steps)
 spike_times = []

 ###############################################################################
 # Students: compute dv, then comment out or remove the next line
 # raise NotImplementedError("Exercise: compute the change in membrane potential")
 ################################################################################
 
 for i in range(1, n_steps):

 dv = (exc[i] - inh[i])*alpha - beta*v[i-1]
 # use v[i-1] because we're calculating v[i] in this step, so it's based on v[i-1]

 v[i] = v[i-1] + dv
 if v[i] > 1:
 
spike_times.append(i)\n v[i] = 0\n\n return v, spike_times\n\n# Uncomment these lines do make the plot once you've completed the function\nv, spike_times = lif_neuron_inh()\nplot_neuron_stats(v, spike_times)\n```\n\n[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial2_Solution_4d9a2677.py)\n\n*Example output:*\n\n\n\n\n\n## Interactive Demo: LIF + inhibition neuron\n\n\n```python\n#@title\n#@markdown **Run the cell** to enable the sliders.\ndef _lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):\n \"\"\" Simulate a simplified leaky integrate-and-fire neuron with both excitatory\n and inhibitory inputs.\n \n Args:\n n_steps (int): The number of time steps to simulate the neuron's activity.\n alpha (float): The input scaling factor\n beta (float): The membrane potential leakage factor\n exc_rate (int): The mean rate of the incoming excitatory spikes\n inh_rate (int): The mean rate of the incoming inhibitory spikes\n \"\"\"\n # precompute Poisson samples for speed\n exc = stats.poisson(exc_rate).rvs(n_steps)\n inh = stats.poisson(inh_rate).rvs(n_steps)\n\n v = np.zeros(n_steps)\n spike_times = []\n for i in range(1, n_steps):\n dv = -beta * v[i-1] + alpha * (exc[i] - inh[i])\n v[i] = v[i-1] + dv\n if v[i] > 1:\n spike_times.append(i)\n v[i] = 0\n\n return v, spike_times\n\n@widgets.interact(n_steps=widgets.FloatLogSlider(1000.0, min=2.5, max=4),\n alpha=widgets.FloatLogSlider(0.5, min=-1, max=1),\n beta=widgets.FloatLogSlider(0.1, min=-1, max=0),\n exc_rate=widgets.IntSlider(12, min=10, max=20),\n inh_rate=widgets.IntSlider(12, min=10, max=20))\ndef plot_lif_neuron(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):\n v, spike_times = _lif_neuron_inh(int(n_steps), alpha, beta, exc_rate, inh_rate)\n plot_neuron_stats(v, spike_times)\n```\n\n\n```python\n#@title Video 3: LIF + inhibition\nfrom IPython.display import YouTubeVideo\nvideo = 
YouTubeVideo(id='Aq7JrxRkn2w', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```

    Video available at https://youtube.com/watch?v=Aq7JrxRkn2w

# Summary

In this tutorial we gained some intuition for the mechanisms that produce the observed behavior in our real neural data. First, we built a simple neuron model with excitatory input and saw that its behavior, measured using the ISI distribution, did not match our real neurons. We then improved our model by adding leakiness and inhibitory input. The behavior of this balanced model was much closer to the real neural data.

# Bonus

### Why do neurons spike?

A neuron stores energy in an electric field across its cell membrane, by controlling the distribution of charges (ions) on either side of the membrane. This energy is rapidly discharged to generate a spike when the field potential (or membrane potential) crosses a threshold. The membrane potential may be driven toward or away from this threshold, depending on inputs from other neurons: excitatory or inhibitory, respectively. The membrane potential tends to revert to a resting potential, for example due to the leakage of ions across the membrane, so that reaching the spiking threshold depends not only on the amount of input received following the last spike, but also on the timing of the inputs.

The storage of energy by maintaining a field potential across an insulating membrane can be modeled by a capacitor. The leakage of charge across the membrane can be modeled by a resistor. 
This is the basis for the leaky integrate-and-fire neuron model.

### The LIF Model Neuron

The full equation for the LIF neuron is

\begin{align}
C_{m}\frac{dV_m}{dt} = -(V_m - V_{rest})/R_{m} + I
\end{align}

where $C_m$ is the membrane capacitance, $R_m$ is the membrane resistance, $V_{rest}$ is the resting potential, and $I$ is some input current (from other neurons, an electrode, ...).

In our above examples we set many of these parameters to convenient values ($C_m = R_m = dt = 1$, $V_{rest} = 0$) to focus more on the general behavior of the model. However, these too can be manipulated to achieve different dynamics, or to ensure the dimensions of the problem are preserved between simulation units and experimental units (e.g. with $V_m$ given in millivolts, $R_m$ in megaohms, $t$ in milliseconds).
# Quality control

**ENGSCI233: Computational Techniques and Computer Systems** 

*Department of Engineering Science, University of Auckland*

It is not realistic to write code that is completely free from **bugs**. However, we should strive to eliminate as many as possible from our work. Although this is not a software design course, there are a number of good practices that we can borrow from that field. With practice, you will develop a set of useful habits - **unit testing, version control, and writing specifications** - that will help to minimise bugs, and make it easy for other people (and your future self) to understand, use and modify your code. 

You need to know:
- How to write a unit test, a function that tests that a specific part of your code has been correctly implemented.
- The key elements of a specification, a brief description that informs a user how your code works: inputs, outputs, purpose of a function, preconditions and postconditions, and writing a Python docstring.
- Practical version control. 
A repository as a cloud-based copy of your code - you can clone a copy, make changes and map them back up to the cloud.\n\n\n```python\n# loading support files - only execute this block if running via Google Colab; if so, execute it before anything else.\n# download notebook files\n!git clone https://github.com/bryan-ruddy/ENGSCI233_2021.git\n%cd ENGSCI233_2021/quality_control\n```\n\n\n```python\n# imports and environment: this cell must be executed before any other in the notebook\n%matplotlib notebook\nfrom quality_control233 import*\n```\n\nContent in this module has been drawn from this MIT OpenCourseWare course:\n\nhttps://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-005-software-construction-spring-2016/readings/\n\n## 1 Unit testing\n\n***Checking our code is bug-free.***\n\nLarge software projects might comprise thousands of lines of code. These must be written sufficiently well that the \"whole program\" achieves its objective. Code will generally be organised in a modular fashion, as a collection of functions and subroutines. Many of these will perform a small, specific task. These in turn will be called by other functions, achieving perhaps some task of intermediate complexity, and so on and so forth.\n\nA bug or error in an elementary function can propagate its effects to other parts of the code, compromising the software. **Unit testing** is the practice of checking for and catching these errors. We do this by actively **trying** to make the code fail. You should put aside any feelings of pride and accomplishment in your work. Instead, approach the task with the methodical, sociopathic brutality of a university lecturer writing a final exam0. 
0 it's a joke settle down 

Let's look at an example:

Consider the function below that computes the negative square of an input number, i.e., $-x^2$ for input $x$.


```python
# run this cell to make the function available
def neg_square(x):
    return (-x)*(-x)
```

You should immediately see that `neg_square` has been improperly implemented. However, **it will still return a result**, i.e., no error is raised. This means that, if we're not paying attention, this bug has the potential to cause mischief elsewhere in our code. 

A **unit test** is another function that we write whose express purpose is to test that one of our functions is **working correctly**. But what does it mean to be "working correctly"?

- The function should return the correct result.
- The function should return the correct result **for every possible value** $x$.
- The function should return the correct result **for every possible value** $x$, and **anticipate the stupidity** of the user, e.g., `neg_square('an apple')`.

The right test depends on how rigorous you need to be, and the tolerance and implications of failure of your software. Let's take a look at one example. 


```python
# make sure to run the cell above defining `neg_square` before running this cell
def test_neg_square():
    assert neg_square(2) == -4

test_neg_square()
```

The test above raises an `AssertionError` on the line with the `assert` command. Indeed, this is the express purpose of `assert` - to raise an error in the program when a condition evaluates to `False`. So the unit test is doing its job, signalling loud and clear that there is a bug in your code. 

***Fix the implementation of ***`neg_square`*** above so that it passes the unit test.***

### 1.1 Subdomains and edge cases

You should now have a unit test verifying that `neg_square` works for the specific case of $x=2$. 
Common sense would tell us it should also work for $x=3$ or $x=1003$, but these are basically the same inputs: **positive integers larger than 1**. And it is not really practical to run a test for all integers larger than 1 - there are a lot of them.

How about other input types? Negative integers? Floats? The special case of zero? An integer so large that squaring it will cause overflow? Strange and sometimes unexpected things can happen when you pass extreme or idiosyncratic values into your functions. 

When designing a unit test, you'll usually want to try an input from each of the different sub-domains - all positive integers, all negative integers - and edges between sub-domains - zero, negative infinity.

***Copy-paste the unit test above and add `assert` statements to check proper behaviour for positive and negative integers and floats, zero and infinities.*** 


```python
# use np.inf to represent "infinity"
import numpy as np

def test_neg_square():
    # your code here
    pass

test_neg_square()
```

If your function contains `if/else` branches, then a single unit test may miss a buggy line of code if it is in the wrong branch. **Statement coverage** is the idea that you should write multiple unit tests to invoke code on the different branches, running as many lines of code (statements) as possible. One hundred percent statement coverage may not be practical.

### 1.2 Testing suites

Even modestly complex programs can run to thousands of lines of code and tens or hundreds of functions. Writing and running unit tests for all of these can be exhausting **but is good practice**. When you discover bugs in your code, you should immediately write a unit test for it. 

Another coding philosophy is **test-driven development** or test-first programming: first, write a unit test, then write a function that passes it. 
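The test-first workflow can be illustrated with a toy example (the `clamp` function here is hypothetical, not part of the course code): write the test before the function exists, watch it fail, then implement until it passes.

```python
# Step 1: write the unit test first - running it at this point would raise
# a NameError, because clamp does not exist yet.
def test_clamp():
    assert clamp(5, 0, 3) == 3    # above the range -> upper bound
    assert clamp(-1, 0, 3) == 0   # below the range -> lower bound
    assert clamp(2, 0, 3) == 2    # inside the range -> unchanged

# Step 2: write the simplest implementation that passes the test.
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

test_clamp()  # passes silently
```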
You won't need to do test-driven development in this course, but you may encounter it in industry (when doing internships, etc).\n\nIn instrumentation development, some of your code may be written to interact directly with custom hardware. This may mean you need to test your code with the hardware, or with a physical prototype of it. This is called **hardware-in-the-loop** testing. Again, you won't need to do this in this course, but may encounter it in industry, especially if you work on safety-critical or failure critical systems. (Think ventilators, or [rockets](https://www.youtube.com/watch?v=xahiWQQKw7Y&t=101s).)\n\nEach time you sit down to write some code, you should run all your unit tests - the test suite - before starting (especially if you're working on someone else's code) and again once you have finished (especially if other people are working with yours!) \n\nWith lots of tests, this can be a painful process. Thankfully, there are several automated testing programs to streamline the process. We will use one in the lab called `py.test`. \n\n## 2 Specifications\n\n***Communicating the purpose of our code.***\n\n**You write a function for someone else to use**1. \n\nLet's define some terminology and use it to unpack that statement.\n\n- The **implementor** (you). The person who **writes** the function.\n- The **client** (someone else). The person who **uses** the function.\n- The **contract**. The unspoken division of labour. You (the implementor) are writing the function and someone else (the client) is using it.\n- The **firewall**. The unspoken division of knowledge. You (the implementor) don't need to know **the context** in which the function is being used. Someone else (the client) doesn't need to know **the algorithmic implementation** of the function. \n\nMakes sense? Of course it does. 
But let's just think through some of the implications anyway...\n\n- The implementor can change the inner workings of a function, say, for efficiency, without consulting the client and **without breaking the client's code that uses the function**.\n- The client doesn't have to be an expert in efficient, robust or obscure algorithms. \n\nSo far, this is all just philosophy. The **specification** is where we turn it into reality.\n\n1 Sometimes, the \"someone else\" is ourselves. But, because this person is in the future, we shall consider them a separate individual. If this is confusing, I recommend watching the movie Looper (2012).\n\n### 2.1 Writing a specification\n\nThe specification provides both the implementor and the client with an unambiguous, agreed upon description of the function. It should state:\n\n- Inputs/arguments/parameters to the functions.\n- Any preconditions on these inputs, e.g., input `a` must be a `True/False` boolean; input list `xs` must be sorted.\n- Outputs/returns of the function.\n- Any postconditions on the outputs, e.g., output `ix` is the **first** appearance of input `x` in input list `xs`, which potentially contains repetitions.\n\nIn Python, we shall present the specification as a docstring, a concise commented description immediately below the function header. 
Let's look at an example:


```python
''' Find the position of a number in an array.
    
    Parameters:
    -----------
    x : float
        item to locate
    xs : array-like
        a list of values
    first : boolean (optional)
        if True, returns the index of the first appearance of x (default False)
    last : boolean (optional)
        if True, returns the index of the last appearance of x (default False)
        
    Returns:
    --------
    ix : array-like
        index location of x in xs
        
    Notes:
    ------
    xs should be sorted from smallest to largest
'''
```

***- - - - QUESTIONS TO CONTEMPLATE - - - -***


```python
# PART ONE
# --------
# What are the inputs for this specification?

# What are the outputs for this specification?

```


```python
# PART TWO
# --------
# What are the preconditions for this specification?

# What are the postconditions for this specification?

```


```python
# OPTIONAL CHALLENGE
# ------------------
# Can you think of any other preconditions that should be given?

# Often, there will be a heading "Raises:", which describes what should happen when an error occurs.
# Suggest an error that could occur for an implementation of this specification. 

```

An essential feature of the specification above is that it provides sufficient information for BOTH the implementor and the client to do their jobs. If I asked you **to implement** this specification, you could. If I gave you the name of a function that corresponded to this specification, you could **make use of it**. 

In addition, the specification provides **all the information you need** to write a unit test. Details of how the implementation works are not required.

Finally, the specification is **language-agnostic** (notwithstanding that I have written it as a classic Python docstring). 
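For concreteness, here is one way that specification might be implemented (a sketch only - the function name `find` and the numpy-based search are my assumptions, not part of the specification):

```python
import numpy as np

def find(x, xs, first=False, last=False):
    # precondition (unchecked here): xs is sorted from smallest to largest
    ix = np.where(np.asarray(xs) == x)[0]  # all index locations of x in xs
    if first:
        return ix[0]   # index of the first appearance
    if last:
        return ix[-1]  # index of the last appearance
    return ix

print(find(2.0, [1.0, 2.0, 2.0, 3.0], first=True))  # 1
print(find(2.0, [1.0, 2.0, 2.0, 3.0], last=True))   # 2
```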
In practice, you should be able to write a function in Python, MATLAB, C, etc., that conforms to the specification above.

***Complete the docstrings for the functions below.***


```python
def neg_square2(x):
    ''' **your docstring here**
    '''
    return -x**2

def find_absolute_min(x, first=False, last=False):
    ''' **your docstring here**
    '''
    
    assert len(x)>0
    assert not (first and last)
    
    ax = abs(x)
    
    axmin = np.min(ax)
    
    ixmin = np.where(ax == axmin)[0]
    
    if first:
        ixmin = ixmin[0]
    if last:
        ixmin = ixmin[-1]
    
    return ixmin
    
```

### 2.2 Errors and asserts

Specifications as we have described them leave little room for **incompetence**. For instance, the implementor **assumes** that the client will satisfy the appropriate preconditions. Equally, the client **assumes**2 that the implementor has created a bug-free function. At least for the latter, the implementor could point to a **unit test** as providing some guarantee of quality.

But how should the implementor **guard against** incompetence on the part of the client? Here are two ways:

- Explicitly check that preconditions are satisfied within the implementation. We do this using **assert** statements.
- Monitor for anomalous or unexpected outcomes and `raise` an **error**. 

2 Remember that, when you *assume*, you make an "ass" out of "u" and "me"...

The cell below calls a function that computes the **harmonic** mean of `xs`:

\begin{equation}
\tilde{x} = \left(\frac1n\sum\limits_{i=1}^{n}\frac{1}{x_i}\right)^{-1}
\end{equation}

It is not defined for any **zero values** of `xs`.


```python
# harmonic mean calculation
xs = [1, 2, 3]
xharm = harmonic_mean(xs)
print(xharm)
```


```python
# Run the cell above with xs = [1, 2, 3]. What happens?

# Try inserting a 0 value into xs and rerunning the cell. What happens?

# Try calling harmonic_mean with an empty list (xs = []). What happens?

# Try calling harmonic_mean with a non-numeric value (xs = [1, 'an apple', 2]). What happens?

# Which of these are 'checked' preconditions, and which are not?
```

While it is sometimes a kindness on the part of the implementor to check preconditions, it may **not always be practical**. For example, the computational expense required to check the precondition *'input array `xs` must be sorted smallest to largest'* may be large compared to *'find the index position of the value `x`'*. Indeed, often the purpose of a precondition is to **save** the implementor some computational expense by guaranteeing desirable qualities of the inputs.

The cell below calls a function that computes the **geometric** mean of `xs`:

\begin{equation}
\hat{x} = \sqrt[^n]{\prod\limits_{i=1}^n x_i}
\end{equation}

It is not defined if `xs` contains BOTH $0$ and $\infty$ ([what is zero times infinity?](https://img.huffingtonpost.com/asset/5b9282ac190000930a503a0f.jpeg?ops=1910_1000))


```python
# geometric mean calculation
xs = [1, 2, 3]
xgeom = geometric_mean(xs)
print(xgeom)
```


```python
# Run the cell above with xs = [1, 2, 3]. What happens?

# Try inserting a 0 value into xs and rerunning the cell. What happens?

# Try calling geometric_mean with an empty list (xs = []). What happens?

# Try inserting an np.inf value into xs and rerunning the cell. What happens?

# Try inserting a 0 AND an np.inf value into xs and rerunning the cell. What happens?

# Which of these are 'checked' preconditions, which are `errors raised` due to anomalous behaviour, 
# and which are normal outcomes?

# OPTIONAL
# Check the implementation of geometric_mean - what fancy trick are we using to compute it? Does the client
# need to know about these fancy tricks?
```

### 2.3 Raising and catching errors

The implementor will sometimes **raise** an error when they want to signal to the client that things are not going well in the code. 
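The actual `geometric_mean` lives in the notebook's support files; the sketch below is only a guess at how such a function might raise a `ValueError` for the undefined zero-times-infinity input (the log-based computation is an assumption, not necessarily the course's "fancy trick"):

```python
import numpy as np

def geometric_mean(xs):
    assert len(xs) > 0  # checked precondition: xs must be non-empty
    xs = np.asarray(xs, dtype=float)
    if np.any(xs == 0) and np.any(np.isinf(xs)):
        # 0 * inf has no defined value, so neither does the mean
        raise ValueError('geometric mean undefined: xs contains both 0 and inf')
    with np.errstate(divide='ignore'):
        # exp(mean(log(xs))) avoids overflowing the product for long lists
        return float(np.exp(np.mean(np.log(xs))))

print(geometric_mean([1, 2, 3]))  # cube root of 6, approximately 1.817
```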
However, sometimes the client will be prepared to **tolerate and respond** to this misbehaviour. They can do this by **catching** the error with a `try` statement, and redirecting the code to an `except`.

For instance, we have seen how the inputs below to `geometric_mean` raise a `ValueError`. We can catch this error by wrapping the error-generating command (`geometric_mean`) inside a `try` block. If an error is raised, the code in the `except` block will be executed.


```python
xs = [1, 2, 3, 0, np.inf]

try:
    xgeom = geometric_mean(xs)
except ValueError:
    # default to 0 if the mean is undefined
    xgeom = 0.
    
print(xgeom)
```

Catching and raising errors, using asserts to check for preconditions, and writing clear specifications are all steps you can take to minimise the emergence and impact of bugs in your code. 

The final topic we need to cover is how to back up and chronicle changes to your code: **version control**. 

## 3 Version control

***Backing up our code.***

Starting a new software project with promises of strict version control is the amateur coder's equivalent of a new year/new gym resolution. It's a good idea. You should do it. You **will** do it. Until one day it's inconvenient and you regress to lazy Leslie from 2018. Nevertheless, let's cover the basics3...

3 For a *better* description of version control than I will give you here, see this [reading](https://ocw.mit.edu/ans7870/6/6.005/s16/classes/05-version-control/). 
 
Your coding project is just a collection of files. 
\n\n> **If your computer dies tomorrow, wouldn't it be nice to have a backup?**\n\nYou make your code better by making changes to those files.\n\n> **Wouldn't it be nice to have a record of all those changes?**\n\nSometimes you'll make a change that actually makes your code worse.\n\n> **Wouldn't it be nice to roll back to a previous (better) version?**\n\nSometimes you need to work at a desktop at university and sometimes you'll want work on your laptop at home.\n\n> **Wouldn't it be nice to sync your coding project between two or more machines?**\n\nSometimes you'll work as part of a team developing different parts of the code.\n\n> **Wouldn't it be nice if there was a way to push out your changes to others, and pull their changes back?**\n\n*The objective of version control is to address these issues.* We will be using a program called [**git**](https://git-scm.com/)4 to help us do that.\n \n4In fact, if you are running this notebook on Google Colab, you are already using git!\n\n### 3.1 Repositories\n\nAt the heart of version control is the concept of the **repository**. This is an archive of the current contents of all the files in your code, safely located in the cloud. A repository can be **private**, accessible only to people selected by the **owner**, or **public**, accessible to any who want to look at it. (Typically, only people selected by the owner have the right to modify a public repository; this is why you can access the course files but you can't edit or replace them.)\n\n\n\nAnyone who has permission (including, of course, the owner) can `clone` a copy of the repository to their (say, university) computer. All the files will appear in their folders, and you can run them if you wish.\n\n\n\nIn this, and following sections, I will be following `git` command line terminology. 
For example, to clone a repository, you would write at the command line\n\n> `git clone *name of repository*`\n\nThe repository name typically combines the web address of the hosting entity (e.g., [BitBucket](https://bitbucket.org), [GitHub](https://github.com/)), your username, and a project name. Usually this will be obvious when visiting the repository web interface, e.g., to clone all the notebooks for this entire course\n\n> `git clone https://github.com/bryan-ruddy/ENGSCI233_2021.git`\n\nAt the start of this notebook, if you ran it via Google, you will have run this very command so that Google's servers had access to all the support files for the notebook.\n\n### 3.2 Recording your changes\n\nSometimes, you will make a change, say by fixing a bug in one of your functions, thereby making a change to the file `super_func.py`. So that there is a record of this change, you will `add` (nominate new/modified files) and `commit` (record the change).\n\n\n\nHere is the command line terminology for `git`\n\n> `git add .`\n\n> `git commit -m \"added a check for preconditions to super_func\"`\n\nOther times, you might make a change by adding a unit test, written in a new file `func_i_test.py`. Once again, you will `add` this file to the repository, and then `commit` so there is a record of the change.\n\n\n\n> `git add .`\n\n> `git commit -m \"added a unit test for super_func\"`\n\nYour commits - records of change - are local to your computer. You rejoin them with the online **repository** using a `push`. \n\n\n\n> `git push`\n\n\n### 3.3 Working from multiple locations\n\nNow, if you go home and want to work on this code, you can `clone` a copy of the repository.\n\n\n\n> `git clone *name of repository*`\n\nMake changes to files at home. 
Then `add`, `commit` and `push` these changes up to the online repository.


> `git add .`

> `git commit -m "fixed a bug in the precondition check"`

> `git push`

Next time you are working at the university, use a `pull` to retrieve the changes made at home, **syncing** your local repository with the online one.


> `git pull`

### 3.4 Managing conflicts

Of course, there is no requirement that the university and home clones are owned by the same person. Suppose that your friend has cloned your repository and you are both working on different parts of it **simultaneously**.


Your friend finishes for the night at 8pm5 and `push`es their changes up to the online repository.


Managing your deadlines and workload carefully, you finish at 1am, `commit` and try to `push` up your changes. Unfortunately, the `push` fails because both you and your friend have modified the same line of code, resulting in a conflict.


To resolve the conflict, you must `fetch` the latest version that includes the modifications from your friend, and then manage the conflict locally using a `merge`.


> `git fetch`

> `git merge`

Once the conflict is handled, you can `commit` and `push` a new version that includes both your changes and the managed conflict.


5 lol, casual

### 3.5 Rolling back to previous versions

Often you'll find that some magic-bullet change to your code that was going to make things more efficient just didn't pan out. Things don't even work now and you wish desperately you could roll back to the inefficient, but working, version.

In keeping a record of all your changes - snapshots of your code at different moments in history - version control allows this sort of "saved game" approach to coding. There are two common options:

1. If you want to take a peek at the older version of the code, but later return to the current version, a `checkout` allows you to temporarily roll back.

2. If you want to reset permanently to the older version, you want to do a `revert`.

For more on rolling back to an old version of your code, check out [this StackOverflow thread](https://stackoverflow.com/questions/4114095/how-to-revert-a-git-repository-to-a-previous-commit). 


```python
# TERMINOLOGY TEST
# ----------------
# Provide definitions for the following terms as they pertain to version control.
#
# Repository:
# Owner:
# Clone:
# Push/Pull:
# Commit:
# Add:
# Revert:
# Merge:
# Git:
```

### 3.6 Final notes on `git`

Because it is a command line tool, `git` has extraordinary flexibility in its mode of operation through the use of command line flags. For example, 

> `git commit -m "fixed a function"`

associates the message `"fixed a function"` with the commit. `-m` is the command line flag that tells `git` "include a message, make it the text that follows". Adjusting this command slightly

> `git commit -m "fixed a function" -q `

will do the same as above, but now the `-q` flag asks `git` to suppress output information about what code has been changed for this commit.

It is not possible to fully cover `git` functionality in one week of this course. For most users, it will be sufficient to Google what you want to achieve, and then read the explanation and instructions from the first StackOverflow link. For more help with the `git` command line, try typing

> `git help`

> `git help add`

#### 3.6.1 Ignoring files

If you're writing and compiling code (even Python compiles itself, creating `*.pyc` files) there will be extra files created that you won't want to track. 
Placing a `.gitignore` file in your repository tells `git` not to include particular files in the repository.

***Run the cell below to display the contents of `.gitignore`***


```python
%pycat .gitignore
```

Here, we use a bit of special syntax (which you won't be assessed on) to show the file contents in a Jupyter notebook: the `%` tells the interactive Python interpreter that the line is actually a special command, not a piece of Python code. The `pycat` command shows the contents of a file on the screen, with syntax highlighting that assumes it's a piece of Python code. Similarly, the `!` used with the `git` command at the start of the notebook tells the interpreter it is a shell command, not Python code.

##### Example unit test for `neg_square`


```python
def test_neg_square():
    assert neg_square(2) == -4
    assert neg_square(-2) == -4
    assert neg_square(2.5) == -6.25
    assert neg_square(-2.5) == -6.25
    assert neg_square(0) == 0
    assert neg_square(np.inf) == -np.inf
```
# Introduction

### The Python programming language

Python is an open-source (OS), interpreted, general-purpose programming language (or scripting language).

**Its properties:**
- Object-oriented
- Interpreted
    - No compilation step is needed (unlike, e.g., *C++*); you just type a command and the code can be run immediately
    - This makes it well suited to rapid prototyping of computations
    - In exchange, it is slow
- Open source:
    - Free
    - Continuously maintained
    - Widely used in both industry and academia
    - Large community, with plenty of tutorials and forums (e.g. [stackoverflow](https://stackoverflow.com/questions/tagged/python))
- Modular:
    - There is a *package* for a huge range of tasks (e.g. *numpy*/*scipy* for numerical computations, *sympy* for symbolic computations, *CSV* for handling spreadsheet files)
    - You only import what you actually need
    - You have to know the *package* ecosystem: what exists, what is good for what, etc.
- Many IDEs (*Integrated Development Environments*) exist:
    - Fundamentally shell (terminal) based
    - Notebooks: **_jupyter notebook_**, *jupyter lab*
    - Text editors: *Spyder*, *VS Code* (free/open source; these also include a *debugger*)
    - Paid editors (non-exhaustive list): *Visual Studio*, *PyCharm*, etc.

### How a Jupyter notebook works (+ Python kernel):

The most important things to know:

- It is only a *front-end* that communicates with a *kernel* (selectable in the Kernel menu).
- There are two modes:
    - Command mode (for cell-level operations)
    - Edit mode (for typing text into a cell)
- Command mode (entered by pressing `ESC`; blue bar next to the selected cell):
    - Save the notebook: `s`
    - Add cells: `b` below the current cell, `a` above the current cell
    - Delete a cell: press `d` twice in a row
    - Undo cell deletion: `z`
    - Copy a cell: `c`, cut: `x`, paste below the current cell: `v`
    - Toggle line numbering in a cell: `l` (lowercase L), or `Shift + l` for all cells
    - Cell modes: runnable code: `y`, raw code (not runnable): `r`, markdown (formatted text): `m`
- Edit mode (entered by pressing `Enter`; green colour):
    - Comment a line out / back in: `Ctrl + /`
    - Place multiple cursors: `Ctrl + Left mouse button`
    - Rectangular selection: dragging with `Alt + Left mouse button`
- Both modes:
    - Run the cell, then step to the next one: `Shift + Enter` (this creates a new cell if there is nowhere to step to)
    - Run the cell without stepping: `Ctrl + Enter`

**Bringing up the Jupyter notebook help**: press `h` in *Command mode*
**Python help**: place the cursor on the function name and press `Shift + Tab`, or type `?"function_name"` into a cell and run it

### Structure of this introduction:
- [Basic operations](#Alapmuveletek)
- [More complex functions](#Osszetettfuggvenyek)
- [User-defined functions](#Sajatfuggvenyek)
- [Classes](#Osztalyok)
- [Control flow](#Vezerlesiszerkezetek)
- [External libraries](#Kulsofuggvenykonyvtarak)
    - [Symbolic mathematical operations](#Szimbolikus)
    - [Differentiation/Integration](#DerivalIntegral)
    - [Vector and matrix computations in Sympy](#SzimVektorMatrix)
    - [Vector and matrix computations in Scipy](#NumVektorMatrix)
    - [Solving equations in Scipy](#Egyenletek)
    - [Numerical functions from symbolic functions](#SymToNum)
- [Creating simple plots](#Egyszeruabrak)

# Python introduction

## Basic operations (run the cells with Shift/Ctrl + Enter)


```python
17 + 7  # addition
```

    24


```python
333 - 7  # subtraction
```

    326


```python
11 * 22  # multiplication
```

    242


```python
7/9  # division (the result is not an integer (int): it is a separate type, float)
```

    0.7777777777777778


```python
0.3-0.1-0.2  # float: beware of floating-point representation error!!
```

    -2.7755575615628914e-17


```python
2**3  # exponentiation (** and NOT ^!)
```

    8


```python
2**(0.5)  # square root via exponentiation
```

    1.4142135623730951


```python
5e-3  # scientific notation using e (or 5E-3)
```

    0.005

Some of the basic operations also work on strings


```python
'str1_' + 'str2_'  # addition
```

    'str1_str2_'


```python
2 * 'str2_'  # multiplication
```

    'str2_str2_'

## More complex functions


```python
sin(2)  # sine
```

More complex functions are not part of the core Python language; for these we have to import external packages, e.g. the **math** package


```python
import math
```


```python
sin(2)  # this still does not exist on its own
```


```python
math.sin(2)
```

    0.9092974268256817


```python
# When several commands are entered together, only the output of the last line is shown: use the print function!
print(math.sqrt(2))
print(math.tan(2))
print(math.atan(2))
```

    1.4142135623730951
    -2.185039863261519
    1.1071487177940904


```python
# The output can also be hidden with ; ("suppress output")
1+1;
```

If needed, we can also define our own variables with the `=` sign.
Note: the `=` assignment has no output


```python
a=2
b=3
c=4.0  # automatic typing
```


```python
(a+b*c)**a  # the output takes the most general type (int < float)
```

    196.0


```python
# Important: try to avoid protected variable names! NEVER DO THIS!
math.sqrt = 1
math.sqrt(2)

# KERNEL RESTART REQUIRED
```

**If we accidentally do something like this, it is best to restart the *kernel* with the circular arrow above, or via *Kernel* $\rightarrow$ *Restart***

## User-defined functions

Structure:
```python
def function(*arguments):
    instruction1
    instruction2
    ...
    return result
```

The instructions belonging to the function must be written with a tab indent (there is no `{}` bracket or `end`). The function name is followed by the arguments, and a colon `:` marks where the function body begins.


```python
def foo(x):
    return 3*x

def bar(x,y):
    a = x+y**2
    return 2*a + 4
```


```python
print(foo(3))
print(foo(3.))
print(foo('text_'))

print(bar(3,4.))
```

    9
    9.0
    text_text_text_
    42.0

It is also possible to create so-called anonymous functions (*anonymous functions* or *lambda functions*), which are a quick way to create simple one-line functions:

```python
lambda arguments: instruction
```

These can even be assigned to a variable, just like a number or a string.


```python
double = lambda x : x*2
multiply = lambda x,y : x*y
```


```python
print(double(3))
print(multiply(10,3))
```

    6
    30

## Classes


```python
def foo(x):
    return x**2

class MyClass:
    def __init__(self,x,y,z):
        self.square = foo(x)-z
        self.cubic = y**3+foo(y)

    @classmethod
    def createfrom_x(cls,x):
        return MyClass(x,x,x)
    
    def return_stuff(self):
        return self.square+3*self.cubic
```


```python
mcl=MyClass.createfrom_x(2)
```


```python
mcl.return_stuff()
```

    38

## Control flow - only the most important constructs

### Lists


```python
lista = [1,2,3,4,"something",[1.0,4]]
```


```python
print(lista[0])   # 1st element of the list
print(lista[3])   # 4th element of the list
print(lista[-1])  # negative numbers index the list from the back, starting from (-1)
print(lista[-2])  # second-to-last element of the list
```

    1
    4
    [1.0, 4]
    something


```python
print(lista[1:-1])  # several elements at once [inclusive:exclusive]
print(lista[1:2])   # several elements at once [inclusive:exclusive]
print(lista[2:])    # the last element of the list is also included
```

    [2, 3, 4, 'something']
    [2]
    [3, 4, 'something', [1.0, 4]]


```python
lista = [2,3,64,89,1,4,9,0,1]

lista.sort()
lista
```

    [0, 1, 1, 2, 3, 4, 9, 64, 89]

### if-then-else

```python
if condition:
    instruction1
elif condition2:
    instruction2
else:
    instruction3
```


```python
a=4
if a<=3:
    print('"a" is not greater than 3')
elif a>=10:
    print('"a" is not less than 10')
else:
    print('"a" is greater than 3 but less than 10')
```

    "a" is greater than 3 but less than 10

### for loop
```python
for i in array:
    instruction
```


```python
for i in range(3):
    print(i)
    
print()

for (i,elem) in enumerate(lista):
    print('element ',i,' of lista: ',elem,sep='')  # printing several items at once, separator = ''

```

    0
    1
    2
    
    element 0 of lista: 0
    element 1 of lista: 1
    element 2 of lista: 1
    element 3 of lista: 2
    element 4 of lista: 3
    element 5 of lista: 4
    element 6 of lista: 9
    element 7 of lista: 64
    element 8 of lista: 89

## Quick creation of lists (list comprehension)


```python
lista2 = [3*i**2 for i in range(2,5)]  # range: 2,3,4
lista2
```

    [12, 27, 48]


```python
lista3 = list(range(10))
lista3
```

    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]


```python
myfun = lambda x: 3*x**2

lista4 = [myfun(i) for i in range(2,10) if i%3 != 0]  # if i is not divisible by 3
lista4
```

    [12, 48, 75, 147, 192]

# External libraries:

For the basic computations encountered in engineering practice, importing the following main packages is recommended:
- `sympy`: for symbolic computations
- `scipy`/`numpy`: for numerical computations (e.g. matrix algebra)
- `matplotlib.pyplot`: for plotting functions

A `module` can be imported in the following ways:
- `import` *`modulename`*: imports the given `module`. The functions in the `module` are then called as `module.functionname(arguments)` (we saw an example of this with the `math` `module` in *1_Alapok.ipynb*).
\n- `import` *`modulename`* `as` *`alias`* : hasonl\u00f3 az el\u0151z\u0151h\u00f6z, de megv\u00e1lasztjuk, hogy milyen *alias*-k\u00e9nt hivatkozunk a `module`-unkra\n- `from` *`modulename`* ` import` *`function1, function2, ...`* : csak bizonyos f\u00fcggv\u00e9nyek import\u00e1l\u00e1sa (nem sz\u00fcks\u00e9ges a `module`-ra hivatkozni a f\u00fcggv\u00e9nyek h\u00edv\u00e1sa sor\u00e1n)\n- `from` *`modulename`* ` import *` : a `module` \u00f6sszes f\u00fcggv\u00e9ny\u00e9nek import\u00e1l\u00e1sa (nem sz\u00fcks\u00e9ges a `module`-ra hivatkozni a f\u00fcggv\u00e9nyek h\u00edv\u00e1sa sor\u00e1n)\n\n\n## Szimbolikus matematikai m\u0171veletek \n\n\n```python\nimport math\nimport sympy as sp\nimport scipy as sc\nsp.init_printing()\n```\n\n\n```python\nF, m, a, b, c, x = sp.symbols(\"F m a b c x\")\n```\n\n\n```python\nF=m*a\n```\n\n\n```python\nF.subs([(a,7)])\n```\n\n\n```python\nF.subs([(a,7),(m,1.1)])\n```\n\n\n```python\n((a+b)**3).expand()\n```\n\n\n```python\n((a+b)**7 - (b+2*a)**3).expand()\n```\n\n\n```python\n(a**2+b**2+2*a*b).factor()\n```\n\n\n```python\nsp.factor(a**2+b**2+2*a*b)\n```\n\n\n```python\nsp.factor(b**3 + 3*a*b**2 + 3*a**2*b + a**3)\n```\n\n\n```python\na/b+c/b+7/b\n```\n\n\n```python\nsp.ratsimp(a/b+c/b+7/b)\n```\n\n\n```python\n(a/b+c/b+7/b).ratsimp()\n```\n\n\n```python\n(sp.sin(x)**2 + sp.cos(x)**2).simplify()\n```\n\n\n```python\n(sp.cos(2*x)).expand()\n```\n\n\n```python\nsp.expand_trig(sp.cos(2*x))\n```\n\n\n```python\nimport scipy.constants\n```\n\n\n```python\nsc.constants.golden\n```\n\n\n```python\nmath.sqrt(-1+0j)\n```\n\n\n```python\nsc.sqrt(-1+0j)\n```\n\n\n```python\nsp.limit(sp.sin(x)/x,x,0)\n```\n\n\nTaylor-sor megad\u00e1sa. 
The first parameter is the function, the second is the variable, the third is the point around which the series is expanded, and the fourth is the order:

$$f\left(x\right) \approx \sum\limits_{i=0}^{N} \dfrac{\left(x - x_0\right)^i}{i!} \left.\dfrac{\mathrm{d}^i f}{\mathrm{d} x^i}\right|_{x = x_0}$$


```python
sp.series(sp.sin(x),x,0,20)
```

### Differentiation / Integration 


```python
a,Δt,x = sp.symbols('a,Δt,x')
```

Differentiation


```python
sp.diff(sp.sin(x**3),x)
```

Repeated differentiation


```python
sp.diff(sp.sin(x**3),x,3)
```

Integration


```python
sp.integrate(1/(1+x),x)
```

Definite integral


```python
sp.integrate(1/(1+x),(x,1,2))
```


```python
a = sp.Symbol('a')
```


```python
sp.integrate(1/(x**2 + a),x)
```

The software also knows the chain rule


```python
y = sp.Symbol('y')
def f(x):
    return x**2
def g(y):
    return sp.sin(y)
```


```python
f(g(y))
```


```python
sp.diff(f(g(y)),y)
```


```python
sp.diff(g(f(x)),x)
```

In many cases there is no closed-form expression for the indefinite integral. 
In this case we can use [numerical integration](https://en.wikipedia.org/wiki/Numerical_integration) to evaluate the definite integral:


```python
sp.integrate(sp.sin(sp.cos(x)),x)
```

[Numerical integrals with SymPy](https://docs.sympy.org/latest/modules/integrals/integrals.html#numeric-integrals)

not trivial

## Vector and matrix computations in *sympy* 
For small problems, which may even contain symbolic entries


```python
v1= sp.Matrix([2.,3.,4.]) # column vector
v2= sp.Matrix([[3.,-2.,-7.]]) # row vector (note the extra pair of square brackets)
mx1 = sp.Matrix([[1.,2.,3.],[2.,0.,4.],[3.,4.,1.]])
mx2 = sp.Matrix([[1.,2.,3.],[4.,5.,6.],[7.,8.,9.]])
EM = sp.eye(3) # identity matrix
```


```python
v1
```


```python
v2
```


```python
v2.multiply(v1)
```


```python
mx2.multiply(v1)
```


```python
v2.multiply(mx2)
```


```python
EM
```


```python
mx1.eigenvals() # eigenvalues and their multiplicities (as rational numbers)
```


```python
mx1.eigenvals(rational=False) # eigenvalues numerically
```


```python
mx1.eigenvects() # eigenvectors (with their eigenvalues and multiplicities)
```


```python
mx1.det() # determinant of mx1
```


```python
Ix,Iy,Ixy = sp.symbols('Ix,Iy,Ixy')
mxSP=sp.Matrix([[Ix,-Ixy],[-Ixy,Iy]])
display(mxSP)

print('\n Eigenvalues and eigenvectors: \n')
mxSP.eigenvects()
```


```python
mxSP=sp.Matrix([[Ix,0],[0,Iy]])
display(mxSP)

print('\n Eigenvalues and eigenvectors: \n')
mxSP.eigenvects()
```

## Vector and matrix computations in *scipy* 
This is the package to use for large matrices and vectors, or when working with a lot of numerical data. 
Eigenvalue and eigenvector computations are also better supported in this package


```python
import sympy as sp
import scipy as sc
import scipy.linalg
```


```python
# note: sc.array is an alias of numpy's array; recent SciPy versions drop it, so prefer np.array
v1= sc.array([2.,3.,4.])
v2= sc.array([3.,-2.,-7.])
mx1 = sc.array([[1.,2.,3.],[2.,0.,4.],[3.,4.,1.]])
mx2 = sc.array([[1.,2.,3.],[4.,5.,6.],[7.,8.,9.]])
```


```python
print( sc.dot(mx2,v1) ) # matrix-vector product mx2*v1
print( sc.dot(v1,mx2) ) # matrix-vector product transpose(v1)*mx2
print( sc.cross(v1,v2) ) # cross product v1×v2
```

    [20. 47. 74.]
    [42. 51. 60.]
    [-13. 26. -13.]



```python
(λ,V) = sc.linalg.eig(mx1) # eigenvalues and eigenvectors of the matrix, stored in λ and V

# printing with 3 significant digits, and an example of a for loop
# (the eigenvectors are the columns of V, hence the transpose)
for (i,v) in enumerate(V.T):
    print(i+1, '. eigenvalue and eigenvector:',sep='')
    print('λ = ', sp.N(λ[i],3), '; v = ', [sp.N(e,3) for e in v], sep='', end='\n\n')
```

    1. eigenvalue and eigenvector:
    λ = 6.76; v = [0.529, 0.543, 0.653]
    
    2. eigenvalue and eigenvector:
    λ = -1.15; v = [0.834, -0.475, -0.281]
    
    3. eigenvalue and eigenvector:
    λ = -3.61; v = [-0.157, -0.693, 0.704]
    


Note that while sympy scales each eigenvector so that its last coordinate is 1, scipy normalizes the eigenvectors to unit length!

## Solving equations 


```python
x, y, z = sp.symbols("x y z")
```


```python
egy1=77*x+6 - 160
```


```python
sp.solve(egy1,x)
```


```python
sp.solve(x**2 - 5, x)
```


```python
x
```


```python
e1 = 3*z + 2*y + 1*x - 7
e2 = 4*z + 0*y + 2*x - 8
e3 = 1*z + 4*y + 3*x - 9
```


```python
sp.solve([e1,e2,e3],[x,y,z])
```


```python
def f(x):
    return x**2 - 4
```


```python
# numerical root finding
import scipy.optimize

gyok = scipy.optimize.brentq(f,0,10)
print(gyok)
```

    2.0


### Numerical functions from symbolic expressions 

The easiest way to do this is with [*lambda functions*](#Sajatfuggvenyek)


```python
a,b,c,x = sp.symbols('a,b,c,x')
adat = [(a, 3.0), (b,-2.), (c,1.0)]

masodfoku_x = a*x**2 + b*x + c

masodfoku_x
```


```python
masodfoku_num = sp.lambdify(x,masodfoku_x.subs(adat))
masodfoku_num(1.)
```

# Creating simple plots 


```python
import scipy as sc # this simplifies working with the many functions
import matplotlib.pyplot as plt # import the plotting interface of the matplotlib package
```

## Plotting functions on simple diagrams


```python
def f(x):
    return sc.sin(x)*(1-4*sc.exp(0.7*x)/(x**3))

x1 = sc.linspace(2,10,30)
y1 = f(x1)
```


```python
plt.figure(figsize=(20/2.54,12/2.54)) # the figure size is given in inches - optional, but worth setting if the figure goes into documentation

# create the plot object
plt.plot(x1,y1)

# axis labels
plt.xlabel('x',fontsize=12)
plt.ylabel('f(x)',fontsize=12)

# grid
plt.grid()

# saving the figure (this must happen before the plot is shown; .pdf and .svg also work)
# plt.savefig("abra1.png", bbox_inches="tight") 
# plt.savefig("abra1.pdf", bbox_inches="tight")

# show the plot
plt.show()
```


```python
plt.plot(x1,y1,linewidth=3,color='k',linestyle='-.') # change the line width, color and style
plt.xlabel('x')
plt.ylabel('f(x)')
# plotting limits
plt.ylim(-0.5,0.5) #-0.5...0.5
plt.xlim(None,8) # xmin...8
plt.grid()
 
plt.show()
```


```python
def g(x):
    return sc.sqrt(x)
def h(x):
    return x**2

x2 = sc.linspace(0.,1.,100) 

plt.plot(x2,g(x2))
plt.plot(x2,h(x2))

plt.xlabel('x',fontsize=12)
# legend
plt.legend((r'$\sqrt{x}$',r'$x^2$'), fontsize=12, loc = 5) # loc: 1: upper right, 2: upper left, 3: lower left, ... 10
plt.grid()


plt.show()
```


```python
x3 = sc.arange(1,14+1)
y3 = sc.exp(x3)/(1.4e4)
plt.plot(x3, y3, linestyle = '', marker = 'o')
plt.xlabel('week of the semester')
plt.ylabel('student workload [%]')
plt.grid()
plt.show()
```


```python
plt.plot(x3, y3, linestyle = '-', marker = 'o')
plt.yscale('log')
plt.xlabel('week of the semester')
plt.ylabel(r'student workload [%]')
plt.grid()
plt.show()
```

## Parametric plots


```python
t = sc.linspace(0,2*sc.pi,100)
x = sc.sin(t)
y = sc.cos(t)

plt.plot(x, y, linestyle = '--')
plt.grid()
plt.axis('equal')
plt.show()
```

## 3D plots


```python
from mpl_toolkits.mplot3d import Axes3D
```


```python
def f(x, y):
    return sc.sin(sc.exp( - x ** 2 - y ** 2))

x = sc.linspace(-2, 2, 30)
y = sc.linspace(-2, 2, 30)

X, Y = sc.meshgrid(x, y)
Z = f(X, Y)
```


```python
fig = plt.figure(figsize=(16/2.54,10/2.54)) # here the figure object has to be created manually
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z, cmap='viridis')
plt.show()
```
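As a small follow-up sketch (our own addition, not part of the original notebook), the same scalar field can also be viewed from above as a filled contour map; the function `f` and the grid mirror the cell above, but `numpy` is used directly instead of the deprecated `scipy` array aliases:

```python
import numpy as np
import matplotlib.pyplot as plt

# the same scalar field as above, on a 30x30 grid
def f(x, y):
    return np.sin(np.exp(-x**2 - y**2))

x = np.linspace(-2, 2, 30)
y = np.linspace(-2, 2, 30)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)

plt.contourf(X, Y, Z, levels=15, cmap='viridis')  # filled contour map
plt.colorbar(label='f(x, y)')
plt.axis('equal')
plt.show()
```

A contour map is often easier to read off numerically than a rotated 3D surface, since both axes stay undistorted.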
```python
%matplotlib inline
```


```python
import sympy
import math
import numpy as np
import matplotlib.pyplot as plt
```

# High-School Maths Exercise
## Getting to Know Jupyter Notebook. Python Libraries and Best Practices. Basic Workflow

### Problem 1. Markdown
Jupyter Notebook is a very light, beautiful and convenient way to organize your research and display your results. Let's play with it for a while.

First, you can double-click each cell and edit its content. If you want to run a cell (that is, execute the code inside it), use Cell > Run Cells in the top menu or press Ctrl + Enter.

Second, each cell has a type. There are two main types: Markdown (which is for any kind of free text, explanations, formulas, results... you get the idea), and code (which is, well... for code :D).

Let me give you a...
#### Quick Introduction to Markdown
##### Text and Paragraphs
There are several things that you can do. As you already saw, you can write paragraph text just by typing it. In order to create a new paragraph, just leave a blank line. See how this works below:
```
This is some text.
This text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).

This text is displayed in a new paragraph.

And this is yet another paragraph.
```
**Result:**

This is some text.
This text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).

This text is displayed in a new paragraph.

And this is yet another paragraph.

##### Headings
There are six levels of headings. 
Level one is the highest (largest and most important), and level 6 is the smallest. You can create headings of several types by prefixing the header line with one to six \"#\" symbols (this is called a pound sign if you are ancient, or a sharp sign if you're a musician... or a hashtag if you're too young :D). Have a look:\n```\n# Heading 1\n## Heading 2\n### Heading 3\n#### Heading 4\n##### Heading 5\n###### Heading 6\n```\n\n**Result:**\n\n# Heading 1\n## Heading 2\n### Heading 3\n#### Heading 4\n##### Heading 5\n###### Heading 6\n\nIt is recommended that you have **only one** H1 heading - this should be the header of your notebook (or scientific paper). Below that, you can add your name or just jump to the explanations directly.\n\n##### Emphasis\nYou can create emphasized (stronger) text by using a **bold** or _italic_ font. You can do this in several ways (using asterisks (\\*) or underscores (\\_)). In order to \"escape\" a symbol, prefix it with a backslash (\\). You can also strike through your text in order to signify a correction.\n```\n**bold** __bold__\n*italic* _italic_\n\nThis is \\*\\*not \\*\\* bold.\n\nI ~~didn't make~~ a mistake.\n```\n\n**Result:**\n\n**bold** __bold__\n*italic* _italic_\n\nThis is \\*\\*not\\*\\* bold.\n\nI ~~didn't make~~ a mistake.\n\n##### Lists\nYou can add two types of lists: ordered and unordered. Lists can also be nested inside one another. To do this, press Tab once (it will be converted to 4 spaces).\n\nTo create an ordered list, just type the numbers. Don't worry if your numbers are wrong - Jupyter Notebook will create them properly for you. Well, it's better to have them properly numbered anyway...\n```\n1. This is\n2. A list\n10. With many\n9. Items\n 1. Some of which\n 2. Can\n 3. Be nested\n42. You can also\n * Mix \n * list\n * types\n```\n\n**Result:**\n1. This is\n2. A list\n10. With many\n9. Items\n 1. Some of which\n 2. Can\n 3. Be nested\n42. 
You can also
 * Mix 
 * list
 * types
 
To create an unordered list, type an asterisk, plus or minus at the beginning:
```
* This is
* An
 + Unordered
 - list
```

**Result:**
* This is
* An
 + Unordered
 - list
 
##### Links
There are many ways to create links but we mostly use one of them: we present links with some explanatory text. See how it works:
```
This is [a link](http://google.com) to Google.
```

**Result:**

This is [a link](http://google.com) to Google.

##### Images
They are very similar to links. Just prefix the image with an exclamation mark. The alt(ernative) text will be displayed if the image is not available. Have a look (hover over the image to see the title text):
```
 Do you know that "taco cat" is a palindrome? Thanks to The Oatmeal :)
```

**Result:**

 Do you know that "taco cat" is a palindrome? Thanks to The Oatmeal :)

If you want to resize images or do some more advanced stuff, just use HTML. 

Did I mention these cells support HTML, CSS and JavaScript? Now I did.

##### Tables
These are a pain because they need to be formatted (somewhat) properly. Here's a good [table generator](http://www.tablesgenerator.com/markdown_tables). Just select File > Paste table data... and provide a tab-separated list of values. It will generate a good-looking ASCII-art table for you.
```
| Cell1 | Cell2 | Cell3 |
|-------|-------|-------|
| 1.1   | 1.2   | 1.3   |
| 2.1   | 2.2   | 2.3   |
| 3.1   | 3.2   | 3.3   |
```

**Result:**

| Cell1 | Cell2 | Cell3 |
|-------|-------|-------|
| 1.1 | 1.2 | 1.3 |
| 2.1 | 2.2 | 2.3 |
| 3.1 | 3.2 | 3.3 |

##### Code
Just use triple backtick symbols. If you provide a language, it will be syntax-highlighted. You can also use inline code with single backticks.

    ```python
    def square(x):
        return x ** 2
    ```
    This is `inline` code. No syntax highlighting here.

**Result:**
```python
def square(x):
    return x ** 2
```
This is `inline` code. No syntax highlighting here.

**Now it's your turn to have some Markdown fun.** In the next cell, try out some of the commands. You can just throw in some things, or do something more structured (like a small notebook).

##### Code
Just use triple backtick symbols. If you provide a language, it will be syntax-highlighted. You can also use inline code with single backticks.

    ```C#
    private double square(double x)
    {
        return Math.Pow(x, 2);
    }
    ```
    This is `inline C#` code. No syntax highlighting here.

**Result:**
```C#
private double square(double x)
{
    return Math.Pow(x, 2);
}
```
This is `inline C#` code. No syntax highlighting here.

### Problem 2. Formulas and LaTeX
Writing math formulas has always been hard. But scientists don't like difficulties and prefer standards. So, thanks to Donald Knuth (a very popular computer scientist, who also invented a lot of algorithms), we have a nice typesetting system, called LaTeX (pronounced _lah_-tek). We'll be using it mostly for math formulas, but it has a lot of other things to offer.

There are two main ways to write formulas. You could enclose them in single `$` signs like this: `$ ax + b $`, which will create an **inline formula**: $ ax + b $. You can also enclose them in double `$` signs `$$ ax + b $$` to produce $$ ax + b $$.

Most commands start with a backslash and accept parameters either in square brackets `[]` or in curly braces `{}`. For example, to make a fraction, you typically would write `$$ \frac{a}{b} $$`: $$ \frac{a}{b} $$.

[Here's a resource](http://www.stat.pitt.edu/stoffer/freetex/latex%20basics.pdf) where you can look up the basics of the math syntax. You can also search StackOverflow - there are all sorts of solutions there.

You're on your own now. Research and recreate all formulas shown in the next cell. Try to make your cell look exactly the same as mine. It's an image, so don't try to cheat by copy/pasting :D.

Note that you **do not** need to understand the formulas, what's written there or what it means. We'll have fun with these later in the course.

$$ x = ax_{1}^{2} + 2ax_{2} - \sin 120^{\circ} + \left( \frac{2ab}{y} \right) $$
$$ M = \left( \begin{matrix} a&b \\ c&d \\ x&y \end{matrix} \right) $$

### Problem 3. Solving with Python
Let's first do some symbolic computation. We need to import `sympy` first. 

**Should your imports be in a single cell at the top or should they appear as they are used?** There's not a single valid best practice. 
Most people seem to prefer imports at the top of the file though. **Note: If you write new code in a cell, you have to re-execute it!**

Let's use `sympy` to give us a quick symbolic solution to our equation. First import `sympy` (you can use the second cell in this notebook): 
```python 
import sympy 
```

Next, create symbols for all variables and parameters. You may prefer to do this in one pass or separately:
```python 
x = sympy.symbols('x')
a, b, c = sympy.symbols('a b c')
```

Now solve:
```python 
sympy.solve(a * x**2 + b * x + c)
```

Hmmmm... we didn't expect that :(. We got an expression for $a$ because the library tried to solve for the first symbol it saw. This is an equation and we have to solve for $x$. We can provide it as a second parameter:
```python 
sympy.solve(a * x**2 + b * x + c, x)
```

Finally, if we use `sympy.init_printing()`, we'll get a LaTeX-formatted result instead of a typed one. This is very useful because it produces better-looking formulas.


```python
sympy.init_printing()
```


```python
sympy.symbols("x,y,z")
```

How about a function that takes $a, b, c$ (assume they are real numbers, you don't need to do additional checks on them) and returns the **real** roots of the quadratic equation?

Remember that in order to calculate the roots, we first need to see whether the expression under the square root sign is non-negative.

If $b^2 - 4ac > 0$, the equation has two real roots: $x_1, x_2$

If $b^2 - 4ac = 0$, the equation has one real root: $x_1 = x_2$

If $b^2 - 4ac < 0$, the equation has zero real roots

Write a function which returns the roots. In the first case, return a list of 2 numbers: `[2, 3]`. In the second case, return a list of only one number: `[2]`. 
In the third case, return an empty list: `[]`.


```python
def solve_quadratic_equation(a, b, c):
    """
    Returns the real solutions of the quadratic equation ax^2 + bx + c = 0
    """
    D = b**2 - 4*a*c
    
    if D > 0:
        x1 = (-b - math.sqrt(D)) / (2*a)
        x2 = (-b + math.sqrt(D)) / (2*a)
        return [x1, x2]
    elif D == 0:
        x = -b / (2*a)
        return [x]
    else:
        return []
```


```python
# Testing: Execute this cell. The outputs should match the expected outputs. Feel free to write more tests
print(solve_quadratic_equation(1, -1, -2)) # [-1.0, 2.0]
print(solve_quadratic_equation(1, -8, 16)) # [4.0]
print(solve_quadratic_equation(1, 1, 1)) # []
```

    [-1.0, 2.0]
    [4.0]
    []


**Bonus:** Last time we saw how to solve a linear equation. Remember that linear equations are just like quadratic equations with $a = 0$. In this case, however, division by 0 will throw an error. Extend your function above to support solving linear equations (in the same way we did it last time).

### Problem 4. Equation of a Line
Let's go back to our linear equations and systems. There are many ways to define what "linear" means, but they all boil down to the same thing.

The equation $ax + b = 0$ is called *linear* because the function $f(x) = ax+b$ is a linear function. We know that there are several ways to know what one particular function means. One of them is to just write the expression for it, as we did above. Another way is to **plot** it. This is one of the most exciting parts of maths and science - when we have to fiddle around with beautiful plots (although not so beautiful in this case).

The function produces a straight line and we can see it.

How do we plot functions in general? We know that functions take many (possibly infinitely many) inputs. We can't draw all of them. We could, however, evaluate the function at some points and connect them with tiny straight lines. 
If the points are too many, we won't notice - the plot will look smooth.

Now, let's take a function, e.g. $y = 2x + 3$ and plot it. For this, we're going to use `numpy` arrays. This is a special type of array which has two characteristics:
* All elements in it must be of the same type
* All operations are **broadcast**: if `x = [1, 2, 3, 10]` and we write `2 * x`, we'll get `[2, 4, 6, 20]`. That is, all operations are performed at all indices. This is very powerful, easy to use and saves us A LOT of looping.

There's one more thing: it's blazingly fast because all computations are done in C, instead of Python.

First let's import `numpy`. Since the name is a bit long, a common convention is to give it an **alias**:
```python
import numpy as np
```

Import that at the top cell and don't forget to re-run it.

Next, let's create a range of values, e.g. $[-3, 5]$. There are two ways to do this. `np.arange(start, stop, step)` will give us evenly spaced numbers with a given step, while `np.linspace(start, stop, num)` will give us `num` samples. You see, one uses a fixed step, the other uses a number of points to return. When plotting functions, we usually use the latter. Let's generate, say, 1000 points (we know a straight line only needs two but we're generalizing the concept of plotting here :)).
```python
x = np.linspace(-3, 5, 1000)
```
Now, let's generate our function variable
```python
y = 2 * x + 3
```

We can print the values if we like but we're more interested in plotting them. To do this, first let's import a plotting library. `matplotlib` is the most commonly used one and we usually give it an alias as well.
```python
import matplotlib.pyplot as plt
```

Now, let's plot the values. To do this, we just call the `plot()` function. Notice that the top-most part of this notebook contains a "magic string": `%matplotlib inline`. This hints Jupyter to display all plots inside the notebook. 
However, it's a good practice to call `show()` after our plot is ready.
```python
plt.plot(x, y)
plt.show()
```


```python
x = np.linspace(-6, 6, 1000)
y = 2 * x + 3

ax = plt.gca()
ax.spines['top'].set_color('none')
ax.spines['bottom'].set_position('zero')
ax.spines['left'].set_position('zero')
ax.spines['right'].set_color('none')
plt.plot(x,y,'-r', label=r'$y = 2x+3$')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Straight Line')
plt.legend(loc='upper left')
plt.show()
```

It doesn't look too bad but we can do much better. See how the axes don't look like they should? Let's move them to zero. This can be done using the "spines" of the plot (i.e. the borders).

All `matplotlib` figures can have many plots (subfigures) inside them. That's why when performing an operation, we have to specify a target figure. There is a default one and we can get it by using `plt.gca()`. We usually call it `ax` for "axis".
Let's save it in a variable (in order to prevent multiple calculations and to make code prettier). Let's now move the bottom and left spines to the origin $(0, 0)$ and hide the top and right one.
```python
ax = plt.gca()
ax.spines["bottom"].set_position("zero")
ax.spines["left"].set_position("zero")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
```

**Note:** All plot manipulations HAVE TO be done before calling `show()`. It's up to you whether they should be before or after the function you're plotting.

This should look better now. We can, of course, do much better (e.g. remove the double 0 at the origin and replace it with a single one), but this is left as an exercise for the reader :).


```python
# Copy and edit your code here
```

### * Problem 5. Linearizing Functions
Why is the line equation so useful? The main reason is that it's so easy to work with. 
Scientists actually try their best to linearize functions, that is, to make linear functions from non-linear ones. There are several ways of doing this. One of them involves derivatives and we'll talk about it later in the course. 

A commonly used method for linearizing functions is through algebraic transformations. Try to linearize 
$$ y = ae^{bx} $$

Hint: The inverse operation of $e^{x}$ is $\ln(x)$. Start by taking $\ln$ of both sides and see what you can do. Your goal is to transform the function into another, linear function. You can look up more hints on the Internet :).


```python
a = 5
b = 0.9
c = 1
x = np.linspace(0, 10, 256, endpoint = True)
y = (a * np.exp(-b*x))

plt.plot(x, y, '-r', label=r'$y = 5e^{-0.9x}$')

axes = plt.gca()
axes.set_xlim([x.min(), x.max()])
axes.set_ylim([y.min(), y.max()])

plt.xlabel('x')
plt.ylabel('y')
plt.title('Exponential Curve')
plt.legend(loc='upper left')

plt.show()
```

### * Problem 6. Generalizing the Plotting Function
Let's now use the power of Python to generalize the code we created to plot. In Python, you can pass functions as parameters to other functions. We'll utilize this to pass the math function that we're going to plot.

Note: We can also pass *lambda expressions* (anonymous functions) like this: 
```python
lambda x: x + 2
```
This is a shorter way to write
```python
def some_anonymous_function(x):
    return x + 2
```

We'll also need a range of x values. We may also provide other optional parameters which will help set up our plot. These may include titles, legends, colors, fonts, etc. Let's stick to the basics now.

Write a Python function which takes another function, x range and number of points, and plots the function graph by evaluating it at every point.

**BIG hint:** If you want to use not only `numpy` functions for `f` but any one function, a very useful (and easy) thing to do, is to vectorize the function `f` (e.g. 
to allow it to be used with `numpy` broadcasting):
```python
f_vectorized = np.vectorize(f)
y = f_vectorized(x)
```


```python
def f(x):
    return 2.718281828459045**(1+x)

def taylor(x, eps):
    # sums the series of e^(1+x) term by term until the terms become smaller than eps
    x = 1 + x
    s = 1 + x
    term = x
    n = 2
    while term*term > eps*eps:
        term *= x/n
        n += 1
        s += term
    return s

a = 3.0
b = 4.0
step = (b-a)/10

xs, ys = [], []
while a <= b: 
    print(round(a,2), round(f(a),5), round(taylor(a,1e-6),5))
    xs.append(a)
    ys.append(taylor(a,1e-6))
    a += step

plt.plot(xs, ys, '-r', label=r'$y = e^{1+x}$')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Exponential Curve')
plt.legend(loc='upper left')
plt.grid()
plt.show()
```


```python
def plot_math_function(f, min_x, max_x, num_points):
    # Write your code here
    pass
```


```python
plot_math_function(lambda x: 2 * x + 3, -3, 5, 1000)
plot_math_function(lambda x: -x + 8, -1, 10, 1000)
plot_math_function(lambda x: x**2 - x - 2, -3, 4, 1000)
plot_math_function(lambda x: np.sin(x), -np.pi, np.pi, 1000)
plot_math_function(lambda x: np.sin(x) / x, -4 * np.pi, 4 * np.pi, 1000)
```

### * Problem 7. Solving Equations Graphically
Now that we have a general plotting function, we can use it for more interesting things. Sometimes we don't need to know what the exact solution is, just to see where it lies. We can do this by plotting the two functions around the "=" sign and seeing where they intersect. Take, for example, the equation $2x + 3 = 0$. The two functions are $f(x) = 2x + 3$ and $g(x) = 0$. Since they should be equal, the point of their intersection is the solution of the given equation. We don't need to bother marking the point of intersection right now, just showing the functions.

To do this, we'll need to improve our plotting function once more. 
This time we'll need to take multiple functions and plot them all on the same graph. Note that we still need to provide the $[x_{min}; x_{max}]$ range and it's going to be the same for all functions.

```python
vectorized_fs = [np.vectorize(f) for f in functions]
ys = [vectorized_f(x) for vectorized_f in vectorized_fs]
```


```python
def plot_math_functions(functions, min_x, max_x, num_points):
    # Write your code here
    pass
```


```python
plot_math_functions([lambda x: 2 * x + 3, lambda x: 0], -3, 5, 1000)
plot_math_functions([lambda x: 3 * x**2 - 2 * x + 5, lambda x: 3 * x + 7], -2, 3, 1000)
```

This is also a way to plot the solutions of systems of equations, like the one we solved last time. Let's actually try it.


```python
plot_math_functions([lambda x: (-4 * x + 7) / 3, lambda x: (-3 * x + 8) / 5, lambda x: (-x - 1) / -2], -1, 4, 1000)
```

### Problem 8. Trigonometric Functions
We already saw the graph of the function $y = \sin(x)$. But, how do we define the trigonometric functions once again? Let's quickly review that.

The two basic trigonometric functions are defined as the ratio of two sides:
$$ \sin(x) = \frac{\text{opposite}}{\text{hypotenuse}} $$
$$ \cos(x) = \frac{\text{adjacent}}{\text{hypotenuse}} $$

And also:
$$ \tan(x) = \frac{\text{opposite}}{\text{adjacent}} = \frac{\sin(x)}{\cos(x)} $$
$$ \cot(x) = \frac{\text{adjacent}}{\text{opposite}} = \frac{\cos(x)}{\sin(x)} $$

This is fine, but using this "right-triangle" definition, we're only able to calculate the trigonometric functions of angles up to $90^\circ$. But we can do better. Let's now imagine a circle centered at the origin of the coordinate system, with radius $r = 1$. This is called a "unit circle".

We can now see exactly the same picture. The $x$-coordinate of the point in the circle corresponds to $\cos(\alpha)$ and the $y$-coordinate - to $\sin(\alpha)$. What did we get? 
We're now able to define the trigonometric functions for all degrees up to $360^\circ$. After that, the same values repeat: these functions are **periodic**: 
$$ \sin(k \cdot 360^\circ + \alpha) = \sin(\alpha), k = 0, 1, 2, \dots $$
$$ \cos(k \cdot 360^\circ + \alpha) = \cos(\alpha), k = 0, 1, 2, \dots $$

We can, of course, use this picture to derive other identities, such as:
$$ \sin(90^\circ + \alpha) = \cos(\alpha) $$

A very important property of the sine and cosine is that they accept values in the range $(-\infty; \infty)$ and produce values in the range $[-1; 1]$. The two other functions accept values in the range $(-\infty; \infty)$ **except the points where their denominators are zero**, and produce values in the range $(-\infty; \infty)$. 

#### Radians
A degree is a geometric object, $1/360$th of a full circle. This is quite inconvenient when we work with angles. There is another, natural and intrinsic measure of angles. It's called the **radian** and can be written as $\text{rad}$ or without any designation, so $\sin(2)$ means "sine of two radians".

It's defined as *the central angle of an arc with length equal to the circle's radius* and $1\ \text{rad} \approx 57.296^\circ$.

We know that the circle circumference is $C = 2\pi r$, therefore we can fit exactly $2\pi$ arcs with length $r$ in $C$. The angle corresponding to this is $360^\circ$ or $2\pi\ \text{rad}$. Also, $\pi\ \text{rad} = 180^\circ$.

(Some people prefer using $\tau = 2\pi$ to avoid confusion with always multiplying by 2 or 0.5 but we'll use the standard notation here.)

**NOTE:** All trigonometric functions in `math` and `numpy` accept radians as arguments. In order to convert between radians and degrees, you can use the relations $\text{[deg]} = 180/\pi \cdot \text{[rad]}, \text{[rad]} = \pi/180 \cdot \text{[deg]}$. This can be done using `np.deg2rad()` and `np.rad2deg()` respectively.

#### Inverse trigonometric functions
All trigonometric functions have their inverses. 
If you plug, say, $\\pi/4$ into the $\\sin(x)$ function, you get $\\sqrt{2}/2$. The inverse functions (also called arc-functions) take arguments in the interval $[-1; 1]$ and return the angle that they correspond to. Take arcsine for example:\n$$ \\arcsin(y) = x \\iff \\sin(x) = y $$\n$$ \\arcsin\\left(\\frac{\\sqrt{2}}{2}\\right) = \\frac{\\pi}{4} $$\n\nPlease note that this is NOT entirely correct. From the relations we found:\n$$ \\sin(x) = \\sin(2k\\pi + x), k = 0, 1, 2, \\dots $$\n\nit follows that $\\arcsin(x)$ has infinitely many values, separated by $2\\pi$ radians from one another:\n$$ \\arcsin\\left(\\frac{\\sqrt{2}}{2}\\right) = \\frac{\\pi}{4} + 2k\\pi, k = 0, 1, 2, \\dots $$\n\nIn most cases, however, we're interested in the first value (when $k = 0$). It's called the **principal value**.\n\nNote 1: There are inverse functions for all four basic trigonometric functions: $\\arcsin$, $\\arccos$, $\\arctan$, $\\text{arccot}$. These are sometimes written as $\\sin^{-1}(x)$, $\\cos^{-1}(x)$, etc. The two notations are completely equivalent. \n\nJust notice the difference between $\\sin^{-1}(x) := \\arcsin(x)$, $\\sin(x^{-1}) = \\sin(1/x)$, and $(\\sin(x))^{-1} = 1/\\sin(x)$.\n\n#### Exercise\nUse the plotting function you wrote above to plot the inverse trigonometric functions. Use `numpy` (look up how to use inverse trigonometric functions).\n\n\n```python\n# Write your code here\n```\n\n### ** Problem 9. Perlin Noise\nThis algorithm has many applications in computer graphics and can serve to demonstrate several things... and help us learn about math, algorithms and Python :).\n#### Noise\nNoise is just random values. We can generate noise by just calling a random generator. Note that these are actually called *pseudorandom generators*. We'll talk about this later in this course.\nWe can generate noise in however many dimensions we want. For example, if we want a single dimension of noise, we just pick N random values and call it a day.
If we want to generate a 2D noise space, we can take an approach which is similar to what we already did with `np.meshgrid()`.\n\n$$ \\text{noise}(x, y) = N, N \\in [n_{min}, n_{max}] $$\n\nThis function takes two coordinates and returns a single number N between $n_{min}$ and $n_{max}$. (This is what we call a "scalar field").\n\nRandom variables are always connected to **distributions**. We'll talk about these a great deal later, but for now let's just say that they define what our noise will look like. In the most basic case, we can have "uniform noise" - that is, each value in the range $[n_{min}, n_{max}]$ will have an equal chance (probability) of being selected.\n\n#### Perlin noise\nThere are many more distributions but right now we'll want to have a look at a particular one. **Perlin noise** is a kind of noise which looks smooth. It looks cool, especially if it's colored. The output may be tweaked to look like clouds, fire, etc. 3D Perlin noise is most widely used to generate random terrain.\n\n#### Algorithm\n... Now you're on your own :). Research how the algorithm is implemented (note that this will require that you understand some other basic concepts like vectors and gradients).\n\n#### Your task\n1. Research the problem. See what articles, papers, Python notebooks, demos, etc. other people have created\n2. Create a new notebook and document your findings. Include any assumptions, models, formulas, etc. that you're using\n3. Implement the algorithm. Try not to copy others' work, rather try to do it on your own using the model you've created\n4. Test and improve the algorithm\n5. (Optional) Create a cool demo :), e.g. using Perlin noise to simulate clouds. You can even do an animation (hint: you'll need gradients not only in space but also in time)\n6. Communicate the results (e.g. in the Softuni forum)\n\nHint: [This](http://flafla2.github.io/2014/08/09/perlinnoise.html) is a very good resource.
It can show you both how to organize your notebook (which is important) and how to implement the algorithm.
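Before diving into full gradient-based Perlin noise, a warm-up sketch can help: the following is a minimal 1D *value noise* generator (random values at integer lattice points, blended with the smoothstep fade curve that Perlin's implementation also uses). The function name and parameters are just illustrative, and this is not true Perlin noise, which interpolates gradients rather than values:

```python
import numpy as np

def value_noise_1d(num_lattice_points, samples_per_cell, seed=0):
    """Smooth 1D noise: random values at integer lattice points,
    interpolated in between with the smoothstep fade curve."""
    rng = np.random.default_rng(seed)
    lattice = rng.uniform(-1, 1, num_lattice_points)

    # Sample positions covering [0, num_lattice_points - 1]
    xs = np.linspace(0, num_lattice_points - 1,
                     (num_lattice_points - 1) * samples_per_cell)
    i0 = xs.astype(int)                               # left lattice neighbour
    i1 = np.minimum(i0 + 1, num_lattice_points - 1)   # right lattice neighbour
    t = xs - i0                                       # position within the cell

    # Smoothstep fade: zero slope at the lattice points, so the curve
    # has no visible "kinks" (plain linear interpolation would have them)
    fade = t * t * (3 - 2 * t)
    return lattice[i0] * (1 - fade) + lattice[i1] * fade

noise = value_noise_1d(10, 20)
```

Plotting `noise` with the function from Problem 7 shows a smooth random curve; real Perlin noise replaces the lattice *values* with lattice *gradients*, which is exactly the step the hinted article explains.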
\n```\n# this mounts your Google Drive to the Colab VM.\nfrom google.colab import drive\ndrive.mount('/content/drive', force_remount=True)\n\n# enter the foldername in your Drive where you have saved the unzipped\n# assignment folder, e.g. 'cs231n/assignments/assignment2/'\nFOLDERNAME = 'cs231n/assignments/assignment2/'\nassert FOLDERNAME is not None, \"[!] Enter the foldername.\"\n\n# now that we've mounted your Drive, this ensures that\n# the Python interpreter of the Colab VM can load\n# python files from within it.\nimport sys\nsys.path.append('/content/drive/My Drive/{}'.format(FOLDERNAME))\n\n# this downloads the CIFAR-10 dataset to your Drive\n# if it doesn't already exist.\n%cd drive/My\\ Drive/$FOLDERNAME/cs231n/datasets/\n!bash get_datasets.sh\n%cd /content\n```\n\n Mounted at /content/drive\n /content/drive/My Drive/cs231n/assignments/assignment2/cs231n/datasets\n /content\n\n\n# Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. \nOne idea along these lines is batch normalization, which was proposed by [1] in 2015.\n\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution.
However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\n\nThe authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [1] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\n\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. 
To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n\n[1] [Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.](https://arxiv.org/abs/1502.03167)\n\n\n```\n# As usual, a bit of setup\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\ndef print_mean_std(x,axis=0):\n print(' means: ', x.mean(axis=axis))\n print(' stds: ', x.std(axis=axis))\n print() \n```\n\n =========== You can safely ignore the message below if you are NOT working on ConvolutionalNetworks.ipynb ===========\n \tYou will need to compile a Cython extension for a portion of this assignment.\n \tThe instructions to do this will be given in a section of the notebook below.\n \tThere will be an option for Colab users and another for Jupyter (local) users.\n\n\n\n```\n# Load the (preprocessed) CIFAR10 data.\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)\n```\n\n X_train: (49000, 3, 32, 32)\n y_train: (49000,)\n X_val: (1000, 3, 32, 32)\n y_val: (1000,)\n X_test: (1000, 3, 32, 32)\n y_test: (1000,)\n\n\n## Batch normalization: forward\nIn the file `cs231n/layers.py`, implement the batch normalization forward pass in 
the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.\n\nReferencing the paper linked to above in [1] may be helpful!\n\n\n```\n# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization \n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before batch normalization:')\nprint_mean_std(a,axis=0)\n\ngamma = np.ones((D3,))\nbeta = np.zeros((D3,))\n# Means should be close to zero and stds close to one\nprint('After batch normalization (gamma=1, beta=0)')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\n# Now means should be close to beta and stds close to gamma\nprint('After batch normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n```\n\n Before batch normalization:\n means: [ -2.3814598 -13.18038246 1.91780462]\n stds: [27.18502186 34.21455511 37.68611762]\n \n After batch normalization (gamma=1, beta=0)\n means: [5.32907052e-17 7.04991621e-17 1.85962357e-17]\n stds: [0.99999999 1. 1. ]\n \n After batch normalization (gamma= [1. 2. 3.] , beta= [11. 12. 13.] )\n means: [11. 12. 
13.]\n stds: [0.99999999 1.99999999 2.99999999]\n \n\n\n\n```\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n\nfor t in range(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\n\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After batch normalization (test-time):')\nprint_mean_std(a_norm,axis=0)\n```\n\n After batch normalization (test-time):\n means: [-0.03927354 -0.04349152 -0.10452688]\n stds: [1.01531428 1.01238373 0.97819988]\n \n\n\n## Batch normalization: backward\nNow implement the backward pass for batch normalization in the function `batchnorm_backward`.\n\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. 
Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\n\nOnce you have finished, run the following to numerically check your backward pass.\n\n\n```\n# Gradient check batchnorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\n#You should expect to see relative errors between 1e-13 and 1e-8\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.702926400705332e-09\n dgamma error: 7.420414216247087e-13\n dbeta error: 2.8795057655839487e-12\n\n\n## Batch normalization: alternative backward\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper.\n\nSurprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too! 
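As a reminder of the sigmoid case, here is a quick illustrative sketch (not part of the assignment files) comparing the two strategies. Both compute the same $dx$; the second uses the paper-and-pencil simplification $\sigma'(x) = \sigma(x)(1 - \sigma(x))$:

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(5)
dout = np.random.randn(5)  # upstream gradient dL/ds

# Forward pass, written as a chain of primitive ops:
a = -x            # (1) negate
b = np.exp(a)     # (2) exponentiate
c = 1.0 + b       # (3) add one
s = 1.0 / c       # (4) invert -> s = sigmoid(x)

# Strategy 1: backprop through every primitive op, in reverse order
dc = (-1.0 / c**2) * dout   # through (4)
db = 1.0 * dc               # through (3)
da = np.exp(a) * db         # through (2)
dx_graph = -1.0 * da        # through (1)

# Strategy 2: the formula simplified on paper
dx_simple = s * (1.0 - s) * dout

print(np.allclose(dx_graph, dx_simple))  # True
```

The batch normalization case works the same way, just with more intermediate nodes to collapse.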
\n\nIn the forward pass, given a set of inputs $X=\\begin{bmatrix}x_1\\\\x_2\\\\...\\\\x_N\\end{bmatrix}$, \n\nwe first calculate the mean $\\mu$ and variance $v$.\nWith $\\mu$ and $v$ calculated, we can calculate the standard deviation $\\sigma$ and normalized data $Y$.\nThe equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$).\n\n\\begin{align}\n& \\mu=\\frac{1}{N}\\sum_{k=1}^N x_k & v=\\frac{1}{N}\\sum_{k=1}^N (x_k-\\mu)^2 \\\\\n& \\sigma=\\sqrt{v+\\epsilon} & y_i=\\frac{x_i-\\mu}{\\sigma}\n\\end{align}\n\n\n\nThe meat of our problem during backpropagation is to compute $\\frac{\\partial L}{\\partial X}$, given the upstream gradient we receive, $\\frac{\\partial L}{\\partial Y}.$ To do this, recall that the chain rule in calculus gives us $\\frac{\\partial L}{\\partial X} = \\frac{\\partial L}{\\partial Y} \\cdot \\frac{\\partial Y}{\\partial X}$.\n\nThe unknown/hard part is $\\frac{\\partial Y}{\\partial X}$. We can find this by first deriving step-by-step our local gradients \n$\\frac{\\partial v}{\\partial X}$, $\\frac{\\partial \\mu}{\\partial X}$,\n$\\frac{\\partial \\sigma}{\\partial v}$, \n$\\frac{\\partial Y}{\\partial \\sigma}$, and $\\frac{\\partial Y}{\\partial \\mu}$,\nand then use the chain rule to compose these gradients (which appear in the form of vectors!) appropriately to compute $\\frac{\\partial Y}{\\partial X}$.\n\nIf it's challenging to directly reason about the gradients over $X$ and $Y$, which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\\frac{\\partial L}{\\partial x_i}$, by relying on the chain rule to first calculate the intermediates $\\frac{\\partial \\mu}{\\partial x_i}, \\frac{\\partial v}{\\partial x_i}, \\frac{\\partial \\sigma}{\\partial x_i},$ then assemble these pieces to calculate $\\frac{\\partial y_i}{\\partial x_i}$.
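As a sanity check for the element-wise route (derive these yourself before relying on them), differentiating the definitions above gives these intermediate local gradients; note that the expression for $\frac{\partial v}{\partial x_i}$ already uses the fact that $\sum_k (x_k - \mu) = 0$ to cancel the contribution flowing through $\mu$:

\begin{align}
& \frac{\partial \mu}{\partial x_i}=\frac{1}{N} & \frac{\partial v}{\partial x_i}=\frac{2(x_i-\mu)}{N} \\
& \frac{\partial \sigma}{\partial v}=\frac{1}{2\sqrt{v+\epsilon}}=\frac{1}{2\sigma} & \frac{\partial y_i}{\partial \sigma}=-\frac{x_i-\mu}{\sigma^2}
\end{align}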
\n\nYou should make sure each of the intermediate gradient derivations is as simplified as possible, for ease of implementation. \n\nAfter doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\n\n\n```\nnp.random.seed(231)\nN, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint('dx difference: ', rel_error(dx1, dx2))\nprint('dgamma difference: ', rel_error(dgamma1, dgamma2))\nprint('dbeta difference: ', rel_error(dbeta1, dbeta2))\nprint('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))\n```\n\n dx difference: 9.431854837645182e-13\n dgamma difference: 0.0\n dbeta difference: 0.0\n speedup: 1.79x\n\n\n## Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization.\n\nConcretely, when the `normalization` flag is set to `\"batchnorm\"` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.\n\nHINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`.
If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.\n\n\n```\nnp.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\n# You should expect relative errors between 1e-4~1e-10 for W, \n# relative errors between 1e-08~1e-10 for b,\n# and relative errors between 1e-08~1e-09 for beta and gammas.\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n normalization='batchnorm')\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n if reg == 0: print()\n```\n\n Running check with reg = 0\n Initial loss: 2.2611955101340957\n W1 relative error: 1.10e-04\n W2 relative error: 2.85e-06\n W3 relative error: 4.05e-10\n b1 relative error: 4.44e-08\n b2 relative error: 2.22e-08\n b3 relative error: 1.01e-10\n beta1 relative error: 7.33e-09\n beta2 relative error: 1.89e-09\n gamma1 relative error: 6.96e-09\n gamma2 relative error: 1.96e-09\n \n Running check with reg = 3.14\n Initial loss: 6.996533220108303\n W1 relative error: 1.98e-06\n W2 relative error: 2.28e-06\n W3 relative error: 1.11e-08\n b1 relative error: 5.55e-09\n b2 relative error: 2.22e-08\n b3 relative error: 1.42e-10\n beta1 relative error: 6.65e-09\n beta2 relative error: 3.48e-09\n gamma1 relative error: 8.80e-09\n gamma2 relative error: 5.28e-09\n\n\n# Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.\n\n\n```\nnp.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': 
data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\nprint('Solver with batch norm:')\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True,print_every=20)\nbn_solver.train()\n\nprint('\\nSolver without batch norm:')\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=20)\nsolver.train()\n```\n\n Solver with batch norm:\n (Iteration 1 / 200) loss: 2.340974\n (Epoch 0 / 10) train acc: 0.107000; val_acc: 0.115000\n (Epoch 1 / 10) train acc: 0.315000; val_acc: 0.266000\n (Iteration 21 / 200) loss: 2.039365\n (Epoch 2 / 10) train acc: 0.384000; val_acc: 0.279000\n (Iteration 41 / 200) loss: 2.041102\n (Epoch 3 / 10) train acc: 0.494000; val_acc: 0.309000\n (Iteration 61 / 200) loss: 1.753902\n (Epoch 4 / 10) train acc: 0.531000; val_acc: 0.307000\n (Iteration 81 / 200) loss: 1.246584\n (Epoch 5 / 10) train acc: 0.573000; val_acc: 0.314000\n (Iteration 101 / 200) loss: 1.320590\n (Epoch 6 / 10) train acc: 0.632000; val_acc: 0.338000\n (Iteration 121 / 200) loss: 1.159472\n (Epoch 7 / 10) train acc: 0.686000; val_acc: 0.325000\n (Iteration 141 / 200) loss: 1.156477\n (Epoch 8 / 10) train acc: 0.767000; val_acc: 0.336000\n (Iteration 161 / 200) loss: 0.630541\n (Epoch 9 / 10) train acc: 0.802000; val_acc: 0.345000\n (Iteration 181 / 200) loss: 0.862820\n (Epoch 10 / 10) train acc: 0.788000; val_acc: 0.332000\n \n Solver without batch norm:\n (Iteration 1 / 200) loss: 2.302332\n (Epoch 0 / 10) train acc: 0.129000; val_acc: 0.131000\n (Epoch 1 / 10) train acc: 0.283000; val_acc: 
0.250000\n (Iteration 21 / 200) loss: 2.041970\n (Epoch 2 / 10) train acc: 0.316000; val_acc: 0.277000\n (Iteration 41 / 200) loss: 1.900473\n (Epoch 3 / 10) train acc: 0.373000; val_acc: 0.282000\n (Iteration 61 / 200) loss: 1.713156\n (Epoch 4 / 10) train acc: 0.390000; val_acc: 0.310000\n (Iteration 81 / 200) loss: 1.662209\n (Epoch 5 / 10) train acc: 0.434000; val_acc: 0.300000\n (Iteration 101 / 200) loss: 1.696062\n (Epoch 6 / 10) train acc: 0.536000; val_acc: 0.346000\n (Iteration 121 / 200) loss: 1.550785\n (Epoch 7 / 10) train acc: 0.530000; val_acc: 0.310000\n (Iteration 141 / 200) loss: 1.436316\n (Epoch 8 / 10) train acc: 0.622000; val_acc: 0.342000\n (Iteration 161 / 200) loss: 1.001618\n (Epoch 9 / 10) train acc: 0.663000; val_acc: 0.332000\n (Iteration 181 / 200) loss: 0.922066\n (Epoch 10 / 10) train acc: 0.720000; val_acc: 0.338000\n\n\nRun the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.\n\n\n```\ndef plot_training_history(title, label, baseline, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None):\n \"\"\"utility function for plotting training history\"\"\"\n plt.title(title)\n plt.xlabel(label)\n bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers]\n bl_plot = plot_fn(baseline)\n num_bn = len(bn_plots)\n for i in range(num_bn):\n label='with_norm'\n if labels is not None:\n label += str(labels[i])\n plt.plot(bn_plots[i], bn_marker, label=label)\n label='baseline'\n if labels is not None:\n label += str(labels[0])\n plt.plot(bl_plot, bl_marker, label=label)\n plt.legend(loc='lower center', ncol=num_bn+1) \n\n \nplt.subplot(3, 1, 1)\nplot_training_history('Training loss','Iteration', solver, [bn_solver], \\\n lambda x: x.loss_history, bl_marker='o', bn_marker='o')\nplt.subplot(3, 1, 2)\nplot_training_history('Training accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.train_acc_history, bl_marker='-o', 
bn_marker='-o')\nplt.subplot(3, 1, 3)\nplot_training_history('Validation accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o')\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n# Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\n\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.\n\n\n```\nnp.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers_ws = {}\nsolvers_ws = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers_ws[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers_ws[weight_scale] = solver\n```\n\n Running weight scale 1 / 20\n Running weight scale 2 / 20\n Running weight scale 3 / 20\n Running weight scale 4 / 20\n Running weight scale 5 / 20\n Running weight scale 6 
/ 20\n Running weight scale 7 / 20\n Running weight scale 8 / 20\n Running weight scale 9 / 20\n Running weight scale 10 / 20\n Running weight scale 11 / 20\n Running weight scale 12 / 20\n Running weight scale 13 / 20\n Running weight scale 14 / 20\n Running weight scale 15 / 20\n Running weight scale 16 / 20\n Running weight scale 17 / 20\n Running weight scale 18 / 20\n Running weight scale 19 / 20\n Running weight scale 20 / 20\n\n\n\n```\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers_ws[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers_ws[ws].train_acc_history))\n \n best_val_accs.append(max(solvers_ws[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers_ws[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers_ws[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers_ws[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', 
label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\nplt.gca().set_ylim(1.0, 3.5)\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n## Inline Question 1:\nDescribe the results of this experiment. How does the scale of weight initialization affect models with/without batch normalization differently, and why?\n\n## Answer:\n[FILL THIS IN]\n\n\n# Batch normalization and batch size\nWe will now run a small experiment to study the interaction of batch normalization and batch size.\n\nThe first cell will train 6-layer networks both with and without batch normalization using different batch sizes. The second cell will plot training accuracy and validation set accuracy over time.\n\n\n```\ndef run_batchsize_experiments(normalization_mode):\n np.random.seed(231)\n # Try training a very deep net with batchnorm\n hidden_dims = [100, 100, 100, 100, 100]\n num_train = 1000\n small_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n }\n n_epochs=10\n weight_scale = 2e-2\n batch_sizes = [5,10,50]\n lr = 10**(-3.5)\n solver_bsize = batch_sizes[0]\n\n print('No normalization: batch size = ',solver_bsize)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n solver = Solver(model, small_data,\n num_epochs=n_epochs, batch_size=solver_bsize,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n solver.train()\n \n bn_solvers = []\n for i in range(len(batch_sizes)):\n b_size=batch_sizes[i]\n print('Normalization: batch size = ',b_size)\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=normalization_mode)\n bn_solver = Solver(bn_model, small_data,\n num_epochs=n_epochs, batch_size=b_size,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n bn_solver.train()\n bn_solvers.append(bn_solver)\n \n 
return bn_solvers, solver, batch_sizes\n\nbatch_sizes = [5,10,50]\nbn_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('batchnorm')\n```\n\n No normalization: batch size = 5\n Normalization: batch size = 5\n Normalization: batch size = 10\n Normalization: batch size = 50\n\n\n\n```\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 2:\nDescribe the results of this experiment. What does this imply about the relationship between batch normalization and batch size? Why is this relationship observed?\n\n## Answer:\n[FILL THIS IN]\n\n\n# Layer Normalization\nBatch normalization has proved to be effective in making networks easier to train, but the dependency on batch size makes it less useful in complex networks which have a cap on the input batch size due to hardware limitations. \n\nSeveral alternatives to batch normalization have been proposed to mitigate this problem; one such technique is Layer Normalization [2]. Instead of normalizing over the batch, we normalize over the features. In other words, when using Layer Normalization, each feature vector corresponding to a single datapoint is normalized based on the sum of all terms within that feature vector.\n\n[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. \"Layer Normalization.\" stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)\n\n## Inline Question 3:\nWhich of these data preprocessing steps is analogous to batch normalization, and which is analogous to layer normalization?\n\n1. 
Scaling each image in the dataset, so that the RGB channels for each row of pixels within an image sums up to 1.\n2. Scaling each image in the dataset, so that the RGB channels for all pixels within an image sums up to 1. \n3. Subtracting the mean image of the dataset from each image in the dataset.\n4. Setting all RGB values to either 0 or 1 depending on a given threshold.\n\n## Answer:\n[FILL THIS IN]\n\n\n# Layer Normalization: Implementation\n\nNow you'll implement layer normalization. This step should be relatively straightforward, as conceptually the implementation is almost identical to that of batch normalization. One significant difference though is that for layer normalization, we do not keep track of the moving moments, and the testing phase is identical to the training phase, where the mean and variance are directly calculated per datapoint.\n\nHere's what you need to do:\n\n* In `cs231n/layers.py`, implement the forward pass for layer normalization in the function `layernorm_forward`. \n\nRun the cell below to check your results.\n* In `cs231n/layers.py`, implement the backward pass for layer normalization in the function `layernorm_backward`. \n\nRun the second cell below to check your results.\n* Modify `cs231n/classifiers/fc_net.py` to add layer normalization to the `FullyConnectedNet`. When the `normalization` flag is set to `\"layernorm\"` in the constructor, you should insert a layer normalization layer before each ReLU nonlinearity. 
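Before writing `layernorm_forward`, it can help to see the core computation in isolation. The following NumPy sketch normalizes each datapoint (row) over its features. It is an illustration only, not the assignment solution: the real `layernorm_forward` must also build and return a cache for the backward pass, and the function name here is ours.

```python
import numpy as np

# Illustrative sketch (not the assignment solution): layer normalization
# standardizes each datapoint (row) using only that row's own statistics.
def layernorm_sketch(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=1, keepdims=True)      # one mean per datapoint
    var = x.var(axis=1, keepdims=True)      # one variance per datapoint
    x_hat = (x - mu) / np.sqrt(var + eps)   # rows now have mean 0, std ~1
    return gamma * x_hat + beta             # learned scale and shift
```

Note that the reductions run along `axis=1` (per example), whereas batch normalization reduces along `axis=0` (per feature, across the batch).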
\n\nRun the third cell below to run the batch size experiment on layer normalization.\n\n\n```\n# Check the training-time forward pass by checking means and variances\n# of features both before and after layer normalization \n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 =4, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before layer normalization:')\nprint_mean_std(a,axis=1)\n\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n# Means should be close to zero and stds close to one\nprint('After layer normalization (gamma=1, beta=0)')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n\ngamma = np.asarray([3.0,3.0,3.0])\nbeta = np.asarray([5.0,5.0,5.0])\n# Now means should be close to beta and stds close to gamma\nprint('After layer normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n```\n\n Before layer normalization:\n means: [-59.06673243 -47.60782686 -43.31137368 -26.40991744]\n stds: [10.07429373 28.39478981 35.28360729 4.01831507]\n \n After layer normalization (gamma=1, beta=0)\n means: [ 4.81096644e-16 -7.40148683e-17 2.22044605e-16 -5.92118946e-16]\n stds: [0.99999995 0.99999999 1. 0.99999969]\n \n After layer normalization (gamma= [3. 3. 3.] , beta= [5. 5. 5.] )\n means: [5. 5. 5. 
5.]\n stds: [2.99999985 2.99999998 2.99999999 2.99999907]\n \n\n\n\n```\n# Gradient check the layernorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nln_param = {}\nfx = lambda x: layernorm_forward(x, gamma, beta, ln_param)[0]\nfg = lambda a: layernorm_forward(x, a, beta, ln_param)[0]\nfb = lambda b: layernorm_forward(x, gamma, b, ln_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = layernorm_forward(x, gamma, beta, ln_param)\ndx, dgamma, dbeta = layernorm_backward(dout, cache)\n\n# You should expect to see relative errors between 1e-12 and 1e-8\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.4336158494902849e-09\n dgamma error: 4.519489546032799e-12\n dbeta error: 2.276445013433725e-12\n\n\n# Layer Normalization and batch size\n\nWe will now run the previous batch size experiment with layer normalization instead of batch normalization. 
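It is worth pausing over why the result should differ: layer normalization's statistics are computed independently for each example, so they cannot depend on which (or how many) other examples share the batch. A quick NumPy check of this property, using plain row-wise standardization (illustrative only):

```python
import numpy as np

# Per-row statistics are unaffected by batch composition: normalizing a
# batch of 8 rows and a batch containing only its first 4 rows gives
# identical results for those 4 rows.
np.random.seed(0)
x = np.random.randn(8, 6)

def rownorm(a):
    return (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)

assert np.allclose(rownorm(x)[:4], rownorm(x[:4]))
```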
Compared to the previous experiment, you should see a markedly smaller influence of batch size on the training history!\n\n\n```\nln_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('layernorm')\n\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 4:\nWhen is layer normalization likely to not work well, and why?\n\n1. Using it in a very deep network\n2. Having a very small dimension of features\n3. Having a high regularization term\n\n\n## Answer:\n[FILL THIS IN]\n\n", "meta": {"hexsha": "cf4107e87edbd3c4dc9c7cc4eee38c0fdb902c03", "size": 442814, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "assignment2/BatchNormalization.ipynb", "max_stars_repo_name": "Abhijeet8901/CS231n", "max_stars_repo_head_hexsha": "c8e715028b453899d5069cdb34faf3fc2959c270", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assignment2/BatchNormalization.ipynb", "max_issues_repo_name": "Abhijeet8901/CS231n", "max_issues_repo_head_hexsha": "c8e715028b453899d5069cdb34faf3fc2959c270", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assignment2/BatchNormalization.ipynb", "max_forks_repo_name": "Abhijeet8901/CS231n", "max_forks_repo_head_hexsha": "c8e715028b453899d5069cdb34faf3fc2959c270", "max_forks_repo_licenses": ["MIT"], 
"max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 442814.0, "max_line_length": 442814, "alphanum_fraction": 0.9384640052, "converted": true, "num_tokens": 9252, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4225046348141882, "lm_q2_score": 0.23934934732271168, "lm_q1q2_score": 0.1011262085835966}} {"text": "+ This notebook is part of lecture 14 *Orthogonal vectors and subspaces* in the OCW MIT course 18.06 by Prof Gilbert Strang [1]\n+ Created by me, Dr Juan H Klopper\n + Head of Acute Care Surgery\n + Groote Schuur Hospital\n + University Cape Town\n + Email me with your thoughts, comments, suggestions and corrections \n
Linear Algebra OCW MIT18.06 IPython notebook [2] study notes by Dr Juan H Klopper is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.\n\n+ [1] OCW MIT 18.06\n+ [2] Fernando P\u00e9rez, Brian E. Granger, IPython: A System for Interactive Scientific Computing, Computing in Science and Engineering, vol. 9, no. 3, pp. 21-29, May/June 2007, doi:10.1109/MCSE.2007.53. URL: http://ipython.org\n\n\n```python\nfrom IPython.core.display import HTML, Image\n```\n\n\n```python\n# css_file = 'style.css'\n# HTML(open(css_file, 'r').read())\n```\n\n\n```python\nfrom sympy import init_printing, symbols, Matrix\nfrom warnings import filterwarnings\n```\n\n\n```python\ninit_printing(use_latex='mathjax')\nfilterwarnings('ignore')\n```\n\n# Orthogonal vectors and subspaces\n# Rowspace orthogonal to nullspace and columnspace to nullspace of AT\n# N(ATA) = N(A)\n\n## Orthogonal vectors\n\n* Two vectors are orthogonal if their dot product is zero\n* If they are written as column vectors **x** and **y**, their dot product is **x**T**y**\n * For orthogonal (perpendicular) vectors **x**T**y** = 0\n* From the Pythagorean theorem they are orthogonal if\n$$ { \\left\\| \\overline { x } \\right\\| }^{ 2 }+{ \\left\\| \\overline { y } \\right\\| }^{ 2 }={ \\left\\| \\overline { x } +\\overline { y } \\right\\| }^{ 2 }\\\\ { \\left\\| \\overline { x } \\right\\| }=\\sqrt { { x }_{ 1 }^{ 2 }+{ x }_{ 2 }^{ 2 }+\\dots +{ x }_{ n }^{ 2 } } $$\n\n* The length squared of a (column) vector **x** can be calculated by **x**T**x**\n* This achieves exactly the same as the sum of the squares of each element in the vector\n$$ { x }_{ 1 }^{ 2 }+{ x }_{ 2 }^{ 2 }+\\dots +{ x }_{ n }^{ 2 }$$\n\n* Following from the Pythagorean theorem we have\n$$ { \\left\\| \\overline { x } \\right\\| }^{ 2 }+{ \\left\\| \\overline { y } \\right\\| }^{ 2 }={ \\left\\| \\overline { x } +\\overline { y } \\right\\| }^{ 2 }\\\\ { \\underline { x } }^{ T }\\underline { x } +{ \\underline { y } 
}^{ T }\\underline { y } ={ \\left( \\underline { x } +\\underline { y } \\right) }^{ T }\\left( \\underline { x } +\\underline { y } \\right) \\\\ { \\underline { x } }^{ T }\\underline { x } +{ \\underline { y } }^{ T }\\underline { y } ={ \\underline { x } }^{ T }\\underline { x } +{ \\underline { x } }^{ T }\\underline { y } +{ \\underline { y } }^{ T }\\underline { x } +{ \\underline { y } }^{ T }\\underline { y } \\\\ \\because \\quad { \\underline { x } }^{ T }\\underline { y } ={ \\underline { y } }^{ T }\\underline { x } \\\\ { \\underline { x } }^{ T }\\underline { x } +{ \\underline { y } }^{ T }\\underline { y } ={ \\underline { x } }^{ T }\\underline { x } +2{ \\underline { x } }^{ T }\\underline { y } +{ \\underline { y } }^{ T }\\underline { y } \\\\ 2{ \\underline { x } }^{ T }\\underline { y } =0\\\\ { \\underline { x } }^{ T }\\underline { y } =0 $$\n* This states that the dot product of orthogonal vectors equal zero\n\n* The zero vector is orthogonal to all other similar dimensional vectors\n\n## Orthogonality of subspaces\n\n* Consider two subspaces *S* and *T*\n* To be orthogonal every vector in *S* must be orthogonal to any vector in *T*\n\n* Consider the *XY* and *YZ* planes in 3-space\n* They are not orthogonal, since many combinations of vectors (one in each plane) are not orthogonal\n* Vectors in the intersection, even though, one each from each plane can indeed be the same vector\n* We can say that any planes that intersect cannot be orthogonal to each other\n\n## Orthogonality of the rowspace and the nullspace\n\n* The nullspace contains vectors **x** such that A**x** = **0**\n* Now remembering that **x**T**y** = 0 for orthogonal column vectors and considering each row in A as a transposed column vector and **x** (indeed a column vector) and their product being zero meaning that they are orthogonal, we have:\n$$ \\begin{bmatrix} { { a }_{ 11 } } & { a }_{ 12 } & \\dots & { a }_{ 1n } \\\\ { a }_{ 21 } & { a }_{ 22 } & \\dots & { a }_{ 2n 
} \\\\ \\vdots & \\vdots & \\vdots & \\vdots \\\\ { a }_{ m1 } & { a }_{ m2 } & \\dots & { a }_{ mn } \\end{bmatrix}\\begin{bmatrix} { x }_{ 1 } \\\\ { x }_{ 2 } \\\\ \\vdots \\\\ { x }_{ n } \\end{bmatrix}=\\begin{bmatrix} 0 \\\\ 0 \\\\ \\vdots \\\\ 0 \\end{bmatrix}\\\\ \\begin{bmatrix} { a }_{ 11 } & { a }_{ 12 } & \\dots & { a }_{ 1n } \\end{bmatrix}\\begin{bmatrix} { x }_{ 1 } \\\\ { x }_{ 2 } \\\\ \\vdots \\\\ { x }_{ n } \\end{bmatrix}=0\\\\ \\dots $$\n\n* The rows (row vectors) in A are NOT the only vectors in the rowspace, since we also need to show that ALL linear combinations of them are also orthogonal to **x**\n* This is easy to see by the structure above\n\n## Orthogonality of the columnspace and the nullspace of AT\n\n* The proof is the same as above\n\n* The orthogonality of the rowspace and the nullspace is creating two orthogonal subspaces in ℝn\n* The orthogonality of the columnspace and the nullspace of AT is creating two orthogonal subspaces in ℝm\n\n* Note how the dimension add up to the degree of the space ℝ\n * The rowspace (a fundamental subspace in ℝn) is of dimension *r*\n * The dimension of the nullspace (a fundamental subspace in ℝn) is of dimension *n* - *r*\n * Addition of these dimensions gives us the dimension of the total space *n* as in ℝn\n * AND\n * The columnspace is of dimension *r* and the nullspace of AT is of dimension *m* - *r*, which adds to *m* as in ℝm\n\n* This means that two lines that may be orthogonal in ℝ3 cannot be two orthogonal subspaces of ℝ3 since the addition of the dimensions of these two subspaces (lines) is not 3 (as in ℝ3)\n\n* We call this complementarity, i.e. the nullspace and rowspace are orthogonal *complements* in ℝn\n\n## ATA\n\n* We know that\n * The result is square\n * The result is symmetric, i.e. 
(*n*×*m*)(*m*×*n*)=*n*×*n*\n * (ATA)T = ATATT = ATA\n\n* When A**x** = **b** is not solvable we use ATA**x** = AT**b**\n* **x** in the first instance did not have a solution, but after multiplying both sides with AT, we hope that the second **x** has a solution, now called\n$$ {A}^{T}{A}\\hat{x} = {A}^{T}{b} $$\n\n\n* Consider the matrix below with *m* = 3 equations in *n* = 2 unknowns\n* A solution exists only if **b** is a linear combination of the columns of A, i.e. **b** lies in the columnspace of A\n\n\n```python\nA = Matrix([[1, 1], [1, 2], [1, 5]])\nA\n```\n\n$$ {x}_{1} \\begin{bmatrix} 1 \\\\ 1 \\\\ 1 \\end{bmatrix} + {x}_{2} \\begin{bmatrix} 1 \\\\ 2 \\\\ 5 \\end{bmatrix} = \\begin{bmatrix} {b}_{1} \\\\ {b}_{2} \\\\ {b}_{3} \\end{bmatrix} $$\n\n\n```python\nA.transpose() * A\n```\n\n* Note how the nullspace of ATA is equal to the nullspace of A\n\n\n```python\n(A.transpose() * A).nullspace() == A.nullspace()\n```\n\n* The same goes for the rank\n\n\n```python\nA.rref(), (A.transpose() * A).rref()\n```\n\n* ATA is not always invertible\n* In fact it is only invertible if the nullspace of A contains only the zero vector, i.e. A has independent columns\n\n\n```python\n\n```\n", "meta": {"hexsha": "a071bbec169722bffdb07fbfbd0b1ed36413e172", "size": 11981, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_math/MIT_OCW_18_06_Linear_algebra/II_01_Orthogonality_of_vectors_and_subspaces.ipynb", "max_stars_repo_name": "aixpact/data-science", "max_stars_repo_head_hexsha": "f04a54595fbc2d797918d450b979fd4c2eabac15", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-07-22T23:12:39.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-25T02:30:48.000Z", "max_issues_repo_path": "_math/MIT_OCW_18_06_Linear_algebra/II_01_Orthogonality_of_vectors_and_subspaces.ipynb", "max_issues_repo_name": "aixpact/data-science", "max_issues_repo_head_hexsha": "f04a54595fbc2d797918d450b979fd4c2eabac15", "max_issues_repo_licenses": ["MIT"], 
"max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_math/MIT_OCW_18_06_Linear_algebra/II_01_Orthogonality_of_vectors_and_subspaces.ipynb", "max_forks_repo_name": "aixpact/data-science", "max_forks_repo_head_hexsha": "f04a54595fbc2d797918d450b979fd4c2eabac15", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.9783950617, "max_line_length": 1149, "alphanum_fraction": 0.5446123028, "converted": true, "num_tokens": 2604, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.476579651063676, "lm_q2_score": 0.21206879439743004, "lm_q1q2_score": 0.10106767203542165}} {"text": "# Homework and bake-off: word-level entailment with neural networks\n\n\n```python\n__author__ = \"Christopher Potts\"\n__version__ = \"CS224u, Stanford, Fall 2020\"\n```\n\n## Contents\n\n1. [Overview](#Overview)\n1. [Set-up](#Set-up)\n1. [Data](#Data)\n1. [Baseline](#Baseline)\n 1. [Representing words: vector_func](#Representing-words:-vector_func)\n 1. [Combining words into inputs: vector_combo_func](#Combining-words-into-inputs:-vector_combo_func)\n 1. [Classifier model](#Classifier-model)\n 1. [Baseline results](#Baseline-results)\n1. [Homework questions](#Homework-questions)\n 1. [Hypothesis-only baseline [2 points]](#Hypothesis-only-baseline-[2-points])\n 1. [Alternatives to concatenation [2 points]](#Alternatives-to-concatenation-[2-points])\n 1. [A deeper network [2 points]](#A-deeper-network-[2-points])\n 1. [Your original system [3 points]](#Your-original-system-[3-points])\n1. [Bake-off [1 point]](#Bake-off-[1-point])\n\n## Overview\n\nThe general problem is word-level natural language inference. 
Training examples are pairs of words $(w_{L}, w_{R}), y$ with $y = 1$ if $w_{L}$ entails $w_{R}$, otherwise $0$.\n\nThe homework questions below ask you to define baseline models for this and develop your own system for entry in the bake-off, which will take place on a held-out test-set distributed at the start of the bake-off. (Thus, all the data you have available for development is available for training your final system before the bake-off begins.)\n\n## Set-up\n\nSee [the first notebook in this unit](nli_01_task_and_data.ipynb) for set-up instructions.\n\n\n```python\nfrom collections import defaultdict\nimport json\nimport numpy as np\nimport os\nimport pandas as pd\nfrom torch_shallow_neural_classifier import TorchShallowNeuralClassifier\nimport nli\nimport warnings\nimport utils\n```\n\n\n```python\ndir(nli)\n```\n\n\n\n\n ['ANLIDevReader',\n 'ANLIReader',\n 'ANLITrainReader',\n 'CONDITION_NAMES',\n 'DictVectorizer',\n 'MultiNLIMatchedDevReader',\n 'MultiNLIMismatchedDevReader',\n 'MultiNLITrainReader',\n 'NLIExample',\n 'NLIReader',\n 'SNLIDevReader',\n 'SNLITrainReader',\n 'Tree',\n '__author__',\n '__builtins__',\n '__cached__',\n '__doc__',\n '__file__',\n '__loader__',\n '__name__',\n '__package__',\n '__spec__',\n '__version__',\n 'accuracy_score',\n 'bake_off_evaluation',\n 'build_dataset',\n 'classification_report',\n 'defaultdict',\n 'experiment',\n 'f1_score',\n 'get_pair_overlap_size',\n 'get_vocab_overlap_size',\n 'json',\n 'np',\n 'os',\n 'random',\n 'read_annotated_subset',\n 'str2tree',\n 'train_test_split',\n 'utils',\n 'word_entail_featurize',\n 'wordentail_experiment',\n 'wordentail_experiment_cv']\n\n\n\n\n```python\nDATA_HOME = 'data'\n\nNLIDATA_HOME = os.path.join(DATA_HOME, 'nlidata')\n\nwordentail_filename = os.path.join(\n NLIDATA_HOME, 'nli_wordentail_bakeoff_data.json')\n\nGLOVE_HOME = os.path.join(DATA_HOME, 'glove.6B')\n```\n\n## Data\n\nI've processed the data into a train/dev split that is designed to put some pressure on our 
models to actually learn these semantic relations, as opposed to exploiting regularities in the sample. \n\nThe defining feature of the dataset is that the `train` and `dev` __vocabularies__ are disjoint. That is, if a word `w` appears in a training pair, it does not occur in any test pair. It follows from this that there are also no word-pairs shared between train and dev, as you would expect. This should require your models to learn abstract relationships, as opposed to memorizing incidental properties of individual words in the dataset.\n\n\n```python\nwith open(wordentail_filename) as f:\n wordentail_data = json.load(f)\n```\n\nThe keys are the splits plus a list giving the vocabulary for the entire dataset:\n\n\n```python\nwordentail_data.keys()\n```\n\n\n\n\n dict_keys(['dev', 'train', 'vocab'])\n\n\n\n\n```python\nwordentail_data['train'][: 5]\n```\n\n\n\n\n [[['abode', 'house'], 1],\n [['abortion', 'anaemia'], 0],\n [['abortion', 'aneurysm'], 0],\n [['abortion', 'blindness'], 0],\n [['abortion', 'deafness'], 0]]\n\n\n\n\n```python\nnli.get_vocab_overlap_size(wordentail_data)\n```\n\n\n\n\n 0\n\n\n\nBecause no words are shared between `train` and `dev`, no pairs are either:\n\n\n```python\nnli.get_pair_overlap_size(wordentail_data)\n```\n\n\n\n\n 0\n\n\n\nHere is the label distribution:\n\n\n```python\npd.DataFrame(wordentail_data['train'])[1].value_counts()\n```\n\n\n\n\n 0 7000\n 1 1283\n Name: 1, dtype: int64\n\n\n\nThis is a challenging label distribution \u2013 there are more than 5 times as more non-entailment cases as entailment cases.\n\n## Baseline\n\nEven in deep learning, __feature representation is vital and requires care!__ For our task, feature representation has two parts: representing the individual words and combining those representations into a single network input.\n\n### Representing words: vector_func\n\nLet's consider two baseline word representations methods:\n\n1. Random vectors (as returned by `utils.randvec`).\n1. 
50-dimensional GloVe representations.\n\n\n```python\ndef randvec(w, n=50, lower=-1.0, upper=1.0):\n \"\"\"Returns a random vector of length `n`. `w` is ignored.\"\"\"\n return utils.randvec(n=n, lower=lower, upper=upper)\n```\n\n\n```python\ndef load_glove50():\n glove_src = os.path.join(GLOVE_HOME, 'glove.6B.50d.txt')\n # Creates a dict mapping strings (words) to GloVe vectors:\n GLOVE = utils.glove2dict(glove_src)\n return GLOVE\n\nGLOVE = load_glove50()\n\ndef glove_vec(w):\n \"\"\"Return `w`'s GloVe representation if available, else return\n a random vector.\"\"\"\n return GLOVE.get(w, randvec(w, n=50))\n```\n\n### Combining words into inputs: vector_combo_func\n\nHere we decide how to combine the two word vectors into a single representation. In more detail, where `u` is a vector representation of the left word and `v` is a vector representation of the right word, we need a function `vector_combo_func` such that `vector_combo_func(u, v)` returns a new input vector `z` of dimension `m`. A simple example is concatenation:\n\n\n```python\ndef vec_concatenate(u, v):\n \"\"\"Concatenate np.array instances `u` and `v` into a new np.array\"\"\"\n return np.concatenate((u, v))\n```\n\n`vector_combo_func` could instead be vector average, vector difference, etc. 
(even combinations of those) \u2013 there's lots of space for experimentation here; [homework question 2](#Alternatives-to-concatenation-[2-points]) below pushes you to do some exploration.\n\n### Classifier model\n\nFor a baseline model, I chose `TorchShallowNeuralClassifier`:\n\n\n```python\nnet = TorchShallowNeuralClassifier(early_stopping=True)\n```\n\n### Baseline results\n\nThe following puts the above pieces together, using `vector_func=glove_vec`, since `vector_func=randvec` seems so hopelessly misguided for our problem!\n\n\n```python\nbaseline_experiment = nli.wordentail_experiment(\n train_data=wordentail_data['train'],\n assess_data=wordentail_data['dev'],\n model=net,\n vector_func=glove_vec,\n vector_combo_func=vec_concatenate)\n```\n\n Stopping after epoch 44. Validation score did not improve by tol=1e-05 for more than 10 epochs. Final error is 2.4090842604637146\n\n precision recall f1-score support\n \n 0 0.865 0.945 0.903 1732\n 1 0.448 0.234 0.307 334\n \n accuracy 0.830 2066\n macro avg 0.656 0.589 0.605 2066\n weighted avg 0.797 0.830 0.807 2066\n \n\n\n## Homework questions\n\nPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.)\n\n### Hypothesis-only baseline [2 points]\n\nDuring our discussion of SNLI and MultiNLI, we noted that a number of research teams have shown that hypothesis-only baselines for NLI tasks can be remarkably robust. This question asks you to explore briefly how this baseline affects our task.\n\nFor this problem, submit two functions:\n\n1. A `vector_combo_func` function called `hypothesis_only` that simply throws away the premise, using the unmodified hypothesis (second) vector as its representation of the example.\n\n1. A function called `run_hypothesis_only_evaluation` that does the following:\n 1. 
Loops over the two `vector_combo_func` values `vec_concatenate` and `hypothesis_only`, calling `nli.wordentail_experiment` to train on the 'train' portion and assess on the 'dev' portion, with `glove_vec` as the `vector_func`. So that the results are consistent, use an `sklearn.linear_model.LogisticRegression` with default parameters as the model.\n 1. Returns a `dict` mapping `function_name` strings to the 'macro-F1' score for that pair, as returned by the call to `nli.wordentail_experiment`. (Tip: you can get the `str` name of, e.g., `hypothesis_only` with `hypothesis_only.__name__`.)\n \nThe functions `test_hypothesis_only` and `test_run_hypothesis_only_evaluation` will help ensure that your functions have the desired logic.\n\n\n```python\nfrom sklearn.linear_model import LogisticRegression\ndef hypothesis_only(u, v):\n pass\n ##### YOUR CODE HERE\n return v\n\n\ndef run_hypothesis_only_evaluation():\n pass\n ##### YOUR CODE HERE\n res = dict()\n mod = LogisticRegression()\n for vector_combo_func in [vec_concatenate, hypothesis_only]:\n print(vector_combo_func.__name__)\n exp = nli.wordentail_experiment(\n train_data=wordentail_data['train'],\n assess_data=wordentail_data['dev'],\n model=mod,\n vector_func=glove_vec,\n vector_combo_func=vector_combo_func\n )\n res[vector_combo_func.__name__] = exp[\"macro-F1\"]\n return res\n\n```\n\n\n```python\ndef test_hypothesis_only(hypothesis_only):\n v = hypothesis_only(1, 2)\n assert v == 2\n```\n\n\n```python\ntest_hypothesis_only(hypothesis_only)\n```\n\n\n```python\ndef test_run_hypothesis_only_evaluation(run_hypothesis_only_evaluation):\n results = run_hypothesis_only_evaluation()\n print(results)\n assert all(x in results for x in ('hypothesis_only', 'vec_concatenate')), \\\n (\"The return value of `run_hypothesis_only_evaluation` does not \"\n \"have the intended kind of keys.\")\n assert isinstance(results['vec_concatenate'], float), \\\n (\"The values of the `run_hypothesis_only_evaluation` result \"\n \"should 
be floats.\")\n```\n\n\n```python\ntest_run_hypothesis_only_evaluation(run_hypothesis_only_evaluation)\n```\n\n vec_concatenate\n precision recall f1-score support\n \n 0 0.864 0.952 0.906 1732\n 1 0.475 0.225 0.305 334\n \n accuracy 0.834 2066\n macro avg 0.669 0.588 0.605 2066\n weighted avg 0.801 0.834 0.809 2066\n \n hypothesis_only\n precision recall f1-score support\n \n 0 0.855 0.973 0.910 1732\n 1 0.505 0.144 0.224 334\n \n accuracy 0.839 2066\n macro avg 0.680 0.558 0.567 2066\n weighted avg 0.798 0.839 0.799 2066\n \n {'vec_concatenate': 0.605461002412222, 'hypothesis_only': 0.566924568814928}\n\n\n### Alternatives to concatenation [2 points]\n\nWe've so far just used vector concatenation to represent the premise and hypothesis words. This question asks you to explore two simple alternative:\n\n1. Write a function `vec_diff` that, for a given pair of vector inputs `u` and `v`, returns the element-wise difference between `u` and `v`.\n\n1. Write a function `vec_max` that, for a given pair of vector inputs `u` and `v`, returns the element-wise max values between `u` and `v`.\n\nYou needn't include your uses of `nli.wordentail_experiment` with these functions, but we assume you'll be curious to see how they do!\n\n\n```python\ndef vec_diff(u, v):\n pass\n ##### YOUR CODE HERE\n return u - v\n\n\ndef vec_max(u, v):\n pass\n ##### YOUR CODE HERE\n return np.maximum(u, v)\n\n\n```\n\n\n```python\ndef test_vec_diff(vec_diff):\n u = np.array([10.2, 8.1])\n v = np.array([1.2, -7.1])\n result = vec_diff(u, v)\n expected = np.array([9.0, 15.2])\n assert np.array_equal(result, expected), \\\n \"Expected {}; got {}\".format(expected, result)\n```\n\n\n```python\ntest_vec_diff(vec_diff)\n```\n\n\n```python\ndef test_vec_max(vec_max):\n u = np.array([1.2, 8.1])\n v = np.array([10.2, -7.1])\n result = vec_max(u, v)\n expected = np.array([10.2, 8.1])\n assert np.array_equal(result, expected), \\\n \"Expected {}; got {}\".format(expected, 
result)\n```\n\n\n```python\ntest_vec_max(vec_max)\n```\n\n\n```python\ndef run_hypothesis_only_evaluation():\n pass\n ##### YOUR CODE HERE\n res = dict()\n mod = LogisticRegression()\n for vector_combo_func in [vec_concatenate, vec_diff, vec_max]:\n print(vector_combo_func.__name__)\n exp = nli.wordentail_experiment(\n train_data=wordentail_data['train'],\n assess_data=wordentail_data['dev'],\n model=mod,\n vector_func=glove_vec,\n vector_combo_func=vector_combo_func\n )\n res[vector_combo_func.__name__] = exp[\"macro-F1\"]\n return res\n\nresults = run_hypothesis_only_evaluation()\nprint(results)\n```\n\n vec_concatenate\n precision recall f1-score support\n \n 0 0.863 0.953 0.906 1732\n 1 0.474 0.219 0.299 334\n \n accuracy 0.834 2066\n macro avg 0.669 0.586 0.603 2066\n weighted avg 0.801 0.834 0.808 2066\n \n vec_diff\n precision recall f1-score support\n \n 0 0.846 0.991 0.913 1732\n 1 0.579 0.066 0.118 334\n \n accuracy 0.841 2066\n macro avg 0.713 0.528 0.516 2066\n weighted avg 0.803 0.841 0.784 2066\n \n vec_max\n precision recall f1-score support\n \n 0 0.853 0.955 0.901 1732\n 1 0.391 0.150 0.216 334\n \n accuracy 0.825 2066\n macro avg 0.622 0.552 0.559 2066\n weighted avg 0.779 0.825 0.791 2066\n \n {'vec_concatenate': 0.6026637094887621, 'vec_diff': 0.5155227636696409, 'vec_max': 0.5589063071351901}\n\n\n### A deeper network [2 points]\n\nIt is very easy to subclass `TorchShallowNeuralClassifier` if all you want to do is change the network graph: all you have to do is write a new `build_graph`. 
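For orientation, here is the kind of object a `build_graph` override returns: an `nn.Module` mapping inputs to class scores. This is a hypothetical sketch (a plain ReLU graph, not the solution to this question, and the dimensions are made up); a real subclass would build the module from `self.input_dim`, `self.hidden_dim`, and `self.n_classes_`.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the sort of nn.Module build_graph returns.
# No activation follows the final Linear layer: the classifier applies
# softmax scaling inside its loss function.
def relu_graph(input_dim, hidden_dim, n_classes):
    return nn.Sequential(
        nn.Linear(input_dim, hidden_dim),
        nn.ReLU(),
        nn.Linear(hidden_dim, n_classes))

graph = relu_graph(10, 50, 5)
scores = graph(torch.zeros(2, 10))  # class scores for 2 examples
```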
If your graph has new arguments that the user might want to set, then you should also redefine `__init__` so that these values are accepted and set as attributes.\n\nFor this question, please subclass `TorchShallowNeuralClassifier` so that it defines the following graph:\n\n$$\\begin{align}\nh_{1} &= xW_{1} + b_{1} \\\\\nr_{1} &= \\textbf{Bernoulli}(1 - \\textbf{dropout_prob}, n) \\\\\nd_{1} &= r_1 * h_{1} \\\\\nh_{2} &= f(d_{1}) \\\\\nh_{3} &= h_{2}W_{2} + b_{2}\n\\end{align}$$\n\nHere, $r_{1}$ and $d_{1}$ define a dropout layer: $r_{1}$ is a random binary vector of dimension $n$, where the probability of a value being $1$ is given by $1 - \\textbf{dropout_prob}$. $r_{1}$ is multiplied element-wise by our first hidden representation, thereby zeroing out some of the values. The result is fed to the user's activation function $f$, and the result of that is fed through another linear layer to produce $h_{3}$. (Inside `TorchShallowNeuralClassifier`, $h_{3}$ is the basis for a softmax classifier; no activation function is applied to it because the softmax scaling is handled internally by the loss function.)\n\nFor your implementation, please use `nn.Sequential`, `nn.Linear`, and `nn.Dropout` to define the required layers.\n\nFor comparison, using this notation, `TorchShallowNeuralClassifier` defines the following graph:\n\n$$\\begin{align}\nh_{1} &= xW_{1} + b_{1} \\\\\nh_{2} &= f(h_{1}) \\\\\nh_{3} &= h_{2}W_{2} + b_{2}\n\\end{align}$$\n\nThe following code starts this sub-class for you, so that you can concentrate on `build_graph`. Be sure to make use of `self.dropout_prob`.\n\nFor this problem, submit just your completed `TorchDeepNeuralClassifier`. 
You needn't evaluate it, though we assume you will be keen to do that!\n\nYou can use `test_TorchDeepNeuralClassifier` to ensure that your network has the intended structure.\n\n\n```python\nimport torch.nn as nn\n\nclass TorchDeepNeuralClassifier(TorchShallowNeuralClassifier):\n def __init__(self, dropout_prob=0.7, **kwargs):\n self.dropout_prob = dropout_prob\n super().__init__(**kwargs)\n\n def build_graph(self):\n \"\"\"Complete this method!\n\n Returns\n -------\n an `nn.Module` instance, which can be a free-standing class you\n write yourself, as in `torch_rnn_classifier`, or the outpiut of\n `nn.Sequential`, as in `torch_shallow_neural_classifier`.\n\n \"\"\"\n pass\n ##### YOUR CODE HERE\n return nn.Sequential(\n nn.Linear(self.input_dim, self.hidden_dim),\n nn.Dropout(p=self.dropout_prob),\n self.hidden_activation,\n nn.Linear(self.hidden_dim, self.n_classes_))\n\n\n```\n\n\n```python\ndef test_TorchDeepNeuralClassifier(TorchDeepNeuralClassifier):\n dropout_prob = 0.55\n assert hasattr(TorchDeepNeuralClassifier(), \"dropout_prob\"), \\\n \"TorchDeepNeuralClassifier must have an attribute `dropout_prob`.\"\n try:\n inst = TorchDeepNeuralClassifier(dropout_prob=dropout_prob)\n except TypeError:\n raise TypeError(\"TorchDeepNeuralClassifier must allow the user \"\n \"to set `dropout_prob` on initialization\")\n inst.input_dim = 10\n inst.n_classes_ = 5\n graph = inst.build_graph()\n print(graph)\n assert len(graph) == 4, \\\n \"The graph should have 4 layers; yours has {}\".format(len(graph))\n expected = {\n 0: 'Linear',\n 1: 'Dropout',\n 2: 'Tanh',\n 3: 'Linear'}\n for i, label in expected.items():\n name = graph[i].__class__.__name__\n assert label in name, \\\n (\"The {} layer of the graph should be a {} layer; \"\n \"yours is {}\".format(i, label, name))\n assert graph[1].p == dropout_prob, \\\n (\"The user's value for `dropout_prob` should be the value of \"\n \"`p` for the Dropout 
layer.\")\n```\n\n\n```python\ntest_TorchDeepNeuralClassifier(TorchDeepNeuralClassifier)\n```\n\n Sequential(\n (0): Linear(in_features=10, out_features=50, bias=True)\n (1): Dropout(p=0.55, inplace=False)\n (2): Tanh()\n (3): Linear(in_features=50, out_features=5, bias=True)\n )\n\n\n### Your original system [3 points]\n\nThis is a simple dataset, but its \"word-disjoint\" nature ensures that it's a challenging one, and there are lots of modeling strategies one might adopt. \n\nYou are free to do whatever you like. We require only that your system differ in some way from those defined in the preceding questions. They don't have to be completely different, though. For example, you might want to stick with the model but represent examples differently, or the reverse.\n\nYou are free to use different pretrained word vectors and the like.\n\nPlease embed your code in this notebook so that we can rerun it.\n\nIn the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.\n\n\n```python\n# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:\n# 1) Textual description of your system.\n# 2) The code for your original system.\n# 3) The score achieved by your system in place of MY_NUMBER.\n# With no other changes to that line.\n# You should report your score as a decimal value <=1.0\n# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS\n\n# NOTE: MODULES, CODE AND DATASETS REQUIRED FOR YOUR ORIGINAL SYSTEM\n# SHOULD BE ADDED BELOW THE 'IS_GRADESCOPE_ENV' CHECK CONDITION. 
DOING\n# SO ABOVE THE CHECK MAY CAUSE THE AUTOGRADER TO FAIL.\n\n# START COMMENT: Enter your system description in this cell.\n# My peak score was: MY_NUMBER\nif 'IS_GRADESCOPE_ENV' not in os.environ:\n pass\n\n# STOP COMMENT: Please do not remove this comment.\n```\n\n\n```python\ndef load_glove300():\n glove_src = os.path.join(GLOVE_HOME, 'glove.6B.300d.txt')\n # Creates a dict mapping strings (words) to GloVe vectors:\n GLOVE = utils.glove2dict(glove_src)\n return GLOVE\n\nGLOVE = load_glove300()\n\ndef glove_vec_300(w):\n \"\"\"Return `w`'s GloVe representation if available, else return\n a random vector.\"\"\"\n return GLOVE.get(w, randvec(w, n=300))\n```\n\n\n```python\ndef fit_softmax_with_hyperparameter_search(X, y):\n \"\"\"\n A MaxEnt model of the dataset with hyperparameter cross-validation.\n\n Parameters\n ----------\n X : 2d np.array\n The matrix of features, one example per row.\n\n y : list\n The list of labels for rows in `X`.\n\n Returns\n -------\n sklearn.linear_model.LogisticRegression\n A trained model instance, the best model found.\n\n \"\"\"\n\n mod = LogisticRegression(\n fit_intercept=True,\n max_iter=3, # A small number of iterations.\n solver='liblinear',\n multi_class='ovr')\n\n param_grid = {\n 'C': [0.4, 0.6, 0.8, 1.0],\n 'penalty': ['l1', 'l2']}\n\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n bestmod = utils.fit_classifier_with_hyperparameter_search(\n X, y, mod, param_grid=param_grid, cv=3)\n\n return bestmod\n```\n\n\n```python\ndef run_glove_300_evaluation():\n ##### YOUR CODE HERE\n res = dict()\n for vector_combo_func in [vec_concatenate, vec_diff]:\n print(vector_combo_func.__name__)\n exp = nli.wordentail_experiment_cv(\n train_data=wordentail_data['train'],\n 
assess_data=wordentail_data['dev'],\n vector_func=glove_vec_300,\n vector_combo_func=vector_combo_func,\n train_func=fit_softmax_with_hyperparameter_search,\n )\n res[vector_combo_func.__name__] = exp[\"macro-F1\"]\n return res\n```\n\n\n```python\nrun_glove_300_evaluation()\n```\n\n vec_concatenate\n Best params: {'C': 0.4, 'penalty': 'l2'}\n Best score: 0.703\n precision recall f1-score support\n \n 0 0.873 0.941 0.906 1732\n 1 0.485 0.290 0.363 334\n \n accuracy 0.835 2066\n macro avg 0.679 0.615 0.634 2066\n weighted avg 0.810 0.835 0.818 2066\n \n vec_diff\n Best params: {'C': 0.8, 'penalty': 'l2'}\n Best score: 0.639\n precision recall f1-score support\n \n 0 0.859 0.943 0.899 1732\n 1 0.399 0.195 0.262 334\n \n accuracy 0.822 2066\n macro avg 0.629 0.569 0.580 2066\n weighted avg 0.784 0.822 0.796 2066\n \n\n\n\n\n\n {'vec_concatenate': 0.6343994687019214, 'vec_diff': 0.5803032777130458}\n\n\n\n## Bake-off [1 point]\n\nThe goal of the bake-off is to achieve the highest __macro-average F1__ score on a test set that we will make available at the start of the bake-off. The announcement will go out on the discussion forum. To enter, you'll be asked to run `nli.bake_off_evaluation` on the output of your chosen `nli.wordentail_experiment` run. \n\nThe cells below this one constitute your bake-off entry.\n\nThe rules described in the [Your original system](#Your-original-system-[3-points]) homework question are also in effect for the bake-off.\n\nSystems that enter will receive the additional homework point, and systems that achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.\n\nLate entries will be accepted, but they cannot earn the extra 0.5 points. 
Similarly, you cannot win the bake-off unless your homework is submitted on time.\n\nThe announcement will include the details on where to submit your entry.\n\n\n```python\n# Enter your bake-off assessment code into this cell.\n# Please do not remove this comment.\nif 'IS_GRADESCOPE_ENV' not in os.environ:\n pass\n # Please enter your code in the scope of the above conditional.\n ##### YOUR CODE HERE\n\n```\n\n\n```python\n# On an otherwise blank line in this cell, please enter\n# your macro-avg f1 value as reported by the code above.\n# Please enter only a number between 0 and 1 inclusive.\n# Please do not remove this comment.\nif 'IS_GRADESCOPE_ENV' not in os.environ:\n pass\n # Please enter your score in the scope of the above conditional.\n ##### YOUR CODE HERE\n\n```\n", "meta": {"hexsha": "e6cf90e0b38f86f737c7bc9cd38ee6332138fb7f", "size": 37965, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "hw_wordentail.ipynb", "max_stars_repo_name": "BlakeDai/CS224U", "max_stars_repo_head_hexsha": "fabb7722871b3b36cd90c45320c77ced9ce34d36", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hw_wordentail.ipynb", "max_issues_repo_name": "BlakeDai/CS224U", "max_issues_repo_head_hexsha": "fabb7722871b3b36cd90c45320c77ced9ce34d36", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw_wordentail.ipynb", "max_forks_repo_name": "BlakeDai/CS224U", "max_forks_repo_head_hexsha": "fabb7722871b3b36cd90c45320c77ced9ce34d36", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.0993897123, "max_line_length": 642, "alphanum_fraction": 0.5472672198, 
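For reference, the bake-off metric (macro-average F1) is the unweighted mean of the per-class F1 scores, so the small entailment class counts as much as the large non-entailment class. The helper below is a hypothetical sketch written only to illustrate the computation; in practice `sklearn.metrics.f1_score(y_true, y_pred, average='macro')` computes the same quantity.

```python
# Illustrative helper (not part of the course code): per-class
# precision/recall/F1, then an unweighted mean over the classes.
def macro_f1(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        if precision + recall:
            f1_scores.append(2 * precision * recall / (precision + recall))
        else:
            f1_scores.append(0.0)
    # Macro-averaging weights every class equally.
    return sum(f1_scores) / len(f1_scores)

print(macro_f1([0, 0, 1, 1], [0, 1, 1, 1]))
```

Note how a classifier that ignores the minority class is penalized much more heavily by this metric than by raw accuracy.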
"converted": true, "num_tokens": 6682, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.33458944125318596, "lm_q2_score": 0.3007455789412415, "lm_q1q2_score": 0.10062629521731592}} {"text": "[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)\n\n# Kalman Filter Math\n\n\n```python\n#format the book\n%matplotlib inline\nfrom __future__ import division, print_function\nfrom book_format import load_style\nload_style()\n```\n\n\n\n\n\n\n\n\n\n\nIf you've gotten this far I hope that you are thinking that the Kalman filter's fearsome reputation is somewhat undeserved. Sure, I hand waved some equations away, but I hope implementation has been fairly straightforward for you. The underlying concept is quite straightforward - take two measurements, or a measurement and a prediction, and choose the output to be somewhere between the two. If you believe the measurement more your guess will be closer to the measurement, and if you believe the prediction is more accurate your guess will lie closer to it. That's not rocket science (little joke - it is exactly this math that got Apollo to the moon and back!). \n\nTo be honest I have been choosing my problems carefully. For an arbitrary problem designing the Kalman filter matrices can be extremely difficult. I haven't been *too tricky*, though. Equations like Newton's equations of motion can be trivially computed for Kalman filter applications, and they make up the bulk of the kind of problems that we want to solve. \n\nI have illustrated the concepts with code and reasoning, not math. But there are topics that do require more mathematics than I have used so far. This chapter presents the math that you will need for the rest of the book.\n\n## Modeling a Dynamic System\n\nA *dynamic system* is a physical system whose state (position, temperature, etc) evolves over time. 
Calculus is the math of changing values, so we use differential equations to model dynamic systems. Some systems cannot be modeled with differential equations, but we will not encounter those in this book.\n\nModeling dynamic systems is properly the topic of several college courses. To an extent there is no substitute for a few semesters of ordinary and partial differential equations followed by a graduate course in control system theory. If you are a hobbyist, or trying to solve one very specific filtering problem at work, you probably do not have the time and/or inclination to devote a year or more to that education.\n\nFortunately, I can present enough of the theory to allow us to create the system equations for many different Kalman filters. My goal is to get you to the stage where you can read a publication and understand it well enough to implement the algorithms. The background math is deep, but in practice we end up using a few simple techniques. \n\nThis is the longest section of pure math in this book. You will need to master everything in this section to understand the Extended Kalman filter (EKF), the most common nonlinear filter. I do cover more modern filters that do not require as much of this math. You can choose to skim now, and come back to this if you decide to learn the EKF.\n\nWe need to start by understanding the underlying equations and assumptions that the Kalman filter uses. We are trying to model real world phenomena, so what do we have to consider?\n\nEach physical system has a process. For example, a car traveling at a certain velocity goes so far in a fixed amount of time, and its velocity varies as a function of its acceleration. 
We describe that behavior with the well known Newtonian equations that we learned in high school.\n\n$$\n\begin{aligned}\nv&=at\\\nx &= \frac{1}{2}at^2 + v_0t + x_0\n\end{aligned}\n$$\n\nOnce we learned calculus we saw them in this form:\n\n$$ \mathbf v = \frac{d \mathbf x}{d t}, \n\quad \mathbf a = \frac{d \mathbf v}{d t} = \frac{d^2 \mathbf x}{d t^2}\n$$\n\nA typical automobile tracking problem would have you compute the distance traveled given a constant velocity or acceleration, as we did in previous chapters. But of course we know this is not all that is happening. No car travels on a perfect road. There are bumps, wind drag, and hills that raise and lower the speed. The suspension is a mechanical system with friction and imperfect springs.\n\nPerfectly modeling a system is impossible except for the most trivial problems. We are forced to make a simplification. At any time $t$ we say that the true state (such as the position of our car) is the predicted value from the imperfect model plus some unknown *process noise*:\n\n$$\nx(t) = x_{pred}(t) + noise(t)\n$$\n\nThis is not meant to imply that $noise(t)$ is a function that we can derive analytically. It is merely a statement of fact - we can always describe the true value as the predicted value plus the process noise. \"Noise\" does not imply random events. If we are tracking a thrown ball in the atmosphere, and our model assumes the ball is in a vacuum, then the effect of air drag is process noise in this context.\n\nIn the next section we will learn techniques to convert a set of higher order differential equations into a set of first-order differential equations. After the conversion the model of the system without noise is:\n\n$$ \dot{\mathbf x} = \mathbf{Ax}$$\n\n$\mathbf A$ is known as the *system dynamics matrix* as it describes the dynamics of the system. Now we need to model the noise. We will call that $\mathbf w$, and add it to the equation. 
\n\n$$ \\dot{\\mathbf x} = \\mathbf{Ax} + \\mathbf w$$\n\n$\\mathbf w$ may strike you as a poor choice for the name, but you will soon see that the Kalman filter assumes *white* noise.\n\nFinally, we need to consider any inputs into the system. We assume an input $\\mathbf u$, and that there exists a linear model that defines how that input changes the system. For example, pressing the accelerator in your car makes it accelerate, and gravity causes balls to fall. Both are contol inputs. We will need a matrix $\\mathbf B$ to convert $u$ into the effect on the system. We add that into our equation:\n\n$$ \\dot{\\mathbf x} = \\mathbf{Ax} + \\mathbf{Bu} + \\mathbf{w}$$\n\nAnd that's it. That is one of the equations that Dr. Kalman set out to solve, and he found an optimal estimator if we assume certain properties of $\\mathbf w$.\n\n## State-Space Representation of Dynamic Systems\n\nWe've derived the equation\n\n$$ \\dot{\\mathbf x} = \\mathbf{Ax}+ \\mathbf{Bu} + \\mathbf{w}$$\n\nHowever, we are not interested in the derivative of $\\mathbf x$, but in $\\mathbf x$ itself. Ignoring the noise for a moment, we want an equation that recusively finds the value of $\\mathbf x$ at time $t_k$ in terms of $\\mathbf x$ at time $t_{k-1}$:\n\n$$\\mathbf x(t_k) = \\mathbf F(\\Delta t)\\mathbf x(t_{k-1}) + \\mathbf B(t_k)\\mathbf u (t_k)$$\n\nConvention allows us to write $\\mathbf x(t_k)$ as $\\mathbf x_k$, which means the \nthe value of $\\mathbf x$ at the k$^{th}$ value of $t$.\n\n$$\\mathbf x_k = \\mathbf{Fx}_{k-1} + \\mathbf B_k\\mathbf u_k$$\n\n$\\mathbf F$ is the familiar *state transition matrix*, named due to its ability to transition the state's value between discrete time steps. It is very similar to the system dynamics matrix $\\mathbf A$. The difference is that $\\mathbf A$ models a set of linear differential equations, and is continuous. 
$\mathbf F$ is discrete, and represents a set of linear equations (not differential equations) which transitions $\mathbf x_{k-1}$ to $\mathbf x_k$ over a discrete time step $\Delta t$. \n\nFinding this matrix is often quite difficult. The equation $\dot x = v$ is the simplest possible differential equation and we trivially integrate it as:\n\n$$ \int\limits_{x_{k-1}}^{x_k} \mathrm{d}x = \int\limits_{0}^{\Delta t} v\, \mathrm{d}t $$\n$$x_k-x_{k-1} = v \Delta t$$\n$$x_k = v \Delta t + x_{k-1}$$\n\nThis equation is *recursive*: we compute the value of $x$ at time $t_k$ based on its value at time $t_{k-1}$. This recursive form enables us to represent the system (process model) in the form required by the Kalman filter:\n\n$$\begin{aligned}\n\mathbf x_k &= \mathbf{Fx}_{k-1} \\\n&= \begin{bmatrix} 1 & \Delta t \\ 0 & 1\end{bmatrix}\n\begin{bmatrix}x_{k-1} \\ \dot x_{k-1}\end{bmatrix}\n\end{aligned}$$\n\nWe can do that only because $\dot x = v$ is the simplest differential equation possible. Almost all other physical systems result in more complicated differential equations which do not yield to this approach. \n\n*State-space* methods became popular around the time of the Apollo missions, largely due to the work of Dr. Kalman. The idea is simple. Model a system with a set of $n^{th}$-order differential equations. Convert them into an equivalent set of first-order differential equations. Put them into the vector-matrix form used in the previous section: $\dot{\mathbf x} = \mathbf{Ax} + \mathbf{Bu}$. Once in this form we use one of several techniques to convert these linear differential equations into the recursive equation:\n\n$$ \mathbf x_k = \mathbf{Fx}_{k-1} + \mathbf B_k\mathbf u_k$$\n\nSome books call the state transition matrix the *fundamental matrix*. Many use $\mathbf \Phi$ instead of $\mathbf F$. 
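Whichever symbol is used, the recursive equation is easy to exercise in code. A minimal sketch, assuming the constant-velocity transition matrix shown above, with $\Delta t$ and the initial state chosen arbitrarily:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])   # state transition for the state [x, x_dot]

x = np.array([0.0, 2.0])     # arbitrary start: position 0, velocity 2
for _ in range(10):          # ten discrete steps of x_k = F @ x_{k-1}
    x = F @ x

print(x)  # position advances by v * dt on every step; velocity is unchanged
```

Each matrix multiply advances the state by exactly one time step, which is all a Kalman filter's predict step does with $\mathbf F$.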
Sources based heavily on control theory tend to use these forms.\n\nThese are called *state-space* methods because we are expressing the solution of the differential equations in terms of the system state. \n\n### Forming First Order Equations from Higher Order Equations\n\nMany models of physical systems require second or higher order differential equations with control input $u$:\n\n$$a_n \frac{d^ny}{dt^n} + a_{n-1} \frac{d^{n-1}y}{dt^{n-1}} + \dots + a_2 \frac{d^2y}{dt^2} + a_1 \frac{dy}{dt} + a_0 y = u$$\n\nState-space methods require first-order equations. Any higher order system of equations can be reduced to first-order by defining extra variables for the derivatives and then solving. \n\n\nLet's do an example. Given the system $\ddot{x} - 6\dot x + 9x = u$ find the equivalent first order equations. I've used the dot notation for the time derivatives for clarity.\n\nThe first step is to isolate the highest order term onto one side of the equation.\n\n$$\ddot{x} = 6\dot x - 9x + u$$\n\nWe define two new variables:\n\n$$\begin{aligned} x_1(t) &= x \\\nx_2(t) &= \dot x\n\end{aligned}$$\n\nNow we will substitute these into the original equation and solve. The solution yields a set of first-order equations in terms of these new variables. It is conventional to drop the $(t)$ for notational convenience.\n\nWe know that $\dot x_1 = x_2$ and that $\dot x_2 = \ddot{x}$. Therefore\n\n$$\begin{aligned}\n\dot x_2 &= \ddot{x} \\\n &= 6\dot x - 9x + u\\\n &= 6x_2-9x_1 + u\n\end{aligned}$$\n\nTherefore our first-order system of equations is\n\n$$\begin{aligned}\dot x_1 &= x_2 \\\n\dot x_2 &= 6x_2-9x_1 + u\end{aligned}$$\n\nIf you practice this a bit you will become adept at it. 
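The reduction can be sanity-checked numerically. This sketch, assuming $u = 0$, the initial conditions $x(0) = 1$ and $\dot x(0) = 0$ (for which $\ddot x - 6\dot x + 9x = 0$ has the closed-form solution $x(t) = (1 - 3t)e^{3t}$), and that SciPy is available, integrates the first-order system and compares it to that solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

def reduced(t, state):
    # First-order form of x'' - 6x' + 9x = 0 (u = 0):
    # x1' = x2,  x2' = 6*x2 - 9*x1
    x1, x2 = state
    return [x2, 6 * x2 - 9 * x1]

sol = solve_ivp(reduced, (0, 1), [1.0, 0.0], dense_output=True,
                rtol=1e-9, atol=1e-12)

t = 0.5
numeric = sol.sol(t)[0]
analytic = (1 - 3 * t) * np.exp(3 * t)  # closed-form solution at t
print(numeric, analytic)                # the two should agree closely
```

If the reduction were wrong, the numerical and analytic trajectories would diverge immediately, so this is a cheap way to catch sign or substitution errors.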
Isolate the highest term, define a new variable and its derivatives, and then substitute.\n\n### First Order Differential Equations In State-Space Form\n\nSubstituting the newly defined variables from the previous section:\n\n$$\frac{dx_1}{dt} = x_2,\, \n\frac{dx_2}{dt} = x_3, \, ..., \, \n\frac{dx_{n-1}}{dt} = x_n$$\n\ninto the first order equations yields: \n\n$$\frac{dx_n}{dt} = -\frac{1}{a_n}\sum\limits_{i=0}^{n-1}a_ix_{i+1} + \frac{1}{a_n}u\n$$\n\n\nUsing vector-matrix notation we have:\n\n$$\begin{bmatrix}\frac{dx_1}{dt} \\ \frac{dx_2}{dt} \\ \vdots \\ \frac{dx_n}{dt}\end{bmatrix} = \n\begin{bmatrix}\dot x_1 \\ \dot x_2 \\ \vdots \\ \dot x_n\end{bmatrix}=\n\begin{bmatrix}0 & 1 & 0 &\cdots & 0 \\\n0 & 0 & 1 & \cdots & 0 \\\n\vdots & \vdots & \vdots & \ddots & \vdots \\\n-\frac{a_0}{a_n} & -\frac{a_1}{a_n} & -\frac{a_2}{a_n} & \cdots & -\frac{a_{n-1}}{a_n}\end{bmatrix}\n\begin{bmatrix}x_1 \\ x_2 \\ \vdots \\ x_n\end{bmatrix} + \n\begin{bmatrix}0 \\ 0 \\ \vdots \\ \frac{1}{a_n}\end{bmatrix}u$$\n\nwhich we then write as $\dot{\mathbf x} = \mathbf{Ax} + \mathbf{B}u$.\n\n### Finding the Fundamental Matrix for Time Invariant Systems\n\nWe express the system equations in state-space form with\n\n$$ \dot{\mathbf x} = \mathbf{Ax}$$\n\nwhere $\mathbf A$ is the system dynamics matrix, and want to find the *fundamental matrix* $\mathbf F$ that propagates the state $\mathbf x$ over the interval $\Delta t$ with the equation\n\n$$\begin{aligned}\n\mathbf x(t_k) = \mathbf F(\Delta t)\mathbf x(t_{k-1})\end{aligned}$$\n\nIn other words, $\mathbf A$ is a set of continuous differential equations, and we need $\mathbf F$ to be a set of discrete linear equations that computes the change in $\mathbf x$ over a discrete time step.\n\nIt is conventional to drop the $t_k$ and $(\Delta t)$ and use the notation\n\n$$\mathbf x_k = \mathbf {Fx}_{k-1}$$\n\nBroadly speaking there are three common 
ways to find this matrix for Kalman filters. The technique most often used is the matrix exponential. Linear Time Invariant Theory, also known as LTI System Theory, is a second technique. Finally, there are numerical techniques. You may know of others, but these three are what you will most likely encounter in the Kalman filter literature and praxis.\n\n### The Matrix Exponential\n\nThe solution to the equation $\\frac{dx}{dt} = kx$ can be found by:\n\n$$\\begin{gathered}\\frac{dx}{dt} = kx \\\\\n\\frac{dx}{x} = k\\, dt \\\\\n\\int \\frac{1}{x}\\, dx = \\int k\\, dt \\\\\n\\log x = kt + c \\\\\nx = e^{kt+c} \\\\\nx = e^ce^{kt} \\\\\nx = c_0e^{kt}\\end{gathered}$$\n\nUsing similar math, the solution to the first-order equation \n\n$$\\dot{\\mathbf x} = \\mathbf{Ax} ,\\, \\, \\, \\mathbf x(0) = \\mathbf x_0$$\n\nwhere $\\mathbf A$ is a constant matrix, is\n\n$$\\mathbf x = e^{\\mathbf At}\\mathbf x_0$$\n\nSubstituting $F = e^{\\mathbf At}$, we can write \n\n$$\\mathbf x_k = \\mathbf F\\mathbf x_{k-1}$$\n\nwhich is the form we are looking for! We have reduced the problem of finding the fundamental matrix to one of finding the value for $e^{\\mathbf At}$.\n\n$e^{\\mathbf At}$ is known as the [matrix exponential](https://en.wikipedia.org/wiki/Matrix_exponential). It can be computed with this power series:\n\n$$e^{\\mathbf At} = \\mathbf{I} + \\mathbf{A}t + \\frac{(\\mathbf{A}t)^2}{2!} + \\frac{(\\mathbf{A}t)^3}{3!} + ... $$\n\nThat series is found by doing a Taylor series expansion of $e^{\\mathbf At}$, which I will not cover here.\n\nLet's use this to find the solution to Newton's equations. 
Using $v$ as a substitution for $\dot x$, and assuming constant velocity we get the linear matrix-vector form \n\n$$\begin{bmatrix}\dot x \\ \dot v\end{bmatrix} =\begin{bmatrix}0&1\\0&0\end{bmatrix} \begin{bmatrix}x \\ v\end{bmatrix}$$\n\nThis is a first order differential equation, so we can set $\mathbf{A}=\begin{bmatrix}0&1\\0&0\end{bmatrix}$ and solve the following equation. I have substituted the interval $\Delta t$ for $t$ to emphasize that the fundamental matrix is discrete:\n\n$$\mathbf F = e^{\mathbf A\Delta t} = \mathbf{I} + \mathbf A\Delta t + \frac{(\mathbf A\Delta t)^2}{2!} + \frac{(\mathbf A\Delta t)^3}{3!} + ... $$\n\nIf you perform the multiplication you will find that $\mathbf{A}^2=\begin{bmatrix}0&0\\0&0\end{bmatrix}$, which means that all higher powers of $\mathbf{A}$ are also $\mathbf{0}$. Thus we get an exact answer without an infinite number of terms:\n\n$$\n\begin{aligned}\n\mathbf F &=\mathbf{I} + \mathbf A \Delta t + \mathbf{0} \\\n&= \begin{bmatrix}1&0\\0&1\end{bmatrix} + \begin{bmatrix}0&1\\0&0\end{bmatrix}\Delta t\\\n&= \begin{bmatrix}1&\Delta t\\0&1\end{bmatrix}\n\end{aligned}$$\n\nWe plug this into $\mathbf x_k= \mathbf{Fx}_{k-1}$ to get\n\n$$\n\begin{aligned}\n\mathbf x_k &=\begin{bmatrix}1&\Delta t\\0&1\end{bmatrix}\mathbf x_{k-1}\n\end{aligned}$$\n\nYou will recognize this as the matrix we derived analytically for the constant velocity Kalman filter in the **Multivariate Kalman Filter** chapter.\n\nSciPy's linalg module includes a routine `expm()` to compute the matrix exponential. It does not use the Taylor series method, but the [Pad\u00e9 Approximation](https://en.wikipedia.org/wiki/Pad%C3%A9_approximant). There are many (at least 19) methods to compute the matrix exponential, and all suffer from numerical difficulties[1]. But you should be aware of the problems, especially when $\mathbf A$ is large. 
If you search for \"pade approximation matrix exponential\" you will find many publications devoted to this problem. \n\nIn practice this may not be of concern to you, as for the Kalman filter we normally just take the first two terms of the Taylor series. But don't assume my treatment of the problem is complete and run off and try to use this technique for other problems without doing a numerical analysis of its performance. Interestingly, one of the favored ways of computing $e^{\mathbf At}$ is to use a generalized ODE solver. In other words, they do the opposite of what we do - turn $\mathbf A$ into a set of differential equations, and then solve that set using numerical techniques! \n\nHere is an example of using `expm()` to compute $e^{\mathbf At}$.\n\n\n```python\nimport numpy as np\nfrom scipy.linalg import expm\n\ndt = 0.1\nA = np.array([[0, 1], \n [0, 0]])\nexpm(A*dt)\n```\n\n\n\n\n array([[ 1.0, 0.1],\n [ 0.0, 1.0]])\n\n\n\n### Time Invariance\n\nIf the behavior of the system depends on time we can say that a dynamic system is described by the first-order differential equation\n\n$$ g(t) = \dot x$$\n\nHowever, if the system is *time invariant* the equation is of the form:\n\n$$ f(x) = \dot x$$\n\nWhat does *time invariant* mean? Consider a home stereo. If you input a signal $x$ into it at time $t$, it will output some signal $f(x)$. If you instead perform the input at time $t + \Delta t$ the output signal will be the same $f(x)$, shifted in time.\n\nA counter-example is $x(t) = \sin(t)$, with the system $f(x) = t\, x(t) = t \sin(t)$. This is not time invariant; the value will be different at different times due to the multiplication by $t$. An aircraft is not time invariant. If you make a control input to the aircraft at a later time its behavior will be different because it will have burned fuel and thus lost weight. Lower weight results in different behavior.\n\nWe can solve these equations by integrating each side. 
I demonstrated integrating the time invariant system $v = \dot x$ above. However, integrating the time invariant equation $\dot x = f(x)$ is not so straightforward. Using the *separation of variables* technique we divide by $f(x)$ and move the $dt$ term to the right so we can integrate each side:\n\n$$\begin{gathered}\n\frac{dx}{dt} = f(x) \\\n\int^x_{x_0} \frac{1}{f(x)} dx = \int^t_{t_0} dt\n\end{gathered}$$\n\nIf we let $F(x) = \int \frac{1}{f(x)} dx$ we get\n\n$$F(x) - F(x_0) = t-t_0$$\n\nWe then solve for $x$ with\n\n$$\begin{gathered}\nF(x) = t - t_0 + F(x_0) \\\nx = F^{-1}[t-t_0 + F(x_0)]\n\end{gathered}$$\n\nIn other words, we need to find the inverse of $F$. This is not trivial, and a significant amount of coursework in a STEM education is devoted to finding tricky, analytic solutions to this problem. \n\nHowever, they are tricks, and many simple forms of $f(x)$ either have no closed form solution or pose extreme difficulties. Instead, the practicing engineer turns to state-space methods to find approximate solutions.\n\nThe advantage of the matrix exponential is that we can use it for any arbitrary set of differential equations which are *time invariant*. However, we often use this technique even when the equations are not time invariant. As an aircraft flies it burns fuel and loses weight. However, the weight loss over one second is negligible, and so the system is nearly time invariant over that time step. Our answers will still be reasonably accurate so long as the time step is short.\n\n#### Example: Mass-Spring-Damper Model\n\nSuppose we wanted to track the motion of a weight on a spring and connected to a damper, such as an automobile's suspension. 
The equation for the motion with $m$ being the mass, $k$ the spring constant, and $c$ the damping coefficient, under some input $u$ is \n\n$$m\frac{d^2x}{dt^2} + c\frac{dx}{dt} +kx = u$$\n\nFor notational convenience I will write that as\n\n$$m\ddot x + c\dot x + kx = u$$\n\nI can turn this into a system of first order equations by setting $x_1(t)=x(t)$, and then substituting as follows:\n\n$$\begin{aligned}\nx_1 &= x \\\nx_2 &= \dot x_1 \\\n\dot x_2 &= \ddot x_1 = \ddot x\n\end{aligned}$$\n\nAs is common I dropped the $(t)$ for notational convenience. This gives the equation\n\n$$m\dot x_2 + c x_2 +kx_1 = u$$\n\nSolving for $\dot x_2$ we get a first order equation:\n\n$$\dot x_2 = -\frac{c}{m}x_2 - \frac{k}{m}x_1 + \frac{1}{m}u$$\n\nWe put this into matrix form:\n\n$$\begin{bmatrix} \dot x_1 \\ \dot x_2 \end{bmatrix} = \n\begin{bmatrix}0 & 1 \\ -k/m & -c/m \end{bmatrix}\n\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \n\begin{bmatrix} 0 \\ 1/m \end{bmatrix}u$$\n\nNow we use the matrix exponential to find the state transition matrix:\n\n$$\Phi(t) = e^{\mathbf At} = \mathbf{I} + \mathbf At + \frac{(\mathbf At)^2}{2!} + \frac{(\mathbf At)^3}{3!} + ... $$\n\nThe first two terms give us\n\n$$\mathbf F = \begin{bmatrix}1 & t \\ -(k/m) t & 1-(c/m) t \end{bmatrix}$$\n\nThis may or may not give you enough precision. You can easily check this by computing $\frac{(\mathbf At)^2}{2!}$ for your constants and seeing how much this matrix contributes to the results.\n\n### Linear Time Invariant Theory\n\n[*Linear Time Invariant Theory*](https://en.wikipedia.org/wiki/LTI_system_theory), also known as LTI System Theory, gives us a way to find $\Phi$ using the inverse Laplace transform. You are either nodding your head now, or completely lost. I will not be using the Laplace transform in this book. 
LTI system theory tells us that \n\n$$ \Phi(t) = \mathcal{L}^{-1}[(s\mathbf{I} - \mathbf{A})^{-1}]$$\n\nI have no intention of going into this other than to say that the Laplace transform $\mathcal{L}$ converts a signal into a space $s$ that excludes time, but finding a solution to the equation above is non-trivial. If you are interested, the Wikipedia article on LTI system theory provides an introduction. I mention LTI because you will find some literature using it to design the Kalman filter matrices for difficult problems. \n\n### Numerical Solutions\n\nFinally, there are numerical techniques to find $\mathbf F$. As filters get larger, finding analytical solutions becomes very tedious (though packages like SymPy make it easier). C. F. van Loan [2] has developed a technique that finds both $\Phi$ and $\mathbf Q$ numerically. Given the continuous model\n\n$$ \dot x = Ax + Gw$$\n\nwhere $w$ is the unity white noise, van Loan's method computes both $\mathbf F_k$ and $\mathbf Q_k$.\n \nI have implemented van Loan's method in `FilterPy`. You may use it as follows:\n\n```python\nfrom filterpy.common import van_loan_discretization\n\nA = np.array([[0., 1.], [-1., 0.]])\nG = np.array([[0.], [2.]]) # white noise scaling\nF, Q = van_loan_discretization(A, G, dt=0.1)\n```\n \nIn the section *Numeric Integration of Differential Equations* I present alternative methods which are very commonly used in Kalman filtering.\n\n## Design of the Process Noise Matrix\n\nIn general the design of the $\mathbf Q$ matrix is among the most difficult aspects of Kalman filter design. This is due to several factors. First, the math requires a good foundation in signal theory. Second, we are trying to model the noise in something for which we have little information. Consider trying to model the process noise for a thrown baseball. 
We can model it as a sphere moving through the air, but that leaves many unknown factors - the wind, ball rotation and spin decay, the coefficient of drag of a ball with stitches, the effects of air density, and so on. We develop the equations for an exact mathematical solution for a given process model, but since the process model is incomplete the result for $\mathbf Q$ will also be incomplete. This has a lot of ramifications for the behavior of the Kalman filter. If $\mathbf Q$ is too small then the filter will be overconfident in its prediction model and will diverge from the actual solution. If $\mathbf Q$ is too large then the filter will be unduly influenced by the noise in the measurements and perform sub-optimally. In practice we spend a lot of time running simulations and evaluating collected data to try to select an appropriate value for $\mathbf Q$. But let's start by looking at the math.\n\n\nLet's assume a kinematic system - some system that can be modeled using Newton's equations of motion. We can make a few different assumptions about this process. \n\nWe have been using a process model of\n\n$$ \dot{\mathbf x} = \mathbf{Ax} + \mathbf{Bu} + \mathbf{w}$$\n\nwhere $\mathbf{w}$ is the process noise. Kinematic systems are *continuous* - their inputs and outputs can vary at any arbitrary point in time. However, our Kalman filters are *discrete* (there are continuous forms for Kalman filters, but we do not cover them in this book). We sample the system at regular intervals. Therefore we must find the discrete representation for the noise term in the equation above. This depends on what assumptions we make about the behavior of the noise. We will consider two different models for the noise.\n\n### Continuous White Noise Model\n\nWe model kinematic systems using Newton's equations. We have either used position and velocity, or position, velocity, and acceleration as the models for our systems. 
There is nothing stopping us from going further - we can model jerk, jounce, snap, and so on. We don't do that normally because adding terms beyond the dynamics of the real system degrades the estimate. \n\nLet's say that we need to model the position, velocity, and acceleration. We can then assume that acceleration is constant for each discrete time step. Of course, there is process noise in the system and so the acceleration is not actually constant. The tracked object will alter the acceleration over time due to external, unmodeled forces. In this section we will assume that the acceleration changes by a continuous time zero-mean white noise $w(t)$. In other words, we are assuming that the small changes in acceleration average to 0 over time (zero-mean). \n\nSince the noise is changing continuously we will need to integrate to get the discrete noise for the discretization interval that we have chosen. We will not prove it here, but the equation for the discretization of the noise is\n\n$$\\mathbf Q = \\int_0^{\\Delta t} \\mathbf F(t)\\mathbf{Q_c}\\mathbf F^\\mathsf{T}(t) dt$$\n\nwhere $\\mathbf{Q_c}$ is the continuous noise. The general reasoning should be clear. $\\mathbf F(t)\\mathbf{Q_c}\\mathbf F^\\mathsf{T}(t)$ is a projection of the continuous noise based on our process model $\\mathbf F(t)$ at the instant $t$. We want to know how much noise is added to the system over a discrete interval $\\Delta t$, so we integrate this expression over the interval $[0, \\Delta t]$. \n\nWe know the fundamental matrix for Newtonian systems is\n\n$$F = \\begin{bmatrix}1 & \\Delta t & {\\Delta t}^2/2 \\\\ 0 & 1 & \\Delta t\\\\ 0& 0& 1\\end{bmatrix}$$\n\nWe define the continuous noise as \n\n$$\\mathbf{Q_c} = \\begin{bmatrix}0&0&0\\\\0&0&0\\\\0&0&1\\end{bmatrix} \\Phi_s$$\n\nwhere $\\Phi_s$ is the spectral density of the white noise. This can be derived, but is beyond the scope of this book. See any standard text on stochastic processes for the details. 
In practice we often do not know the spectral density of the noise, and so this turns into an \"engineering\" factor - a number we experimentally tune until our filter performs as we expect. You can see that the matrix that $\\Phi_s$ is multiplied by effectively assigns the power spectral density to the acceleration term. This makes sense; we assume that the system has constant acceleration except for the variations caused by noise. The noise alters the acceleration.\n\nWe could carry out these computations ourselves, but I prefer using SymPy to solve the equation.\n\n\n```python\nimport sympy\nfrom sympy import (init_printing, Matrix, MatMul,\n integrate, symbols)\n\ninit_printing(use_latex='mathjax')\ndt, phi = symbols(r'\\Delta{t} \\Phi_s')\nF_k = Matrix([[1, dt, dt**2/2],\n [0, 1, dt],\n [0, 0, 1]])\nQ_c = Matrix([[0, 0, 0],\n [0, 0, 0],\n [0, 0, 1]])*phi\n\nQ = sympy.integrate(F_k * Q_c * F_k.T, (dt, 0, dt))\n\n# factor phi out of the matrix to make it more readable\nQ = Q / phi\nsympy.MatMul(Q, phi)\n```\n\n\n\n\n$$\\left[\\begin{matrix}\\frac{\\Delta{t}^{5}}{20} & \\frac{\\Delta{t}^{4}}{8} & \\frac{\\Delta{t}^{3}}{6}\\\\\\frac{\\Delta{t}^{4}}{8} & \\frac{\\Delta{t}^{3}}{3} & \\frac{\\Delta{t}^{2}}{2}\\\\\\frac{\\Delta{t}^{3}}{6} & \\frac{\\Delta{t}^{2}}{2} & \\Delta{t}\\end{matrix}\\right] \\Phi_s$$\n\n\n\nFor completeness, let us compute the equations for the 0th order and 1st order equations.\n\n\n```python\nF_k = sympy.Matrix([[1]])\nQ_c = sympy.Matrix([[phi]])\n\nprint('0th order discrete process noise')\nsympy.integrate(F_k*Q_c*F_k.T, (dt, 0, dt))\n```\n\n 0th order discrete process noise\n\n\n\n\n\n$$\\left[\\begin{matrix}\\Delta{t} \\Phi_s\\end{matrix}\\right]$$\n\n\n\n\n```python\nF_k = sympy.Matrix([[1, dt],\n [0, 1]])\nQ_c = sympy.Matrix([[0, 0],\n [0, 1]])*phi\n\nQ = sympy.integrate(F_k * Q_c * F_k.T, (dt, 0, dt))\n\nprint('1st order discrete process noise')\n# 
factor phi out of the matrix to make it more readable\nQ = Q / phi\nsympy.MatMul(Q, phi)\n```\n\n 1st order discrete process noise\n\n\n\n\n\n$$\\left[\\begin{matrix}\\frac{\\Delta{t}^{3}}{3} & \\frac{\\Delta{t}^{2}}{2}\\\\\\frac{\\Delta{t}^{2}}{2} & \\Delta{t}\\end{matrix}\\right] \\Phi_s$$\n\n\n\n### Piecewise White Noise Model\n\nAnother model for the noise assumes that the highest order term (say, acceleration) is constant for the duration of each time period, but differs for each time period, and each of these is uncorrelated between time periods. In other words there is a discontinuous jump in acceleration at each time step. This is subtly different from the model above, where we assumed that the last term had a continuously varying noisy signal applied to it. \n\nWe will model this as\n\n$$f(x)=Fx+\\Gamma w$$\n\nwhere $\\Gamma$ is the *noise gain* of the system, and $w$ is the constant piecewise acceleration (or velocity, or jerk, etc). \n\nLet's start by looking at a first order system. 
In this case we have the state transition function\n\n$$\\mathbf{F} = \\begin{bmatrix}1&\\Delta t \\\\ 0& 1\\end{bmatrix}$$\n\nIn one time period, the change in velocity will be $w(t)\\Delta t$, and the change in position will be $w(t)\\Delta t^2/2$, giving us\n\n$$\\Gamma = \\begin{bmatrix}\\frac{1}{2}\\Delta t^2 \\\\ \\Delta t\\end{bmatrix}$$\n\nThe covariance of the process noise is then\n\n$$Q = \\mathbb E[\\Gamma w(t) w(t) \\Gamma^\\mathsf{T}] = \\Gamma\\sigma^2_v\\Gamma^\\mathsf{T}$$.\n\nWe can compute that with SymPy as follows\n\n\n```python\nvar=symbols('sigma^2_v')\nv = Matrix([[dt**2 / 2], [dt]])\n\nQ = v * var * v.T\n\n# factor variance out of the matrix to make it more readable\nQ = Q / var\nsympy.MatMul(Q, var)\n```\n\n\n\n\n$$\\left[\\begin{matrix}\\frac{\\Delta{t}^{4}}{4} & \\frac{\\Delta{t}^{3}}{2}\\\\\\frac{\\Delta{t}^{3}}{2} & \\Delta{t}^{2}\\end{matrix}\\right] \\sigma^{2}_{v}$$\n\n\n\nThe second order system proceeds with the same math.\n\n\n$$\\mathbf{F} = \\begin{bmatrix}1 & \\Delta t & {\\Delta t}^2/2 \\\\ 0 & 1 & \\Delta t\\\\ 0& 0& 1\\end{bmatrix}$$\n\nHere we will assume that the white noise is a discrete time Wiener process. This gives us\n\n$$\\Gamma = \\begin{bmatrix}\\frac{1}{2}\\Delta t^2 \\\\ \\Delta t\\\\ 1\\end{bmatrix}$$\n\nThere is no 'truth' to this model, it is just convenient and provides good results. For example, we could assume that the noise is applied to the jerk at the cost of a more complicated equation. 
\n\nThe covariance of the process noise is then\n\n$$Q = \\mathbb E[\\Gamma w(t) w(t) \\Gamma^\\mathsf{T}] = \\Gamma\\sigma^2_v\\Gamma^\\mathsf{T}$$.\n\nWe can compute that with SymPy as follows\n\n\n```python\nvar=symbols('sigma^2_v')\nv = Matrix([[dt**2 / 2], [dt], [1]])\n\nQ = v * var * v.T\n\n# factor variance out of the matrix to make it more readable\nQ = Q / var\nsympy.MatMul(Q, var)\n```\n\n\n\n\n$$\\left[\\begin{matrix}\\frac{\\Delta{t}^{4}}{4} & \\frac{\\Delta{t}^{3}}{2} & \\frac{\\Delta{t}^{2}}{2}\\\\\\frac{\\Delta{t}^{3}}{2} & \\Delta{t}^{2} & \\Delta{t}\\\\\\frac{\\Delta{t}^{2}}{2} & \\Delta{t} & 1\\end{matrix}\\right] \\sigma^{2}_{v}$$\n\n\n\nWe cannot say that this model is more or less correct than the continuous model - both are approximations to what is happening to the actual object. Only experience and experiments can guide you to the appropriate model. In practice you will usually find that either model provides reasonable results, but typically one will perform better than the other.\n\nThe advantage of the second model is that we can model the noise in terms of $\\sigma^2$ which we can describe in terms of the motion and the amount of error we expect. The first model requires us to specify the spectral density, which is not very intuitive, but it handles varying time samples much more easily since the noise is integrated across the time period. However, these are not fixed rules - use whichever model (or a model of your own devising) based on testing how the filter performs and/or your knowledge of the behavior of the physical model.\n\nA good rule of thumb is to set $\\sigma$ somewhere from $\\frac{1}{2}\\Delta a$ to $\\Delta a$, where $\\Delta a$ is the maximum amount that the acceleration will change between sample periods. In practice we pick a number, run simulations on data, and choose a value that works well.\n\n### Using FilterPy to Compute Q\n\nFilterPy offers several routines to compute the $\\mathbf Q$ matrix. 
The function `Q_continuous_white_noise()` computes $\\mathbf Q$ for a given value for $\\Delta t$ and the spectral density.\n\n\n```python\nfrom filterpy.common import Q_continuous_white_noise\nfrom filterpy.common import Q_discrete_white_noise\n\nQ = Q_continuous_white_noise(dim=2, dt=1, spectral_density=1)\nprint(Q)\n```\n\n [[ 0.333 0.5]\n [ 0.5 1.0]]\n\n\n\n```python\nQ = Q_continuous_white_noise(dim=3, dt=1, spectral_density=1)\nprint(Q)\n```\n\n [[ 0.05 0.125 0.167]\n [ 0.125 0.333 0.5]\n [ 0.167 0.5 1.0]]\n\n\nThe function `Q_discrete_white_noise()` computes $\\mathbf Q$ assuming a piecewise model for the noise.\n\n\n```python\nQ = Q_discrete_white_noise(2, var=1.)\nprint(Q)\n```\n\n [[ 0.25 0.5]\n [ 0.5 1.0]]\n\n\n\n```python\nQ = Q_discrete_white_noise(3, var=1.)\nprint(Q)\n```\n\n [[ 0.25 0.5 0.5]\n [ 0.5 1.0 1.0]\n [ 0.5 1.0 1.0]]\n\n\n### Simplification of Q\n\nMany treatments use a much simpler form for $\\mathbf Q$, setting it to zero except for a noise term in the lower rightmost element. Is this justified? Well, consider the value of $\\mathbf Q$ for a small $\\Delta t$\n\n\n```python\nimport numpy as np\n\nnp.set_printoptions(precision=8)\nQ = Q_continuous_white_noise(\n dim=3, dt=0.05, spectral_density=1)\nprint(Q)\nnp.set_printoptions(precision=3)\n```\n\n [[ 0.00000002 0.00000078 0.00002083]\n [ 0.00000078 0.00004167 0.00125 ]\n [ 0.00002083 0.00125 0.05 ]]\n\n\nWe can see that most of the terms are very small. Recall that the only equation using this matrix is\n\n$$ \\mathbf P=\\mathbf{FPF}^\\mathsf{T} + \\mathbf Q$$\n\nIf the values for $\\mathbf Q$ are small relative to $\\mathbf P$\nthen it will contribute almost nothing to the computation of $\\mathbf P$. Setting $\\mathbf Q$ to the zero matrix except for the lower right term\n\n$$\\mathbf Q=\\begin{bmatrix}0&0&0\\\\0&0&0\\\\0&0&\\sigma^2\\end{bmatrix}$$\n\nwhile not correct, is often a useful approximation. 
If you do this for an important application you will have to perform quite a few studies to guarantee that your filter works in a variety of situations. \n\nIf you do this, 'lower right term' means the most rapidly changing term for each variable. If the state is $x=\\begin{bmatrix}x & \\dot x & \\ddot{x} & y & \\dot{y} & \\ddot{y}\\end{bmatrix}^\\mathsf{T}$ then $\\mathbf Q$ will be 6x6; the elements for both $\\ddot{x}$ and $\\ddot{y}$ will have to be set to non-zero in $\\mathbf Q$.\n\n## Numeric Integration of Differential Equations\n\nWe've been exposed to several techniques to solve linear differential equations, including state-space methods, the Laplace transform, and van Loan's method. \n\nThese work well for linear ordinary differential equations (ODEs), but do not work well for nonlinear equations. For example, consider trying to predict the position of a rapidly turning car. Cars maneuver by turning the front wheels. This makes them pivot around their rear axle as it moves forward. Therefore the path will be continuously varying and a linear prediction will necessarily produce an incorrect value. If the change in the system is small enough relative to $\\Delta t$ this can often produce adequate results, but that will rarely be the case with the nonlinear Kalman filters we will be studying in subsequent chapters. \n\nFor these reasons we need to know how to numerically integrate ODEs. This is a vast topic that fills entire books. If you need to explore this topic in depth, *Computational Physics in Python* by Dr. 
Eric Ayars is excellent, and available for free here:\n\nhttp://phys.csuchico.edu/ayars/312/Handouts/comp-phys-python.pdf\n\nHowever, I will cover a few simple techniques which will work for a majority of the problems you encounter.\n\n\n### Euler's Method\n\nLet's say we have the initial value problem\n\n$$\\begin{gathered}\ny' = y, \\\\ y(0) = 1\n\\end{gathered}$$\n\nWe happen to know the exact answer is $y=e^t$ because we solved it earlier, but for an arbitrary ODE we will not know the exact solution. In general all we know is the derivative of the equation, which is equal to the slope. We also know the initial value: at $t=0$, $y=1$. If we know these two pieces of information we can predict the value at $y(t=1)$ using the slope at $t=0$ and the value of $y(0)$. I've plotted this below.\n\n\n```python\nimport matplotlib.pyplot as plt\nt = np.linspace(-1, 1, 10)\nplt.plot(t, np.exp(t))\nt = np.linspace(-1, 1, 2)\nplt.plot(t, t + 1, ls='--', c='k');\n```\n\nYou can see that the slope is very close to the curve at $t=0.1$, but far from it\nat $t=1$. But let's continue with a step size of 1 for a moment. We can see that at $t=1$ the estimated value of $y$ is 2. Now we can compute the value at $t=2$ by taking the slope of the curve at $t=1$ and adding it to our initial estimate. The slope is computed with $y'=y$, so the slope is 2.\n\n\n```python\nimport kf_book.book_plots as book_plots\n\nt = np.linspace(-1, 2, 20)\nplt.plot(t, np.exp(t))\nplt.plot([0, 1, 2], [1, 2, 4], ls='--', c='k')\nbook_plots.set_labels(x='x', y='y');\n```\n\nHere we see the next estimate for y is 4. The errors are getting large quickly, and you might be unimpressed. But 1 is a very large step size. 
Let's put this algorithm in code, and verify that it works by using a small step size.\n\n\n```python\ndef euler(t, tmax, y, dx, step=1.):\n ys = []\n while t < tmax:\n y = y + step*dx(t, y)\n ys.append(y)\n t += step\n return ys\n```\n\n\n```python\ndef dx(t, y): return y\n\nprint(euler(0, 1, 1, dx, step=1.)[-1])\nprint(euler(0, 2, 1, dx, step=1.)[-1])\n```\n\n 2.0\n 4.0\n\n\nThis looks correct. So now let's plot the result of a much smaller step size.\n\n\n```python\nys = euler(0, 4, 1, dx, step=0.00001)\nplt.subplot(1,2,1)\nplt.title('Computed')\nplt.plot(np.linspace(0, 4, len(ys)), ys)\nplt.subplot(1,2,2)\nt = np.linspace(0, 4, 20)\nplt.title('Exact')\nplt.plot(t, np.exp(t));\n```\n\n\n```python\nprint('exact answer=', np.exp(4))\nprint('euler answer=', ys[-1])\nprint('difference =', np.exp(4) - ys[-1])\nprint('iterations =', len(ys))\n```\n\n exact answer= 54.5981500331\n euler answer= 54.59705808834125\n difference = 0.00109194480299\n iterations = 400000\n\n\nHere we see that the error is reasonably small, but it took a very large number of iterations to achieve this precision. In practice Euler's method is too slow for most problems, and we use more sophisticated methods.\n\nBefore we go on, let's formally derive Euler's method, as it is the basis for the more advanced Runge Kutta methods used in the next section. In fact, Euler's method is the simplest form of Runge Kutta.\n\n\nHere are the first terms of the Taylor expansion of $y$. An infinite expansion would give an exact answer, so $O(h^4)$ denotes the error due to the finite expansion.\n\n$$y(t_0 + h) = y(t_0) + h y'(t_0) + \\frac{1}{2!}h^2 y''(t_0) + \\frac{1}{3!}h^3 y'''(t_0) + O(h^4)$$\n\nHere we can see that Euler's method is using the first two terms of the Taylor expansion. Each subsequent term is smaller than the previous terms, so we are assured that the estimate will not be too far off from the correct value. 
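Before moving to Runge Kutta it is worth checking this first-order behavior numerically: because Euler's method keeps only the first two Taylor terms, halving the step size should roughly halve the global error. Here is a small sketch of that check; the step-count based `euler_n` helper and the particular step counts are my own choices, not from the text above.

```python
import math

def euler_n(y0, t0, t1, f, n_steps):
    """Integrate dy/dt = f(t, y) from t0 to t1 with n_steps Euler steps."""
    h = (t1 - t0) / n_steps
    y, t = y0, t0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t += h
    return y

f = lambda t, y: y          # y' = y, so the exact solution at t=1 is e
exact = math.exp(1.)
errors = [abs(euler_n(1., 0., 1., f, n) - exact) for n in (10, 20, 40)]
for n, err in zip((10, 20, 40), errors):
    print('n_steps={:3d}  error={:.5f}'.format(n, err))
```

Each doubling of the step count should shrink the error by a factor close to 2, which is exactly what a first-order method predicts.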
\n\n### Runge Kutta Methods\n\n\nRunge Kutta is the workhorse of numerical integration. There are a vast number of methods in the literature. In practice, using the Runge Kutta algorithm that I present here will solve most any problem you will face. It offers a very good balance of speed, precision, and stability, and it is the 'go to' numerical integration method unless you have a very good reason to choose something different.\n\nLet's dive in. We start with some differential equation\n\n$$\\ddot{y} = \\frac{d}{dt}\\dot{y}$$.\n\nWe can substitute the derivative of y with a function f, like so\n\n$$\\ddot{y} = \\frac{d}{dt}f(y,t)$$.\n\nDeriving these equations is outside the scope of this book, but the Runge Kutta RK4 method is defined with these equations.\n\n$$y(t+\\Delta t) = y(t) + \\frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4) + O(\\Delta t^4)$$\n\n$$\\begin{aligned}\nk_1 &= f(y,t)\\Delta t \\\\\nk_2 &= f(y+\\frac{1}{2}k_1, t+\\frac{1}{2}\\Delta t)\\Delta t \\\\\nk_3 &= f(y+\\frac{1}{2}k_2, t+\\frac{1}{2}\\Delta t)\\Delta t \\\\\nk_4 &= f(y+k_3, t+\\Delta t)\\Delta t\n\\end{aligned}\n$$\n\nHere is the corresponding code:\n\n\n```python\ndef runge_kutta4(y, x, dx, f):\n \"\"\"computes 4th order Runge-Kutta for dy/dx.\n y is the initial value for y\n x is the initial value for x\n dx is the difference in x (e.g. the time step)\n f is a callable function (y, x) that you supply \n to compute dy/dx for the specified values.\n \"\"\"\n \n k1 = dx * f(y, x)\n k2 = dx * f(y + 0.5*k1, x + 0.5*dx)\n k3 = dx * f(y + 0.5*k2, x + 0.5*dx)\n k4 = dx * f(y + k3, x + dx)\n \n return y + (k1 + 2*k2 + 2*k3 + k4) / 6.\n```\n\nLet's use this for a simple example. 
Let\n\n$$\\dot{y} = t\\sqrt{y(t)}$$\n\nwith the initial values\n\n$$\\begin{aligned}t_0 &= 0\\\\y_0 &= y(t_0) = 1\\end{aligned}$$\n\n\n```python\nimport math\nimport numpy as np\nt = 0.\ny = 1.\ndt = .1\n\nys, ts = [], []\n\ndef func(y,t):\n return t*math.sqrt(y)\n\nwhile t <= 10:\n y = runge_kutta4(y, t, dt, func)\n t += dt\n ys.append(y)\n ts.append(t)\n\nexact = [(t**2 + 4)**2 / 16. for t in ts]\nplt.plot(ts, ys)\nplt.plot(ts, exact)\n\nerror = np.array(exact) - np.array(ys)\nprint(\"max error {}\".format(max(error)))\n```\n\n## Bayesian Filtering\n\nStarting in the Discrete Bayes chapter I used a Bayesian formulation for filtering. Suppose we are tracking an object. We define its *state* at a specific time as its position, velocity, and so on. For example, we might write the state at time $t$ as $\\mathbf x_t = \\begin{bmatrix}x_t &\\dot x_t \\end{bmatrix}^\\mathsf T$. \n\nWhen we take a measurement of the object we are measuring the state or part of it. Sensors are noisy, so the measurement is corrupted with noise. Clearly though, the measurement is determined by the state. That is, a change in state may change the measurement, but a change in measurement will not change the state.\n\nIn filtering our goal is to compute an optimal estimate for a set of states $\\mathbf x_{0:t}$ from time 0 to time $t$. If we knew $\\mathbf x_{0:t}$ then it would be trivial to compute a set of measurements $\\mathbf z_{0:t}$ corresponding to those states. However, we receive a set of measurements $\\mathbf z_{0:t}$, and want to compute the corresponding states $\\mathbf x_{0:t}$. This is called *statistical inversion* because we are trying to compute the input from the output. \n\nInversion is a difficult problem because there is typically no unique solution. 
For a given set of states $\\mathbf x_{0:t}$ there is only one possible set of measurements (plus noise), but for a given set of measurements there are many different sets of states that could have led to those measurements. \n\nRecall Bayes Theorem:\n\n$$P(x \\mid z) = \\frac{P(z \\mid x)P(x)}{P(z)}$$\n\nwhere $P(z \\mid x)$ is the *likelihood* of the measurement $z$, $P(x)$ is the *prior* based on our process model, and $P(z)$ is a normalization constant, also called the *evidence*. $P(x \\mid z)$ is the *posterior*, the distribution after incorporating the measurement $z$.\n\nThis is a *statistical inversion* as it goes from $P(z \\mid x)$ to $P(x \\mid z)$. The solution to our filtering problem can be expressed as:\n\n$$P(\\mathbf x_{0:t} \\mid \\mathbf z_{0:t}) = \\frac{P(\\mathbf z_{0:t} \\mid \\mathbf x_{0:t})P(\\mathbf x_{0:t})}{P(\\mathbf z_{0:t})}$$\n\nThat is all well and good until the next measurement $\\mathbf z_{t+1}$ comes in, at which point we need to recompute the entire expression for the range $0:t+1$. \n\n\nIn practice this is intractable because we are trying to compute the posterior distribution $P(\\mathbf x_{0:t} \\mid \\mathbf z_{0:t})$ for the state over the full range of time steps. But do we really care about the probability distribution at the third step (say) when we just received the tenth measurement? Not usually. So we relax our requirements and only compute the distributions for the current time step.\n\nThe first simplification is we describe our process (e.g., the motion model for a moving object) as a *Markov chain*. That is, we say that the current state is solely dependent on the previous state and a transition probability $P(\\mathbf x_k \\mid \\mathbf x_{k-1})$, which is just the probability of going from the last state to the current one. 
We write:\n\n$$\\mathbf x_k \\sim P(\\mathbf x_k \\mid \\mathbf x_{k-1})$$\n\nThe next simplification we make is to define the *measurement model* as depending only on the current state $\\mathbf x_k$, with the conditional probability of the measurement given the current state: $P(\\mathbf z_k \\mid \\mathbf x_k)$. We write:\n\n$$\\mathbf z_k \\sim P(\\mathbf z_k \\mid \\mathbf x_k)$$\n\nWe have a recurrence now, so we need an initial condition to terminate it. Therefore we say that the initial distribution is the probability of the state $\\mathbf x_0$:\n\n$$\\mathbf x_0 \\sim P(\\mathbf x_0)$$\n\n\nThese terms are plugged into Bayes equation. If we have the state $\\mathbf x_0$ and the first measurement we can estimate $P(\\mathbf x_1 | \\mathbf z_1)$. The motion model creates the prior $P(\\mathbf x_2 \\mid \\mathbf x_1)$. We feed this back into Bayes theorem to compute $P(\\mathbf x_2 | \\mathbf z_2)$. We continue this predictor-corrector algorithm, recursively computing the state and distribution at time $t$ based solely on the state and distribution at time $t-1$ and the measurement at time $t$.\n\nThe details of the mathematics for this computation vary based on the problem. The **Discrete Bayes** and **Univariate Kalman Filter** chapters gave two different formulations which you should have been able to reason through. The univariate Kalman filter assumes a scalar state with a linear process model, where both the process and the measurements are affected by zero-mean, uncorrelated Gaussian noise. \n\nThe multivariate Kalman filter makes the same assumptions but for states and measurements that are vectors, not scalars. Dr. Kalman was able to prove that if these assumptions hold true then the Kalman filter is *optimal* in a least squares sense. Colloquially this means there is no way to derive more information from the noise. 
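The predictor-corrector recursion described above can be sketched for the scalar linear-Gaussian case, where the prior and posterior remain Gaussian and Bayes theorem reduces to a product of two Gaussians - the update step below is the same Gaussian-multiplication formula derived in the **Univariate Kalman Filter** chapter. The motion, noise variances, and random seed are invented for illustration.

```python
import random

def predict(mean, var, movement, process_var):
    # prior P(x_k | x_{k-1}): the motion model shifts the mean and adds noise
    return mean + movement, var + process_var

def update(mean, var, z, measurement_var):
    # posterior P(x_k | z_k): Bayes theorem as a product of two Gaussians
    new_mean = (var * z + measurement_var * mean) / (var + measurement_var)
    new_var = (var * measurement_var) / (var + measurement_var)
    return new_mean, new_var

random.seed(13)
mean, var = 0., 500.               # vague initial belief P(x_0)
true_x, dx = 0., 1.                # object moves 1 unit per step
for _ in range(25):
    true_x += dx
    z = true_x + random.gauss(0., 1.)           # noisy measurement
    mean, var = predict(mean, var, dx, 0.1)     # predictor
    mean, var = update(mean, var, z, 1.)        # corrector
print('estimate {:.2f}, true {:.2f}, variance {:.3f}'.format(mean, true_x, var))
```

With the moderate process and measurement noise chosen here the posterior variance quickly settles to a small steady-state value, regardless of the vague initial belief.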
In the remainder of the book I will present filters that relax the constraints on linearity and Gaussian noise.\n\nBefore I go on, a few more words about statistical inversion. As Calvetti and Somersalo write in *Introduction to Bayesian Scientific Computing*, \"we adopt the Bayesian point of view: *randomness simply means lack of information*.\"[3] Our states parameterize physical phenomena that we could in principle measure or compute: velocity, air drag, and so on. We lack enough information to compute or measure their value, so we opt to consider them as random variables. Strictly speaking they are not random, so this is a subjective position. \n\nThey devote a full chapter to this topic. I can spare a paragraph. Bayesian filters are possible because we ascribe statistical properties to unknown parameters. In the case of the Kalman filter we have closed-form solutions to find an optimal estimate. Other filters, such as the discrete Bayes filter or the particle filter which we cover in a later chapter, model the probability in a more ad hoc, non-optimal manner. The power of our technique comes from treating lack of information as a random variable, describing that random variable as a probability distribution, and then using Bayes Theorem to solve the statistical inference problem.\n\n## Converting Kalman Filter to a g-h Filter\n\nI've stated that the Kalman filter is a form of the g-h filter. It just takes some algebra to prove it. It's more straightforward to do with the one dimensional case, so I will do that. 
Recall \n\n$$\n\\mu_{x}=\\frac{\\sigma_1^2 \\mu_2 + \\sigma_2^2 \\mu_1} {\\sigma_1^2 + \\sigma_2^2}\n$$\n\nwhich I will make more friendly for our eyes as:\n\n$$\n\\mu_{x}=\\frac{ya + xb} {a+b}\n$$\n\nWe can easily put this into the g-h form with the following algebra\n\n$$\n\\begin{aligned}\n\\mu_{x}&=(x-x) + \\frac{ya + xb} {a+b} \\\\\n\\mu_{x}&=x-\\frac{a+b}{a+b}x + \\frac{ya + xb} {a+b} \\\\ \n\\mu_{x}&=x +\\frac{-x(a+b) + xb+ya}{a+b} \\\\\n\\mu_{x}&=x+ \\frac{-xa+ya}{a+b} \\\\\n\\mu_{x}&=x+ \\frac{a}{a+b}(y-x)\\\\\n\\end{aligned}\n$$\n\nWe are almost done, but recall that the variance of the estimate is given by \n\n$$\\begin{aligned}\n\\sigma_{x}^2 &= \\frac{1}{\\frac{1}{\\sigma_1^2} + \\frac{1}{\\sigma_2^2}} \\\\\n&= \\frac{1}{\\frac{1}{a} + \\frac{1}{b}}\n\\end{aligned}$$\n\nWe can incorporate that term into our equation above by observing that\n\n$$ \n\\begin{aligned}\n\\frac{a}{a+b} &= \\frac{a/a}{(a+b)/a} = \\frac{1}{(a+b)/a} \\\\\n &= \\frac{1}{1 + \\frac{b}{a}} = \\frac{1}{\\frac{b}{b} + \\frac{b}{a}} \\\\\n &= \\frac{1}{b}\\frac{1}{\\frac{1}{b} + \\frac{1}{a}} \\\\\n &= \\frac{\\sigma^2_{x'}}{b}\n \\end{aligned}\n$$\n\nWe can tie all of this together with\n\n$$\n\\begin{aligned}\n\\mu_{x}&=x+ \\frac{a}{a+b}(y-x) \\\\\n&= x + \\frac{\\sigma^2_{x'}}{b}(y-x) \\\\\n&= x + g_n(y-x)\n\\end{aligned}\n$$\n\nwhere\n\n$$g_n = \\frac{\\sigma^2_{x}}{\\sigma^2_{y}}$$\n\nThe end result is multiplying the residual between the measurement and the prediction by a constant and adding it to our previous value, which is the $g$ equation for the g-h filter. $g$ is the variance of the new estimate divided by the variance of the measurement. Of course in this case $g$ is not a constant as it varies with each time step as the variance changes. We can also derive the formula for $h$ in the same way. It is not a particularly illuminating derivation and I will skip it. 
The end result is\n\n$$h_n = \\frac{COV (x,\\dot x)}{\\sigma^2_{y}}$$\n\nThe takeaway point is that $g$ and $h$ are specified fully by the variance and covariances of the measurement and predictions at time $n$. In other words, we are picking a point between the measurement and prediction by a scale factor determined by the quality of each of those two inputs.\n\n## References\n\n * [1] C. B. Moler and C. F. Van Loan, \"Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later,\" *SIAM Review* 45, 3-49, 2003.\n\n\n * [2] C. F. Van Loan, \"Computing Integrals Involving the Matrix Exponential,\" *IEEE Transactions on Automatic Control*, June 1978.\n \n \n * [3] Calvetti, D. and Somersalo, E., *Introduction to Bayesian Scientific Computing: Ten Lectures on Subjective Computing*, Springer, 2007.\n
\n```python\n%matplotlib inline\n```\n\n\nWord Embeddings: Encoding Lexical Semantics\n===========================================\n\nWord embeddings are dense vectors of real numbers, one per word in your\nvocabulary. In NLP, it is almost always the case that your features are\nwords! But how should you represent a word in a computer? You could\nstore its ascii character representation, but that only tells you what\nthe word *is*, it doesn't say much about what it *means* (you might be\nable to derive its part of speech from its affixes, or properties from\nits capitalization, but not much). Even more, in what sense could you\ncombine these representations? We often want dense outputs from our\nneural networks, where the inputs are $|V|$ dimensional, where\n$V$ is our vocabulary, but often the outputs are only a few\ndimensional (if we are only predicting a handful of labels, for\ninstance). How do we get from a massive dimensional space to a smaller\ndimensional space?\n\nHow about instead of ascii representations, we use a one-hot encoding?\nThat is, we represent the word $w$ by\n\n\\begin{align}\\overbrace{\\left[ 0, 0, \\dots, 1, \\dots, 0, 0 \\right]}^\\text{|V| elements}\\end{align}\n\nwhere the 1 is in a location unique to $w$. Any other word will\nhave a 1 in some other location, and a 0 everywhere else.\n\nThere is an enormous drawback to this representation, besides just how\nhuge it is. 
What we really want is some notion of\n*similarity* between words. Why? Let's see an example.\n\nSuppose we are building a language model. Suppose we have seen the\nsentences\n\n* The mathematician ran to the store.\n* The physicist ran to the store.\n* The mathematician solved the open problem.\n\nin our training data. Now suppose we get a new sentence never before\nseen in our training data:\n\n* The physicist solved the open problem.\n\nOur language model might do OK on this sentence, but wouldn't it be much\nbetter if we could use the following two facts:\n\n* We have seen mathematician and physicist in the same role in a sentence. Somehow they\n have a semantic relation.\n* We have seen mathematician in the same role in this new unseen sentence\n as we are now seeing physicist.\n\nand then infer that physicist is actually a good fit in the new unseen\nsentence? This is what we mean by a notion of similarity: we mean\n*semantic similarity*, not simply having similar orthographic\nrepresentations. It is a technique to combat the sparsity of linguistic\ndata, by connecting the dots between what we have seen and what we\nhaven't. This example of course relies on a fundamental linguistic\nassumption: that words appearing in similar contexts are related to each\nother semantically. This is called the `distributional\nhypothesis `__.\n\n\nGetting Dense Word Embeddings\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nHow can we solve this problem? That is, how could we actually encode\nsemantic similarity in words? Maybe we think up some semantic\nattributes. For example, we see that both mathematicians and physicists\ncan run, so maybe we give these words a high score for the \"is able to\nrun\" semantic attribute. 
Think of some other attributes, and imagine\nwhat you might score some common words on those attributes.\n\nIf each attribute is a dimension, then we might give each word a vector,\nlike this:\n\n\\begin{align}q_\\text{mathematician} = \\left[ \\overbrace{2.3}^\\text{can run},\n \\overbrace{9.4}^\\text{likes coffee}, \\overbrace{-5.5}^\\text{majored in Physics}, \\dots \\right]\\end{align}\n\n\\begin{align}q_\\text{physicist} = \\left[ \\overbrace{2.5}^\\text{can run},\n \\overbrace{9.1}^\\text{likes coffee}, \\overbrace{6.4}^\\text{majored in Physics}, \\dots \\right]\\end{align}\n\nThen we can get a measure of similarity between these words by doing:\n\n\\begin{align}\\text{Similarity}(\\text{physicist}, \\text{mathematician}) = q_\\text{physicist} \\cdot q_\\text{mathematician}\\end{align}\n\nAlthough it is more common to normalize by the lengths:\n\n\\begin{align}\\text{Similarity}(\\text{physicist}, \\text{mathematician}) = \\frac{q_\\text{physicist} \\cdot q_\\text{mathematician}}\n {\\| q_\\text{\\physicist} \\| \\| q_\\text{mathematician} \\|} = \\cos (\\phi)\\end{align}\n\nWhere $\\phi$ is the angle between the two vectors. That way,\nextremely similar words (words whose embeddings point in the same\ndirection) will have similarity 1. Extremely dissimilar words should\nhave similarity -1.\n\n\nYou can think of the sparse one-hot vectors from the beginning of this\nsection as a special case of these new vectors we have defined, where\neach word basically has similarity 0, and we gave each word some unique\nsemantic attribute. These new vectors are *dense*, which is to say their\nentries are (typically) non-zero.\n\nBut these new vectors are a big pain: you could think of thousands of\ndifferent semantic attributes that might be relevant to determining\nsimilarity, and how on earth would you set the values of the different\nattributes? 
Central to the idea of deep learning is that the neural\nnetwork learns representations of the features, rather than requiring\nthe programmer to design them herself. So why not just let the word\nembeddings be parameters in our model, and then be updated during\ntraining? This is exactly what we will do. We will have some *latent\nsemantic attributes* that the network can, in principle, learn. Note\nthat the word embeddings will probably not be interpretable. That is,\nalthough with our hand-crafted vectors above we can see that\nmathematicians and physicists are similar in that they both like coffee,\nif we allow a neural network to learn the embeddings and see that both\nmathematicians and physicists have a large value in the second\ndimension, it is not clear what that means. They are similar in some\nlatent semantic dimension, but this probably has no interpretation to\nus.\n\n\nIn summary, **word embeddings are a representation of the *semantics* of\na word, efficiently encoding semantic information that might be relevant\nto the task at hand**. You can embed other things too: part of speech\ntags, parse trees, anything! The idea of feature embeddings is central\nto the field.\n\n\nWord Embeddings in Pytorch\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nBefore we get to a worked example and an exercise, a few quick notes\nabout how to use embeddings in Pytorch and in deep learning programming\nin general. Similar to how we defined a unique index for each word when\nmaking one-hot vectors, we also need to define an index for each word\nwhen using embeddings. These will be keys into a lookup table. That is,\nembeddings are stored as a $|V| \\times D$ matrix, where $D$\nis the dimensionality of the embeddings, such that the word assigned\nindex $i$ has its embedding stored in the $i$'th row of the\nmatrix. 
In all of my code, the mapping from words to indices is a\ndictionary named word\_to\_ix.\n\nThe module that allows you to use embeddings is torch.nn.Embedding,\nwhich takes two arguments: the vocabulary size, and the dimensionality\nof the embeddings.\n\nTo index into this table, you must use torch.LongTensor (since the\nindices are integers, not floats).\n\n\n\n\n\n```python\n# Author: Robert Guthrie\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\ntorch.manual_seed(1)\n```\n\n\n```python\nword_to_ix = {\"hello\": 0, \"world\": 1}\nembeds = nn.Embedding(2, 5) # 2 words in vocab, 5 dimensional embeddings\nlookup_tensor = torch.tensor([word_to_ix[\"hello\"]], dtype=torch.long)\nhello_embed = embeds(lookup_tensor)\nprint(\"hello embedding: \", hello_embed)\n\nworld_tensor = torch.tensor([word_to_ix[\"world\"]], dtype=torch.long)\nworld_embed = embeds(world_tensor)\nprint(\"world embedding: \", world_embed)\n```\n\n hello embedding: tensor([[ 3.5870, -1.8313, 1.5987, -1.2770, 0.3255]], grad_fn=<EmbeddingBackward>)\n world embedding: tensor([[-0.4791, 1.3790, 2.5286, 0.4107, -0.9880]], grad_fn=<EmbeddingBackward>)\n\n\nAn Example: N-Gram Language Modeling\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nRecall that in an n-gram language model, given a sequence of words\n$w$, we want to compute\n\n\\begin{align}P(w_i | w_{i-1}, w_{i-2}, \\dots, w_{i-n+1} )\\end{align}\n\nWhere $w_i$ is the $i$th word of the sequence.\n\nIn this example, we will compute the loss function on some training\nexamples and update the parameters with backpropagation.\n\n\n\n\n\n```python\nCONTEXT_SIZE = 2\nEMBEDDING_DIM = 10\n# We will use Shakespeare Sonnet 2\ntest_sentence = \"\"\"When forty winters shall besiege thy brow,\nAnd dig deep trenches in thy beauty's field,\nThy youth's proud livery so gazed on now,\nWill be a totter'd weed of small worth held:\nThen being asked, where all thy beauty lies,\nWhere all the treasure of thy lusty days;\nTo say, within thine 
own deep sunken eyes,\nWere an all-eating shame, and thriftless praise.\nHow much more praise deserv'd thy beauty's use,\nIf thou couldst answer 'This fair child of mine\nShall sum my count, and make my old excuse,'\nProving his beauty by succession thine!\nThis were to be new made when thou art old,\nAnd see thy blood warm when thou feel'st it cold.\"\"\".split()\n# we should tokenize the input, but we will ignore that for now\n# build a list of tuples. Each tuple is ([ word_i-2, word_i-1 ], target word)\ntrigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])\n for i in range(len(test_sentence) - 2)]\n# print the first 3, just so you can see what they look like\nprint(\"trigrams: \", trigrams[:3])\n\n```\n\n trigrams:  [(['When', 'forty'], 'winters'), (['forty', 'winters'], 'shall'), (['winters', 'shall'], 'besiege')]\n\n\n\n```python\n\nvocab = set(test_sentence)\nword_to_ix = {word: i for i, word in enumerate(vocab)}\nprint(\"word_to_ix for When: \", word_to_ix['When'])\n```\n\n\n\n```python\n\n\nclass NGramLanguageModeler(nn.Module):\n\n def __init__(self, vocab_size, embedding_dim, context_size):\n super(NGramLanguageModeler, self).__init__()\n self.embeddings = nn.Embedding(vocab_size, embedding_dim)\n self.linear1 = nn.Linear(context_size * embedding_dim, 128)\n # linear2 maps the hidden layer to one score per word in the\n # vocabulary; log_softmax below turns these into log probabilities\n self.linear2 = nn.Linear(128, vocab_size)\n\n def forward(self, inputs):\n embeds = self.embeddings(inputs).view((1, -1))\n out = F.relu(self.linear1(embeds))\n out = self.linear2(out)\n log_probs = 
F.log_softmax(out, dim=1)\n return log_probs\n\n\nlosses = []\nloss_function = nn.NLLLoss()\nmodel = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)\noptimizer = optim.SGD(model.parameters(), lr=0.001)\n\nfor epoch in range(10):\n total_loss = 0\n for context, target in trigrams:\n\n # Step 1. Prepare the inputs to be passed to the model (i.e, turn the words\n # into integer indices and wrap them in tensors)\n context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)\n\n # Step 2. Recall that torch *accumulates* gradients. Before passing in a\n # new instance, you need to zero out the gradients from the old\n # instance\n model.zero_grad()\n\n # Step 3. Run the forward pass, getting log probabilities over next\n # words\n log_probs = model(context_idxs)\n\n # Step 4. Compute your loss function. (Again, Torch wants the target\n # word wrapped in a tensor)\n loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))\n\n # Step 5. 
Do the backward pass and update the parameters\n loss.backward()\n optimizer.step()\n\n # Get the Python number from a 1-element Tensor by calling tensor.item()\n total_loss += loss.item()\n losses.append(total_loss)\nprint(losses) # The loss decreased every iteration over the training data!\n\n```\n\n\n\n```python\ncontext = ['When', 'forty']\ncontext_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)\nlog_probs = model(context_idxs)\nmax_ix = torch.argmax(log_probs)\n# ix_to_word = {i: word for word, i in word_to_ix.items()}\n# print(\"ix_to_word: \", ix_to_word)\n# next_word = ix_to_word[max_ix.item()]\nlist_vocab = list(vocab)\nnext_word = list_vocab[max_ix]\nprint(\"next most probable word is: \", next_word)\n#print(\"log probs: \", log_probs)\nword_prob = []\nlog_probs_np = log_probs.detach().numpy().flatten()\n# print(\"log probs numpy: \", log_probs_np)\nfor i in range(len(vocab)):\n word_prob.append([list_vocab[i], log_probs_np[i]])\n \nsorted_probs = sorted(word_prob, key = lambda tup: tup[1], reverse=True)\nprint(\"sorted probs: \", sorted_probs)\n```\n\n next most probable word is: thy\n sorted probs: [['thy', -4.186922], ['art', -4.1922836], ['Thy', -4.248099], ['of', -4.2611537], ['To', -4.2630076], [\"feel'st\", -4.277316], ['shall', -4.311347], ['all-eating', -4.315796], ['trenches', -4.335431], ['winters', -4.3398404], ['weed', -4.344031], ['to', -4.3594456], ['an', -4.3658733], ['Then', -4.3755765], ['field,', -4.3875885], ['This', -4.4035196], ['cold.', -4.4039836], ['asked,', -4.413395], ['be', -4.4173007], ['when', -4.4187098], ['And', -4.420888], ['sunken', -4.435668], ['use,', -4.454238], ['eyes,', -4.4582515], ['new', -4.470178], ['livery', -4.4881845], ['fair', -4.48823], ['answer', -4.4896107], 
['couldst', -4.494951], ['deep', -4.4967484], ['worth', -4.5047307], ['make', -4.5149193], ['thine!', -4.523828], ['and', -4.5316954], ['old', -4.541312], ['thou', -4.542433], [\"deserv'd\", -4.5449767], ['my', -4.5454288], ['Will', -4.546628], ['small', -4.558781], ['were', -4.560234], ['proud', -4.565142], [\"youth's\", -4.5692616], ['thine', -4.5696316], ['lusty', -4.575783], [\"excuse,'\", -4.5793695], ['sum', -4.587189], ['held:', -4.5924115], ['blood', -4.593034], ['child', -4.6025386], ['much', -4.6080484], ['mine', -4.6193256], ['on', -4.6253176], ['so', -4.6262584], ['gazed', -4.627983], ['a', -4.6304474], ['his', -4.634258], ['count,', -4.6342707], ['old,', -4.638082], ['days;', -4.6426125], ['thriftless', -4.6506767], ['Were', -4.6526656], ['in', -4.654818], [\"beauty's\", -4.656125], ['own', -4.6561947], ['all', -4.6609883], ['succession', -4.666073], ['more', -4.679204], [\"totter'd\", -4.6818657], ['within', -4.689345], [\"'This\", -4.7001176], ['besiege', -4.702865], ['praise', -4.7082815], ['the', -4.7130375], ['Proving', -4.7230167], ['shame,', -4.727713], ['brow,', -4.739775], ['where', -4.753501], ['praise.', -4.7548923], ['see', -4.755212], ['Where', -4.7792172], ['Shall', -4.7813163], ['forty', -4.7825603], ['How', -4.806548], ['treasure', -4.8108234], ['dig', -4.8125415], ['by', -4.825708], ['beauty', -4.8277187], ['it', -4.830165], ['being', -4.8485456], ['say,', -4.8581595], ['lies,', -4.8927193], ['When', -4.9533033], ['warm', -4.9603124], ['If', -4.9679556], ['now,', -4.976203], ['made', -4.985473]]\n\n\nExercise: Computing Word Embeddings: Continuous Bag-of-Words\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe Continuous Bag-of-Words model (CBOW) is frequently used in NLP deep\nlearning. It is a model that tries to predict words given the context of\na few words before and a few words after the target word. This is\ndistinct from language modeling, since CBOW is not sequential and does\nnot have to be probabilistic. 
Typically, CBOW is used to quickly train\nword embeddings, and these embeddings are used to initialize the\nembeddings of some more complicated model. Usually, this is referred to\nas *pretraining embeddings*. It almost always helps performance a couple\nof percent.\n\nThe CBOW model is as follows. Given a target word $w_i$ and an\n$N$ context window on each side, $w_{i-1}, \\dots, w_{i-N}$\nand $w_{i+1}, \\dots, w_{i+N}$, referring to all context words\ncollectively as $C$, CBOW tries to minimize\n\n\\begin{align}-\\log p(w_i | C) = -\\log \\text{Softmax}(A(\\sum_{w \\in C} q_w) + b)\\end{align}\n\nwhere $q_w$ is the embedding for word $w$.\n\nImplement this model in Pytorch by filling in the class below. Some\ntips:\n\n* Think about which parameters you need to define.\n* Make sure you know what shape each operation expects. Use .view() if you need to\n reshape.\n\n\n\n\n\n```python\nCONTEXT_SIZE = 2 # 2 words to the left, 2 to the right\nraw_text = \"\"\"We are about to study the idea of a computational process.\nComputational processes are abstract beings that inhabit computers.\nAs they evolve, processes manipulate other abstract things called data.\nThe evolution of a process is directed by a pattern of rules\ncalled a program. People create programs to direct processes. 
In effect,\nwe conjure the spirits of the computer with our spells.\"\"\".split()\n\n# By deriving a set from `raw_text`, we deduplicate the array\nvocab = set(raw_text)\nvocab_size = len(vocab)\n\nword_to_ix = {word: i for i, word in enumerate(vocab)}\ndata = []\nfor i in range(2, len(raw_text) - 2):\n context = [raw_text[i - 2], raw_text[i - 1],\n raw_text[i + 1], raw_text[i + 2]]\n target = raw_text[i]\n data.append((context, target))\nprint(data[:5])\n\n\n# here are some functions to help you make\n# the data ready for use by your module\ndef make_context_vector(context, word_to_ix):\n idxs = [word_to_ix[w] for w in context]\n return torch.tensor(idxs, dtype=torch.long)\n\n\nmake_context_vector(data[0][0], word_to_ix) # example\n\n\nclass CBOW(nn.Module):\n def __init__(self, vocab_size, embedding_dim, context_size):\n super(CBOW, self).__init__()\n self.embeddings = nn.Embedding(vocab_size, embedding_dim)\n self.linear1 = nn.Linear(context_size * embedding_dim, 128)\n # returns probability of a new word (shows the probability for each \n # word in the vocabulary) after the two context words are given \n self.linear2 = nn.Linear(128, vocab_size)\n\n def forward(self, inputs):\n embeds = self.embeddings(inputs).view((1, -1))\n out = F.relu(self.linear1(embeds))\n out = self.linear2(out)\n log_probs = F.log_softmax(out, dim=1)\n return log_probs\n\n\n# create your model and train. \n\nloss_function = nn.NLLLoss()\nmodel = CBOW(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE * 2)\noptimizer = optim.SGD(model.parameters(), lr=0.001)\n\nfor epoch in range(10):\n total_loss = 0\n for context, target in data:\n # Step 1. Prepare the inputs to be passed to the model (i.e, turn the words\n # into integer indices and wrap them in tensors)\n context_idxs = make_context_vector(context, word_to_ix)\n\n # Step 2. Recall that torch *accumulates* gradients. 
Before passing in a\n # new instance, you need to zero out the gradients from the old\n # instance\n model.zero_grad()\n\n # Step 3. Run the forward pass, getting log probabilities over next\n # words\n log_probs = model(context_idxs)\n\n # Step 4. Compute your loss function. (Again, Torch wants the target\n # word wrapped in a tensor)\n loss = loss_function(log_probs, torch.tensor([word_to_ix[target]],\n dtype=torch.long))\n\n # Step 5. Do the backward pass and update the parameters\n loss.backward()\n optimizer.step()\n\n # Get the Python number from a 1-element Tensor by calling tensor.item()\n total_loss += loss.item()\n losses.append(total_loss)\nprint(losses) # The loss decreased every iteration over the training data!\n\n```\n\n\n\n```python\n\ncontext = ['We', 'are', 'to', 
'study']\ncontext_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)\nlog_probs = model(context_idxs)\nmax_ix = torch.argmax(log_probs)\n# ix_to_word = {i: word for word, i in word_to_ix.items()}\n# print(\"ix_to_word: \", ix_to_word)\n# next_word = ix_to_word[max_ix.item()]\nlist_vocab = list(vocab)\nnext_word = list_vocab[max_ix]\nprint(\"next most probable word is: \", next_word)\n#print(\"log probs: \", log_probs)\nword_prob = []\nlog_probs_np = log_probs.detach().numpy().flatten()\n# print(\"log probs numpy: \", log_probs_np)\nfor i in range(len(vocab)):\n word_prob.append([list_vocab[i], log_probs_np[i]])\n \nsorted_probs = sorted(word_prob, key = lambda tup: tup[1], reverse=True)\nprint(\"sorted probs: \", sorted_probs)\n```\n\n next most probable word is: spirits\n sorted probs: [['spirits', -3.571494], ['Computational', -3.6043441], ['our', -3.6446052], ['process', -3.6532173], ['called', -3.6619964], ['programs', -3.6627073], ['about', -3.6661139], ['of', -3.667089], ['spells.', -3.6754756], ['processes', -3.6888878], ['rules', -3.6935806], ['evolution', -3.6960344], ['other', -3.6961038], ['that', -3.7074409], ['conjure', -3.7213202], ['In', -3.7506506], ['is', -3.7588222], ['We', -3.7936354], ['data.', -3.799165], ['As', -3.8064919], ['direct', -3.8306112], ['a', -3.86346], ['process.', -3.8688204], ['idea', -3.8740613], ['beings', -3.8935022], ['by', -3.9167356], ['pattern', -3.9174557], ['to', -3.968558], ['with', -3.9687386], ['manipulate', -3.978693], ['study', -3.9860647], ['things', -3.9939542], ['program.', -4.0097456], ['the', -4.0122476], ['effect,', -4.073447], ['computers.', -4.0799565], ['processes.', -4.0821033], ['abstract', -4.0989685], ['directed', -4.116846], ['inhabit', -4.155322], ['evolve,', -4.1835294], ['they', -4.195596], ['computer', -4.21617], ['People', -4.222899], ['The', -4.2314906], ['create', -4.238188], ['computational', -4.278097], ['we', -4.2880874], ['are', -4.3646474]]\n\n
```python\n%pylab inline\n```\n\n Populating the interactive namespace from numpy and matplotlib\n\n\n c:\\users\\isomorphism\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\IPython\\core\\magics\\pylab.py:160: UserWarning: pylab import has clobbered these variables: ['f']\n `%matplotlib` prevents importing * from pylab and numpy\n \"\\n`%matplotlib` prevents importing * from pylab and numpy\"\n\n\n# Probability Theory\n\n## Week 1\n\n**Topics:** Random sampling, permutations, combinations.\n\n**Problems:** 3.1, 3.3, 3.4, 3.5, 3.6, additional problem set 1.\n\n### Random sampling\n\nTaking a random sample can be done in different ways:\n\n||With replacement|Without replacement|\n|---|---|---|\n|**With order**|$n^k$|$\\dfrac{n!}{(n-k)!}, \\quad k \\leq n$|\n|**Without order**|$\\begin{pmatrix}n+k-1\\\\k\\end{pmatrix}$|$\\begin{pmatrix}n\\\\k\\end{pmatrix}$|\n\nThis leads to four different cases which will be addressed separately.\n\n### Permutations\n\nThe general formula for permutations is:\n\n$$ _nP_k = \\dfrac{n!}{(n-k)!}$$\n\n#### With replacement\n\n#### Without replacement\n\n### Combinations\n\nThe general formula for combinations, which can also be written as a binomial coefficient, is:\n\n$$ _nC_k=\\dfrac{n!}{k!(n-k)!} = \\begin{pmatrix}n\\\\k\\end{pmatrix}$$\n\n#### With replacement\n\n#### Without replacement\n\n## Week 2\n\n**Topics**: Probability definitions, probability rules, grid chart.\n\n**Problems:** 3.7, 3.8, 3.10, additional problem set 2.\n\n### Probability definitions\n\n#### Probability experiment\n\nThere are different examples of probability experiments:\n\n1. One throw of a die\n2. Two throws of a die\n3. One throw of a coin\n4. Pulling two cards from a deck of cards\n\n#### Sample space\n\nThe sample space $\\Omega$ is the set of all possible outcomes of a probability experiment. 
For the above given examples this gives:\n\n1. $\\Omega=\\{1,2,3,4,5,6\\}$\n2. $\\Omega=\\{(a,b):a,b \\in \\{1,2,3,4,5,6\\}\\}$\n3. $\\Omega=\\{H,T\\}$\n4. $\\Omega=\\{(a,b):a,b \\in \\{1,\\ldots,52\\}\\}$\n\n#### Event\n\nAn event is a subset of $\\Omega$.\n\n1. Throwing an even number, $A=\\{2,4,6\\}$\n2. Throwing doubles, $A=\\{(1,1),(2,2),\\ldots,(6,6)\\}$\n3. Throwing heads, $A=\\{H\\}$\n4. Picking a heart A ($HA$) and club 2 ($C2$), $A=\\{(HA,C2),(C2,HA)\\}$\n\n#### Elementary event\n\nAn elementary event $\\omega$ (also called atomic event) is exactly one element from $\\Omega$.\n\nFor example (1) all the elementary events are: $\\{1\\}$, $\\{2\\}$, $\\{3\\}$, $\\{4\\}$, $\\{5\\}$, $\\{6\\}$.\n\n#### Probability (axiomatic definition)\n\nFor the probability function $P$ the following three axioms hold:\n\n1. $P(\\Omega)=1$\n2. $0\\leq P(A) \\leq 1$\n3. $P(A \\cup B) = P(A)+P(B)-P(A \\cap B)$, where $P(A \\cap B) = 0$ if $A \\cap B = \\emptyset$.\n\n#### Probability space\n\nA probability space consists of the following:\n\n1. A sample space $\\Omega$, which is the set of all possible outcomes for the experiment.\n2. A set of events, where each event is a set containing zero or more outcomes.\n3. The assignment of probabilities to the events; that is, a function $P$ from events to probabilities.\n\nWith this we can determine the probability for an event $A$:\n\n$$P(A) = \\dfrac{n_a}{n_\\Omega}$$\n\nWhere $n_a$ is the number of favorable outcomes and $n_\\Omega$ the number of all possible outcomes (this assumes that all outcomes are equally likely).\n\n### Probability rules\n\nNow we establish a few rules for probabilities. Later on when we cover conditional probabilities we will add a few more rules to the list.\n\n#### Sum rule\n\n$P(A \\cup B) = P(A)+P(B)-P(A \\cap B)$, where $P(A \\cap B) = 0$ when $A \\cap B = \\emptyset$.\n\n#### Complement rule\n\nWe can define this as $P(A) + P(\\bar{A}) = P(\\Omega) = 1$. We use this to infer that $P(A) = 1-P(\\bar{A})$. 
This is particularly useful because most of the time it is much easier to figure out what $P(\\bar{A})$ is, and then use the complement rule to determine $P(A)$.\n\n### Grid chart\n\nA grid chart is useful to visualize conditions.\n\nFor example, suppose we throw two dice and multiply the outcomes. What is the probability that the result is greater than $20$?\n\n*(grid chart of the $6 \\times 6$ products omitted)*\n\nUsing the grid chart it is easy to see that:\n\n$$P(x>20) = \\frac{6}{6^2} = \\frac{1}{6}$$\n\n## Week 3\n\n**Topics:** Conditional probability, independence, product rule, sampling with replacement, sampling without replacement, probability tree.\n\n**Problems:** 3.2, 3.12, 3.13, 3.14, 3.15, 3.16, additional problem set 3.\n\n### Conditional probability\n\nA conditional probability uses the information that an event $B$ has occurred to restrict the outcome space. It is defined as:\n\n$$P(A|B) = \\dfrac{P(A \\cap B)}{P(B)} ,\\quad P(B)>0$$\n\n### Independence\n\nIf $P(A|B)=P(A)$ then the events $A$ and $B$ are independent.\n\n### Product rule\n\nThe general product rule is:\n\n$$ P(A \\cap B) = P(A|B) \\cdot P(B) = P(B|A) \\cdot P(A) $$\n\nIn the case that $A$ and $B$ are independent, we use:\n\n$$ P(A \\cap B) = P(A) \\cdot P(B)$$\n\n### Probability rules\n\nWith the addition of the above rules, we now have the following rules for probabilities:\n\n1. $P(\\Omega)=1$, sample space has a probability of 1.\n2. $P(\\emptyset)=0$, an empty set has a probability of 0.\n3. $P(A)=1-P(\\bar{A})$, complement rule.\n4. $P(A\\cup B)=P(A)+P(B)-P(A\\cap B)$, sum rule.\n5. $P(A\\cup B)=P(A)+P(B)$, if $A\\cap B=\\emptyset$.\n6. $P(A\\cap B)=P(A|B)\\cdot P(B)$, product rule.\n7. 
$P(A\\cap B)=P(A)\\cdot P(B)$, if and only if $A$ and $B$ are independent.\n\n### Sampling with replacement\n\n### Sampling without replacement\n\n### Probability tree\n\n## Week 4\n\n**Topics:** Bayes' rule.\n\n**Problems:** 3.9, 3.11, 3.17 through 3.31.\n\n### Bayes' rule\n\n#### Inference\n\nWhen we have a cause and effect model where $A_i\\xrightarrow[P(B|A_i)]{\\text{cause-effect}}B$ is known, we can use Bayes' rule to infer $B\\xleftarrow[P(A_i|B)]{\\text{inference}}A_i$.\n\n**General case**\n\nThe general use case for Bayes' rule is:\n\n$$ P(A|B) = \\dfrac{P(A) \\cdot P(B|A)}{P(A)\\cdot P(B|A) + P(\\bar{A})\\cdot P(B|\\bar{A})} $$\n\nWe can use this to infer $P(A|B)$ when only $P(B|A)$ (together with $P(A)$ and $P(B|\\bar{A})$) is known.\n\n**Definition**\n\nBayes' rule is defined as:\n\n$$ P(A_i|B) = \\dfrac{P(A_i)\\cdot P(B|A_i)}{\\sum\\left[P(A_j)\\cdot P(B|A_j)\\right]}$$\n\nsuch that the following holds:\n\n1. $A_i \\cap A_j = \\emptyset$ for $i \\neq j$, the events must be pairwise disjoint.\n2. $\\cup A_i = \\Omega$, the union of the $A_i$ is $\\Omega$.\n\n## Week 5\n\n**Topics:** Discrete stochastic variable, discrete probability function, cumulative distribution function, expected value, variance for a discrete stochastic variable.\n\n**Problems:** 4.1, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8\n\n### Stochastic variables\n\n**Definition**\n\nGiven a sample space $\\Omega$, a stochastic variable (or probability variable) $\\underline{k}$ is a function $\\underline{k}:\\Omega\\rightarrow\\mathbb{R}$ with range $\\underline{R}_k \\subset \\mathbb{R}$. Its probability function $f$ satisfies the following two rules:\n\n1. $f(k) \\geq 0$ for all $k$\n2. $\\sum f(k) = 1, \\quad \\left(\\sum\\limits_{k\\in\\underline{k}} P(\\underline{k}=k)=1\\right)$\n\n**Example**\n\nGiven a probability experiment: one throw of a die.\n\nWe define $\\underline{k}$ as the squared outcome of the throw. 
The mapping of the function $\\underline{k}$ is defined as:\n\n$$\\Omega=\\underbrace{\\{1,2,3,4,5,6\\}}_{\\text{domain}} \\xrightarrow{\\underline{k}} \\underbrace{\\{1,4,9,16,25,36\\}}_{\\text{range}} = \\underline{R}_k \\subset \\mathbb{R}$$\n\n### Discrete probability distribution\n\nThe probability distribution is defined as $f(k)=P(\\underline{k}=k)$ for $k\\in\\underline{R}_k$. For the example above this yields:\n\n$$ f(1)=P(\\underline{k}=1)=\\dfrac{1}{6} \\\\ f(4)=P(\\underline{k}=4)=\\dfrac{1}{6} \\\\ \\vdots \\\\ f(36)=P(\\underline{k}=36)=\\dfrac{1}{6}$$\n\n**Code**\n\n\n```python\n# Imports added so the cells below run standalone.\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef f(k):\n    r = [1, 4, 9, 16, 25, 36]\n    if k in r:\n        return 1 / len(r)\n    return 0\n```\n\nPlotting this function gives a pin (bar) graph with a spike at each value that carries probability. The function is zero everywhere else, because the distribution is discrete.\n\n\n```python\npoints = np.arange(0, 37, 0.01)\nplt.plot(points, [f(x) for x in points], c='b', lw=2);\n```\n\n### Cumulative distribution function\n\nThe cumulative distribution function $F(k)=P(\\underline{k}\\leq k)$ for the example above yields:\n\n$$ F(1)=P(\\underline{k}\\leq1)=\\dfrac{1}{6} \\\\ F(2)=P(\\underline{k}\\leq2)=\\dfrac{1}{6} \\\\ F(4)=P(\\underline{k}\\leq4)=\\dfrac{2}{6} \\\\ \\vdots \\\\ F(36)=P(\\underline{k}\\leq36)=\\dfrac{6}{6}=1$$\n\n**Code**\n\n\n```python\ndef F(k):\n    x = 0\n    r = [1, 4, 9, 16, 25, 36]\n    for i in r:\n        if i <= k:\n            x = i\n    try:\n        return (r.index(x) + 1) / len(r)\n    except ValueError:\n        return 0\n```\n\nPlotting this function gives a staircase graph.\n\n\n```python\npoints = np.arange(-10, 50, 0.01)\nplt.plot(points, [F(x) for x in points], c='b', lw=2);\n```\n\n### Expected value of a discrete stochastic variable\n\nThe expected value $E(\\underline{k})$ is the weighted arithmetic mean from descriptive statistics, adapted to a stochastic variable.\n\n**Definition**\n\nThe formula for the expected value is:\n\n$$ E(\\underline{k})=\\sum\\limits_{k\\in\\underline{k}} k\\cdot 
P(\\underline{k}=k) $$\n\nNotice that we do not divide by the sum of the weights: the probabilities already sum to 1, so the division is omitted.\n\n**Example**\n\nThe expected value for the example with the die is:\n\n$$E(\\underline{k})=1 \\cdot \\dfrac{1}{6} + 4 \\cdot \\dfrac{1}{6} + 9 \\cdot \\dfrac{1}{6} + 16 \\cdot \\dfrac{1}{6} + 25 \\cdot \\dfrac{1}{6} + 36 \\cdot \\dfrac{1}{6} = \\dfrac{91}{6} = 15 \\dfrac{1}{6}$$\n\n**Code**\n\n\n```python\ndef E(r):\n    # Uniform distribution: each value in r has probability 1/len(r).\n    return sum([x * 1/len(r) for x in r])\n```\n\nWe can use this to easily determine the expected value for any range of $\\underline{k}$ with a uniform distribution.\n\n\n```python\nexpected = E([1,4,9,16,25,36])\nexpected\n```\n\n\n\n\n    15.166666666666668\n\n\n\nPlotting $f(k)$, $F(k)$, and $E(\\underline{k})$ yields:\n\n\n```python\npoints = np.arange(0, 50, 0.01)\nplt.plot(points, [f(x) for x in points], c='lightblue', lw=2);\nplt.plot(points, [F(x) for x in points], c='b', lw=2);\nplt.scatter(expected, F(expected), c='r', lw=4);\n```\n\n### Variance of a discrete stochastic variable\n\nThe variance $\\text{Var}(\\underline{k})$ is the variance from descriptive statistics, adapted to a discrete stochastic variable.\n\n\n**Definition**\n\nThe formula for the variance of a discrete stochastic variable is defined as:\n\n$$ \\text{Var}(\\underline{k}) = \\sum\\limits_{k\\in\\underline{k}}\\left(k-E(\\underline{k})\\right)^2\\cdot P(\\underline{k}=k) $$\n\nAgain we do not divide by the sum of the probabilities, since that sum equals 1.\n\n**Example**\n\nFor the die example this gives:\n\n$$\\text{Var}(\\underline{k})= \\left[\\left(1-\\dfrac{91}{6}\\right)^2 + \\left(4-\\dfrac{91}{6}\\right)^2 + \\ldots + \\left(36-\\dfrac{91}{6}\\right)^2\\right]\\cdot\\dfrac{1}{6} \\approx 149.14$$\n\n**Standard deviation**\n\nWith the example given above, it is easy to determine the standard deviation, which is:\n\n$$\\sigma_{\\underline{k}} = \\sqrt{149.14} \\approx 12.21$$\n\n**Alternative method**\n\nAn alternative (preferred) method to calculate the 
variance of a discrete stochastic variable is:\n\n$$\\text{Var}(\\underline{k})= E(\\underline{k}^2)-\\left[E(\\underline{k})\\right]^2$$\n\nProof:\n\nWe start with the definition:\n\n$$\\begin{align} &\\sum\\limits_{k\\in\\underline{k}}\\left(k-E(\\underline{k})\\right)^2 \\cdot P(\\underline{k}=k) \\\\ = &\\sum\\limits_{k\\in\\underline{k}}\\left(k^2 - 2k\\cdot E(\\underline{k})+\\left[E(\\underline{k})\\right]^2 \\right) \\cdot P(\\underline{k}=k) \\\\ = & \\underbrace{\\sum\\limits_{k\\in\\underline{k}}k^2\\cdot P(\\underline{k}=k)}_{\\text{def.}\\ E(\\underline{k}^2)}-2\\sum\\limits_{k\\in\\underline{k}}k\\cdot E(\\underline{k})\\cdot P(\\underline{k}=k)+\\sum\\limits_{k\\in\\underline{k}}\\left[E(\\underline{k})\\right]^2\\cdot P(\\underline{k}=k) \\\\ = &E(\\underline{k}^2) - 2\\cdot E(\\underline{k}) \\cdot \\underbrace{\\sum\\limits_{k\\in\\underline{k}} k \\cdot P(\\underline{k}=k)}_{\\text{def.}\\ E(\\underline{k})} + \\left[E(\\underline{k})\\right]^2 \\cdot \\underbrace{\\sum\\limits_{k\\in\\underline{k}} P(\\underline{k}=k)}_{1} \\\\ = &E(\\underline{k}^2)-2\\cdot E(\\underline{k})\\cdot E(\\underline{k})+\\left[E(\\underline{k})\\right]^2 \\\\ = & E(\\underline{k}^2) - \\left[E(\\underline{k})\\right]^2 \\end{align}$$\n\nwhich proves the formula.\n\n**Code**\n\n\n```python\ndef var(r):\n    expected = E(r)\n    return sum([(x - expected)**2 * f(x) for x in r])\n```\n\n\n```python\nvar([1,4,9,16,25,36])\n```\n\n\n\n\n    149.13888888888886\n\n\n\n\n```python\ndef var2(r):\n    return E([x**2 for x in r]) - E(r)**2\n```\n\n\n```python\nvar2([1,4,9,16,25,36])\n```\n\n\n\n\n    149.13888888888886\n\n\n\n\n```python\nimport math\n\ndef stdev(r):\n    return math.sqrt(var2(r))\n```\n\n\n```python\nstdev([1,4,9,16,25,36])\n```\n\n\n\n\n    12.212243401148244\n\n\n\n## Week 6\n\n**Topics:** Rules for expected value and variance, summing stochastic variables, multi-dimensional stochastic variables.\n\n**Problems:** 4.9 through 4.13, 4.20, 4.21, additional problem set 4.\n\n### Applying linear 
transformations\n\nWhen we apply a linear transformation of the form $\\underline{m}=\\alpha \\underline{k} + A$ to our values, a few rules for the expected value and variance help us. Instead of recalculating the expected value and/or variance, we can use the observations below.\n\n#### Multiplying by a scalar $\\alpha$\n\nWhen we multiply by a scalar $\\alpha$, the expected value changes by the same factor:\n\n$$E(\\underline{m}) = E(\\alpha\\underline{k})=\\alpha E(\\underline{k})$$\n\nIf we apply a scalar $\\alpha$, the variance changes by a factor $\\alpha^2$:\n\n$$\\text{Var}(\\underline{m})=\\text{Var}(\\alpha\\underline{k})=\\alpha^2\\text{Var}(\\underline{k})$$\n\nEven if $\\alpha$ is negative, the standard deviation remains positive:\n\n$$\\sigma_{\\underline{m}}=\\sigma_{\\alpha\\underline{k}}=\\sqrt{\\alpha^2 \\text{Var}(\\underline{k})}=|\\alpha|\\,\\sigma_{\\underline{k}}$$\n\n#### Adding a constant $A$\n\nWhen we add a constant to the outcomes, the expected value shifts by that constant:\n\n$$E(\\underline{m}) = E(\\underline{k} + A) = E(\\underline{k})+A$$\n\nIf we shift all the values by a constant $A$, the variance is not affected, because the spread between the values remains the same:\n\n$$\\text{Var}(\\underline{m})=\\text{Var}(\\underline{k}+A)=\\text{Var}(\\underline{k})$$
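These linear-transformation rules are easy to verify numerically. Below is a small self-contained Python check; it deliberately defines its own uniform-distribution helpers `expected` and `variance` instead of reusing the notebook's `E` and `var`:

```python
# Check E(a*k + A) = a*E(k) + A and Var(a*k + A) = a**2 * Var(k)
# for a uniformly distributed discrete stochastic variable.

def expected(values):
    # Uniform distribution: every outcome has probability 1/len(values).
    return sum(values) / len(values)

def variance(values):
    mu = expected(values)
    return sum((v - mu) ** 2 for v in values) / len(values)

outcomes = [1, 4, 9, 16, 25, 36]             # squared die rolls
a, A = 3.0, 5.0                              # an arbitrary linear transformation
transformed = [a * v + A for v in outcomes]

assert abs(expected(transformed) - (a * expected(outcomes) + A)) < 1e-9
assert abs(variance(transformed) - a ** 2 * variance(outcomes)) < 1e-9
```

The rule for the standard deviation follows directly, since taking $\sqrt{\alpha^2\,\text{Var}(\underline{k})}$ yields $|\alpha|\,\sigma_{\underline{k}}$.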
\n\n### Rules for expected value and variance\n\n\n\n#### Sum rule for expected values\n\nThe expected value of the sum is the sum of the expected values:\n\n$$ E(\\underline{k}_{sum}) = E(\\underline{k}_1) + E(\\underline{k}_2) $$\n\n#### Sum rule for variance\n\nThe variance of the sum is the sum of the variances:\n\n$$ \\text{Var}(\\underline{k}_{sum}) = \\text{Var}(\\underline{k}_1) + \\text{Var}(\\underline{k}_2) $$ \n\n### Multidimensional stochastic variables\n\nA multidimensional stochastic variable is defined as:\n\n$$\\begin{align} f(k)&=P(\\underline{k}=k),\\ \\text{any}\\ m) \\\\&=\\sum\\limits_j P(\\underline{k}=k, \\underline{m}=m_j)\\sum\\limits_jp_{kj} \\end{align}$$\n\n## Week 7\n\n### Populatiecorrelatieco\u00ebffici\u00ebnt $\\rho$\n\nOm de populatiecorrelatiecoefficient van twee stochasten te berekenen volgen we de onderstaande stappen:\n\n1. Bereken $E(\\underline{x})$ en $E(\\underline{y})$.\n2. Bereken $E(\\underline{x}^2)$ en $E(\\underline{y}^2)$.\n3. Bereken $\\text{Var}(\\underline{x})$ and $\\text{Var}(\\underline{y})$.\n4. Bereken $E(\\underline{x}\\cdot\\underline{y})$.\n5. Bereken $\\text{Cov}(\\underline{x}, \\underline{y})$.\n6. 
Compute $\\rho(\\underline{x}, \\underline{y})$.\n\nHere $\\text{Cov}(\\underline{x},\\underline{y})$ is defined as: \n\n$$ \\text{Cov}(\\underline{x},\\underline{y}) = \\sum\\limits_{i=1}^r\\sum\\limits_{j=1}^k(x_i-\\mu_\\underline{x})(y_j-\\mu_\\underline{y})\\cdot P(\\underline{x}=x_i \\land \\underline{y} = y_j)$$\n\nAnd just as in Descriptive Statistics, $\\rho$ is defined as:\n\n$$ \\rho = \\frac{ \\text{Cov}(\\underline{x}, \\underline{y}) }{\\sigma_\\underline{x} \\cdot \\sigma_\\underline{y}} $$\n\n### Joint probability distribution\n\n### Mutual independence\n\nTwo stochastic variables are mutually independent if:\n\n$$ P(\\underline{x}=x_i \\land \\underline{y}=y_j)=P(\\underline{x}=x_i)\\cdot P(\\underline{y}=y_j)$$\n\nfor all value pairs $(x_i,y_j)$.
```python\n# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)\n\n# Toggle cell visibility\n\nfrom IPython.display import HTML\ntag = HTML('''\nToggle cell visibility here.''')\ndisplay(tag)\n\n# Hide the code completely\n\n# from IPython.display import HTML\n# tag = HTML('''''')\n# display(tag)\n```\n\n\n```python\n%matplotlib notebook\nimport numpy as np\nimport control as control\nimport matplotlib.pyplot as plt\nimport ipywidgets as widgets\nimport scipy.signal as signal\ncontinuous_update=False\n```\n\n## First-order system response\n\nThe transfer function of the selected first-order system is defined as\n\n\\begin{equation}\n \\frac{K_p}{\\tau_p s+1},\n\\end{equation}\n\nwhere $K_p$ and $\\tau_p$ are the system parameters.\n\nThe system response depends on the input signal. In this example the unit step function (Laplace transform $\\frac{1}{s}$), the unit impulse function (Laplace transform $1$), the unit ramp function (Laplace transform $\\frac{1}{s^2}$) and a sinusoidal signal (Laplace transform $\\frac{1}{s^2+1}$) are used as input signals.\n\nThe plot below shows the input signal and the corresponding output signal for the chosen values of the parameters $K_p$ and $\\tau_p$.\n\n### How to use this notebook?\n\nTry the different input functions (step, impulse, ramp and sinusoid). 
Move the sliders to change the values of $K_p$ and $\\tau_p$.\n\n\n```python\n# step, impulse, ramp, sine\nfunctionSelect = widgets.ToggleButtons(\n    options=[('gradino unitario', 0), ('impulso unitario', 1), ('rampa unitaria', 2), ('sinusoide', 3)],\n    description='Seleziona: ')\n\nfig = plt.figure(num='Risposta del sistema del primo ordine')\nfig.set_size_inches((9.8, 3))\nfig.set_tight_layout(True)\nf1 = fig.add_subplot(1, 1, 1)\n\nf1.grid(which='both', axis='both', color='lightgray')\n\nf1.set_xlabel('$t$ [s]')\nf1.set_ylabel('input, output')\n\nf1.axhline(0, color='black', linewidth=0.5)\nf1.axvline(0, color='black', linewidth=0.5)\n\ninputf, = f1.plot([], [])\nresponsef, = f1.plot([], [])\narrowf, = f1.plot([], [])\n\nnum_samples = 2041\n\ndef create_draw_functions(Kp, taup, index):\n    t = np.linspace(-0.1, 5, num_samples)\n\n    num = [Kp]\n    den = [taup, 1]\n    Wsys = control.tf(num, den)\n\n    global inputf, responsef, arrowf\n\n    if index == 0:\n        yin = np.zeros(num_samples)\n        yin[40:num_samples] = 1\n        tnew = np.linspace(0, 5, 2001)\n        tout, yout = control.step_response(Wsys, T=tnew)\n    elif index == 1:\n        yin = signal.unit_impulse(2001, 0)\n        tnew = np.linspace(0, 5, 2001)\n        t = tnew\n        tout, yout = control.impulse_response(Wsys, tnew, X0=0)\n    elif index == 2:\n        yin = np.zeros(num_samples)\n        yin[40:num_samples] = np.linspace(0, 5, 2001)\n        tnew = np.linspace(0, 5, 2001)\n        tout, yout, xx = control.forced_response(Wsys, tnew, yin[40:])\n    elif index == 3:\n        yin = np.sin(np.linspace(0, 30, 2001))\n        tnew = np.linspace(0, 30, 2001)\n        t = tnew\n        tout, yout, xx = control.forced_response(Wsys, tnew, yin)\n\n    f1.lines.remove(inputf)\n    f1.lines.remove(responsef)\n    f1.lines.remove(arrowf)\n\n    inputf, = f1.plot(t, yin, color='C0', label='input')\n    responsef, = f1.plot(tout, yout, color='C1', label='output')\n\n    if index == 1:\n        arrowf, = f1.plot([-0.1, 0, 0.1], [0.95, 1, 0.95], color='C0')\n    else:\n        arrowf, = f1.plot([], [])\n\n    f1.legend()\n\n    f1.relim()\n    f1.autoscale_view()\n\nKp_slider = widgets.FloatSlider(value=1, min=0, max=2, 
step=0.1, description='$K_p$',\n    continuous_update=True, layout=widgets.Layout(width='auto', flex='5 5 auto'), readout_format='.1f')\n\ntaup_slider = widgets.FloatSlider(value=1, min=0, max=2, step=0.1, description='$\\\\tau_p$',\n    continuous_update=True, layout=widgets.Layout(width='auto', flex='5 5 auto'), readout_format='.1f')\n\ninput_data = widgets.interactive_output(create_draw_functions, {'Kp': Kp_slider,\n                                                                'taup': taup_slider,\n                                                                'index': functionSelect})\n\ndef update_sliders(index):\n    Kpval = [1, 1, 1, 1]\n    Kp_slider.value = Kpval[index]\n    taupval = [1, 1, 1, 1]\n    taup_slider.value = taupval[index]\n\ninput_data2 = widgets.interactive_output(update_sliders, {'index': functionSelect})\n\ndisplay(functionSelect)\n\ndisplay(Kp_slider, taup_slider, input_data)\n```\n
# Scientific Computing with Julia\n\n * Scientific computing\n * Julia\n## How are things at IPT today?\n\n * Spreadsheets\n * Excel\n * God help us!\n\n\n# Is that really enough???\n\n * How do we detect errors in spreadsheets?\n * How do we reuse calculations that were already done?\n * How do we do something more complex?\n * How do we make **decent plots**?\n\n## The problem with Excel is a culture problem more than anything else!\n\n# What I mean by culture:\n\n * Mathematical modelling of problems is reduced to what was done 100 years ago.\n * Little reuse \n * Little sharing of work already done\n * Worship of what already works\n * Little discussion and ossification of knowledge\n \n## An example of what happened in the past\n\n * Wind tunnel ("tuninho") circa 1990 - Nilson's Master's thesis\n * Algor - potential flow\n * STAN5 (or 7, according to Nilson)\n * Reduced-scale model\n * Hot-wire measurements\n\n# My opinion\n\n * Understand what happens on a test bench or in field work\n * Give the computer its rightful place: a very dumb but very strong slave!\n * Much of what we do is software. **Software Engineering**\n * Know what is done here, inside and outside\n * Share this knowledge\n\n# Ideal\n\n * All test benches modelled\n * Hydraulic models **and**\n * CFD models\n * Deep knowledge of what actually happens on the benches.\n * A model of the plants in the field **before** measuring.\n * Each area should have a set of well-known tools \n * Being a mere software "pilot" will not take us far \n\n\n# Scientific Computing\n\n * Basic methods \n * Linear systems\n * Approximation\n * Differential equations\n * Nonlinear systems\n * More important than the basics is knowing how to use them.\n\n * Fast and efficient numerical methods\n * Very laborious\n * Very difficult\n * A job for professionals. \n\n# Why Julia\n\n1. Interactive environment\n * Much better than FORTRAN or C/C++/Pascal/Java\n * Similar to R/Python/Matlab/Mathematica/Scilab\n2. Performance\n * Achieves C/FORTRAN performance\n * But no need to vectorize\n * Practically everything is implemented in Julia!\n3. Extensive library\n * Visualization, linear algebra, differential equations, etc\n * Does not yet have everything Matlab has\n * Very easy to call Python (which today has an impressive ecosystem)\n4. 
The language is really good!\n\n \n\n# Software Engineering\n\n * Version control and software repositories (git and GitHub)\n * Documentation in the code (*literate programming*)\n * Unit tests\n * Think about the working environment:\n * Editor\n * Notebook interface - Jupyter\n * REPL (terminal)\n * *Much of what we do is exploratory programming*\n\n# Course structure\n\nFinal goal:\n$$\n\\nabla\\cdot\\pmb{u} = 0\\\\\n\\frac{\\partial \\pmb{u}}{\\partial t} + \\pmb{u}\\cdot\\nabla\\pmb{u} = \n-\\nabla p + \\frac{1}{Re}\\nabla^2\\pmb{u}\n$$\n\nWe need to know how to solve this kind of equation:\n$$\n\\nabla^2 \\phi = f\\\\\n\\frac{\\partial \\phi}{\\partial t} = \\alpha\\nabla^2 \\phi\\\\\n\\pmb{u}\\cdot\\nabla\\phi = \\alpha\\nabla^2\\phi\n$$\n\nWe will focus on the 1D problem and then try to advance in other directions\n\n# In parallel: use some software to solve a concrete problem:\n \n * [OpenFoam](https://www.openfoam.com/)\n * [SU2](https://su2code.github.io/)\n * [CalculiX](http://www.calculix.de/)\n * [Code Saturne](https://www.code-saturne.org/)\n * [Code Aster](https://www.code-aster.org/)\n * [Fenics](https://fenicsproject.org/)\n * [FreeFem++](https://freefem.org/)\n * [Dedalus Project](http://dedalus-project.org/)\n * [Nektar](https://www.nektar.info/)\n * [MEEP](https://meep.readthedocs.io/en/latest/)\n\n# Roadmap for $\\nabla^2 u = f$\n\n * $u^\\delta(x) = \\sum \\hat{u}_k \\phi_k(x) \\approx u(x)$ - Approximation and interpolation\n * $\\nabla^2 u^\\delta - f = \\varepsilon(x)$ - Error of the differential equation\n * Weighted residuals: $\\int_\\Omega w_i(x) \\nabla^2 u^\\delta \\:dx = \\int_\\Omega f w_i(x)\\:dx$\n * Weak form: $ -\\int_\\Omega \\nabla w_i(x) \\cdot \\nabla u^\\delta \\:dx + \\int_{\\partial\\Omega} w_i\\frac{\\partial u}{\\partial n} = \\int_\\Omega f w_i(x)\\:dx$\n * Galerkin: $w_i(x) = \\phi_i(x)$\n * Linear system: $\\left[A\\right]\\cdot\\left\\{\\hat{u}\\right\\} = \\left\\{f\\right\\}$, \n $$A_{i,k} = -\\int_\\Omega\\nabla\\phi_i\\cdot\\nabla\\phi_k\\:dx + \\int_{\\partial\\Omega}\\phi_i\\frac{\\partial u}{\\partial n}\\\\\n f_i = \\int_\\Omega f\\phi_i\\:dx\n $$\n\n# 1D problem\n\n 1. Interpolation and approximation: polynomials, sines and cosines\n 2. Derivatives: symbolic, numerical, automatic differentiation\n 3. Quadrature: symbolic, numerical\n 4. Linear systems: direct methods, iterative methods, *exploit the structure of the matrix*\n 5. Ordinary differential equations: initial value problems\n 6. Systems of nonlinear equations: Newton-Raphson and others\n\n\n# Introducing Julia\n\n * http://julialang.org\n * Mailing list https://discourse.julialang.org/\n * YouTube channel https://www.youtube.com/user/JuliaLanguage\n * Documentation: https://docs.julialang.org\n\n\n```julia\n# Calculator\n1+1\n```\n\n\n```julia\nfunction soma(a,b)\n    return a+b\nend\nsoma2(a,b) = a+2b\n\nx = 1\ny = 10\nsoma2(x,y)\n```\n\n# Why can Julia be as fast as C?\n\n\n```julia\ncode_native(soma2, (Int,Int))\n```\n\n\n```julia\ncode_native(soma2, (Float64,Float64))\n```\n\n\n```julia\nusing ForwardDiff\n\nfunction f1(x)\n    res = one(x)\n    for i = 1:5\n        res = x + res*x\n    end\n    return res\nend\ndf1 = x -> ForwardDiff.derivative(f1, x)\nf1b(x) = x*(one(x) + x*(one(x) + x*(one(x) + x*(one(x) + 2x))))\ndf1b(x) = one(x) + 2x + 3x^2 + 4x^3 + 10x^4\n```\n\n\n```julia\ndf1(0.5) - df1b(0.5)\n```\n\n\n```julia\nusing BenchmarkTools\n\n@btime f1(0.5)\n@btime f1b(0.6)\n@btime df1(0.7)\n@btime df1b(0.7)\n```\n\n# System of ordinary differential equations:\n\nLorenz equations\n$$\n\\begin{align}\n\\frac{dx}{dt} &= \\sigma(y-x) \\\\\n\\frac{dy}{dt} &= x(\\rho-z) - y \\\\\n\\frac{dz}{dt} &= xy - \\beta z \\\\\n\\end{align}\n$$\n\n\n```julia\nusing DifferentialEquations\nusing Plots\ngr()\n```\n\n\n```julia\nfunction lorenz(du,u,p,t)\n    du[1] = 10.0*(u[2]-u[1])\n    du[2] = u[1]*(28.0-u[3]) - u[2]\n    du[3] = u[1]*u[2] - (8/3)*u[3]\nend\n```\n\n\n```julia\nu0 = [1.0;0.0;0.0]\ntspan = (0.0,100.0)\nprob = ODEProblem(lorenz,u0,tspan)\nsol = solve(prob);\n```\n\n\n```julia\ngr()\nplot(sol,vars=(1,2,3))\n```\n\n\n```julia\nusing Psychro\nusing Unitful\n```\n\n\n```julia\nprintln(volume(MoistAir, 20.0u\"°C\", DewPoint, 60.0u\"°F\", 1.0u\"atm\", u\"cm^3/lb\"))\nprintln(wetbulb(MoistAir, 20.0u\"°C\", DewPoint, 60.0u\"°F\", 1.0u\"atm\"))\nprintln(relhum(MoistAir, 20.0u\"°C\", DewPoint, 60.0u\"°F\", 1.0u\"atm\"))\n```\n\n\n```julia\nusing ACME\n\ncirc = @circuit begin\n    j_in = voltagesource(), [-] ⟷ gnd\n    r1 = resistor(1e3), [1] ⟷ j_in[+]\n    c1 = capacitor(47e-9), [1] ⟷ r1[2], [2] ⟷ gnd\n    d1 = diode(is=1e-15), [+] ⟷ r1[2], [-] ⟷ gnd\n    d2 = diode(is=1.8e-15), [+] ⟷ gnd, [-] ⟷ r1[2]\n    j_out = voltageprobe(), [+] ⟷ r1[2], [-] ⟷ gnd\nend\nmodel = DiscreteModel(circ, 1/44100)\nt = 1/44100 * (0:44099)'\ny = run!(model, sin.(2π*100 .* t));\nplot(t[1,:],y[1,:])\n```\n\n\n```julia\nusing PyCall\n```\n\n\n```julia\nmath = pyimport(\"math\")\nprintln(math.sin(1))\n\nnp = pyimport(\"numpy\")\nnprand = pyimport(\"numpy.random\")\nnprand.randn(3,4)\n```\n\n\n```julia\nusing LinearAlgebra, SpecialFunctions, Plots, ApproxFun\nx = Fun(identity,0..10)\nf = sin(x^2)\ng = cos(x)\n```\n\n\n```julia\nh = f + g^2\nr = roots(h)\nrp = roots(h')\n\nusing Plots\nplot(h)\nscatter!(r,h.(r))\nscatter!(rp,h.(rp))\n```\n
```python\n%matplotlib inline\n```\n\n\n```python\nimport sympy\nimport math\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n# High-School Maths Exercise\n## Getting to Know Jupyter Notebook. Python Libraries and Best Practices. Basic Workflow\n\n### Problem 1. Markdown\nJupyter Notebook is a very light, beautiful and convenient way to organize your research and display your results. Let's play with it for a while.\n\nFirst, you can double-click each cell and edit its content. If you want to run a cell (that is, execute the code inside it), use Cell > Run Cells in the top menu or press Ctrl + Enter.\n\nSecond, each cell has a type. There are two main types: Markdown (which is for any kind of free text, explanations, formulas, results... you get the idea), and code (which is, well... 
for code :D).\n\nLet me give you a...\n#### Quick Introduction to Markdown\n##### Text and Paragraphs\nThere are several things that you can do. As you already saw, you can write paragraph text just by typing it. In order to create a new paragraph, just leave a blank line. See how this works below:\n```\nThis is some text.\nThis text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).\n\nThis text is displayed in a new paragraph.\n\nAnd this is yet another paragraph.\n```\n**Result:**\n\nThis is some text.\nThis text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).\n\nThis text is displayed in a new paragraph.\n\nAnd this is yet another paragraph.\n\n##### Headings\nThere are six levels of headings. Level one is the highest (largest and most important), and level 6 is the smallest. You can create headings of several types by prefixing the header line with one to six \"#\" symbols (this is called a pound sign if you are ancient, or a sharp sign if you're a musician... or a hashtag if you're too young :D). Have a look:\n```\n# Heading 1\n## Heading 2\n### Heading 3\n#### Heading 4\n##### Heading 5\n###### Heading 6\n```\n\n**Result:**\n\n# Heading 1\n## Heading 2\n### Heading 3\n#### Heading 4\n##### Heading 5\n###### Heading 6\n\nIt is recommended that you have **only one** H1 heading - this should be the header of your notebook (or scientific paper). Below that, you can add your name or just jump to the explanations directly.\n\n##### Emphasis\nYou can create emphasized (stonger) text by using a **bold** or _italic_ font. You can do this in several ways (using asterisks (\\*) or underscores (\\_)). 
In order to \"escape\" a symbol, prefix it with a backslash (\\). You can also strike through your text in order to signify a correction.\n```\n**bold** __bold__\n*italic* _italic_\n\nThis is \\*\\*not\\*\\* bold.\n\nI ~~didn't make~~ a mistake.\n```\n\n**Result:**\n\n**bold** __bold__\n*italic* _italic_\n\nThis is \\*\\*not\\*\\* bold.\n\nI ~~didn't make~~ a mistake.\n\n##### Lists\nYou can add two types of lists: ordered and unordered. Lists can also be nested inside one another. To do this, press Tab once (it will be converted to 4 spaces).\n\nTo create an ordered list, just type the numbers. Don't worry if your numbers are wrong - Jupyter Notebook will create them properly for you. Well, it's better to have them properly numbered anyway...\n```\n1. This is\n2. A list\n10. With many\n9. Items\n 1. Some of which\n 2. Can\n 3. Be nested\n42. You can also\n * Mix \n * list\n * types\n```\n\n**Result:**\n1. This is\n2. A list\n10. With many\n9. Items\n 1. Some of which\n 2. Can\n 3. Be nested\n42. You can also\n * Mix \n * list\n * types\n \nTo create an unordered list, type an asterisk, plus or minus at the beginning:\n```\n* This is\n* An\n + Unordered\n - list\n```\n\n**Result:**\n* This is\n* An\n + Unordered\n - list\n \n##### Links\nThere are many ways to create links but we mostly use one of them: we present links with some explanatory text. See how it works:\n```\nThis is [a link](http://google.com) to Google.\n```\n\n**Result:**\n\nThis is [a link](http://google.com) to Google.\n\n##### Images\nThey are very similar to links. Just prefix the image with an exclamation mark. The alt(ernative) text will be displayed if the image is not available. Have a look (hover over the image to see the title text):\n```\n\n```\n\n**Result:**\n\n\n\nIf you want to resize images or do some more advanced stuff, just use HTML. \n\nDid I mention these cells support HTML, CSS and JavaScript? 
Now I did.\n\n##### Tables\nThese are a pain because they need to be formatted (somewhat) properly. Here's a good [table generator](http://www.tablesgenerator.com/markdown_tables). Just select File > Paste table data... and provide a tab-separated list of values. It will generate a good-looking ASCII-art table for you.\n```\n| Cell1 | Cell2 | Cell3 |\n|-------|-------|-------|\n| 1.1 | 1.2 | 1.3 |\n| 2.1 | 2.2 | 2.3 |\n| 3.1 | 3.2 | 3.3 |\n```\n\n**Result:**\n\n| Cell1 | Cell2 | Cell3 |\n|-------|-------|-------|\n| 1.1 | 1.2 | 1.3 |\n| 2.1 | 2.2 | 2.3 |\n| 3.1 | 3.2 | 3.3 |\n\n##### Code\nJust use triple backtick symbols. If you provide a language, it will be syntax-highlighted. You can also use inline code with single backticks.\n
\n```python\ndef square(x):\n    return x ** 2\n```\nThis is `inline` code. No syntax highlighting here.\n

**Result:**
```python
def square(x):
    return x ** 2
```
This is `inline` code. No syntax highlighting here.

**Now it's your turn to have some Markdown fun.** In the next cell, try out some of the commands. You can just throw in some things, or do something more structured (like a small notebook).

1. # Heading
2. ## Heading
3. ### Heading
4. #### Heading
5. ##### Heading
6. ###### Heading

New Paragraph:
I would like to be **bold** but I am just *italic*.
So I am ~~**bold**~~ *italic* and that's it.

```python
def doMath(hard=True):
    if hard:
        studyHardForHours()
    else:
        goAndPlayOutside()
```

[GitHub](https://github.com/StanDimitroff/Math-Concepts)


### Problem 2. Formulas and LaTeX
Writing math formulas has always been hard. But scientists don't like difficulties and prefer standards. So, thanks to Donald Knuth (a very popular computer scientist, who also invented a lot of algorithms), we have a nice typesetting system, called LaTeX (pronounced _lah_-tek). We'll be using it mostly for math formulas, but it has a lot of other things to offer.

There are two main ways to write formulas. You could enclose them in single `$` signs like this: `$ ax + b $`, which will create an **inline formula**: $ ax + b $. You can also enclose them in double `$` signs `$$ ax + b $$` to produce $$ ax + b $$.

Most commands start with a backslash and accept parameters either in square brackets `[]` or in curly braces `{}`. For example, to make a fraction, you typically would write `$$ \frac{a}{b} $$`: $$ \frac{a}{b} $$.

[Here's a resource](http://www.stat.pitt.edu/stoffer/freetex/latex%20basics.pdf) where you can look up the basics of the math syntax. You can also search StackOverflow - there are all sorts of solutions there.

You're on your own now. Research and recreate all formulas shown in the next cell. Try to make your cell look exactly the same as mine. 
It's an image, so don't try to cheat by copy/pasting :D.

Note that you **do not** need to understand the formulas, what's written there or what it means. We'll have fun with these later in the course.


Equation of a line: $ y = ax + b $

Roots of the quadratic equation $ ax^2 + bx + c = 0 $: $ x_{1,2} = \frac{-b\pm \sqrt{b^2 - 4ac}}{2a} $

Taylor series expansion: $ f(x)|_{x=a} = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \dots + \frac{f^{(n)}(a)}{n!}(x-a)^n + \dots $

Binomial theorem: $ (x+y)^n = \binom{n}{0}x^ny^0 + \binom{n}{1}x^{n-1}y^1 + \dots + \binom{n}{n}x^0y^n = \sum\limits^n_{k=0} \binom{n}{k}x^{n-k}y^k $

An integral (this one is a lot of fun to solve :D): $ \int^{+\infty}_{-\infty} e^{-x^2}dx = \sqrt{\pi} $

A short matrix: $ \begin{pmatrix} 2 & 1 & 3 \\ 2 & 6 & 8 \\ 6 & 8 & 18 \end{pmatrix} $

A long matrix: $ A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix} $

### Problem 3. Solving with Python
Let's first do some symbolic computation. We need to import `sympy` first.

**Should your imports be in a single cell at the top or should they appear as they are used?** There's not a single valid best practice. Most people seem to prefer imports at the top of the file though. **Note: If you write new code in a cell, you have to re-execute it!**

Let's use `sympy` to give us a quick symbolic solution to our equation. First import `sympy` (you can use the second cell in this notebook):
```python
import sympy
```

Next, create symbols for all variables and parameters. You may prefer to do this in one pass or separately:
```python
x = sympy.symbols('x')
a, b, c = sympy.symbols('a b c')
```

Now solve:
```python
sympy.solve(a * x**2 + b * x + c)
```

Hmmmm... we didn't expect that :(. 
We got an expression for $a$ because the library tried to solve for the first symbol it saw. This is an equation and we have to solve for $x$. We can provide it as a second parameter:
```python
sympy.solve(a * x**2 + b * x + c, x)
```

Finally, if we use `sympy.init_printing()`, we'll get a LaTeX-formatted result instead of a typed one. This is very useful because it produces better-looking formulas.


```python
x, a, b, c = sympy.symbols('x a b c')
sympy.init_printing()
sympy.solve(a * x**2 + b * x + c, x)
```

How about a function that takes $a, b, c$ (assume they are real numbers, you don't need to do additional checks on them) and returns the **real** roots of the quadratic equation?

Remember that in order to calculate the roots, we first need to see whether the expression under the square root sign is non-negative.

If $b^2 - 4ac > 0$, the equation has two real roots: $x_1, x_2$

If $b^2 - 4ac = 0$, the equation has one real root: $x_1 = x_2$

If $b^2 - 4ac < 0$, the equation has zero real roots

Write a function which returns the roots. In the first case, return a list of 2 numbers: `[2, 3]`. In the second case, return a list of only one number: `[2]`. In the third case, return an empty list: `[]`.


```python
import math

def solve_quadratic_equation(a, b, c):
    """
    Returns the real solutions of the quadratic equation ax^2 + bx + c = 0
    """
    d = b**2 - 4*a*c  # discriminant

    if d < 0:
        return []
    elif d == 0:
        x = (-b + math.sqrt(d)) / (2 * a)
        return [x]
    else:
        x1 = (-b + math.sqrt(d)) / (2 * a)
        x2 = (-b - math.sqrt(d)) / (2 * a)
        return [x1, x2]
```


```python
# Testing: Execute this cell. The outputs should match the expected outputs. 
Feel free to write more tests
print(solve_quadratic_equation(1, -1, -2))  # [2.0, -1.0]
print(solve_quadratic_equation(1, -8, 16))  # [4.0]
print(solve_quadratic_equation(1, 1, 1))    # []
```

    [2.0, -1.0]
    [4.0]
    []


**Bonus:** Last time we saw how to solve a linear equation. Remember that linear equations are just like quadratic equations with $a = 0$. In this case, however, division by 0 will throw an error. Extend your function above to support solving linear equations (in the same way we did it last time).

### Problem 4. Equation of a Line
Let's go back to our linear equations and systems. There are many ways to define what "linear" means, but they all boil down to the same thing.

The equation $ax + b = 0$ is called *linear* because the function $f(x) = ax+b$ is a linear function. We know that there are several ways to describe a particular function. One of them is to just write the expression for it, as we did above. Another way is to **plot** it. This is one of the most exciting parts of maths and science - when we have to fiddle around with beautiful plots (although not so beautiful in this case).

The function produces a straight line and we can see it.

How do we plot functions in general? We know that functions take many (possibly infinitely many) inputs. We can't draw all of them. We could, however, evaluate the function at some points and connect them with tiny straight lines. If the points are many enough, we won't notice - the plot will look smooth.

Now, let's take a function, e.g. $y = 2x + 3$ and plot it. For this, we're going to use `numpy` arrays. This is a special type of array which has two characteristics:
* All elements in it must be of the same type
* All operations are **broadcast**: if `x = [1, 2, 3, 10]` and we write `2 * x`, we'll get `[2, 4, 6, 20]`. That is, all operations are performed at all indices. 
This is very powerful, easy to use and saves us A LOT of looping.

There's one more thing: it's blazingly fast because all computations are done in C, instead of Python.

First let's import `numpy`. Since the name is a bit long, a common convention is to give it an **alias**:
```python
import numpy as np
```

Import that at the top cell and don't forget to re-run it.

Next, let's create a range of values, e.g. $[-3, 5]$. There are two ways to do this. `np.arange(start, stop, step)` will give us evenly spaced numbers with a given step, while `np.linspace(start, stop, num)` will give us `num` samples. You see, one uses a fixed step, the other uses a number of points to return. When plotting functions, we usually use the latter. Let's generate, say, 1000 points (we know a straight line only needs two but we're generalizing the concept of plotting here :)).
```python
x = np.linspace(-3, 5, 1000)
```
Now, let's generate our function variable
```python
y = 2 * x + 3
```

We can print the values if we like but we're more interested in plotting them. To do this, first let's import a plotting library. `matplotlib` is the most commonly used one and we usually give it an alias as well.
```python
import matplotlib.pyplot as plt
```

Now, let's plot the values. To do this, we just call the `plot()` function. Notice that the top-most part of this notebook contains a "magic string": `%matplotlib inline`. This tells Jupyter to display all plots inside the notebook. However, it's a good practice to call `show()` after our plot is ready.
```python
plt.plot(x, y)
plt.show()
```


```python
x = np.linspace(-3, 5, 1000)
y = 2 * x + 3
plt.plot(x, y)
plt.show()
```

It doesn't look too bad but we can do much better. See how the axes don't look like they should? Let's move them to zero. This can be done using the "spines" of the plot (i.e. the borders).

All `matplotlib` figures can have many plots (subfigures) inside them. 
That's why when performing an operation, we have to specify a target figure. There is a default one and we can get it by using `plt.gca()`. We usually call it `ax` for \"axis\".\nLet's save it in a variable (in order to prevent multiple calculations and to make code prettier). Let's now move the bottom and left spines to the origin $(0, 0)$ and hide the top and right one.\n```python\nax = plt.gca()\nax.spines[\"bottom\"].set_position(\"zero\")\nax.spines[\"left\"].set_position(\"zero\")\nax.spines[\"top\"].set_visible(False)\nax.spines[\"right\"].set_visible(False)\n```\n\n**Note:** All plot manipulations HAVE TO be done before calling `show()`. It's up to you whether they should be before or after the function you're plotting.\n\nThis should look better now. We can, of course, do much better (e.g. remove the double 0 at the origin and replace it with a single one), but this is left as an exercise for the reader :).\n\n\n```python\nx = np.linspace(-3, 5, 1000)\ny = 2 * x + 3\nax = plt.gca()\nax.spines[\"bottom\"].set_position(\"zero\")\nax.spines[\"left\"].set_position(\"zero\")\nax.spines[\"top\"].set_visible(False)\nax.spines[\"right\"].set_visible(False)\nplt.plot(x, y)\nplt.show()\n```\n\n### * Problem 5. Linearizing Functions\nWhy is the line equation so useful? The main reason is because it's so easy to work with. Scientists actually try their best to linearize functions, that is, to make linear functions from non-linear ones. There are several ways of doing this. One of them involves derivatives and we'll talk about it later in the course. \n\nA commonly used method for linearizing functions is through algebraic transformations. Try to linearize \n$$ y = ae^{bx} $$\n\nHint: The inverse operation of $e^{x}$ is $\\ln(x)$. Start by taking $\\ln$ of both sides and see what you can do. Your goal is to transform the function into another, linear function. You can look up more hints on the Internet :).\n\n

Taking $\ln$ of both sides gives $ \ln y = \ln\left(ae^{bx}\right) = \ln a + bx $. Substituting $ Y = \ln y $ turns this into $ Y = bx + \ln a $ - a linear function of $x$ with slope $b$ and intercept $\ln a$.
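The linearization in Problem 5 ($\ln y = \ln a + bx$) can also be checked numerically. The snippet below is my own sketch (not part of the original exercise; the parameter values `a = 2.5, b = 0.8` are arbitrary): generate data from $y = ae^{bx}$, then fit a straight line to $(x, \ln y)$ with `np.polyfit` and recover $b$ and $a$.

```python
import numpy as np

# Arbitrary parameters, chosen just for this check
a, b = 2.5, 0.8
x = np.linspace(0, 5, 100)
y = a * np.exp(b * x)

# ln(y) = ln(a) + b*x is linear in x, so a degree-1 fit recovers b and ln(a)
slope, intercept = np.polyfit(x, np.log(y), 1)
print(slope, np.exp(intercept))  # ≈ 0.8 and 2.5
```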


### * Problem 6. Generalizing the Plotting Function
Let's now use the power of Python to generalize the code we created to plot. In Python, you can pass functions as parameters to other functions. We'll utilize this to pass the math function that we're going to plot.

Note: We can also pass *lambda expressions* (anonymous functions) like this:
```python
lambda x: x + 2
```
This is a shorter way to write
```python
def some_anonymous_function(x):
    return x + 2
```

We'll also need a range of x values. We may also provide other optional parameters which will help set up our plot. These may include titles, legends, colors, fonts, etc. Let's stick to the basics now.

Write a Python function which takes another function, an x range and a number of points, and plots the function graph by evaluating it at every point.

**BIG hint:** If you want to use not only `numpy` functions for `f` but any one function, a very useful (and easy) thing to do, is to vectorize the function `f` (e.g. to allow it to be used with `numpy` broadcasting):
```python
f_vectorized = np.vectorize(f)
y = f_vectorized(x)
```


```python
def plot_math_function(f, min_x, max_x, num_points):
    x = np.linspace(min_x, max_x, num_points)
    f_vectorized = np.vectorize(f)
    y = f_vectorized(x)

    ax = plt.gca()
    ax.spines["bottom"].set_position("zero")
    ax.spines["left"].set_position("zero")
    ax.spines["top"].set_visible(False)
    ax.spines["right"].set_visible(False)

    plt.plot(x, y)
    plt.show()
```


```python
plot_math_function(lambda x: 2 * x + 3, -3, 5, 1000)
plot_math_function(lambda x: -x + 8, -1, 10, 1000)
plot_math_function(lambda x: x**2 - x - 2, -3, 4, 1000)
plot_math_function(lambda x: np.sin(x), -np.pi, np.pi, 1000)
plot_math_function(lambda x: np.sin(x) / x, -4 * np.pi, 4 * np.pi, 1000)
```

### * Problem 7. Solving Equations Graphically
Now that we have a general plotting function, we can use it for more interesting things. 
Sometimes we don't need to know what the exact solution is, just to see where it lies. We can do this by plotting the two functions around the "=" sign and seeing where they intersect. Take, for example, the equation $2x + 3 = 0$. The two functions are $f(x) = 2x + 3$ and $g(x) = 0$. Since they should be equal, the point of their intersection is the solution of the given equation. We don't need to bother marking the point of intersection right now, just showing the functions.

To do this, we'll need to improve our plotting function once more. This time we'll need to take multiple functions and plot them all on the same graph. Note that we still need to provide the $[x_{min}; x_{max}]$ range and it's going to be the same for all functions.

```python
vectorized_fs = [np.vectorize(f) for f in functions]
ys = [vectorized_f(x) for vectorized_f in vectorized_fs]
```


```python
def plot_math_functions(functions, min_x, max_x, num_points):
    x = np.linspace(min_x, max_x, num_points)
    vectorized_fs = [np.vectorize(f) for f in functions]
    ys = [vectorized_f(x) for vectorized_f in vectorized_fs]

    for y in ys:
        plt.plot(x, y)
    plt.show()
```


```python
plot_math_functions([lambda x: 2 * x + 3, lambda x: 0], -3, 5, 1000)
plot_math_functions([lambda x: 3 * x**2 - 2 * x + 5, lambda x: 3 * x + 7], -2, 3, 1000)
```

This is also a way to plot the solutions of systems of equations, like the one we solved last time. Let's actually try it.


```python
plot_math_functions([lambda x: (-4 * x + 7) / 3, lambda x: (-3 * x + 8) / 5, lambda x: (-x - 1) / -2], -1, 4, 1000)
```

### Problem 8. Trigonometric Functions
We already saw the graph of the function $y = \sin(x)$. But, how do we define the trigonometric functions once again? 
Let's quickly review that.


The two basic trigonometric functions are defined as the ratio of two sides:
$$ \sin(x) = \frac{\text{opposite}}{\text{hypotenuse}} $$
$$ \cos(x) = \frac{\text{adjacent}}{\text{hypotenuse}} $$

And also:
$$ \tan(x) = \frac{\text{opposite}}{\text{adjacent}} = \frac{\sin(x)}{\cos(x)} $$
$$ \cot(x) = \frac{\text{adjacent}}{\text{opposite}} = \frac{\cos(x)}{\sin(x)} $$

This is fine, but using this "right-triangle" definition, we're only able to calculate the trigonometric functions of angles up to $90^\circ$. But we can do better. Let's now imagine a circle centered at the origin of the coordinate system, with radius $r = 1$. This is called a "unit circle".


We can now see exactly the same picture. The $x$-coordinate of the point on the circle corresponds to $\cos(\alpha)$ and the $y$-coordinate - to $\sin(\alpha)$. What did we get? We're now able to define the trigonometric functions for all angles up to $360^\circ$. After that, the same values repeat: these functions are **periodic**:
$$ \sin(k.360^\circ + \alpha) = \sin(\alpha), k = 0, 1, 2, \dots $$
$$ \cos(k.360^\circ + \alpha) = \cos(\alpha), k = 0, 1, 2, \dots $$

We can, of course, use this picture to derive other identities, such as:
$$ \sin(90^\circ + \alpha) = \cos(\alpha) $$

A very important property of the sine and cosine is that they accept arguments in the range $(-\infty; \infty)$ and produce values in the range $[-1; 1]$. The other two functions accept arguments anywhere in $(-\infty; \infty)$ **except at the points where their denominators are zero**, and their values cover the entire range $(-\infty; \infty)$.

#### Radians
A degree is a geometric object, $1/360$th of a full circle. This is quite inconvenient when we work with angles. There is another, natural and intrinsic measure of angles. 
It's called the **radian** and can be written as $\text{rad}$ or without any designation, so $\sin(2)$ means "sine of two radians".


It's defined as *the central angle of an arc with length equal to the circle's radius* and $1\ \text{rad} \approx 57.296^\circ$.

We know that the circle circumference is $C = 2\pi r$, therefore we can fit exactly $2\pi$ arcs with length $r$ in $C$. The angle corresponding to this is $360^\circ$ or $2\pi\ \text{rad}$. Also, $\pi\ \text{rad} = 180^\circ$.

(Some people prefer using $\tau = 2\pi$ to avoid confusion with always multiplying by 2 or 0.5 but we'll use the standard notation here.)

**NOTE:** All trigonometric functions in `math` and `numpy` accept radians as arguments. In order to convert between radians and degrees, you can use the relations $\text{[deg]} = 180/\pi.\text{[rad]}, \text{[rad]} = \pi/180.\text{[deg]}$. This can be done using `np.rad2deg()` and `np.deg2rad()` respectively.

#### Inverse trigonometric functions
All trigonometric functions have their inverses. If you plug in, say, $\pi/4$ into the $\sin(x)$ function, you get $\sqrt{2}/2$. The inverse functions (also called arc-functions) take arguments in the interval $[-1; 1]$ (for arcsine and arccosine; arctangent and arccotangent accept any real number) and return the angle that they correspond to. Take arcsine for example:
$$ \arcsin(x) = y \iff \sin(y) = x $$
$$ \arcsin\left(\frac{\sqrt{2}}{2}\right) = \frac{\pi}{4} $$

Please note that this is NOT entirely correct. From the relations we found:
$$ \sin(x) = \sin(2k\pi + x), k = 0, 1, 2, \dots $$

it follows that $\arcsin(x)$ has infinitely many values, separated by $2\pi$ radians:
$$ \arcsin\left(\frac{\sqrt{2}}{2}\right) = \frac{\pi}{4} + 2k\pi, k = 0, 1, 2, \dots $$

In most cases, however, we're interested in the first value (when $k = 0$). It's called the **principal value**.

Note 1: There are inverse functions for all four basic trigonometric functions: $\arcsin$, $\arccos$, $\arctan$, $\text{arccot}$. 
These are sometimes written as $\sin^{-1}(x)$, $\cos^{-1}(x)$, etc. These definitions are completely equivalent.

Just notice the difference between $\sin^{-1}(x) := \arcsin(x)$ and $\sin(x^{-1}) = \sin(1/x)$.

#### Exercise
Use the plotting function you wrote above to plot the inverse trigonometric functions.


```python
# arcsin and arccos are only defined on [-1, 1]; arctan accepts any real number
plot_math_function(lambda x: np.arcsin(x), -1, 1, 1000)
plot_math_function(lambda x: np.arccos(x), -1, 1, 1000)
plot_math_function(lambda x: np.arctan(x), -3, 5, 1000)
```


```python
def plot_circle(x_c, y_c, r):
    """
    Plots the circle with center C(x_c; y_c) and radius r.
    This corresponds to plotting the equation (x - x_c)^2 + (y - y_c)^2 = r^2
    """
    circle = plt.Circle((x_c, y_c), r)
    ax = plt.gca()
    ax.add_patch(circle)
    plt.axis('scaled')
    plt.show()
```


```python
plot_circle(0, 0, 2)
```

### ** Problem 9. Perlin Noise
This algorithm has many applications in computer graphics and can serve to demonstrate several things... and help us learn about math, algorithms and Python :).
#### Noise
Noise is just random values. We can generate noise by just calling a random generator. Note that these are actually called *pseudorandom generators*. We'll talk about this later in this course.
We can generate noise in however many dimensions we want. For example, if we want to generate a single dimension, we just pick N random values and call it a day. If we want to generate a 2D noise space, we can take an approach which is similar to what we already did with `np.meshgrid()`.

$$ \text{noise}(x, y) = N, N \in [n_{min}, n_{max}] $$

This function takes two coordinates and returns a single number N between $n_{min}$ and $n_{max}$. (This is what we call a "scalar field".)

Random variables are always connected to **distributions**. We'll talk about these a great deal but now let's just say that these define what our noise will look like. 
In the most basic case, we can have \"uniform noise\" - that is, each point in our little noise space $[n_{min}, n_{max}]$ will have an equal chance (probability) of being selected.\n\n#### Perlin noise\nThere are many more distributions but right now we'll want to have a look at a particular one. **Perlin noise** is a kind of noise which looks smooth. It looks cool, especially if it's colored. The output may be tweaked to look like clouds, fire, etc. 3D Perlin noise is most widely used to generate random terrain.\n\n#### Algorithm\n... Now you're on your own :). Research how the algorithm is implemented (note that this will require that you understand some other basic concepts like vectors and gradients).\n\n#### Your task\n1. Research about the problem. See what articles, papers, Python notebooks, demos, etc. other people have created\n2. Create a new notebook and document your findings. Include any assumptions, models, formulas, etc. that you're using\n3. Implement the algorithm. Try not to copy others' work, rather try to do it on your own using the model you've created\n4. Test and improve the algorithm\n5. (Optional) Create a cool demo :), e.g. using Perlin noise to simulate clouds. You can even do an animation (hint: you'll need gradients not only in space but also in time)\n6. Communicate the results (e.g. in the Softuni forum)\n\nHint: [This](http://flafla2.github.io/2014/08/09/perlinnoise.html) is a very good resource. 
It can show you both how to organize your notebook (which is important) and how to implement the algorithm.

# Recording of class
Background: people join from Americas/East Asia

Solution: recording of class
- Will be put on secret youtube links - not searchable
- Delete mid June

Consent: via poll


# Session 5:
## Growing Causal Trees 
### - *Causal Forests and Generalized Random Forests*

*Andreas Bjerre-Nielsen*

## Agenda

1. 
[Causality](#Causality)\n1. [Potential outcomes](#Potential-outcomes)\n2. [Experiments](#Experiments)\n3. [Matching](#Matching)\n - [Covariate based matching](#Covariate-matching)\n - [Propensity score matching](#Propensity-score-matching)\n \n4. [Heterogeneous treatment effects with causal trees](#Causal-trees)\n - [Causal Forest](#Causal-Forest)\n - [Generalized Random Forest](#Generalized-Random-Forest) \n - [Application of causal forest](#Application-of-causal-forest)\n\n\n# Buckle up... \n\n\n```python\nimport matplotlib.pyplot as plt\nimport networkx as nx\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n%matplotlib inline\n```\n\n# Causality\n\n\n\n## Correlation does not imply causation\n\nSpurious or causal?\n\n
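Here is a minimal simulation of the "spurious" case (my own sketch, not from the slides): a hidden common cause `z` drives both `x` and `y`. Neither variable affects the other, yet they come out strongly correlated - exactly the confounding story Fisher told about smoking.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

z = rng.normal(size=n)             # unobserved common cause
x = z + 0.5 * rng.normal(size=n)   # x depends only on z
y = z + 0.5 * rng.normal(size=n)   # y depends only on z, never on x

# Theoretical correlation: Cov(x, y) / sqrt(Var(x) * Var(y)) = 1 / 1.25 = 0.8
print(np.corrcoef(x, y)[0, 1])     # ≈ 0.8, despite no causal link between x and y
```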
\n
\n\n
\n\n\nFigure below is adapted from chapter 5 in Judea Pearl's book titled \"Book of Why\"\n\n\n```python\nimport networkx as nx\n\nplt.rcParams.update({'axes.titlesize': 21})\n\nf_lung_cancer, ax = plt.subplots(1,2,figsize=(17,6.7))\nax[0].set_title('Richard Doll and Austin Bradford Hill')\nax[1].set_title('Ronald A. Fisher')\n\ns,l,g= 'Smoking', 'Lung\\ncancer', 'Unobserved factors'\nfor i in range(2):\n G = nx.DiGraph()\n G.graph['dpi'] = 120\n G.add_nodes_from([s,l,g])\n \n G1 = G.copy()\n G1.add_edges_from([(s,l)])\n nx.draw_networkx_edges(G1,arrowsize=30,ax=ax[i],edge_color='blue',\n pos = {g: [1,1], s: [0.2,0], l: [1.8,0]})\n \n if i>0:\n G2 = G.copy()\n G2.add_edges_from([(g,l),(g,s)])\n nx.draw_networkx_edges(G2,arrowsize=30,ax=ax[i],edge_color='red',\n pos = {g: [1,1], s: [0.2,0], l: [1.8,0]})\n \n nx.draw_networkx_nodes(G,node_color='white',node_size=1000,alpha=0,ax=ax[i],\n pos = {g: [1,1], s: [0,0], l: [2,0]})\n nx.draw_networkx_labels(G,ax=ax[i],font_size=16, \n pos = {g: [1,1.1], s: [-.05,0], l: [2,0]})\n ax[i].axis('off')\n ax[i].set_xlim([-.3,2.3])\n ax[i].set_ylim([-.2,1.4])\n \n```\n\n## What is causality?\n\nRelationship between two or more variables such that whereby a change in one or more variable(s) ***affect(s)*** the distribution of one or more other variable(s).\n\nWe can draw these relationships (from The Book of Why, Judea Pearl), e.g. smoking example.\n- Ronald Fisher argued that unobserved confounders could cause smoking and lung cancer\n\n\n```python\nf_lung_cancer\n```\n\n## Establishing causality\n\nCurrently there are two broad approaches for establishing causal relationships:\n\n- Experiment and quasi-experiments\n - Corresponds to what is taught in *Mostly Harmless Econometrics*\n- Structural equation models \n - Used for structural econometric choice models etc.\n - Also used estimating causal graphs, e.g. as by Judea Pearl\n \n\n# Potential outcomes\n\n## The aim\n\nWe are interested in the effect of some treatment, e.g. 
- getting admitted to a certain education on wages, life-expectancy 
- access to paternity leave on wages (husband and wife)


## The Rubin Causal Model 

Denote the treatment variable as $T_i$ where $T_i=1$ corresponds to unit $i$ being treated, while $T_i=0$ is not treated. Define the potential outcomes:

$Y_i=\begin{cases}
Y_i(1), & T_i=1;\\
Y_i(0), & T_i=0.
\end{cases}$

The observed outcome $Y_i$ can be written in terms of potential outcomes as
$$ Y_i = Y_{i}(0) + [Y_{i}(1)-Y_{i}(0)]\cdot T_i$$

$Y_{i}(1)-Y_{i}(0)$ is the *causal* effect of $T_i$ on $Y_i$. 

But we never observe the same individual $i$ in both states. This is the **fundamental problem of causal inference**. 

## Selection Bias

We need some way of estimating the state we do not observe (the ***counterfactual***).

Usually, our sample contains individuals from both states - treated and untreated.

So why not do a naive comparison of averages by treatment status? i.e. $E[Y_i|T_i = 1] - E[Y_i|T_i = 0]$

## Selection Bias II
We can rewrite into:
\begin{align}
\nonumber E[Y_i|T_i = 1] - E[Y_i|T_i = 0] = &E[Y_i(1)|T_i = 1] - E[Y_i(0)|T_i = 1] + \\
\nonumber &E[Y_i(0)|T_i = 1] - E[Y_i(0)|T_i = 0] 
\end{align}

The decomposition:

- $E[Y_i(1)|T_i = 1] - E[Y_i(0)|T_i = 1] = E[Y_i(1) - Y_i(0)|T_i = 1]$: the average *causal* effect of $T_i$ on $Y$. 

- $E[Y_i(0)|T_i = 1] - E[Y_i(0)|T_i = 0]$: difference in average $Y_i(0)$ between the two groups. Likely to be different from 0 when individuals are allowed to self-select into treatment. Often referred to as ***selection bias***. 
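The decomposition is easy to verify in a simulation. The sketch below (my own illustration, not from the slides; all numbers are made up) builds potential outcomes with a constant treatment effect of 2 and lets units with high $Y_i(0)$ self-select into treatment, so the naive comparison of means overstates the true effect by exactly the selection-bias term:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Potential outcomes with a constant causal effect of 2
y0 = rng.normal(size=n)
y1 = y0 + 2.0

# Self-selection: units with high Y(0) are more likely to take treatment
t = (y0 + rng.normal(size=n)) > 0

# Observed outcome: Y = Y(0) + [Y(1) - Y(0)] * T
y = np.where(t, y1, y0)

naive = y[t].mean() - y[~t].mean()             # naive comparison of averages
att = (y1 - y0)[t].mean()                      # E[Y(1) - Y(0) | T=1] = 2
selection_bias = y0[t].mean() - y0[~t].mean()  # E[Y(0)|T=1] - E[Y(0)|T=0]

print(naive, att + selection_bias)  # the two numbers coincide
```

With this self-selection the naive difference comes out above 3, even though the causal effect is 2 for every single unit.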

# Experiments

## Random assignment solves the problem

Random assignment implies $T_i$ is independent of potential outcomes

- Selection bias term is zero: $E[Y_{i}(0)|T_i = 1] = E[Y_{i}(0)|T_i = 0]$ 

- Intuition: non-treated individuals can be used as counterfactuals for the treated (*what would have happened to individual $i$ had he not received the treatment?*)

- Overcomes the fundamental problem of causal inference

## Randomization

Holland and Rubin (1986)

> no causation without manipulation

As mentioned, we need to worry when individuals are allowed to self-select

- A lot of thought has to go into the *randomization phase*.

- Randomization into treatment groups has to be manipulated by someone.

## Randomized Controlled Trials

*Randomized controlled trials (RCT)*: randomization done by the researcher

- Survey experiments
- Field experiments

Note: difficult to say one is strictly better than the other. Randomization can be impractical and/or unethical. 

## An alternative to experiments

*Quasi-experiments*: randomization happens by "accident"

- Matching (*today*)
- Differences in Differences
- Regression Discontinuity Design
- Instrumental variables

# Matching

## The what and why of matching

**What** - we construct counterfactual potential treated and control units. 
- We *match* observations across treatment and control based on similarity. 

**Why** - matching controls for the covariates used 
- excludes (observable) confounders 
- may improve the precision of treatment estimates in experiments (less variance)

Note: An alternative to matching is to use regression - basically the same idea.

Problem: 
- matching does not, in general, remove confounding!!
- unobserved factors may still confound 

## The how of matching

We use a set of covariates $X$ for matching.

Two core ideas:
- We match on covariates 
  - We require sufficient similarity by some metric over covariates
- We match on propensity 
  - We require sufficiently similar probability of treatment (prediction)

# Covariate based matching

## Exact matching

We match a treatment obs. $i$ with a control obs. $j$ if 
- $X_i=X_j$, i.e. they are exactly identical, 
- $||X_i-X_j||_2=0$, i.e. zero Euclidean distance

## Treatment effects

We can compute the Average Treatment Effect (ATE) 

- For treatment obs. $i$ the counterfactual outcomes $Y_i(0)$ are the average of controls $j$ where $X_j=X_i$.
- For control obs. $i$ the counterfactual outcomes $Y_i(1)$ are the average of treated $j$ where $X_j=X_i$.

We can also compute treatment effects only for treatment observations, known as the Average Treatment Effect on the Treated (**ATT** or **ATET**).

## Balance of match

What happens if some observations are not matched?
- We get biased estimates!
- We need to check whether the match is balanced
  - Problem: exact matching usually leads to very few matches.

## Example of exact matching 

Aim: understand whether a training program affects wages. 

We have covariates and outcomes for treated and control units. 
(synthetic data from Scott Cunningham's "Causal Inference: The Mixtape" book)


```python
import numpy as np
import pandas as pd

scuse = 'https://storage.googleapis.com/causal-inference-mixtape.appspot.com/{0}.dta'
df = pd.read_stata(scuse.format('training_example')).replace('', np.nan)
arr = df.values[:20].astype('float')
X_cntrl, y_cntrl = arr[:20, 4:5], arr[:20, 5]   # 20 control obs.
X_treat, y_treat = arr[:10, 1:2], arr[:10, 2]   # 10 treatment obs.
```


```python
import matplotlib.pyplot as plt
import seaborn as sns

f, ax = plt.subplots()
# histplot replaces the deprecated distplot
sns.histplot(X_treat[:, 0], bins=10, stat='density', kde=True, label='Treatment', ax=ax)
sns.histplot(X_cntrl[:, 0], bins=10, stat='density', kde=True, label='Control', ax=ax)
ax.legend()
ax.set_xlim(15, 55)
sns.despine(f)
f.savefig('fig/balance_labor.png', dpi=300)
```

## Example of exact matching (2)

We have only one covariate dimension, so we can easily check the balance.
- Problem: some control observations have no treatment counterparts!
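Beyond eyeballing the histograms, balance is often summarized numerically with the standardized mean difference (SMD); a standalone sketch (the age values below are made up and only stand in for the example's covariate):

```python
import numpy as np

# Hypothetical covariate (age) for treatment and control groups
age_treat = np.array([18, 29, 24, 27, 33, 22, 19, 20, 23, 30], dtype=float)
age_cntrl = np.array([20, 27, 21, 39, 38, 29, 20, 30, 25, 28,
                      45, 23, 33, 49, 27, 30, 28, 24, 25, 32], dtype=float)

# Standardized mean difference, a common balance diagnostic
pooled_sd = np.sqrt((age_treat.var(ddof=1) + age_cntrl.var(ddof=1)) / 2)
smd = (age_treat.mean() - age_cntrl.mean()) / pooled_sd
print(f"SMD: {smd:.2f}")  # |SMD| > 0.1 is a common rule of thumb for imbalance
```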



## Example of exact matching (3)

We can match exactly using `RadiusNeighborsRegressor` with zero radius.
- Note: in econometrics this radius is often known as a caliper


```python
from sklearn.neighbors import RadiusNeighborsRegressor as RNR

impute_t_exact = RNR(radius=0).fit(X_cntrl, y_cntrl).predict(X_treat)
impute_c_exact = RNR(radius=0).fit(X_treat, y_treat).predict(X_cntrl)
impute_c_exact
```

    One or more samples have no neighbors within specified radius; predicting NaN.

    array([10000., 11750., 10250.,    nan,    nan, 12250.,    nan, 13250.,
           11000., 12500., 13250.,    nan, 10500.,  9500.,    nan,    nan,
            9750., 12500.,    nan,    nan])

## Example of exact matching (4)

We can compute an unbiased estimate of the ATT:


```python
print(y_treat.mean(), y_cntrl.mean())
diff = y_treat - impute_t_exact
# note: ±1.96·std of the matched differences, not a standard error of the mean
print(f'ATT: {round(diff.mean(),1)} ± {round(diff.std()*1.96,1)}')
```

    11075.0 11101.25
    ATT: 1695.0 ± 646.7

## Other covariate based matching

We can extend exact matching in several ways

- Coarsened Exact Matching:
  - continuous variables are split into blocks
  - very popular for experiments
- Radius / Caliper matching
- Nearest neighbor matching

We can also use different metrics:
- Euclidean distance
- Mahalanobis distance, $d(X_i,X_j)=\sqrt{(X_i-X_j)^{T}\hat{\Sigma}^{-1}(X_i-X_j)}$, where $\hat{\Sigma}$ is the sample covariance matrix of $X$

Note that approximate matching on covariates may introduce other biases, see [Abadie and Imbens (2011)](https://doi.org/10.1198/jbes.2009.07333).


# Propensity score matching


## Predicting treatment status

An alternative is to match on the likelihood of treatment.

Procedure:
1. estimate a model that predicts treatment
2. match with observations of similar treatment likelihood
   - (use a match function, e.g. nearest neighbor, caliper)
3. compute counterfactual outcomes for treatment and control
4. (possibly adjust for differences in observed covariates)
5. 
compute the ATE

## Unconfoundedness property

[Rosenbaum and Rubin (1983)](https://doi.org/10.1093/biomet/70.1.41) show that conditioning on the propensity score unconfounds treatment assignment:
- it can serve as an unbiased estimator of the average treatment effect
- it endows non-experimental data with experimental qualities

Critical requirement - the conditional independence assumption (**CIA**):
- same as assuming no unobserved confounders
- the CIA is often violated
  - e.g. the causal effect of education estimated with registry data - many unobserved factors


## Summary - matching

A useful tool, but it requires that we know all relevant factors
- can be useful to minimize the variance of experimental estimates
- problem in observational studies - often there are unobserved confounders and selection

If we think there are selection effects or endogeneity:
- Use quasi-experimental methods which can handle this, e.g. diff-in-diff or regression discontinuity

# Causal trees


## Average Joe
Suppose we have a credible estimate of the average treatment effect, $\tau$.

Can we get personalized estimates?

- Measure whether certain groups are affected differently by our new school policy
  - e.g. boys vs. girls, natives vs. immigrants
- Some react positively to one kind of information, others to another


## Beyond average Joe

Conditional Average Treatment Effects (CATE)

- Treatment effect for given characteristics $x$
  - $\tau(x) = \mathbb{E}[Y_i(1)-Y_i(0)|X=x]$

Methods exist, e.g. regression analysis.

But... 
**the true model is unknown**!

- We may end up testing many models on the data.
- This can lead to conclusions based on data mining (dangerous!)

## Being dishonest with you

An adaptive, data-driven approach

- use all the data for training the decision tree
  - partitions $X$ into categories based on outcome similarity
  - requires enough treatment and control obs. in each leaf
- then estimate treatment effects within the partitions
  - measure the treatment effect in each partition group (= leaf in the tree model)

**Quiz**: is this different from propensity scores?

- Propensity score models have treatment assignment $T_i$ as the target.

- The adaptive approach uses the outcome $y_i$ as the target.

## Getting honest with you

Could we use out-of-sample intuition?

[Athey and Imbens (2016)](https://doi.org/10.1073/pnas.1510489113) suggest letting the data speak **honestly**:
- half of the sample ($\mathcal{S}^{tr}$) for training the decision tree
  - partitions $X$ into categories based on outcome similarity
  - requires enough treatment and control obs. in each leaf
- the other half ($\mathcal{S}^{est}$) for estimating treatment effects
  - measure the treatment effect in each partition group (= leaf in the tree model)


This is similar to splitting into train and test
- prevents data leakage
- allows honest evaluation of model performance!


## Core assumption

Potential outcomes and treatment assignment are unconfounded given covariates

\begin{equation*}
T_i \,\perp\!\!\perp\, (Y_i(1), \,Y_i(0))\,\, | \,\, X
\end{equation*}

- where $\perp\!\!\perp$ is the symbol for conditional independence (a strong assumption!)
- recall from earlier
  - always holds for experiments
  - or with propensity scores (note: the assumption cannot be tested)

## Modified splitting procedure

The usual way of training decision trees is Classification And Regression Trees (CART).

- Splits leaves repeatedly based on a criterion (e.g. entropy, MSE)
- We can put in restrictions, e.g. 
depth of trees (hyperparameters)

Causal trees
- new criterion:
  - expected MSE (in a hypothetical test set): $\mathbb{E}[\underset{=MSE}{\underbrace{(Y_i-\bar{Y}_i)^2}} - Y_i^2]$
  - idea: the new term $Y_i^2$ penalizes small leaves
- note: subtracting $Y_i^2$ leaves the ranking of splits unchanged relative to MSE, but matters for the estimator's properties

## Inference

Partitioning of the covariate data works like coarsened matching!

- Estimate average treatment effects locally for each group/leaf
- Corresponds to local matching!

## Inference - validation

[Athey and Imbens (2016)](https://doi.org/10.1073/pnas.1510489113) perform a simulation study under various scenarios.

Main take-away: **honest** outperforms **adaptive** (conventional CART).

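The honest procedure can be sketched with scikit-learn. This is a simplified illustration on simulated data, not the exact Athey-Imbens splitting criterion: an ordinary regression tree on outcomes defines the partition on one half, and treatment effects are estimated leaf-by-leaf on the other half.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
T = rng.integers(0, 2, size=n)            # randomized treatment
tau = np.where(X[:, 0] > 0, 2.0, 0.0)     # heterogeneous treatment effect
y = X[:, 1] + tau * T + rng.normal(size=n)

# Honesty: one half builds the partition, the other half estimates the effects
tr = np.arange(n) < n // 2
est = ~tr
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=100).fit(X[tr], y[tr])

# Difference in mean outcomes between treated and control within each leaf
leaf_ids = tree.apply(X)
effects = {}
for leaf in np.unique(leaf_ids[est]):
    m = est & (leaf_ids == leaf)
    effects[leaf] = y[m & (T == 1)].mean() - y[m & (T == 0)].mean()
    print(f"leaf {leaf}: tau_hat = {effects[leaf]:.2f}")
```

Because treatment is randomized here, each leaf-level difference in means is a valid local effect estimate; the sketch makes no claim that these leaves recover the true heterogeneity structure.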
\n\n\n\n\n## Summary - causal trees\n\nLeverage machine learning idea: \n- Heterogeneity is estimated separate from treatment effects.\n- New scoring function makes smaller leafs.\n- Outperforms adapative procedure\n\nMain advantage \n- Structure of heterogeneity from data.\n- Can be part of pre-analysis plan - only one solution (given split of data!).\n\n\n\n# Recap on Random Forest\n\n## The forest full of trees\n\nWhat is the difference between a Decision Tree and a Random Forest?\n\n- Decision tree iteratively splits data into subsets (partitions) and calculates mean outcome in leaves (end of splits)\n- Minimize on some criteria, often entropy or similar loss function\n- Collection/ensemble of decision trees\n - Subset of data by bootstrap (sampling with replacement)\n - Subset of features\n\n



## A special tree

So what distinguishes a Causal Tree from a Decision Tree?

- A causal tree estimates a partition of the data where treatment effects can be computed locally
- In order to have valid estimates we need **honest** trees: partitions and treatment effects are estimated on different subsets of the data
  - Analogy to the train / test split



## A tradeoff in structure of heterogeneity

Two approaches:

- Data-driven heterogeneity
  - based on causal trees etc.
- A priori sensible heterogeneity
  - e.g. gender, socioeconomic status, ethnicity
  - we use a regression model with an interaction for the variable of interest

When to choose which?
- Choose data-driven heterogeneity for policy, where you want to maximize impact given the data (no theory)
- Choose a priori heterogeneity if we want to test whether certain subgroups are adversely affected

## Limitations of Decision Trees

Random forests are nice, but their predictions come with no asymptotic normality in general.

- Crucial for inference! (corresponds to MLR6 in Econometrics 1)

- The same limitation holds for causal trees



# Random forest for inference and treatment effects

## Causal Trees

The goal of causal trees is to establish unbiased, consistent estimates of heterogeneous treatment effects
- also known as conditional average treatment effects (**CATE**)
- the effect size is denoted $\hat{\tau}(x)$
- standard tools for inference apply, e.g. statistical tests performed locally


## Causal Forest

What is the output from the decision trees? Each tree produces a partitioning of the feature space $X$. Example of three trees:


(figure from [Athey, Wager, Tibshirani, 2019](https://doi.org/10.1214/18-aos1709))

## Double Sample Trees

For Causal Trees

- first half ($\mathcal{J}$, $|\mathcal{J}|=\lceil s / 2\rceil$)
  - used for training the Decision Tree
  - minimize the adjusted MSE
  - require at least $k$ observations for both treatment and control in all leaves of the $\mathcal{I}$-sample
- other half ($\mathcal{I}$, $|\mathcal{I}|=\lfloor s / 2\rfloor$)
  - used for estimating treatment effects, $\hat{\tau}(x)$

## Double Sample Trees (2)

For Regression Trees

- first half ($\mathcal{J}$, $|\mathcal{J}|=\lceil s / 2\rceil$)
  - used for training the Decision Tree
  - minimize MSE / Gini etc.
  - require at least $k$ observations in all leaves of the $\mathcal{I}$-sample
- other half ($\mathcal{I}$, $|\mathcal{I}|=\lfloor s / 2\rfloor$)
  - used for estimating the outcome, $\hat{\mu}(x)$


**Quiz:** How is this different from normal Decision Trees for regression problems?


- Unlike normal decision trees, outcomes are estimated honestly.


## Main results: econometric properties (1)

[Wager and Athey (2017)](https://doi.org/10.1080/01621459.2017.1319839) show
- We can estimate the variance of the CATE
  - $\hat{V}_{IJ}(x)=\frac{n-1}{n}\left(\frac{n}{n-s}\right)^{2} \sum_{i=1}^{n} \operatorname{Cov}_{*}\left[\hat{\tau}_{b}^{*}(x), N_{i b}^{*}\right]^{2}$

## Main results: econometric properties (2)

From Theorem 4.1 in [Wager and Athey (2017)](https://doi.org/10.1080/01621459.2017.1319839)

- The conditional average treatment effect estimates are unbiased and consistent
  - unbiased: no systematic error of measurement
  - consistent: with more data our estimate approaches the true value

- Moreover, we can do inference:
  - The variance estimator $\hat{V}_{IJ}(x)$ is consistent.
  - Treatment effect estimates are asymptotically normal and unbiased
    - $(\hat{\tau}(x)-\tau(x)) / \sqrt{\operatorname{Var}[\hat{\tau}(x)]} \Rightarrow \mathcal{N}(0,1)$

Caveat: only works for evaluating 
treatment effects at a single point $x$! Do not perform multiple tests.

## Useful forests

Two more procedures

1. Double Sample Trees
   - using regression trees for predicting the outcome ($=\hat{\mu}(x)$)
1. Propensity Trees
   - using trees that predict treatment for propensity score matching


What is the shared procedure?
- Each tree is estimated using repeated subsampling (**no replacement**)
  - Contrast with bootstrap aggregation in random forests (sampling **with replacement**)
- Random subsample of features

## More results

[Wager and Athey (2017)](https://doi.org/10.1080/01621459.2017.1319839) show that the properties of Double Sample Trees built from causal trees also hold analogously for regression trees.
- Random forests are asymptotically normal and can thus be used for inference
- Similar intuition to nested CV, where we could also do inference

## Simulation experiment

[Wager and Athey (2017)](https://doi.org/10.1214/18-aos1709) compare the causal forest to nearest neighbor methods

- a random forest is a kind of local nearest-neighbor estimate
- based on work by Lin and Jeon (2006).

## Simulation (1)

- simulation setup: no treatment effect, only confounding factors
- method: propensity trees
- comparison of estimated treatment effects
  - lower MSE and better coverage
  - coverage falls for an increasing number of variables $d$


(figure from [Athey, Wager, Tibshirani, 2019](https://doi.org/10.1214/18-aos1709))

## Simulation (2)

- setup: heterogeneous treatment effect, **no** confounding factors

- comparison of estimated treatment effects
  - lower MSE and better coverage
  - coverage falls for an increasing number of variables $d$


(figure from [Athey, Wager, Tibshirani, 2019](https://doi.org/10.1214/18-aos1709))

## Alternatives to compute heterogeneous treatment effects

There are existing and new frameworks for estimating heterogeneous treatment effects. For instance, BART is already quite established and often outperforms GRF.

- [Chipman, George and McCulloch (2010)](https://doi.org/10.1214/09-aoas285) develop Bayesian Additive Regression Trees (BART)


- [Künzel et al. (2019)](https://doi.org/10.1073/pnas.1804597116) investigate a more general class of meta-learners that can be built on top of arbitrary prediction tools

  - lower EMSE in many cases relative to CF/GRF and BART

- [Nie and Wager (2017)](https://arxiv.org/pdf/1712.04912.pdf) investigate another class of methods, called R-learners, that leverage a smart representation of the CATE.

## Round-up causal forest

Summary of [Wager and Athey (2017)](https://doi.org/10.1080/01621459.2017.1319839)
- builds on the Causal Trees method
- strong econometric properties
  - unbiased and consistent
  - asymptotically normal given $x$
  - causal and regression forests allow inference!
- problems:
  - must choose a focus: unconfounding (propensity) or estimating the CATE
  - coverage was not good, especially for higher $d$!



# Generalized Random Forest

## A higher aim

Causal forests are pretty cool. Can we use our honest procedure more generally? 

- Estimate any quantity $\theta(x)$ identified via local moment conditions, e.g.
  - simultaneously unconfound and find heterogeneity?
  - find heterogeneous treatment effects from IV estimation?

## Estimating equations
The general estimating equation
- $\mathbb{E}\left[\psi_{\theta(x), \nu(x)}\left(O_{i}\right) | X_{i}=x\right]=0, \quad \forall x.$

where $\psi$ is an estimating function that maps parameters and data into moment equations
- Parameters
  - $\theta$ is the parameter we want to estimate
  - $\nu$ is a nuisance parameter we want to "partial out" (optional)
- Data
  - $O_i$ are the main objects we are interested in modelling, e.g. $Y_i, T_i$
  - $X_i$ are covariates


## Estimating equations

What is a moment condition?

- Similar to the solution of a first-order condition
- More general - can incorporate extra restrictions (e.g. unconfounding)

## Estimating equations

Suppose we want to estimate conditional average treatment effects

Functional form: $\psi_{\beta(x),c(x)}=\left(Y_i-\beta(x) W_i-c(x)\right)\left(1 \quad W_{i}\right)$ where
- $\beta$ is the treatment effect
- $c$ is a nuisance parameter

## Using a kernel

Kernel methods can be used to unconfound and compute heterogeneous effects simultaneously

- Problem: how do we decide the weights?


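A minimal sketch of the kernel idea: a Nadaraya-Watson-style weighted difference in means on simulated data, where observations are weighted by how close their covariate is to the target point $x$ (the Gaussian kernel and bandwidth `h` are arbitrary choices for the illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.uniform(-2, 2, size=n)
T = rng.integers(0, 2, size=n)
tau = 1.0 + X                            # treatment effect varies with X
y = 0.5 * X + tau * T + rng.normal(size=n)

def tau_at(x, h=0.2):
    """Kernel-weighted difference in mean outcomes around covariate value x."""
    w = np.exp(-0.5 * ((X - x) / h) ** 2)  # Gaussian kernel weights
    y1 = np.average(y[T == 1], weights=w[T == 1])
    y0 = np.average(y[T == 0], weights=w[T == 0])
    return y1 - y0

print(tau_at(0.0), tau_at(1.0))  # roughly the true tau(x): 1 and 2
```

Treatment is randomized here, so simple local differences in means work; choosing good weights in general, with confounding, is exactly the problem the forest-based weights address.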

(figure from [Athey, Wager, Tibshirani, 2019](https://doi.org/10.1214/18-aos1709))

## The Generalized Random Forest

[Athey, Wager, Tibshirani (2019)](https://doi.org/10.1214/18-aos1709) show that kernel weights can be estimated using forest methods

- can be adapted for different purposes
  - quantile regression
  - heterogeneous treatment effects
  - instrumental variables

## The Generalized Random Forest (2)

[Athey, Wager, Tibshirani (2019)](https://doi.org/10.1214/18-aos1709) use a procedure as follows:

1. Use the estimating equation $\psi$ to grow tree splits iteratively on subsamples.
2. View the forest as producing similarity weights over neighbors
   - the share of trees in which observation $i$ falls in the same leaf as $x$, normalized by leaf size
     \begin{equation}\alpha_i(x)=\frac{1}{B}\sum_{b=1}^{B}\frac{\mathbb{1}(X_i\in L_b(x))}{|L_b(x)|}\end{equation}
3. Re-estimate $\psi$ on the entire sample, using the weights.

Difference from Causal Forest - the trees are used for constructing weights!

## Computing weights $\alpha$

Given a subsample $\mathcal{I}$ of the data:

1. Split the subsample $\mathcal{I}$ into $\mathcal{J}_1,\mathcal{J}_2$
1. Estimate trees on $\mathcal{J}_1$
   1. Compute the estimating equations on the subsample at each parent node (before the split)
   1. Evaluate different splits - use approximate solutions with gradients
1. Estimate weights using the forest on $\mathcal{J}_2$:
   \begin{equation}\alpha_i(x)=\frac{1}{B}\sum_{b=1}^{B}\frac{\mathbb{1}(X_i\in L_b(x))}{|L_b(x)|}\end{equation}
   - where $L_b(x)$ is the set of training examples falling in the same leaf as $x$

## Overall procedure

1. Repeatedly estimate trees where splits are based on the estimating equation $\psi$, to obtain weights $\alpha$.
1. 
Re-estimate $\\psi$ using weights on entire sample where forests splits are weights.\n\n## Estimating equations\n\nGiven $x$ compute the ***local*** estimating equations using weights $\\alpha_i(x)$ on entire sample:\n\n\\begin{equation}\n(\\hat{\\theta}(x), \\hat{\\nu}(x)) \\in \\operatorname{argmin}_{\\theta, \\nu}\\left\\{\\left\\|\\sum_{i=1}^{n} \\alpha_{i}(x) \\psi_{\\theta, \\nu}\\left(O_{i}\\right)\\right\\|_{2}\\right\\}\n\\end{equation}\n\nExample of applications in Athey et al. (2019)\n- Conditional Average Treatment Effects\n- Instrumental Variables\n- Quantile Regressions\n\n\n## Estimation equations for CATE\n\nWhat does the local estimating equation look like under CATE?\n\n- the estimating equations $\\psi$ with possibly multi-dimensional treatments \n
\n\\begin{align}\\psi_{\\beta(x), c(x)}\\left(Y_{i}, W_{i}\\right)=\\left(Y_{i}-\\beta(x) \\cdot W_{i}-c(x)\\right)\\left(1 \\quad W_{i}\\right)\\end{align}\n
\n- Note: $\\left(1 \\quad W_{i}\\right)$ which implies there is a vector of equations\n \n\n## Estimation equations for CATE (2)\n\nWhat is the solution?\n- Run local regression of $y_i$ on $W_i$ with weights $\\alpha$\n \n\\begin{align}\\hat{\\theta}(x)=\\xi^{\\top}\\left(\\sum_{i=1}^{n} \\alpha_{i}(x)\\left(W_{i}-\\bar{W}_{\\alpha}\\right)^{\\otimes 2}\\right)^{-1} \\sum_{i=1}^{n} \\alpha_{i}(x)\\left(W_{i}-\\bar{W}_{\\alpha}\\right)\\left(Y_{i}-\\bar{Y}_{\\alpha}\\right)\\end{align}\n\n## Main results \n\nAthey et al (2019) show that Generalized Random Forests have the following propeties:\n\n- Estimates, $\\hat{\\theta}(x)$, are consistent (Theorem 3)\n- Asymptotic normality of estimates (Theorems 5,6)\n\n# Application of causal forest\n\n## Note about software\n\nUse `econml` implementation \n\n- Not implementation of generalized random forest\n- Uses double machine learning to unconfound data\n- Caution:\n - If you want to use in research - use GRF package by Athey et al.\n\n## Make synthetic data\n\n\n```python\nimport numpy as np\n\nn, p = 1000, 10\nX = np.random.RandomState(0).randn(n,p)\nP = 0.4 + 0.2 * (X[:, 0] > 0)\nT = (np.random.rand(n) > P).astype('float')\nY = np.max([X[:, 0] * T, np.zeros(n)],0) + \\\n X[:, 1] + np.max([X[:, 2] * T, np.zeros(n)],0)\n```\n\n## Apply causal forest from `econml`\n\nNote - we use double maching learning here to unconfound treatment assignment and control for confounders of outcome\n\n\n```python\nt0,t1 = 0,1\nfrom econml.dml import CausalForestDML\nfrom sklearn.ensemble import GradientBoostingRegressor\nest = CausalForestDML(model_y=GradientBoostingRegressor(),\n model_t=GradientBoostingRegressor())\nest.fit(Y, T, X=X, W=X)\n```\n\n\n\n\n \n\n\n\n## Getting predicted treatment effects for x range\n\n\n```python\nn_plot = 1000\nX_range = np.zeros([n_plot,p])\nX_range[:,0] = np.linspace(-2,2,n_plot)\nY_dgp = np.max([X_range[:, 0] * np.ones(n_plot), np.zeros(n_plot)],0) + X_range[:, 1] + np.max([X_range[:, 2] * 
np.ones(n_plot), np.zeros(n_plot)], 0)

tau_hat = est.effect(X_range, T0=t0, T1=t1)
tau_lb, tau_ub = est.effect_interval(X_range, T0=t0, T1=t1, alpha=0.05)
```

## Plotting treatment effects


```python
from matplotlib import pyplot as plt
%matplotlib inline
f, ax = plt.subplots(figsize=(8, 6))
ax.plot(X_range[:, 0], Y_dgp, color='red', label='Data Generating Process')
# plt.plot(X_range[:,0], tau_hat)
ax.fill_between(X_range[:, 0], tau_lb, tau_ub, label="Causal Forest 95% CI")
ax.legend()
```

# Comparison
### Causal Forests and Generalized Random Forests

## Causal Trees and Forests
Strong econometric properties
- unbiased and consistent (trees and forests)
- asymptotic normality given $x$ (forests)

- weaknesses:
  - either unconfounding or heterogeneity
  - we "use" data to buy honesty at the price of statistical power


## Generalized Random Forest
Difference from Causal Forest - the trees are used for constructing weights!
- strengths:
  - unconfounding (propensity) *AND* heterogeneity
  - additional uses
    - quantile regression
    - instrumental variables
    - clustered standard errors
    - and more
- weakness:
  - we "waste" data on honesty